
Sachin Gupta, Google Cloud | CUBE Conversation 2021


 

(upbeat music) >> Welcome to this Cube Conversation. I'm Dave Nicholson, and this is continuing coverage of Google Cloud Next '21. I'm joined today by Sachin Gupta, General Manager and Vice President of Open Infrastructure at Google Cloud. Sachin, welcome to theCube. >> Thanks Dave, it's great to be here. >> So, you and I both know that the definition of what constitutes Cloud has been hotly contested by some over the last 20 years. But I think you and I both know that in some quarters there has never really been a debate. NIST, for example, the standards body that calls out what constitutes Cloud, has always considered Cloud an operational model, a set of capabilities, and it has never considered Cloud specifically tied to a location. With that in mind, how about if you share with us what was announced at Cloud Next '21 around Google Distributed Cloud? >> Yeah, thanks Dave. The power of Cloud in terms of automation, simplicity, observability, is undeniable, but our mission at Google Cloud is to ensure that we're meeting customers where they are in their digital transformation journey. And so in talking to customers, we found that there are some reasons that could prevent them from moving certain workloads to Cloud. And that could be because there's a low latency requirement. There are high amounts of data processing that need to happen on-prem. So taking data from on-prem, moving it into the Cloud to get it processed, and all the way back, may not be very efficient. There could be security, privacy, data residency, compliance requirements that they're dealing with. And then in some industries, for some customers, there are some very strict data sovereignty requirements that don't allow them to move things into the public Cloud. And so when we talked to customers, we realized that we needed to extend the Cloud, and therefore we introduced Google Distributed Cloud at Next 2021.
And what Google Distributed Cloud provides is all of that power of Cloud anywhere the customers need it. And this could be at a Google network edge, it could be at an operator or communication service provider edge as well. It could be at the customer edge, so right on-premises at their site, or it could be in their data centers. And so there's a lot of flexibility in how you deploy these fully managed hardware and software solutions delivered through Google. >> Yeah, it's interesting because often statistics are cited that somewhere near 75% of what we do in IT is still "on-premises." The reality is, however, that what's happening in those physical locations on the edge is looking a lot more Cloudy, isn't it? (laughs) >> Yes, and the customers are looking for that computational power, storage, automation, simplicity, in all of these locations. >> So what does this look like from an infrastructure stack perspective? Is there some secret sauce that you're layering into this that we should know about? >> Yeah, so let me just talk about it a little bit more. So we start off with third-party hardware. So we're sourcing from Dell, HPE, Cisco, Nvidia, NetApp, bringing it together. We're using Anthos, you are hopefully familiar with Anthos, which is our hybrid multi-cloud software layer. And then on top of that, we use open source technologies. For example, built on Kubernetes, we offer a containerized environment and a VM environment that enable both Google first-party services, as well as third-party services that customers may choose to deploy, on top of this infrastructure. And so the management of the entire infrastructure, top to bottom, is delivered by Google directly, and therefore customers can focus on applications, they can focus on business initiatives, and not worry about the infrastructure complexity. They can just leave that to us. >> So you mentioned both Kubernetes, thinking of containerization as Cloud native, and you also said VMs.
So this spans the divide between containerized, microservices-based applications and, say, VMware-style virtual machines or other VMs? >> Yes, look, the majority of customers are looking to modernize and move to a containerized environment with Kubernetes, but there are some workloads that they may have that still require a VM-like environment, and having the simplicity and the efficiency of operating VMs like containers on top of Google Distributed Cloud, built on Anthos, is extremely powerful for them. And so it goes back to our mission. We're going to meet customers where they are, and if they need VM support as well, we're providing it. >> So let's talk about initial implementations of this. What kind of scale are you anticipating that customers will deploy? >> The scale is going to vary based on use case. So it could be very small, let's think about it as a single-server type of scale, all the way to many, many dozens of racks that could be going in to support Google Distributed Cloud. And so, for example, from a communication service provider point of view, looking to modernize their 5G network, in the core it could be many, many racks with Google Distributed Cloud, the edge product. And for their RAN solutions, it could be a much smaller form factor, as an example. And so depending on use case, you're going to find all kinds of different form factors. And I didn't mention this before, but in addition to scale, we also offer two operational modes. One is the edge product, Google Distributed Cloud edge, that is connected to the Cloud, and so it gets operational updates, et cetera, directly through the Cloud. And the second one is something we call the hosted mode, and in hosted mode, it's completely air-gapped. So this infrastructure, which is modernized and provides rich first-party and third-party services, does not connect to the Cloud at all.
And therefore, the organizations that have the strictest data latency and sovereignty requirements can benefit from a completely air-gapped solution as well. >> So I'm curious, let's say you started with an air-gapped model. Often our capabilities in Cloud exceed our customers' comfort level for a period of time. Can that air-gapped initial implementation be connected to the Cloud in the future? >> With the air-gapped implementation, typically the same customer may have multiple deployments, where one will require the air-gapped solution, another could be the hosted solution, and the other could be the edge product, which is connected. And in both cases, the underlying stack is consistent. So, while I don't hear customers saying, "I want to start from air-gapped and move," we are providing Google Distributed Cloud as one portfolio to customers so that we can address these different use cases. In the air-gapped solution, the software updates obviously still come from Google, and customers need to move that across the air gap, check signatures, check for vulnerabilities, and load it into the system, and the system will then automatically update itself. And so the software we still provide, but in that case, there are additional checks that the customer will typically go through before enabling that software onto their system. >> Yeah, so you mentioned at the outset some of the drivers, latency, security, et cetera, but can you restate that? I'd like to hear what the thinking behind this was at Google when customers were presenting you with a variety of problems they needed solutions for. I think it bears recapping that. >> Right, so let me give you a few examples here. So one is, when you think about 5G, when you think about what 4G did for the industry in terms of enabling the gig economy, with 5G we can really enable richer experiences.
And this could be highly immersive experiences, it could be augmented reality, it could be all kinds of technologies that require lower latency. And for this, you need to build out the 5G infrastructure on top of a modernized solution like Google Distributed Cloud. Let me just get into a few use cases though, to bring some color here. For example, for a retailer, instead of worrying about IT and infrastructure in the store, the people in the store can focus on their customers, and they can implement solutions using Google Distributed Cloud for things like inventory management, asset protection, et cetera, in the store. Inside a manufacturing facility, once again, you can reduce incidents, you can reduce injuries, and you can look at your robotic solutions that require low-latency feedback, et cetera. There's a whole bunch of emerging applications through ISVs, where a rich on-prem, or anywhere-you-want-it, edge infrastructure can enable a new suite of possibilities that weren't possible before. In some cases, customers say, "You know what, I want 5G, but I actually want a private 5G deployment." And that becomes possible with Google Distributed Cloud as well. >> So we talked a little bit about scale. What's the smallest increment that someone could deploy? You just gave an example of retail. Some retail outfits are small stores, without any IT staff at all. There's the concept of a single-node Kubernetes cluster, which is something we love to come up with in our business terminology that makes no sense, a single-node cluster. The point is, these increments, especially in the containerized world, are getting smaller. What's the smallest increment that you can deliver, or are planning to deliver? >> I'll answer this two ways. First of all, we are planning to deliver the smallest increment, think of it as one server. We are planning to deliver that as well, all the way up to many, many racks.
But in addition, there's something unique that I wanted to call out. Let's say you're in a medium or larger deployment with racks, and you want to scale compute and storage separately. That's something we enable as well, because we will work with customers in terms of what they need for their application, and then scale that hardware up and down based on their need. And so there's a lot of flexibility in that, but we will enable scale all the way down to a single-server unit as well. >> So what has the feedback been from the partners that will be providing the hardware infrastructure, folks like Dell? What has their reaction been? >> I think that they're obviously very eager to work with us. We're happy to partner with them in order to provide customers flexibility, any kind of scale in any kind of location, and the different kinds of hardware equipment that they need. But in addition to those partners on the hardware side, there are customers and partners as well who are enabling rich experiences and solutions for that retailer, for that manufacturer, for example. And so working with AT&T, we announced a partnership on 5G and edge to enable experiences, especially in the areas of retail and manufacturing, like I talked about earlier. And then in Europe, we're partnering with OVHcloud, for example, in order to address the very strict data sovereignty requirements in that country. And so there are many communication service providers, and many partners, trying to solve for different use cases for their end customers. >> Yeah, that makes a lot of sense. Let's pretend for a minute that you're getting Yelp reviews of this infrastructure that you're responsible for moving forward. What would a delighted customer's comments look like? >> I think a delighted customer's comments will probably be in two or three areas, all right? So first up will be, it's all about the applications and the end-user experience that this can enable.
And so the power of Google AI and ML technology, and third-party software as well, that can run consistently, with a single operational model, build once, deploy anywhere, is extremely powerful. So I would say the power of the applications and the simplicity that it enables is number one. I think number two is the scale of operations experience that Google has. They don't need to worry about, "Do I have 5 sites or 500 sites or 5,000 sites?" It doesn't matter. The fleet operations, the scaled operations capability, the global network capability that Google has, all that experience in site reliability engineering, we can now bring to all of these vast amounts of edge locations, so they don't need to worry about scale at all. And then finally, they can rest assured that this is built on Anthos, it's built on Kubernetes, there's a lot of open source components here, they have flexibility, they have choice, they can run our first-party services, they can run third-party services on this, and so we're going to preserve that flexibility and choice. I think these are the things that would likely get highlighted. >> So Sachin, you talk to customers around the world. Where do you see the mix between net-new stuff going into infrastructure like this, versus modernized and migrated workloads onto the solution? What does that mix look like? And I know it's a bit of speculation, but what are your thoughts? >> I think, Dave, that's a great question, and I think it's a difficult one to answer, because we find that those conversations happen together with the same customers. At least that's what I find. And so they are looking to modernize, create a much richer environment for their developers, so that they can innovate much more quickly, react to business needs much more quickly, cater to their own end customers in a much better way, and get business insights from the data that they have.
They're looking to do all of this, but at the same time, they have, perhaps, legacy infrastructure or applications that they just can't easily migrate off of, that may still be in a VM environment, or a more traditional type of storage environment, and they need to be able to address both worlds. And so, yes, there are some who are so-called "born in the Cloud," where everything is Cloud native, but the vast majority of customers that I talk to are absolutely looking to modernize. You don't find a customer that says, "Just help me lift and shift, I'm not looking to modernize." I don't quite see that. They are looking to modernize, but they want to make sure that we have the options that they need to support the different kinds of environments that they have today. >> And you mentioned insights. We should explore that a little further. Can you give us an example of artificial intelligence, machine learning being used now at the edge, where you're putting more compute power at the edge? Can you give us an idea of the kinds of things that that enables specifically? >> Yes, so when you think about video processing, for example, if I have a lot of video feeds, and based on those I want to apply artificial intelligence, I'm trying to detect objects, inventory movement, people movement, et cetera. Again, adhering to all the privacy and local regulations. When I have that much data streaming in, if I have to take that out of my edge, all the way across the WAN, into the Cloud for processing, and bring it all the way back and then make a decision, I'm just moving a lot of data up and down into the Cloud. And in this case, what you're able to do is say, no, you don't actually need to move it into the public Cloud. You can keep that data locally. You can have a Google Distributed Cloud edge instance there, you can run your AI application right there, achieve the insights, and take an action very, very quickly.
And so it saves you, from a latency point of view, significantly, and it saves you on data transmission up and down to the Cloud significantly. And sometimes, you know, you're not supposed to send that data up, there are data residency requirements, and sometimes the cost of just moving it doesn't make sense. >> So do you have any final thoughts? What else should we know about this? Anything that we didn't touch on? >> I think we've touched on a lot of great things. I think I'm just going to reiterate. You started with "what is the definition of Cloud itself," and our mission, once again, is to really understand what customers are trying to do and meet them where they are. And we're finding that they're looking for Cloud solutions in a public region. We've announced a lot more regions. We continue to grow our footprint globally. But in addition, they want to be able to get that power of Google Cloud infrastructure, and all the benefits that it provides, in many different edge locations, all the way onto their premises. And I think one of the things we perhaps spent less time on is that we're also very unique in our strategy: we're bringing in underlying third-party hardware, but it's a fully managed solution that can operate in that connected edge mode, as well as a disconnected hosted mode, which enables pretty much all the use cases we've heard about from customers. So one portfolio that can address any kind of need that they have. >> Fantastic. Well, I said at the outset, Sachin, before we got started, that you and I could talk for hours on this subject. Sadly, we don't have hours. I'd like to thank you for joining us in theCube. I'd like to thank everyone for joining us for this Cube conversation, covering the events at Google Cloud Next 2021. I'm Dave Nicholson. Thanks for joining. (upbeat music)
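The edge video-processing pattern Sachin describes, running inference where the footage is produced and shipping only the insight upstream, can be sketched roughly as below. This is an illustrative sketch: the "model" is a stand-in threshold check and every name in it is hypothetical, not part of any Google Distributed Cloud API.

```python
# Illustrative sketch of edge-local inference: process video frames at the
# site and send only a tiny aggregated summary across the WAN, instead of
# streaming the raw frames to the Cloud and back. The "model" below is a
# stand-in brightness threshold, not a real AI service.

def detect_objects(frame):
    """Stand-in for a local AI model: counts pixels above a threshold."""
    return sum(1 for px in frame if px > 200)

def process_locally(frames):
    """Run inference at the edge; return only the aggregate insight."""
    total = sum(detect_objects(f) for f in frames)
    # Only this small summary would leave the site; raw frames stay local.
    return {"frames_seen": len(frames), "objects_detected": total}

frames = [
    [10, 220, 30, 250],  # two bright pixels -> two "objects"
    [5, 15, 25, 35],     # nothing detected
]
print(process_locally(frames))  # {'frames_seen': 2, 'objects_detected': 2}
```

The point of the pattern is the return value: a few bytes of insight cross the WAN instead of the full video stream, which addresses both the latency and the data-residency concerns raised above.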

Published Date : Oct 19 2021



F1 Racing at the Edge of Real-Time Data: Omer Asad, HPE & Matt Cadieux, Red Bull Racing


 

>>Edge computing is projected to be a multi-trillion dollar business. You know, it's hard to really pinpoint the size of this market, let alone fathom the potential of bringing software, compute, storage, AI, and automation to the edge and connecting all that to clouds and on-prem systems. But what, you know, what is the edge? Is it factories? Is it oil rigs, airplanes, windmills, shipping containers, buildings, homes, race cars? Well, yes, and so much more. And what about the data? For decades we've talked about the data explosion. I mean, it's mind-boggling, but guess what, we're gonna look back in 10 years and laugh at what we thought was a lot of data in 2020. Perhaps the best way to think about edge is not as a place, but as when is the most logical opportunity to process the data, and maybe it's the first opportunity to do so, where it can be decrypted and analyzed at very low latencies. That defines the edge. And so by locating compute as close as possible to the sources of data, to reduce latency and maximize your ability to get insights and return them to users quickly, maybe that's where the value lies. Hello everyone, and welcome to this Cube Conversation. My name is Dave Vellante, and with me to noodle on these topics is Omer Asad, VP and GM of primary storage and data management services at HPE. Hello, Omer. Welcome to the program. >>Hey Dave. Thank you so much. Pleasure to be here. >>Yeah. Great to see you again. So how do you see the edge in the broader market shaping up? >>Uh, Dave, I think that's a super important question. I think your ideas are quite aligned with how we think about it. Uh, I personally think, you know, as enterprises are accelerating their sort of digitization and asset collection and data collection, they're typically, especially in a distributed enterprise, trying to get to their customers. They're trying to minimize the latency to their customers.
So especially if you look across industries, take manufacturing, which has distributed factories all over the place: they are going through a lot of factory transformations where they're digitizing their factories. That means a lot more data is now being generated within their factories. A lot of robot automation is going on that requires a lot of compute power to go out to those particular factories, which are going to generate their data out there. We've got insurance companies and banks that are acquiring and interviewing and gathering more customers out at the edge, and for that, they need a lot more distributed processing out at the edge. What this is requiring, and what we've seen as a common consensus across analysts, is that more than 50% of an enterprise's data, especially if they operate globally around the world, is going to be generated out at the edge. What does that mean? More and more new data is generated at the edge, but it needs to be stored, it needs to be processed. Data that is not required needs to be thrown away or classified as not important. And then it needs to be moved, for DR purposes, either to a central data center or just to another site. So overall, in order to give the best possible experience for manufacturing, retail, uh, you know, especially in distributed enterprises, people are generating more and more data-centric assets out at the edge. And that's what we see in the industry. >>Yeah. We're definitely aligned on that. There are some great points there. And so now, okay, you think about all this diversity, what's the right architecture for these deployments: multi-site, ROBO, edge? How do you look at that? >>Oh, excellent question. So, you know, obviously every customer that we talk to wants simplicity, and, no pun intended, SimpliVity resonates with a simple, edge-centric architecture, right? So let's take a few examples.
You've got large global retailers, who have hundreds of retail stores around the world that are generating data, that are producing data. Then you've got insurance companies, then you've got banks. So when you look at a distributed enterprise, how do you deploy, in a very simple manner, equipment that is easy to deploy, easy to lifecycle, and easy to mobilize out at the edge? What are some of the challenges that these customers deal with? You don't want to send a lot of IT staff out there, because that adds cost. You don't want to have islands of data and islands of storage at remote sites, because that adds a lot of state outside of the data center that needs to be protected. And then last but not least, how do you push lifecycle-based applications, new applications, out at the edge in a very simple-to-deploy manner? And how do you protect all this data at the edge? So the right architecture, in my opinion, needs to be extremely simple to deploy. So storage, compute, and networking out towards the edge in a hyperconverged environment. So that's, we agree upon that, a very simple-to-deploy model. But then comes: how do you deploy applications on top of that? How do you manage these applications on top of that? How do you back up these applications back towards the data center? All of this keeping in mind that it has to be as zero-touch as possible. We at HPE believe that it needs to be extremely simple. Just give me two cables, a network cable and a power cable. Tie it up, connect it to the network, push its state from the data center, and back up its state from the edge back into the data center. Extremely simple. >>It's gotta be simple because you've got so many challenges. You've got physics that you have to deal with, latency to deal with. You've got RPO and RTO. What happens if something goes wrong? You've gotta be able to recover quickly. So, so that's great. Thank you for that. Now you guys have hard news.
What is new from HPE in this space? >>From a deployment perspective, you know, HPE SimpliVity is just gaining, it's exploding like crazy, especially as distributed enterprises adopt it as their standardized edge architecture, right? It's an HCI box that's got storage, compute, networking, all in one. But now, what we have done is, not only can you deploy applications all from your standard vCenter interface, from a data center, what we have now added is the ability to back up to the cloud, right, from the edge. You can also back up all the way back to your core data center. All of the backup policies are fully automated and implemented in the distributed file system that is the heart and soul of the SimpliVity installation. In addition to that, customers now do not have to buy any third-party software: backup is fully integrated in the architecture, and it's WAN-efficient. In addition to that, now you can back up straight to the cloud. You can back up to a central, uh, high-end backup repository, which is in your data center. And last but not least, we have a lot of customers that are pushing the limit in their application transformation. So not only were we previously enabling VMware deployments out at the edge sites, we have now also added both stateful and stateless container orchestration, as well as data protection capabilities for containerized applications out at the edge. So we have a lot of customers that are now deploying containers, rapid manufacturing containers, to process data out at remote sites. And that allows us to not only protect those stateful applications, but back them up into the central data center. >>I saw in that chart there was a highlight on no egress fees. That's a pain point for a lot of CIOs that I talk to. They grit their teeth at those fees. So, can you comment on that? >>Excellent, excellent question.
I'm so glad you brought that up, so let me pick that up. So, along with SimpliVity, you know, we have the whole GreenLake as-a-service offering as well, right? So what that means, Dave, is that we can literally provide our customers edge as a service. And when you complement that with Aruba wired and wireless infrastructure that goes at the edge, and the hyperconverged infrastructure, as part of SimpliVity, that goes at the edge, you know, one of the things that was missing with cloud backups is that every time you back up to the cloud, which is a great thing, by the way, any time you restore from the cloud, there is that egress fee, right? So as a result of that, as part of the GreenLake offering, we have a cloud backup service natively offered as part of HPE, which is included in your HPE SimpliVity edge-as-a-service offering. So now not only can you back up into the cloud from your edge sites, but you can also restore back, without any egress fees, from HPE's data protection service. Either you can restore it back onto your data center, or you can restore it back towards the edge site. And because the infrastructure is so easy to deploy and centrally lifecycle-managed, it's very mobile. So if you want to deploy and recover to a different site, you could also do that. >>Nice. Hey, Omer, can you double-click a little bit on some of the use cases that customers are choosing SimpliVity for, particularly at the edge, and maybe talk about why they're choosing HPE? >>What are the major use cases that we see, Dave? Obviously, easy to deploy and easy to manage in a standardized form factor, right? A lot of these customers, like, for example, a large retailer we have across the US with hundreds of stores: right now, you cannot send service staff to each of these stores, and their data center is essentially just a closet for these guys, right? So now, how do you have a standardized deployment?
So: standardized deployment from the data center, which you can literally push out, where you can connect a network cable and a power cable and you're up and running, and then automated backup, elimination of backup state at the edge sites, and DR from the edge sites into the data center. So that's one of the big use cases: to rapidly deploy new stores, bring them up in a standardized configuration, both from a hardware and a software perspective, and the ability to back up and recover that instantly. That's one large use case. The second use case that we see actually refers to a comment that you made in your opener, Dave, where a lot of these customers are generating a lot of data at the edge. This is robotics automation that is going up in manufacturing sites. This is racing teams that are out at the edge doing post-processing of their cars' data. At the same time, there are disaster recovery use cases, where you have, uh, you know, camp sites and local, uh, agencies that go out there for humanity's benefit, and they move from one site to the other. It's a very, very mobile architecture that they need. So those are just a few cases where we are deployed. There is a lot of data collection, and there's a lot of mobility involved in these environments. So you need to be quick to set up, quick to back up, quick to recover, and essentially you're on to your next move. >>You seem pretty pumped up about this, uh, this new innovation, and why not? >>It is, it is, uh, you know, especially because, you know, it has been thought through with edge in mind, and edge has to be mobile. It has to be simple. And especially as, you know, we have lived through this pandemic, which I hope we see the tail end of in 2021, or at least 2022. You know, one of the most common use cases that we saw, and this was an accidental discovery:
A lot of the retail sites could not go out to service their stores because, you know, mobility is limited in these strange times that we live in. So from a central site, you're able to deploy applications, and you're able to recover applications. And a lot of our customers said, "Hey, I don't have enough space in my data center to back up. Do you have another option?" So then we rolled out this updated release of SimpliVity where, from the edge site, you can now directly back up to our backup service, which is offered on a consumption basis to customers, and they can recover that anywhere they want. >>Fantastic. Omer, thanks so much for coming on the program today. >>It's a pleasure, Dave. Thank you. >>All right. Awesome to see you. Now, let's hear from Red Bull Racing, an HPE customer that's actually using SimpliVity at the edge. >>The countdown really begins when the checkered flag drops on a Sunday. It's always about this race to manufacture
So this season we have 23 races, and we need to tear the car apart and rebuild it to a unique configuration for every individual race. And we're also designing and making components targeted for races. So 23 immovable deadlines, this big evolving prototype to manage with our car. But we're also improving all of our tools and methods and software that we use to design, make, and race the car. So we have a big can-do attitude in the company around continuous improvement, and the expectations are that we continuously make the car faster, that we're winning races, that we improve our methods in the factory and our tools. So for IT it's really unique in that we can be part of that journey and provide a better service. It's also a big challenge to provide that service and to give the business the agility it needs. So my job is really to make sure we have the right staff, the right partners, the right technical platforms, so we can live up to expectations. >> That tear down and rebuild for 23 races, is that because each track has its own unique signature that you have to tune to, or are there other factors involved there? >> Yeah, exactly. Every track has a different shape. Some have lots of straights, some have lots of curves, and lots are in between. The track surface is very different, and the impact that has on tires, the temperature and the climate is very different. Some are hilly, some have big curbs that affect the dynamics of the car. So all that means in order to win, you need to micromanage everything and optimize it for any given race track. >> Talk about some of the key drivers in your business and some of the key apps that give you a competitive advantage to help you win races. >> Yeah, so in our business everything is all about speed. So the car obviously needs to be fast, but also all of our business operations need to be fast.
We need to be able to design a car, and it's all done in the virtual world, but the virtual simulations and designs need to correlate to what happens in the real world. So all of that requires a lot of expertise to develop the simulations and the algorithms, and to have all the underlying infrastructure that runs it quickly and reliably. In manufacturing, we have cost caps and financial controls by regulation. We need to be super efficient and control material and resources, so ERP and MES systems are running and helping us do that. And at the race track itself, again it's about speed: we have hundreds of decisions to make on a Friday and Saturday as we're fine-tuning the final configuration of the car, and here again we rely on simulations and analytics to help do that. And then during the race we have split seconds, literally seconds, to alter our race strategy if an event happens. So if there's an accident and the safety car comes out, or the weather changes, we revise our tactics. We're running Monte Carlo, for example, and we use experienced engineers with simulations to make a data-driven decision, hopefully a better one, and faster than our competitors. All of that needs IT to work at a very high level. >> It's interesting. As a lay person, historically when I think about technology and car racing, of course I think about the mechanical aspects of a self-propelled vehicle, the electronics and the like, but not necessarily the data. But the data's always been there, hasn't it? I mean, maybe in the form of tribal knowledge: somebody who knows the track and where the hills are, and experience and gut feel. But today you're digitizing it and you're processing it, in close to real time. It's amazing. >> I think exactly right. Yeah, the car's instrumented with sensors, we post-process, and we're doing video and image analysis, and we're looking at our car and our competitors' cars.
So there's a huge amount of very complicated models that we're using to optimize our performance and to continuously improve our car. Yeah, the data and the applications that can leverage it are really key, and that's a critical success factor for us. >> So let's talk about your data center at the track, if you will, if I can call it that. Paint a picture for us: what does that look like? >> So we have to send a lot of equipment to the track, at the edge. And even though we have a really great wide area network link back to the factory, and there's cloud resources, a lot of the tracks are very old. You don't have hardened infrastructure, you don't have ducts that protect cabling, for example, and you could lose connectivity to remote locations. So the applications we need to operate the car and to make really critical decisions, all that needs to be at the edge where the car operates. So historically we had three racks of equipment, legacy infrastructure, and it was really hard to manage, to make changes. It was too inflexible. There were multiple panes of glass, and it was too slow; it didn't run our applications quickly. It was also too heavy and took up too much space when you're cramped into a garage with lots of environmental constraints. So we'd introduced hyperconvergence into the factory and seen a lot of great benefits. And when it came time to refresh our infrastructure at the track, we stepped back and said, there's a lot smarter way of operating: we can get rid of all the slow, inflexible, expensive legacy and introduce hyperconvergence. And we saw really excellent benefits from doing that. We saw a 3x speed-up for a lot of our applications. So here, where we're post-processing data and we have to make decisions about race strategy, time is of the essence, and a 3x reduction in processing time really matters.
We also were able to go from three racks of equipment down to two racks, and the storage efficiency of the HPE SimpliVity platform, with 20-to-1 ratios, allowed us to eliminate a rack. That actually saved a hundred thousand dollars a year in freight costs by shipping less equipment. Then there are things like backup: mistakes happen. Sometimes the user makes a mistake. So for example, a race engineer could load the wrong data map into one of our simulations, and we could restore that VDI through SimpliVity backup in 90 seconds. This enables engineers to focus on the car and make better decisions without having downtime. And we send two IT guys to every race. They're managing 60 users, a really diverse environment, juggling a lot of balls, and having a simple management platform like HPE SimpliVity allows them to be very effective and to work quickly. So all of those benefits were a huge step forward relative to the legacy infrastructure that we used to run at the edge. >> Yeah, so you had the nice Petri dish in the factory. So it sounds like your number one KPI is speed, to help shave seconds of time, but also cost, and just the simplicity of setting up the infrastructure. >> Yeah, it's speed, speed, speed. We want applications to absolutely fly, to get to actionable results quicker, get answers from our simulations quicker. The other area where speed's really critical is that our applications are also evolving prototypes: the models are getting bigger, the simulations are getting bigger, and they need more and more resource. Being able to spin up resource and provision things without being a bottleneck is a big challenge, and SimpliVity gives us the means of doing that. >> So did you consider any other options, or was it because of the factory knowledge that HCI was very clearly the option? What did you look at?
>> Yeah, so we have over five years of experience in the factory; we eliminated all of our legacy infrastructure five years ago, and the benefits I've described at the track we first saw in the factory. At the track we have a three-year operational life cycle for our equipment. 2017 was the last year we had legacy, and as we were building for 2018, it was obvious that hyperconverged was the right technology to introduce, and we'd had years of experience in the factory already. The benefits that we see with hyperconverged actually mattered even more at the edge, because our operations are so much more pressurized and time is even more of the essence. So speeding everything up at the really pointy end of our business was really critical. It was an obvious choice. >> Why SimpliVity? Why did you choose HPE SimpliVity? >> Yeah, so when we first heard about hyperconverged, way back, in the factory we had a legacy infrastructure: overly complicated, too slow, too inflexible, too expensive. And we stepped back and said, there has to be a smarter way of operating. We went out and challenged our technology partners, and we wanted to learn about hyperconvergence and whether the hype was real or not. So we ran some PoCs and benchmarking, and the PoCs were really impressive: all these speed and agility benefits. And HPE, for our use cases, was the clear winner in the benchmarks. So based on that, we made an initial investment in the factory. We moved about 150 VMs and 150 VDI onto it, and then as we've seen all the benefits, we've successively invested, and we now have an estate in the factory of about 800 VMs and about 400 VDI. So it's been a great platform, and it's allowed us to really push boundaries and give the business the service it expects.
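The 20-to-1 storage efficiency Matt cites comes from deduplication: identical data blocks across backups are stored once and referenced many times. A minimal content-hashing sketch of how such a ratio arises; this is an illustration, not HPE's actual implementation:

```python
import hashlib

def dedupe_ratio(blocks):
    """Ratio of logical size to physical size when identical blocks
    are stored only once (content-addressed by their SHA-256 hash)."""
    logical = sum(len(b) for b in blocks)
    unique = {hashlib.sha256(b).hexdigest(): len(b) for b in blocks}
    physical = sum(unique.values())
    return logical / physical

# 100 backup blocks, but only 5 distinct payloads, e.g. nightly
# backups of VMs whose data barely changes between runs.
blocks = [bytes([i % 5]) * 4096 for i in range(100)]
print(f"{dedupe_ratio(blocks):.0f}:1")  # 20:1
```

With real VM estates the achievable ratio depends on how much data actually changes between backups; a highly standardized, repetitive environment like the trackside stack Matt describes is the favorable case.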
>> So was the time in which you were able to go from data to insight to recommendation, or edict, compressed? You kind of indicated that, but... >> So we pull telemetry from the car and we post-process it, and that post-processing time is really very time consuming. We went from eight or nine minutes for some of the simulations down to just two minutes. So we saw big, big reductions in time, and ultimately that meant an engineer could understand what the car was doing during a practice session, recommend a tweak to the configuration or setup of it, and just get more actionable insight quicker. And it ultimately helps get a better car quicker. >> Such a great example. How are you guys feeling about the season, Matt? What's the team's sentiment? >> Yeah, I think we're optimistic. We have a new driver lineup: Max Verstappen carries on with the team and Sergio joins the team. So we're really excited about this year, and we want to go and win races. >> Great. Matt, good luck this season and going forward, and thanks so much for coming back in theCube. Really appreciate it. >> It's my pleasure. Great talking to you again. >> Okay, now we're going to bring back Omer for a quick summary, so keep it right there. >> Narrator: Without IT solutions from HPE, we can't drive those simulations: CFD, aerodynamics. That would undermine the simulations. Being software defined, we can bring new apps into play. We can bring in new VMs; storage, networking, all of that can be highly optimized. It is a hugely beneficial partnership for us. We're able to be at the cutting edge of technology in a highly stressed environment. There is no bigger challenge than Formula One. >> Okay, we're back with Omer. Hey, what did you think about that interview with Matt? >> Great. I have to tell you, I'm a big Formula One fan, and they are one of my favorite customers.
So obviously one of the biggest use cases, as you saw for Red Bull Racing, is trackside deployments. There are now 22 races in a season. These guys are jumping from one city to the next; they've got to pack up, move to the next city, and set up the infrastructure very, very quickly. An average Formula One car is running a thousand-plus sensors that are generating a ton of data trackside. It needs to be collected very quickly, it needs to be processed very quickly, and then sometimes, believe it or not, snapshots of this data need to be sent back to the Red Bull factory, back at the data center. What does this all need? It needs reliability. It needs compute power in a very short form factor. And it needs agility: quick to set up, quick to go, quick to recover. And then in post-processing, they need to have CPU density so they can pack more VMs out at the edge to be able to do that processing. And we accomplish that for the Red Bull Racing guys with basically two SimpliVity nodes that are running trackside and moving with them from one race to the next race, to the next race. And every time those SimpliVity nodes connect up to the data center, connect to a satellite, they're backing up back to their data center. They're sending snapshots of data back to the data center, essentially making their job a whole lot easier, where they can focus on racing and not on troubleshooting virtual machines. >> Red Bull Racing and HPE SimpliVity: a great example. It's agile, it's cost efficient, and it shows a real impact. Thank you very much, I really appreciate those summary comments. >> Thank you, Dave. Really appreciate it. >> All right. And thank you for watching. This is Dave Vellante.
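The split-second strategy calls Matt describes above lean on Monte Carlo simulation: run thousands of randomized race outcomes for each candidate tactic and pick the one with the best expected finishing time. A self-contained toy version; every number here (lap times, pit loss, degradation, failure odds) is invented for illustration and is nothing like the team's real models:

```python
import random

def expected_race_time(pit_now, laps=20, trials=10_000, seed=42):
    """Toy Monte Carlo of a safety-car call: pit now (cheap stop under
    the safety car) or stay out on worn tires and risk a costlier
    full-speed stop later. Returns the mean race time over many trials."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        time = 11.0 if pit_now else 0.0  # reduced pit loss under safety car
        worn = not pit_now               # staying out means old tires
        for lap in range(laps):
            time += 90.0 + (0.25 * lap if worn else 0.0)  # tire degradation
            if worn and rng.random() < 0.03:  # tires give up: forced stop
                time += 22.0                  # full pit loss at racing speed
                worn = False
        total += time
    return total / trials

print(f"pit now:  {expected_race_time(True):.1f}s")
print(f"stay out: {expected_race_time(False):.1f}s")
```

Real teams run far richer models (traffic, weather, competitor behavior) against live telemetry, but the shape is the same: simulate many futures per option, compare expected outcomes, and decide in seconds.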

Published Date : Mar 30 2021



Omer Asad, HPE ft Matt Cadieux, Red Bull Racing full v1 (UNLISTED)


 

(upbeat music) >> Edge computing is projected to be a multi-trillion dollar business. It's hard to really pinpoint the size of this market, let alone fathom the potential of bringing software, compute, storage, AI and automation to the edge and connecting all that to clouds and on-prem systems. But what is the edge? Is it factories? Is it oil rigs, airplanes, windmills, shipping containers, buildings, homes, race cars? Well, yes, and so much more. And what about the data? For decades we've talked about the data explosion. I mean, it's mind-boggling, but guess what: we're going to look back in 10 years and laugh at what we thought was a lot of data in 2020. Perhaps the best way to think about the edge is not as a place but as a when: the most logical opportunity to process the data. And maybe it's the first opportunity to do so, where it can be decrypted and analyzed at very low latencies. That defines the edge. And so by locating compute as close as possible to the sources of data, to reduce latency and maximize your ability to get insights and return them to users quickly, maybe that's where the value lies. Hello everyone, and welcome to this CUBE Conversation. My name is Dave Vellante, and with me to noodle on these topics is Omer Asad, VP and GM of Primary Storage and Data Management Services at HPE. Hello Omer, welcome to the program. >> Thanks Dave. Thank you so much. Pleasure to be here. >> Yeah, great to see you again. So how do you see the edge in the broader market shaping up? >> Dave, I think that's a super important question, and I think your ideas are quite aligned with how we think about it. I personally think enterprises are accelerating their digitization and asset collection and data collection. Typically, especially in a distributed enterprise, they're trying to get to their customers; they're trying to minimize the latency to their customers.
So especially if you look across industries: manufacturing, which has distributed factories all over the place, is going through a lot of factory transformations where they're digitizing their factories. That means a lot more data is now being generated within their factories. A lot of robot automation is going on that requires a lot of compute power to go out to those particular factories, which are going to generate their data out there. We've got insurance companies and banks that are acquiring and interviewing and gathering more customers out at the edge, and for that they need a lot more distributed processing out at the edge. What this is requiring, and what we've seen across analysts as a common consensus, is that more than 50% of an enterprise's data, especially if they operate globally around the world, is going to be generated out at the edge. What does that mean? New data is generated at the edge that needs to be stored. It needs to be processed. Data which is not required needs to be thrown away or classified as not important. And then it needs to be moved for DR purposes, either to a central data center or just to another site. So overall, in order to give the best possible experience for manufacturing and retail, especially in distributed enterprises, people are generating more and more data-centric assets out at the edge. And that's what we see in the industry. >> Yeah, we're definitely aligned on that. There's some great points there. And so now, okay, you think about all this diversity: what's the right architecture for these multi-site deployments, ROBO, edge? How do you look at that? >> Oh, excellent question, Dave. Every customer that we talk to wants simplicity, and no pun intended, because SimpliVity resonates with a simplistic edge-centric architecture, right? Let's take a few examples. You've got large global retailers; they have hundreds of retail stores around the world that are generating data, that are producing data.
Then you've got insurance companies, then you've got banks. So when you look at a distributed enterprise, how do you deploy in a very simple and easy manner, easy to lifecycle, easy to mobilize equipment out at the edge? What are some of the challenges that these customers deal with? You don't want to send a lot of IT staff out there, because that adds cost. You don't want to have islands of data and islands of storage in remote sites, because that adds a lot of state outside of the data center that needs to be protected. And then, last but not least, how do you push lifecycle-based applications, new applications, out at the edge in a very simple-to-deploy manner? And how do you protect all this data at the edge? So the right architecture, in my opinion, needs to be extremely simple to deploy: storage, compute and networking out towards the edge in a hyperconverged environment. So we agree upon that; it's a very simple-to-deploy model. But then comes: how do you deploy applications on top of that? How do you manage these applications on top of that? How do you back up these applications back towards the data center? All of this keeping in mind that it has to be as zero touch as possible. We at HPE believe that it needs to be extremely simple: just give me two cables, a network cable and a power cable, fire it up, connect it to the network, push its state from the data center, and back up its state from the edge back into the data center. Extremely simple. >> It's got to be simple, 'cause you've got so many challenges. You've got physics that you have to deal with, you have latency to deal with, you've got RPO and RTO, and if something goes wrong you've got to be able to recover quickly. So that's great, thank you for that. Now, you guys have some hard news. What is new from HPE in this space? >> Excellent question.
So from a deployment perspective, HPE SimpliVity is just exploding like crazy, especially as distributed enterprises adopt it as their standardized edge architecture, right? It's an HCI box that has got storage, compute and networking all in one. But now, what we have done is not only can you deploy applications all from your standard vCenter interface from a data center; what we have now added is the ability to back up to the cloud right from the edge. You can also back up all the way back to your core data center. All of the backup policies are fully automated and implemented in the distributed file system that is the heart and soul of the SimpliVity installation. In addition to that, customers now do not have to buy any third-party software: backup is fully integrated in the architecture, and it's very efficient. In addition to that, now you can back up straight to the cloud, or you can back up to a central high-end backup repository which is in your data center. And last but not least, we have a lot of customers that are pushing the limit in their application transformation. So where we previously were running VMware deployments out at the edge sites, we have now also added both stateful and stateless container orchestration, as well as data protection capabilities for containerized applications out at the edge. So we have a lot of customers that are now deploying containers, rapid manufacturing containers, to process data out at remote sites. And that allows us to not only protect those stateful applications but back them up into the central data center. >> I saw in that chart there was a line, "no egress fees." That's a pain point for a lot of CIOs that I talk to; they grit their teeth at those fees. So can you comment on that? >> Excellent question. I'm so glad you brought that up and picked up on that point. So along with SimpliVity, we have the whole GreenLake as-a-service offering as well, right?
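The egress-fee point raised here is easy to quantify: cloud providers typically charge per gigabyte moved out of the cloud, so every restore has a price. A back-of-the-envelope sketch with illustrative, made-up rates (not a quote for any provider or for HPE's service):

```python
def yearly_restore_cost(data_gb, egress_per_gb, restores_per_year):
    """Annual cost of pulling backup data back out of a cloud
    that charges a per-GB egress fee."""
    return data_gb * egress_per_gb * restores_per_year

# e.g. a 2 TB edge-site image restored monthly at a $0.09/GB egress rate
with_fees = yearly_restore_cost(2048, 0.09, 12)
no_fees = yearly_restore_cost(2048, 0.00, 12)  # egress included in service
print(f"with egress fees: ${with_fees:,.2f}/year; included: ${no_fees:,.2f}/year")
```

The point of a no-egress-fee backup service is that the second number, not the first, is what a restore actually costs you, regardless of how often you recover.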
So what that means, Dave, is that we can literally provide our customers edge as a service, when you complement that with Aruba wired and wireless infrastructure that goes at the edge and the hyperconverged infrastructure, as part of SimpliVity, that goes at the edge. One of the things that was missing with cloud backups is that every time you back up to the cloud, which is a great thing by the way, anytime you restore from the cloud there is that egress fee, right? So as a result of that, as part of the GreenLake offering, we have a cloud backup service natively offered as part of HPE, which is included in your HPE SimpliVity edge-as-a-service offering. So now not only can you back up into the cloud from your edge sites, but you can also restore back without any egress fees from HPE's data protection service. Either you can restore it back into your data center or you can restore it back towards the edge site, and because the infrastructure is so easy to deploy and centrally lifecycle-manage, it's very mobile. So if you want to deploy and recover to a different site, you could also do that. >> Nice. Hey Omer, can you double-click a little bit on some of the use cases that customers are choosing SimpliVity for, particularly at the edge, and maybe talk about why they're choosing HPE? >> Excellent question. So one of the major use cases that we see, Dave, is obviously easy to deploy and easy to manage in a standardized form factor, right? A lot of these customers, for example: we have a large retailer with hundreds of stores across the US, right? Now, you cannot send service staff to each of these stores, and their data center is essentially just a closet for these guys, right? So how do you have a standardized deployment?
So standardized deployment from the data center, which you can literally push out: you connect a network cable and a power cable and you're up and running, and then automated backup, elimination of backup state, and DR from the edge sites into the data center. So that's one of the big use cases: to rapidly deploy new stores, bring them up in a standardized configuration, both from a hardware and a software perspective, with the ability to back up and recover them instantly. That's one large use case. The second use case that we see actually refers to a comment that you made in your opener, Dave, which is that a lot of these customers are generating a lot of the data at the edge. This is robotics automation that is going up in manufacturing sites. These are racing teams that are out at the edge doing post-processing of their cars' data. At the same time there are disaster recovery use cases where you have campsites and local agencies that go out there for humanity's benefit, and they move from one site to the other. It's a very, very mobile architecture that they need. So those are just a few cases where we were deployed. There is a lot of data collection and a lot of mobility involved in these environments, so you need to be quick to set up, quick to back up, quick to recover, and essentially you're on to your next move. >> You seem pretty pumped up about this new innovation, and why not? >> It is, especially because it has been thought through with edge in mind, and edge has to be mobile. It has to be simple. And especially as we have lived through this pandemic, which I hope we see the tail end of in at least 2021, or at least 2022. One of the most common use cases that we saw, and this was an accidental discovery: a lot of the retail sites could not go out to service their stores, because mobility is limited in these strange times that we live in. So from a central data center you're able to deploy applications, and you're able to recover applications.
And a lot of our customers said, hey, I don't have enough space in my data center to back up, do you have another option? So then we rolled out this update release to SimpliVity where, from the edge site, you can now back up directly to our backup service, which is offered on a consumption basis to the customers, and they can recover that anywhere they want. >> Fantastic. Omer, thanks so much for coming on the program today. >> It's a pleasure, Dave. Thank you. >> All right, awesome to see you. Now let's hear from Red Bull Racing, an HPE customer that's actually using SimpliVity at the edge. (engine revving) >> Narrator: Formula One is a constant race against time, chasing tenths of seconds. (upbeat music) >> Okay, we're back with Matt Cadieux, who is the CIO of Red Bull Racing. Matt, it's good to see you again. >> Great to see you, Dave. >> Hey, we're going to dig in to a real-world example of using data at the edge, in near real time, to gain insights that really lead to competitive advantage. But first, Matt, tell us a little bit about Red Bull Racing and your role there. >> Sure. So I'm the CIO at Red Bull Racing, and we're based in Milton Keynes in the UK. The main job for us is to design a race car, to manufacture the race car, and then to race it around the world. So as CIO, the IT group needs to develop the applications used in design, manufacturing, and racing. We also need to supply all the underlying infrastructure and also manage security. So it's a really interesting environment that's all about speed. So this season we have 23 races, and we need to tear the car apart and rebuild it to a unique configuration for every individual race. And we're also designing and making components targeted for races. So 23 immovable deadlines, this big evolving prototype to manage with our car. But we're also improving all of our tools and methods and software that we use to design, make, and race the car.
So we have a big can-do attitude in the company around continuous improvement. And the expectations are that we continuously make the car faster, that we're winning races, that we improve our methods in the factory and our tools. And so for IT it's really unique in that we can be part of that journey and provide a better service. It's also a big challenge to provide that service and to give the business the agility it needs. So my job is really to make sure we have the right staff, the right partners, the right technical platforms, so we can live up to expectations. >> And Matt, that tear down and rebuild for 23 races, is that because each track has its own unique signature that you have to tune to, or are there other factors involved? >> Yeah, exactly. Every track has a different shape. Some have lots of straights, some have lots of curves, and lots are in between. The track surface is very different, and the impact that has on tires, the temperature and the climate is very different. Some are hilly, some have big curbs that affect the dynamics of the car. So all that means in order to win you need to micromanage everything and optimize it for any given race track. >> COVID has of course been brutal for sports. What's the status of your season? >> So this season we knew that COVID was here, and we're doing 23 races knowing we have COVID to manage. As a premium sporting team, with proper bubbles, we've put health and safety and social distancing into our environment, and we're able to operate by doing things in a safe manner. We have some special exemptions in the UK, so, for example, when people return from overseas they do not have to quarantine for two weeks, but they get tested multiple times a week and we know they're safe. So we're racing, we're dealing with all the hassle that COVID gives us.
And we are really hoping for a return to normality sooner instead of later, where we can get fans back at the track and really go racing and have the spectacle where everyone enjoys it. >> Yeah, that's awesome. So important for the fans, but also all the employees around that ecosystem. Talk about some of the key drivers in your business and some of the key apps that give you competitive advantage to help you win races. >> Yeah. So in our business, everything is all about speed. So the car obviously needs to be fast, but also all of our business operations need to be fast. We need to be able to design a car, and it's all done in the virtual world, but the virtual simulations and designs need to correlate to what happens in the real world. So all of that requires a lot of expertise to develop the simulations, the algorithms, and have all the underlying infrastructure that runs it quickly and reliably. In manufacturing, we have cost caps and financial controls by regulation. We need to be super efficient and control material and resources. So ERP and MES systems are running and helping us do that. And then at the race track itself, we have hundreds of decisions to make on a Friday and Saturday as we're fine-tuning the final configuration of the car. And here again, we rely on simulations and analytics to help do that. And then during the race, we have split seconds, literally seconds, to alter our race strategy if an event happens. So if there's an accident and the safety car comes out, or the weather changes, we revise our tactics, and we're running Monte Carlo simulations, for example, and using experienced engineers with simulations to make a data-driven decision, and hopefully a better one, and faster than our competitors. All of that needs IT to work at a very high level. >> Yeah, it's interesting.
I mean, as a lay person, historically when I think about technology in car racing, of course I think about the mechanical aspects of a self-propelled vehicle, the electronics and the like, but not necessarily the data. But the data's always been there, hasn't it? I mean, maybe in the form of tribal knowledge, if you are somebody who knows the track and where the hills are, and experience and gut feel. But today you're digitizing it and you're processing it in close to real time. It's amazing. >> Yeah, exactly right. The car's instrumented with sensors, we post-process, and we are doing video image analysis, and we're looking at our car and our competitors' cars. So there's a huge amount of very complicated models that we're using to optimize our performance and to continuously improve our car. Yeah, the data and the applications that leverage it are really key, and that's a critical success factor for us. >> So let's talk about your data center at the track, if you will. I mean, if I can call it that. Paint a picture for us: what does that look like? >> So we have to send a lot of equipment to the track, at the edge. And even though we have a really great wide area network link back to the factory, and there's cloud resources, a lot of the tracks are very old. You don't have hardened infrastructure, you don't have ducts that protect cabling, for example, and you can lose connectivity to remote locations. So the applications we need to operate the car and to make really critical decisions, all that needs to be at the edge where the car operates. So historically we had three racks of equipment, like I said, legacy infrastructure, and it was really hard to manage, to make changes; it was too inflexible. There were multiple panes of glass, and it was too slow. It didn't run our applications quickly. It was also too heavy and took up too much space when you're cramped into a garage with lots of environmental constraints.
So we'd introduced hyper-convergence into the factory and seen a lot of great benefits. And when it came time to refresh our infrastructure at the track, we stepped back and said, there's a lot smarter way of operating. We can get rid of all the slow and inflexible, expensive legacy and introduce hyper-convergence. And we saw really excellent benefits for doing that. We saw a three X speed-up for a lot of our applications. So here, where we're post-processing data and we have to make decisions about race strategy, time is of the essence; the three X reduction in processing time really matters. We also were able to go from three racks of equipment down to two racks of equipment, and the storage efficiency of the HPE SimpliVity platform, with 20 to one ratios, allowed us to eliminate a rack. And that actually saved a $100,000 a year in freight costs by shipping less equipment. Things like backup: mistakes happen. Sometimes a user makes a mistake. So for example, a race engineer could load the wrong data map into one of our simulations, and we could restore that VDI through SimpliVity backup in 90 seconds. And this enables engineers to focus on the car and to make better decisions without having downtime. And we send two IT guys to every race. They're managing 60 users, a really diverse environment, juggling a lot of balls, and having a simple management platform like HPE SimpliVity allows them to be very effective and to work quickly. So all of those benefits were a huge step forward relative to the legacy infrastructure that we used to run at the edge. >> Yeah. So you had the nice Petri dish in the factory. So it sounds like your goals, obviously the number one KPI is speed, to help shave seconds off the time, but also cost, just the simplicity of setting up the infrastructure. >> That's exactly right. It's speed, speed, speed. So we want applications that absolutely fly, to get to actionable results quicker, get answers from our simulations quicker.
The other area where speed's really critical is our applications are also evolving prototypes, and the models are getting bigger, the simulations are getting bigger, and they need more and more resource. And being able to spin up resource and provision things without being a bottleneck is a big challenge, and SimpliVity gives us the means of doing that. >> So did you consider any other options, or, because you had the factory knowledge, was HCI very clearly the option? What did you look at? >> Yeah, so we have over five years of experience in the factory, and we eliminated all of our legacy infrastructure five years ago. And the benefits I've described at the track, we saw that in the factory. At the track we have a three-year operational life cycle for our equipment. 2017 was the last year we had legacy, and as we were building for 2018, it was obvious that hyper-converged was the right technology to introduce. And we'd had years of experience in the factory already. And the benefits that we see with hyper-converged actually mattered even more at the edge, because our operations are so much more pressurized. Time is even more of the essence. And so speeding everything up at the really pointy end of our business was really critical. It was an obvious choice. >> Why SimpliVity? Why'd you choose HPE SimpliVity? >> Yeah. So when we first heard about hyper-converged, way back, in the factory we had a legacy infrastructure: overly complicated, too slow, too inflexible, too expensive. And we stepped back and said there has to be a smarter way of operating. We went out and challenged our technology partners, we learned about hyper-convergence, and we didn't know if the hype was real or not. So we underwent some PoCs and benchmarking, and the PoCs were really impressive. And all these speed and agility benefits we saw, and HPE, for our use cases, was the clear winner in the benchmarks. So based on that, we made an initial investment in the factory.
We moved about 150 VMs and 150 VDIs into it. And then, as we've seen all the benefits, we've successfully invested, and we now have an estate in the factory of about 800 VMs and about 400 VDIs. So it's been a great platform, and it's allowed us to really push boundaries and give the business the service it expects. >> Awesome. Just coming back to the metrics for a minute: so you're running Monte Carlo simulations in real time, and sort of near real time. And essentially, if I understand it, that's what-ifs and the probability of the outcome, and then the human's got to say, okay, do this, right? Was the time in which you were able to go from data to insight to recommendation or edict compressed? You kind of indicated that it was. >> Yeah, that was accelerated. And so in that use case, what we're trying to do is predict the future. Before any event happens, you're doing what-ifs, and if it were to happen, what would you probabilistically do? So that simulation we've been running for a while, but it gets better and better as we get more knowledge. And so we were able to accelerate that with SimpliVity. But there are other use cases too. So we also offload telemetry from the car and we post-process it, and that reprocessing time is very time consuming. And we went from eight or nine minutes for some of the simulations down to just two minutes. So we saw big, big reductions in time. And ultimately that meant an engineer could understand what the car was doing in a practice session, recommend a tweak to the configuration or setup of it, and just get more actionable insight quicker. And it ultimately helps get a better car quicker. >> Such a great example. How are you guys feeling about the season, Matt? What's the team's sentiment? >> I think we're optimistic. We think, from our simulations, that we have a great car. We have a new driver lineup.
We have Max Verstappen, who carries on with the team, and Sergio Perez joins the team. So we're really excited about this year, and we want to go and win races. And I think with COVID, people are just itching to get back to a little degree of normality, and going racing again, even though there's no fans, gets us into a degree of normality. >> That's great. Matt, good luck this season and going forward, and thanks so much for coming back in theCUBE. Really appreciate it. >> It's my pleasure. Great talking to you again. >> Okay. Now we're going to bring back Omer for a quick summary. So keep it right there. >> Narrator: That's where the data comes face to face with the real world. >> Narrator: Working with Hewlett Packard Enterprise is a hugely beneficial partnership for us. We're able to be at the cutting edge of technology in a highly technical, highly stressed environment. There is no bigger challenge than Formula One. (upbeat music) >> Being in the car and driving it on the limit, that is the best thing out there. >> Narrator: It's that innovation and creativity that ultimately achieves winning. >> Okay. We're back with Omer. Hey, what did you think about that interview with Matt? >> Great. I have to tell you, I'm a big Formula One fan, and they are one of my favorite customers. So obviously one of the biggest use cases, as you saw for Red Bull Racing, is track-side deployments. There are now 22 races in a season. These guys are jumping from one city to the next; they've got to pack up, move to the next city, set up the infrastructure very, very quickly. An average Formula One car is running a thousand-plus sensors that generate a ton of data on the track side that needs to be collected very quickly, and it needs to be processed very quickly. And then sometimes, believe it or not, snapshots of this data need to be sent back to the Red Bull factory at the data center. What does this all need? It needs reliability.
It needs compute power in a very short form factor. And it needs agility: quick to set up, quick to go, quick to recover. And then in post-processing, they need to have CPU density so they can pack more VMs out at the edge to be able to do that processing. And we accomplished that for the Red Bull Racing guys with basically two SimpliVity nodes that are running track side and moving with them from one race to the next race to the next race. And every time those SimpliVity nodes connect up over a satellite, they're backing up to their data center. They're sending snapshots of data back to the data center, essentially making their job a whole lot easier, where they can focus on racing and not on troubleshooting virtual machines. >> Red Bull Racing and HPE SimpliVity. Great example. It's agile, it's cost-efficient, and it shows a real impact. Thank you very much, Omer. I really appreciate those summary comments. >> Thank you, Dave. Really appreciate it. >> All right. And thank you for watching. This is Dave Vellante for theCUBE. (upbeat music)

Published Date : Mar 5 2021

Matt Cadieux, CIO Red Bull Racing v2


 

(mellow music) >> Okay, we're back with Matt Cadieux, who is the CIO of Red Bull Racing. Matt, it's good to see you again. >> Yeah, great to see you, Dave. >> Hey, we're going to dig into a real world example of using data at the edge and in near real-time to gain insights that really lead to competitive advantage. But first Matt, tell us a little bit about Red Bull Racing and your role there. >> Sure, so I'm the CIO at Red Bull Racing. And at Red Bull Racing we're based in Milton Keynes in the UK. And the main job for us is to design a race car, to manufacture the race car, and then to race it around the world. So as CIO, the IT team needs to develop the applications used for the design, manufacturing, and racing. We also need to supply all the underlying infrastructure, and also manage security. So it's a really interesting environment that's all about speed. So this season we have 23 races, and we need to tear the car apart, and rebuild it to a unique configuration for every individual race. And we're also designing and making components targeted for races. So 23 immovable deadlines, this big evolving prototype to manage with our car. But we're also improving all of our tools and methods and software that we use to design and make and race the car. So we have a big can-do attitude in the company, around continuous improvement. And the expectations are that we continue to make the car faster, that we're winning races, that we improve our methods in the factory and our tools. And so for IT it's really unique in that we can be part of that journey and provide a better service. It's also a big challenge to provide that service and to give the business the agility it needs. So my job is really to make sure we have the right staff, the right partners, the right technical platforms, so we can live up to expectations. >> And Matt, that tear down and rebuild for 23 races.
Is that because each track has its own unique signature that you have to tune to, or are there other factors involved there? >> Yeah, exactly. Every track has a different shape. Some have lots of straights, some have lots of curves, and lots are in between. The track's surface is very different, and the impact that has on tires, the temperature and the climate is very different. Some are hilly, some have big curbs that affect the dynamics of the car. So all that, in order to win, you need to micromanage everything and optimize it for any given race track. >> And, you know, COVID has, of course, been brutal for sports. What's the status of your season? >> So this season we knew that COVID was here and we're doing 23 races knowing we have COVID to manage. And as a premium sporting team we've formed bubbles, we've put health and safety and social distancing into our environment. And we're able to operate by doing things in a safe manner. We have some special exemptions in the UK. So for example, when people return from overseas they do not have to quarantine for two weeks, but they get tested multiple times a week and we know they're safe. So we're racing, we're dealing with all the hassle that COVID gives us. And we are really hoping for a return to normality sooner instead of later where we can get fans back at the track and really go racing and have the spectacle where everyone enjoys it. >> Yeah, that's awesome. So important for the fans but also all the employees around that ecosystem. Talk about some of the key drivers in your business and some of the key apps that give you competitive advantage to help you win races. >> Yeah, so in our business everything is all about speed. So the car obviously needs to be fast but also all of our business operations need to be fast. We need to be able to design our car and it's all done in the virtual world, but the virtual simulations and designs need to correlate to what happens in the real world.
So all of that requires a lot of expertise to develop the simulations, the algorithms, and have all the underlying infrastructure that runs it quickly and reliably. In manufacturing, we have cost caps and financial controls by regulation. We need to be super efficient and control material and resources. So ERP and MES systems are running, helping us do that. And at the race track itself, we have hundreds of decisions to make on a Friday and Saturday as we're fine-tuning the final configuration of the car. And here again, we rely on simulations and analytics to help do that. And then during the race, we have split seconds, literally seconds, to alter our race strategy if an event happens. So if there's an accident and the safety car comes out or the weather changes, we revise our tactics. And we're running Monte Carlo simulations, for example, and using experienced engineers with simulations to make a data-driven decision, and hopefully a better one, and faster than our competitors. All of that needs IT to work at a very high level. >> You know it's interesting, I mean, as a lay person, historically when I think about technology and car racing, of course, I think about the mechanical aspects of a self-propelled vehicle, the electronics and the like, but not necessarily the data. But the data's always been there, hasn't it? I mean, maybe in the form of tribal knowledge, if it's somebody who knows the track and where the hills are, and experience and gut feel. But today you're digitizing it and you're processing it in close to real-time. It's amazing. >> Yeah, exactly right. The car is instrumented with sensors, we post-process, we're doing video image analysis, and we're looking at our car and our competitor's car. So there's a huge amount of very complicated models that we're using to optimize our performance and to continuously improve our car. Yeah, the data and the applications that leverage it are really key. And that's a critical success factor for us.
>> So let's talk about your data center at the track, if you will, I mean, if I can call it that. Paint a picture for us. What does that look like? >> Sure. So we have to send a lot of equipment to the track, at the edge. And even though we have a really great wide area network link back to the factory, and there's cloud resources, a lot of the tracks are very old. You don't have hardened infrastructure, you don't have ducts that protect cabling, for example, and you can lose connectivity to remote locations. So the applications we need to operate the car and to make really critical decisions, all that needs to be at the edge where the car operates. So historically we had three racks of equipment, legacy infrastructure, and it was really hard to manage, to make changes; it was too inflexible. There were multiple panes of glass, and it was too slow. It didn't run our applications quickly. It was also too heavy and took up too much space when you're cramped into a garage with lots of environmental constraints. So we'd introduced hyper-convergence into the factory and seen a lot of great benefits. And when it came time to refresh our infrastructure at the track, we stepped back and said there's a lot smarter way of operating. We can get rid of all this slow and inflexible, expensive legacy and introduce hyper-convergence. And we saw really excellent benefits for doing that. We saw a three X speed-up for a lot of our applications. So here, where we're post-processing data, and we have to make decisions about race strategy, time is of the essence, and a three X reduction in processing time really matters. We also were able to go from three racks of equipment down to two racks of equipment, and the storage efficiency of the HPE SimpliVity platform with 20 to one ratios allowed us to eliminate a rack. And that actually saved a $100,000 a year in freight costs by shipping less equipment. Things like backup: mistakes happen. Sometimes a user makes a mistake.
So for example, a race engineer could load the wrong data map into one of our simulations, and we could restore that VDI through SimpliVity backup in 90 seconds. And this enables engineers to focus on the car, to make better decisions without having downtime. And we send two IT guys to every race. They're managing 60 users, a really diverse environment, juggling a lot of balls, and having a simple management platform like HPE SimpliVity allows them to be very effective and to work quickly. So all of those benefits were a huge step forward relative to the legacy infrastructure that we used to run at the edge. >> Yes, so you had the nice Petri dish in the factory, so it sounds like your goals obviously, number one KPI is speed to help shave seconds off the time, but also cost. Just the simplicity of setting up the infrastructure is key. >> Yeah, that's exactly right. It's speed, speed, speed. So we want applications that absolutely fly, you know, get actionable results quicker, get answers from our simulations quicker. The other area where speed's really critical is our applications are also evolving prototypes, and the models are getting bigger, the simulations are getting bigger, and they need more and more resource. And being able to spin up resource and provision things without being a bottleneck is a big challenge. And SimpliVity gives us the means of doing that. >> So did you consider any other options, or was it because you had the factory knowledge, HCI was, you know, very clearly the option? What did you look at? >> Yeah, so we have over five years of experience in the factory and we eliminated all of our legacy infrastructure five years ago. And the benefits I've described at the track, we saw that in the factory. At the track, we have a three-year operational life cycle for our equipment. 2017 was the last year we had legacy.
As we were building for 2018, it was obvious that hyper-converged was the right technology to introduce. And we'd had years of experience in the factory already. And the benefits that we see with hyper-converged actually mattered even more at the edge because our operations are so much more pressurized. Time is even more of the essence. And so speeding everything up at the really pointy end of our business was really critical. It was an obvious choice. >> So why SimpliVity? Why did you choose HPE SimpliVity? >> Yeah, so when we first heard about hyper-converged, way back in the factory, we had a legacy infrastructure: overly complicated, too slow, too inflexible, too expensive. And we stepped back and said there has to be a smarter way of operating. We went out and challenged our technology partners. We learned about hyper-convergence. We didn't know if the hype was real or not. So we underwent some PoCs and benchmarking, and the PoCs were really impressive. And all these, you know, speed and agility benefits we saw, and HPE for our use cases was the clear winner in the benchmarks. So based on that we made an initial investment in the factory. We moved about 150 VMs and 150 VDIs into it. And then as we've seen all the benefits we've successfully invested, and we now have an estate in the factory of about 800 VMs and about 400 VDIs. So it's been a great platform and it's allowed us to really push boundaries and give the business the service it expects. >> Well, that's a fun story. So just coming back to the metrics for a minute. So you're running Monte Carlo simulations in real-time and sort of near real-time. >> Yeah. >> And so essentially, if I understand it, that's what-ifs and it's the probability of the outcome. And then somebody's got to make, >> Exactly. >> then a human's got to say, okay, do this, right. And so was the time in which you were able to go from data to insight to recommendation or edict compressed?
>> You kind of indicated that. >> Yeah, that was accelerated. And so in that use case, what we're trying to do is predict the future, and you're saying, well, before any event happens, you're doing what-ifs. Then if it were to happen, what would you probabilistically do? So, you know, that simulation we've been running for a while, but it gets better and better as we get more knowledge. And so we were able to accelerate that with SimpliVity. But there's other use cases too. So we offload telemetry from the car and we post-process it. And that reprocessing time really is very time consuming. And, you know, we went from eight or nine minutes for some of the simulations down to just two minutes. So we saw big, big reductions in time. And ultimately that meant an engineer could understand what the car was doing in a practice session, recommend a tweak to the configuration or setup of it, and just get more actionable insight quicker. And it ultimately helps get a better car quicker. >> Such a great example. How are you guys feeling about the season, Matt? What's the team's sentiment? >> Yeah, I think we're optimistic. We think, from our simulations, that we have a great car. We have a new driver lineup. We have Max Verstappen who carries on with the team and Sergio Perez joins the team. So we're really excited about this year and we want to go and win races. And I think with COVID people are just itching to get back to a little degree of normality, and, you know, going racing again, even though there's no fans, gets us into a degree of normality. >> That's great. Matt, good luck this season and going forward, and thanks so much for coming back in theCUBE. Really appreciate it. >> It's my pleasure. Great talking to you again. >> Okay, now we're going to bring back Omer for a quick summary. So keep it right there. (mellow music)

Published Date : Mar 4 2021


Michael Sotnick, Pure Storage & Rob Czarnecki, AWS Outposts | AWS re:Invent 2020 Partner Network Day


 

>> From around the globe, it's theCube, with digital coverage of AWS re:Invent 2020. Special coverage sponsored by the AWS Global Partner Network. >> Hi, welcome to theCube Virtual and our coverage of AWS re:Invent 2020, with special coverage of the APN partner experience. I'm John Furrier, your host. We are theCube Virtual; we can't be there in person, so we're remote. And our two next guests are from Pure Storage and AWS: Michael Sotnick, VP of Worldwide Alliances at Pure Storage, and Rob Czarnecki, principal product manager for AWS Outposts. Welcome to theCube. >> Wonderful to be here. Great to see you, and thanks for having us. >> Michael, great to see you. Pure, you guys have had some great momentum, earnings, and some announcements, and you have some new news. We're here at re:Invent, all part of AWS and Outposts, so I want to get into it right away. Talk about the relationship with AWS. I know you have some hot news that just came out in late November, and we're here at the event. All the talk is about new higher-level services, hybrid, edge. What are you guys doing? What's the story? >> Yeah, look, I've got to tell you, the partnership with AWS is a very high-profile and strategic partnership for Pure Storage. We've worked hard with our Cloud Block Store for AWS, which is an extensibility solution for Pure FlashArray into AWS. The big news, and one of the things we're most proud of, is the recent establishment of Pure being Service Ready and Outposts Ready, the first and only on-prem storage solution, and we're shoulder to shoulder with AWS as AWS takes Outposts into the data center. Now they're going after key workloads that we're well known for, and we're very excited to partner with AWS in that regard. >> Congratulations to Pure. We've been following you guys from the beginning, since inception, since it was founded as a startup, and now you're a growing public company on the next level of growth.

You guys were early on all this stuff, with flash, with software, and with cloud, so it's paying off. Rob, I want to get to Outposts, because this was probably the most controversial announcement I've covered at re:Invent in the past eight years. It really was the first sign that Andy was saying, "You know what? We're working backwards from the customers, and they're all talking hybrid. We're going to have Outposts." Give us the update. What kinds of workloads and verticals are seeing success with Outposts, now that it's part of the portfolio? How is it all working out? >> Absolutely, although I have to say I'd call it more exciting than controversial. We're so excited about the opportunities that Outposts opened for our customers. Customers have been asking us for years, "How can we bring AWS services to our data centers?" We thought about it for a long time, and until we defined the Outposts service, we really thought we could do better. What Outposts does is let us take those services that customers are familiar with and bring them to their data center. One of the really bright spots over the past year has been how many different industries and market segments have shown interest in Outposts. You have customers, for example, with data residency needs, those that have to do local data processing, those with latency needs on a specific workload that must run near their end users, or just folks trying to modernize their data center, and that's a journey; that transformation takes time. So Outposts works for all of those customers. One of the things that's become really clear to us is that to enable the success we think Outposts can have, we need to meet customers where they are. And one of the fantastic things about the Outposts Ready program is that many of those customers are using Pure, and they have Pure hardware. We sent an Outpost over to the Pure lab recently, and I have to tell you, a picture of those two racks next to each other looks really good.

>> You know, let me walk back my "controversial" comment: I meant it in the sense that that's when cloud really got big in the enterprise and you had to deal with hybrid. So I do think it's exciting, because the edge is a big theme here. Before I get into some of the Pure questions, can you share, on this edge piece and the hybrid piece, what's the customer need? When you talk to customers, and I know you guys really work backwards from the customer, what are their needs? What causes them to look at Outposts as part of their hybrid strategy? What's the key consideration? >> Yeah, so there are a couple of different needs, John. One, for example: we have Regions and Local Zones across the globe, but we're not everywhere, and there are data residency regulations that are becoming increasingly common. So customers come to us and say, "Look, I really need to run, for example, a financial services workload. It needs to be in Thailand, and you don't have a Region or Local Zone in Thailand." But we can get them an Outpost in the places where they need to be. So that requirement to keep data in place, whether by regulation or by contractual agreement, is a big driver. The other piece is that there's a tremendous amount of top-down executive sponsorship across enterprise customers to transform their operations and modernize their digital approach. But when they actually look at their estate, they see an awful lot of hardware, and it's a hard challenge to plan that migration. There could be a monolithic architecture that doesn't lend itself well to having part of the workload running in the Region and part running in their data center. When you can bring an Outpost right into that data center, it makes it much easier, because AWS is right there. With an Outpost, they can extend AWS to their data center, and that makes it so much easier for them to get started on their digital transformation.

>> Michael, this is the key trend. You guys saw it early: cloud operations on-premises. It becomes cloudified at that point, when you have DevOps on-premises and then pure cloud for bursting and all of that, and now you've got the edge exploding with growth and opportunity. What causes the customer to pick the Pure option on Outposts? What's the angle for you guys? Obviously storage: you've got data, and where there's no Region, an Outpost certainly stores data, and that's a requirement for a lot of global customers. What's the Pure angle on this? >> Yeah, I appreciate that, and I appreciate Rob's comments about what AWS sees in the wild in terms of Pure's footprint and the market share we've established as a company: over 11 years in business and over eight years of shipping product. What I would tell you is that one of the things a lot of people miss is the simplicity and consistency that are characteristic of the AWS experience and equally of the Pure experience, and that's really powerful. We were successful putting Pure into workloads that stayed on-premises for all the reasons Rob talked about: data gravity, regulatory issues, and application architectures that can't move to the public cloud. Our predictability, simplicity, and consistency really match what those customers were getting with other workloads they had in AWS. And so with AWS Outposts, that's really bringing the customer a single pane of glass to manage their entire environment. That's why we made the three-year investment in Outposts. As Rob said, our solution is set up and running in Pure's data center today, built on FlashBlade, our unstructured-data solution, delivering fantastic performance in AI and ML workloads. We see the same opportunity in backup and disaster recovery workloads and in analytics, and equally the opportunity to build architectures with FlashArray and our other storage solutions alongside Outposts in customers' data centers and bring them to market.

>> Real quick, just to follow up on that: what use cases are you seeing that are most successful with Outposts? And in general, how do you get your customers to integrate with the rest of their environment? Because the operating environment now is not just public cloud; it's cloud on-premises and everything else. >> What's cool, and Rob hit right on it, is the wide range of industries and the wide range of use cases and workloads that are finding themselves attracted to the Outposts offering. Without a doubt, people immediately think of AI and ML workloads and the importance of having high-performance storage as close as possible to a high-performance Outposts environment. But it doesn't stop there. Traditional virtualized database workloads that, for reasons of application architecture, aren't candidates to move to AWS's public cloud offering are a great fit for Outposts, and those are workloads we've always been successful with in the market. We see a great opportunity to build on that success as an Outposts partner.

>> Rob, I've got to ask: at the last re:Invent, when we were in person, I was at the replay party, and this guy, obviously a big-time engineer over there, opens his hand and shows me this little processor. I go to take a picture and he's freaking out: "Don't take a picture!" It was the big processor, I think the big monster, and it was just so small. You see the innovation in hardware; you guys have done a lot there, and that's cool. I'd like to get your thoughts on where the future is going, because you've got great hardware innovation, but you've also got the higher-level services with containers. I know you guys took your time; containers are super important. So how do you look at that? You've got the innovation in the hardware, check; then containers. How does that all fit in? You guys have been making a lot of investments in some of these cloud-native projects. What's your position on that? >> You know, it's all part of one common story, John: customers want an easy path to delivering impact for their business. You've heard us speak a lot over the past few years about how we're seeing two different types of customers. There are customers that love to get those foundational core building blocks and stitch them together in a creative way, but then you have more and more customers that want to operate at a different level, and that's okay. We want to support both of them, give both of them all the tools they need to put their resources toward what differentiates their business, and give them support at whatever level they need on the infrastructure side. It's fantastic that our combination of investments in hardware and services, now with Outposts, can be brought even closer to the customer. If you think about it that way, the possibilities become limitless.

>> Yeah, and it's not just the simplicity aspect; it was pretty beautiful the way it looks. It looks nice. Michael, I've got to ask you on your side: a couple of big announcements from Pure that we've been following. You already had the Pure as-a-Service announcement, and you bought Portworx, an acquisition for container management across the data center, including Outposts. Is Pure as-a-Service working with Outposts, and if so, how? And what's the consumption model for customers there? >> Yeah, thanks so much, John, and we appreciate you following us the way you do; it's meaningful and appreciated. Listen, customers have made it clear, and AWS has led the way, in terms of the consumption and experience expectations customers have. It's got to be consumable, they've got to pay for what they use, and it's got to be outcome-oriented, and we're doing that with Pure as-a-Service. We saw that early and have invested in Pure as-a-Service for our customers. We look at the way we acquired Outposts as a customer and partner of AWS, and that's exactly the same way customers can consume Pure: all of our solutions in a use-what-you-need, pay-for-what-you-use environment. One of the exciting things about the AWS partnership is that it's wide-ranging, and one of the things AWS has done in a world-class way is the Marketplace. So we're excited to share with this audience, on the back of a recent announcement, that Pure as-a-Service is available in the AWS Marketplace. Think about the simplicity and consistency that Pure and AWS deliver to the market: AWS customers demand that in the Marketplace, and we're proud to have our offerings there. Portworx has been in the Marketplace and will continue to be showcased from a container-management standpoint. So as those workloads increasingly become cloud-native, DevOps, containerized workloads, we've got a solution, end to end, to support that.

>> Great job, great insight. Congratulations to Pure; you're making some good moves. Rob, I want to get the final word here on Outposts. Everyone loves this product; it gets a lot of attention. It really puts the cloud operating model firmly in the on-premises world for Amazon, and it opens up a lot of good conversation, business opportunities, and technical integrations all around you. So what's your message to the ecosystem out there for Outposts? How do I work with you guys? How do I get involved? What are some of the opportunities? What's your position? >> You know, John, I think the best way to frame it is that we're just getting started. We've got our first year in the books. We've seen so many promising signals from customers and had so many interesting conversations that just weren't possible without Outposts. And working with partners like Pure and expanding our Outposts Ready program is just the beginning. We launched back in September, we've seen another meaningful set of partners come out here at re:Invent, and we're going to continue to double down both on the Outposts business and, specifically, on working with our partners. I think the key to unlocking the magic of Outposts is meeting customers where they are, and those customers are using our partners. There's no reason it shouldn't just work when they move their partner-based workload from their existing infrastructure right over to the Outpost.

>> All right, I'll leave it there. Michael Sotnick, the VP of Worldwide Alliances at Pure Storage: congratulations. It's easy to do alliances when you've got great product and technology, so congratulations. Rob Czarnecki, principal product manager for Outposts: we'll be speaking more with you throughout the next couple of weeks here at re:Invent Virtual. Thanks for coming; I appreciate it. >> Thank you. >> Thank you. >> Okay, this is theCube Virtual. We wish we could be there in person this year, but it's a virtual event over three weeks, and there will be lots of coverage. I'm John Furrier, your host. Thanks for watching.
Published Date : Dec 3 2020
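The placement decision Rob describes in this interview, keep a residency-bound workload in an in-country Region or Local Zone when one exists, and fall back to an Outpost in the customer's own data center when it doesn't (his Thailand example), can be sketched as a tiny helper. This is purely an illustrative sketch: the country table and the `choose_placement` function are invented for the example and are not an AWS API.

```python
# Illustrative sketch of the data-residency placement decision described in
# the interview. The IN_COUNTRY_INFRA table and choose_placement helper are
# hypothetical examples, not part of any AWS SDK.

# A tiny, incomplete map of country -> in-country AWS infrastructure.
IN_COUNTRY_INFRA = {
    "singapore": "region:ap-southeast-1",
    "japan": "region:ap-northeast-1",
    # Thailand had no Region or Local Zone at the time of this interview.
}

def choose_placement(workload_country: str) -> str:
    """Return where a residency-bound workload can run."""
    infra = IN_COUNTRY_INFRA.get(workload_country.lower())
    if infra is not None:
        # An in-country Region or Local Zone satisfies residency directly.
        return infra
    # No in-country Region/Local Zone: extend AWS to the customer's site.
    return "outpost:customer-data-center"

print(choose_placement("Thailand"))   # no in-country Region -> Outpost
print(choose_placement("Singapore"))  # in-country Region exists
```

The same shape of lookup-with-fallback covers the contractual-agreement case Rob mentions: the driver is simply whether compliant infrastructure exists where the data must stay.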



Clif Dorsey, The Warrell Corporation | CUBE Conversation, November 2020


 

from the cube studios in palo alto in boston connecting with thought leaders all around the world this is a cube conversation [Music] welcome to this cube conversation lisa martin here with one of dell technologies customers what a great time to be talking about a candy company clifton dorsey joins me the vp at world corporation clifton welcome to the cube all right thank you i appreciate being here so i know royal corporation does more than candy but you know here we are approaching the holidays and and i think all of us could just use a feel-good story about candy but i know you do more tell our audience a little bit more about where else from a corporation standpoint sure war el corporation has been in family and it's a family-owned business for 50-plus years we're a co-packer co-manufacturer so you won't really see a world candy bar out there but we really help bring to life a lot of innovative ideas around the candy and the good for you products for a lot of our partners so talk to us about you're obviously working with dell technologies give us uh an insight into your data center operations your it environment before the solution that we're going to be talking about what was that like physical on-prem cloud sure well our data center kind of falls right in line with the same time of year as a horror story when i first came here and got on board a lot of outdated equipment despair and equipment separated our backups were they were few and far between um the the equipment was so old that some of it wasn't even supported anymore we had antiquated systems but the biggest thing is the confidence level you have zero confidence that your system stay running your backup it was an everyday occurrence just to try to get a reliable backup for just a piece of the data let alone all of our um all of our data and information that we had that confidence factor is table stakes and especially now it's it's if a company can't have confidence that their data is secure and 
protected that's you know folks that can't might not be around tomorrow but talk to me a little bit about from a workload's perspective what were you protecting vm's erp systems give us a slice of that data in terms of its impact in the business sure and you're correct you know having that that confidence level in the systems is what you need our sql databases are in there our erp system's in there our file servers and at the time even our email server was in there so any one of those goes down it's a business impact you know you have to look at what can't you do when that happens on our manufacturing floor we're collecting a lot of data coming back off the floor for all of our folks and purchasing and procurement and our run times to make sure that we're hitting our our dates on time so even our shipping and receiving team needs to know how the floor is running to know if we're going to hit those ship dates and when to schedule the truck so it literally all correlates and all comes together and i imagine also not just does that involve every aspect of world's business internally but relationships with the partners that you are helping fulfill right that's correct because we're making their product if they can't have a product on the shelves that impacts them you know so we really have to do everything that we can to make sure our systems are up make sure everything is running and really fulfill their order and get it out there so you can have that product yeah that's why often we talk about data protection and brand reputation go hand in hand so talk to me when you came into world you must have seen nowhere to go but up talk to me about some of the things that you said we have got to for example aging physical infrastructure we've got to replace that we obviously have to be able to have reliable data protection because we have to have the confidence that we can enable our teams and uh our partners but what were some of the things that you said all right let me 
kind of get a phased approach here what are we gonna change and why are we gonna do it with dell technologies sure so looking at our business continuity plan you don't have a good dr so you have to start looking backwards of where do you start so we have zero backups our data's not protected we have zero confidence in it we go back a little farther our systems aren't really there for us to back up so it really started at the appliance level in our server level to get rid of all of the old information get a new subset of servers in so we have a new vxrail environment in it integrates great with the integrated appliance ties directly into it so we have backups we have everyday backups we have fast and speedy backups then we can offload those to an off site so we now truly have a full business continuity plan and a dr disaster recovery situation that's critical because i was reading your case study and where there was no dr before talk to me about the ability to to leverage the clouds of now with what you've implemented uh and the power protect series you've now got the ability to from a dr perspective i just want to understand that especially here we are you know towards the end of 2020 when there's been such a shift to remote operations what's that cloud benefit like from a dr standpoint it's been great um you know we were old tape backups so someone had to be here to switch out the old tapes you know and hope that how do we get them off site who's taking them home what vault are they living in you know with the remote aspect all of that worry goes away everything is offloaded everything goes into a cloud if there is a situation we just work through our cloud environment we can reinstate business literally a click of a switch yeah i was looking at some of your statistics and it looks like about a 6x reduction in your backup windows and a huge reduction in your physical footprint the data center also imagine more green tell us a little bit about it from that 
perspective sure yeah a couple points should hit there i mean our deduplication and compression rates are higher than what we're expected by far so that's been a great spot as far as we don't need as much physical storage to hold all of that well with that we were able to take our racks we had three server racks we went down to half of one server rack so your heating and air conditioning your heating and cooling cost comes down your power cost comes down all of that soft cost that stretches around this environment really has a benefit for us i'm also thinking too in this time of everything shifting and data protection becoming business critical your team's productivity won we talked about um the the backup big reduction in backup windows but your team must be must be much more productive and also i imagine from confidence from a reputation standpoint your executives or senior management probably now has the confidence because you have the confidence that the data is secure yeah lisa that's true and even down to our user level when i first got here i mean our users were complaining about not being able to print a word document i mean not being able to print a document that's that's fundamental stuff started diving into it and our systems are just old disparate and weren't configured the best so our team's really been able to revamp all of that redo all of that and the confidence level around the organization has just really improved and it's great to see we now have a lot more time to work on tomorrow than living in the trenches of today which everyone needs right now since this it's pivot after pivot after pivot i'm curious how long ago did you come in like how many years ago did you find this antiquated system yeah i've been here a little over two years now okay so fairly recently what were some of the things that you think world was stuck in was it cultural was it operational what was it that you helped influence in terms of we have to make a change now sure one 
of the things for me is one of the old phrases i think is the worst phrase is well because we've always done it that way you know so getting a fresh line and a fresh look on something was able to really help out the innovation that dell brings to the table falls in line with the innovation that we bring for candy so let's look at what they have let's look at how they tie together but we have to do a full forklift of our infrastructure in our data center you want to look at the integrated systems and you know how you get that best performance not the best bang for the buck when it comes to the budget because if they're unbudgeted dollars you really got to get every dollar to stretch farther and looking at the vxrail with the integrated appliance and with the cdra to offload to you know a cloud site it was a perfect package it was the perfect pairing for what we needed but going from you talk about you know we've always done it this way and you're right we hear that a lot and it's well why you can do it so much differently can you imagine if they were still doing it that way in the era of covid but thinking about this big switch from um a big physical um footprint to going to hyper converged infrastructure how was that transition that you helped drive how did it shift the culture at world because i imagine it 50 year old company in leicester right yeah i believe it did i believe it helped everyone kind of just look at everything and say just because it works to get us here is it going to work with get us to tomorrow so everybody really started looking at different situations and different things when we sat down with our users we changed the entire desktop experience you know we have new laptops we have new operating systems we have the way things are working better so it really changed the culture through and through so you know when you go to work you want systems to work when you come in the last thing you want to deal with is oh is a computer is going to be down 
do i have to call it today so getting that scalability was great but getting that reliance from the user in from the keyboard the whole way back to your edge was a huge win for us and let's look at your team now having more time to be innovative especially i'm curious what you've been able to do the last six to seven months that you now have this reliable infrastructure data secure internally people can print things they can check their emails without having to bother i.t what are some of the new maybe strategic areas that you've been able to get involved in because you have the time to focus on them we're really getting involved with more of the plant all the equipment on the floor trying to collect that data and correlate all that data coming off the floor and now we're able to have a little fun of how do we get the data on the floor real-time collection back into the system and how can we have technology help drive that innovation on the floor when our r d department comes to us about we have a new product that needs to run where do we run it i'm now able to work with the manufacturing manager and say that type of product runs best on this machine and here's the data to support that that's really the fun of what we can do for tomorrow so does this now enable you to become data-driven whereas before maybe not so much yeah i agree yes very much so and it's good data it's not hypothetical data that someone put on a piece of paper and thought through it's really good data that we can correlate and collect and that's critical especially right now as everybody wants everything real time and the consumer demand is changes every industry and i think it's probably going to be a pretty big demand still this year for candy i know uh that sounds pretty good right about now i'd love to get your advice for men and women in your position coming into maybe a legacy business that has an antiquated infrastructure how do you recommend how do you advise that they go about 
approaching leadership and their teams to do a complete transformation i think it starts with a good partner you've got to have a good partner and able to put things in order i like to call it one hand to high five you have to have that good partner to fall back on you build a good solid solution and then you look at your budget can do but it's all about the culture if you can find out where your culture is suffering because people are upset when they come in because something doesn't work what's the root cause of that how do you get that out of play you know work with your folks i always say i want people to drive in happy i want people to drive home happy how do we make sure that is and i know it sounds weird coming from vit guy but you have a huge impact so when you can look at everyday experience of sitting down coming into your office and sitting down at your technology and it works it's just one level of stress that we no longer have i said it's a huge level of stress that you don't have and i think that's that's an important point that you bring up you want people to come in happy and leave happy but you also really challenge them to get out of your comfort zone just because we've always done it this way doesn't mean we should still should and actually if we do there might be a competitor right behind us that's ready to come in and take over this is a competitive differentiator and especially in the time of this dynamic environment in which we live the status quo the comfort zone probably going to be a factor in determining i think the winners of tomorrow do you agree with that i do agree with that and what we actually found is when we asked people to kind of think outside of their box and step back a minute we found that they were doing something as a as a band-aid well i have to now do it this way and it just became status quo when we pulled that band-aid off we kept going back kept going back kept going back found out you're doing something because it 
wasn't fixed four processes ago. Let's fix that now, so it's not even a thing. So it gives them more time to think outside of the box: how do I better this situation? And they can really look at everything they do and how to make it better, down to how the orders come in, how we process the orders, to even how we ship them out and how we package them up in the truck. >> Well, when you were talking about band-aids, like band-aid on band-aid on band-aid, I just think inherent complexity. So talk to me, in the last question here as we wrap up, from a simplification perspective: how has Dell Technologies helped transform and simplify the environment? >> I don't know that I have the word I've been looking for, because what they've done for us has been so monumental. Down to, we've been able to revamp our data center, and I know that sounds odd. Well, it's just a VxRail; it's not. Being able to simplify all of the stuff we had down into 5U of rack space allowed us to clean up our data center, clean up that complexity. Everything's running inside of there; we no longer have, you know, a tape drive sitting somewhere else. We now have more man-hours, and in the soft costs, we have more man-hours to do a lot more of the tomorrow world. So we can add complexity at our own level, when we want, for security or anything else, but we don't have a whole spider web of stuff going on that we have to work through just to see where we need to start. >> And that's really, as you said, what's the word, it's transformative. But it sounds to me like what you're doing as a leader yourself, and with Dell Technologies, is really enabling the organization, or has enabled it, to get out of its comfort zone, embrace modernizing, take out complexity where it's not needed, and focus on business outcomes, which at the end of the day is the most important thing. >> It is, and you know, we have our own research and development team here for, you know, what's the
next candy you're going to see on the market. We have our own innovation team, and I've challenged every one of the departments that I work with to think the same thing: what's next in your world? How can you re-innovate what you have? What haven't we thought of? You know, the old saying is no idea is a bad idea. Let's put it on the table, let's get it vetted out, let's see if it works. But then also working with the other departments; the other departments are now able to collaborate. Well, I didn't know that you needed that. Yep, that's the data I need. Oh, that's easy, here you go. And it has really streamlined processes from start to finish. >> That collaboration is essential. Well, Clifton, thanks for sharing what you're doing with Dell Technologies, the new DP series, great work there. We look forward to hearing more of what's to come from Warrell. >> Well, I do appreciate it. Thank you so much for your time, Lisa, and the Dell team; the appliances and everything have been great for us, so we appreciate everything Dell's done. >> Excellent. I know you have that confidence, because you talked about it. All right, guys, for Clifton Dorsey, I'm Lisa Martin. You're watching this Cube Conversation. (upbeat music)

Published Date : Nov 13 2020


Computer Science & Space Exploration | Exascale Day


 

>> Announcer: From around the globe, it's theCUBE, with digital coverage of Exascale Day, made possible by Hewlett Packard Enterprise. >> We're back at the celebration of Exascale Day. This is Dave Vellante, and I'm pleased to welcome two great guests. Brian Dansberry is here; he's with the ISS Program Science Office at the Johnson Space Center. And Dr. Mark Fernandez is back; he's the Americas HPC Technology Officer at Hewlett Packard Enterprise. Gentlemen, welcome. >> Thank you. >> Well, thanks for coming on. And Mark, good to see you again. And Brian, I wonder if we could start with you and talk a little bit about your role at the ISS Program Science Office. As a scientist, what's happening these days? What are you working on? >> Well, it's been my privilege the last few years to be working in the research integration area of the space station office. That's where we're looking at all of the different sponsors, NASA, the other international partners, all the sponsors within NASA, and prioritizing what research gets to go up to station and what research gets conducted. And to give you a feel for the magnitude of the task, we're coming up now, on November 2nd, on the 20th anniversary of continuous human presence on station. So we've been a space-faring society now for coming up on 20 years, and I would like to point out, because as an old guy myself it impresses me, that's 25% of the US population: everybody under the age of 20 has never had a moment when they were alive and we didn't have people living and working in space. Okay, I got off on a tangent there; we'll move on. In those 20 years we've done 3,000 experiments on station, and the station has really made a miraculous sort of evolution from a basic platform to what is now a really fully functioning national lab up there, with commercially run research facilities. I think you can think of it as the world's largest satellite bus.
We have, you know, four or five instruments looking down, measuring all kinds of things in the atmosphere, gathering Earth observation data, and looking out, doing astrophysics research, measuring cosmic rays, an X-ray observatory, all kinds of things. Plus, inside the station you've got racks and racks of experiments going on, typically scores, if not more than 50, at any one time. So the topic of this event is really important to NASA: data transmission up and down, all of the cameras going on station, the experiments. One of those astrophysics observatories, you know, has collected over 15 billion cosmic ray impact data points. And so the massive amounts of data that need to be collected and transferred for all of these experiments to go on really hit at the core of this, and I'm glad I'm able to be here and speak with you today on this topic. >> Well, thank you for that, Brian. A baby boomer, right? Grew up with the national pride of the moon landing. And of course we saw the space shuttle, we've seen international collaboration, and it's just always been something that's part of our lives. So thank you for the great work that you guys are doing there. Mark, you and I had a great discussion about exascale and kind of what it means for society, and some of the innovations that we could maybe expect over the coming years. Now I wonder if you could talk about some of the collaboration between what you guys are doing and Brian's team. >> Yeah, yes indeed. Thank you for having me; I appreciate it. That was a great introduction, Brian. I'm the principal investigator on Spaceborne Computer-2, and as the 2 implies, there was one before it. And so we've worked with Brian and his team extensively over the past few years, again on high performance computing on board the International Space Station.
Brian mentioned the thousands of experiments that have been done to date, and that there are currently 50 or more going on at any one time. Those experiments collect data, and up until recently you've had to transmit that data down to Earth for processing, and that's a significant amount of bandwidth. So with Spaceborne Computer-2 we're inviting developers and others to take advantage of that onboard computational capability. You mentioned exascale. We plan to get to exascale next year. We're currently in the era that's called petascale, and we've been in the petascale era since 2007, so it's taken us a while to make that next leap. Well, 10 years after Earth had a petascale system, in 2017 we were able to put a teraflop system on the International Space Station to prove that we could do a trillion calculations a second in space. That's where the data is originating, and that's where it might be best to process it. So we want to be able to take those capabilities with us. And with HPE acting as a wonderful partner with Brian and NASA and the space station, we think we're able to do that for many of these experiments. >> It's mind-boggling. I was talking about the moon landing earlier and the limited computing power then; now we've got, you know, water-cooled supercomputers in space. I'd love to explore this notion of private industry developing space-capable computers. I think it's an interesting model, where computer companies can repurpose technology that they're selling, obviously at greater scale, for space exploration, and apply that supercomputing technology instead of having government fund proprietary, purpose-built systems that are essentially one use case, if you will. So, Brian, what are the benefits of that model that perhaps you wouldn't achieve with governments, or maybe contractors, building these proprietary systems?
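The performance tiers discussed here differ by factors of a thousand each. As an illustrative sketch (the dates are those stated in the conversation; the prefix arithmetic itself is standard):

```python
# FLOPS tiers mentioned in the conversation: a teraflop system does a
# trillion (1e12) floating-point operations per second, and each later
# tier is 1000x the previous one.
SCALES = {
    "teraflop": 1e12,  # demonstrated on the ISS in 2017 (Spaceborne-1)
    "petaflop": 1e15,  # the "petascale" era, reached on Earth in the late 2000s
    "exaflop":  1e18,  # the "exascale" milestone this event celebrates
}

for name, flops in SCALES.items():
    print(f"{name:>8}: {flops:.0e} operations/second")

# An exascale machine is a million times faster than that ISS teraflop system.
ratio = SCALES["exaflop"] / SCALES["teraflop"]
print(f"exaflop / teraflop = {ratio:,.0f}x")  # → 1,000,000x
```

The million-fold gap between the 2017 teraflop demonstration and an exascale machine is the point of Fernandez's "next leap" remark.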
>> Well, first of all, any tool you're using, any new technology that has multiple users, is going to mature quicker. You're going to have greater features, greater capabilities, and that's not even just talking about computers; it's true of anything you're doing. So moving from government as a single user to off-the-shelf type products gives you the opportunity to have things that have been proven, where the technology has fully matured. Now, what had to happen is we had to mature the space station so that we had a platform where we could test these things and make sure they're going to work in the high-radiation environments and be reliable, because first you've got to make sure that safety and reliability are taken care of. That's why in the space program you're going to be behind the times in terms of the computing power of the equipment up there: first and foremost, you needed to make sure that it was reliable and safe. Now, my undergraduate degree was in aerospace engineering, and what we care about as aerospace engineers is how heavy is it, how big and bulky is it, because it's expensive. Every pound counts. I once visited Gulfstream Aerospace, and they would pay their employees $1,000 if they could come up with a way of saving a pound in building that aircraft, because that means you have more capacity for flying. It's orders of magnitude more important to do that when you're taking payloads to space. So particularly with Spaceborne Computer, the opportunity to use software to handle reliability, without having to make the computer radiation-resistant, if you will, with heavy, bulky packaging to protect it from that radiation, is a really important thing, and it's going to be a huge advantage moving forward as we go to the moon and on to Mars. >> Yeah, that's interesting.
I mean, your point about COTS, commercial off-the-shelf technology: that's something that obviously governments have wanted to leverage for many, many decades. But, Mark, the issue was always, as Brian was just saying, the very stringent and difficult requirements of space. Obviously with Spaceborne-1 you got to the point where you had visibility that the economics made sense; it made commercial sense for companies like Hewlett Packard Enterprise. And now we've sort of closed that gap to the point where you're now on that innovation curve. I wonder if you could talk about that a little bit. >> Yeah, absolutely. Brian made some excellent points. He said anything we do today requires computers, and that's absolutely correct. So I tell people that when you go to the moon, and when you go to Mars, you probably want to go with an iPhone 10 or 11 and not a flip phone. Before Spaceborne was sent up, you went with early-2000s computing technology up there, and as you said, many of the people born today weren't even around when the space station began to be occupied, so they don't even know how to program or use that type of computing power. With Spaceborne-1 we sent the exact same products that we were shipping to customers today, so they are current state of the art. And we had a mandate: don't touch the hardware; provide all the protection that you can via software. So that's what we've done. We've got several philosophical ways to do that, we've implemented those in software, and they've been successfully proven on Spaceborne-1. Now with Spaceborne-2 we're going to begin the experiments so that the rest of the community can figure out that it is economically viable, and that it will accelerate their research and progress in space. I'm most excited about that.
Every venture into space, as Brian mentioned, will require some computational capability, and HPE has figured out that the economics are there. We need to bring the customers through Spaceborne-2 in order for them to learn that we are reliable and current state of the art, and that we can benefit them and all of humanity. >> Guys, I want to ask you kind of a two-part question, and Brian, I'll start with you; it's somewhat philosophical. My understanding was, and I want to say this was probably around the time of the second Bush administration, and maybe certainly before that, that as technology progressed there was a debate: should we put our resources on the moon, because of its proximity to Earth, or should we, you know, go where no man, or woman, has gone before and get to Mars? What's the thinking today, Brian, on that balance between moon and Mars? >> Well, our plans today are to get back to the moon by 2024. That's the Artemis program. It's exciting, and it makes sense from an engineering standpoint: you take baby steps as you continue to move forward. And so you have that opportunity to learn while you're still relatively close to home. You can get there in days, not months. If you're going to Mars, for example, to have everything line up properly you're looking at a multi-year mission. It may take you nine months to get there, and then you have to wait for the Earth and Mars to get back in the right position to come back on that same kind of trajectory, so you have to be there for more than a year before you can turn around and come back. So, you know, we were talking about the computing power. Right now, the beautiful thing about the space station is that it's right there, orbiting above us, only 250 miles away. So you can test out all of these technologies, and you can rely on the ground to keep track of systems.
There's not that much of a delay in terms of telemetry coming back. But as you get to the moon, and definitely as you get out to Mars, there are enough minutes of delay that you've got to take the computing power with you. You've got to take everything you need to be able to make those decisions, because there's not time to get that information back to Earth, have people analyze the situation, and then tell you what the next step is; that may be too late. So you've got to take the computing power with you. >> So exascale brings some new possibilities, both for the moon and Mars. I know Spaceborne-1 did some simulations relative to future Mars missions; we'll talk about that. But Brian, what are the things that you hope to get out of exascale computing that maybe you couldn't do with previous generations? >> Well, Mark hit on a key point. Bandwidth up and down is, of course, always a limitation, and the more computing and data analysis you can do on site, the more efficient you can be with parsing out that bandwidth. To give you a feel for that, think about those Earth-observing and astronomical observatories I was talking about collecting data. Think about the hours of video that are being recorded daily as the astronauts work on various things to document what they're doing. For many of the biological experiments, one of the key pieces of data that's coming back is that video of the microbes growing, or the plants growing, or whatever. There are fluid physics experiments going on; we do a lot of colloids research, which is suspended particles inside a liquid, and of course high-speed video is key to doing that kind of research right now.
We've got something called the ISS Experience going on up there, which is basically recording and will eventually put out a series, basically a movie, in virtual reality. That kind of data is so huge when you have a 360-degree camera up there recording, creating virtual reality, that a lot of times it still comes back on hard drives when the SpaceX vehicles return to Earth. That's a lot of data. We record videos all the time; it's a tremendous amount of bandwidth. And as you get to the moon, and as you get further out, you can imagine how much more limiting that bandwidth is. >> Yeah, we used to joke in the old mainframe days that the fastest way to get data from point A to point B was called CTAM, the Chevy truck access method: just load up a truck, whatever it was, tapes or hard drives. So, Mark, of course Spaceborne-2 is coming on; Spaceborne-1 really was a pilot, but it proved that commercial computers could actually work for long durations in space, and that the economics were feasible. Thinking about future missions and Spaceborne-2, what are you hoping to accomplish? >> I'm hoping to bring that success from Spaceborne-1 to the rest of the community with Spaceborne-2, so that they can realize they can do their processing at the edge. The purpose of exploration is insight, not data collection. So all of these experiments begin with data collection, whether that's videos or samples or mold growing, etcetera. Having collected that data, we must process it to turn it into information and insight, and the faster we can do that, the faster we get our results, and the better things are. I often talk to college, high school, and sometimes grammar school students about this need to process at the edge and how the communication issues can prevent you from doing that. For example, many of us remember the communications with the moon.
The moon is about 250,000 miles away, if I remember correctly, and the speed of light is 186,000 miles a second. So even at the speed of light, it takes more than a second for the communications to get to the moon and back. I can remember being stressed out when Houston would make a statement and we were wondering if the astronauts could answer. Well, they answered as soon as possible, but that one-to-two-second delay, natural as it was, was what drove us crazy, made us nervous; we were worried about them and the success of the mission. Mars is millions of miles away. So flip it around: if you're a Mars explorer and you look out the window and there's a big red cloud coming at you that looks like a tornado, you might want to do some Mars dust storm modeling right then and there to figure out the safest thing to do. You don't have the time to literally get that data back to Earth, have it processed, and get the answer back; you've got to take those computational capabilities with you. And we're hoping that the thousands of experiments on board the ISS can show that, in order to better accomplish their missions on the moon and on Mars. >> I'm so glad you brought that up, because I was going to ask: in the commercial world everybody talks about real time; we talk about the real-time edge and AI inferencing and the time value of data. In the commercial world, for instance in advertising, the joke is that the best minds of our generation are trying to get people to click on ads, and it's somewhat true, unfortunately. But at any rate, the value of data diminishes over time. I would imagine that in space exploration, where you're dealing in things like light-years, there's actually quite a bit of value in the historical data.
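The delay figures in Fernandez's answer can be sanity-checked with a few lines. The distances below are approximations (the Earth-Mars distance varies from roughly 34 million to 250 million miles depending on orbital positions):

```python
# Round-trip light delay for the distances mentioned in the conversation.
SPEED_OF_LIGHT_MILES_PER_SEC = 186_000

def round_trip_delay_sec(distance_miles: float) -> float:
    """Seconds for a signal to travel to the target and back."""
    return 2 * distance_miles / SPEED_OF_LIGHT_MILES_PER_SEC

moon = round_trip_delay_sec(250_000)           # ~2.7 seconds
mars_close = round_trip_delay_sec(34_000_000)  # ~6 minutes
mars_far = round_trip_delay_sec(250_000_000)   # ~45 minutes

print(f"Moon round trip:            {moon:.1f} s")
print(f"Mars (closest) round trip:  {mars_close / 60:.0f} min")
print(f"Mars (farthest) round trip: {mars_far / 60:.0f} min")
```

A multi-second lag to the moon is survivable over a radio link; a round trip of minutes to tens of minutes to Mars is exactly why the dust-storm modeling has to run on board.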
But, Mark, you just gave a great example of where you need real-time compute capabilities, off the ground. Brian, I wonder if I could ask you about the value of this historical data. As you just described, you're collecting so much data; do you see that the value of that data actually persists over time, so that you can go back with better modeling and better AI and computing and actually learn from all that data? What are your thoughts on that, Brian? >> Definitely, I think the answer is yes to that. And, you know, as part of the evolution from basically a platform to a station, we're also learning to make use of the experiments and the data that we have there. NASA has set up open data access sites for some of our physical science experiments that have taken place, and GeneLab for looking at some of the biological genomic experiments that have gone on. And I've seen papers already beginning to be generated not by the original experimenters and principal investigators, but from that data set that has been collected. And when you're sending something up to the space station and volume for cargo is so limited, you want to get the most you can out of it, so you want to be as efficient as possible. One of the ways you do that is with these Earth-observing instruments: sure, the principal investigators are using the data for the key thing they designed it for, but if that data is available, others will come along and make use of it in different ways. >> Yeah. So I want to remind the audience, these are supercomputers, the Spaceborne computers; they're solar powered, obviously, and they're mounted overhead, right? Is that correct? >> Yeah, yes. Spaceborne Computer was mounted in the overhead.
I jokingly say that as soon as someone can figure out how to get a data center in orbit, they will have a 50 percent denser data center than we can have down here: instead of two rows side by side, you can also have one overhead. And the power is free, if you can drive it off solar, and the cooling is free, because it's pretty cold out there in space, so it's going to be very efficient. Spaceborne Computer is the most energy-efficient computer in existence: free electricity and free cooling. And now we're offering free cycles to all the experimenters. >> So Spaceborne-1 exceeded its mission timeframe. You were able to run, as was mentioned before, some simulations for future Mars missions, and you talked a little bit about what you want to get out of Spaceborne-2. Are there other wish-list items, bucket-list items, that people are talking about? >> Yeah, two of them, and these are kind of hypothetical; Brian kind of alluded to them. One is having the data on board. So an example that developers talk to us about is: hey, I'm on Mars, and I see this mold growing on my potatoes. That's not good. So let me sample that mold and do a gene sequencing, and I've got all the historical data on the bad molds out there stored on Spaceborne Computer, so let me do a comparison right then and there, before I have dinner with my fried potato. So that's one; that's very interesting. A second one, closely related to it, is that we have offered up the storage on Spaceborne Computer-2 for all of the raw data that we process. So, Mr. Scientist, if you need the raw data and you need it now, of course you can have it sent down. But if you don't, let us just hold it there, as long as we have space.
And when we return to Earth, like you mentioned, we'll pack that solid state disk up and ship it back to them so they can have it in person, but again, that reserves the network bandwidth and keeps all that raw data available for the entire duration of the mission, so that it may have value later on. >> Great, thank you for that. I want to end on the collaboration between the ISS National Lab and Hewlett Packard Enterprise; you're inviting project ideas using Spaceborne-2 during the upcoming mission. Maybe you could talk about what that's about, and we have a graphic we're going to put up with information that you can access. But please, Mark, share with us what you're planning there. >> So again, the collaboration has been outstanding. There's been a mention of how much the savings is if you can reduce the weight by a pound. Well, our partners, the ISS National Lab and NASA, have taken on the cost of delivering Spaceborne Computer to the International Space Station as part of this collaboration, and of powering and cooling us and giving us the technical support. In return, on our side, we're offering up Spaceborne Computer-2 to all the onboard experiments, and to all those who think they might want to do experiments on Spaceborne, on the ISS, in the future, to take advantage of that. So we're very, very excited about that. >> Yeah, and you can just email spaceborne@hp.com and float some ideas. I'm sure at some point there'll be a website, so you can email them, or you can email me, david.vellante@siliconangle.com, and I'll shoot you that email address or that website once we get it. But, Brian, I want to end with you; you've been so gracious with your time. Give us your final thoughts on exascale, and maybe how you're celebrating Exascale Day. I was joking with Mark that maybe we've got a special exascale drink for 10/18. But what are your final thoughts, Brian?
>> I'm going to digress just a little bit. I think I have a unique perspective from which to celebrate Exascale Day, because as an undergraduate student I was interning at Langley Research Center in the wind tunnels. In the wind tunnel I was in then, they were very excited that they had a new state-of-the-art, giant, room-sized computer to take the data. We worked on unsteady aerodynamic forces, so you need a lot of computation, and you need to be able to take data at a high bandwidth to be able to do that. They'd always run their wind tunnel for four or five hours, almost the whole shift, collect that data, and maybe a week later be able to look at the data to decide if they got what they were looking for. Well, at the time, in the early eighties, and this is definitely the before times, they had that computer in place. Yes, it was a punchcard computer. It was the one time in my life I got to put my hands on the punch cards, and I was told not to drop them; there'd be trouble if I did. But I was able to, immediately after, actually during their run, take that data, reduce it down, grab my colored pencils and graph paper, and graph out coefficient of lift, coefficient of drag, and other things that they were measuring, and take it back to them. And they were so excited to have data two hours after they had taken it, analyzed and looked at; it just tickled them to think that they could make decisions now on what they wanted to do for their next run. Well, we've come a long way since then, and Exascale Day really emphasizes that point, so it really brings it home to me. >> Please, carry on. >> Well, I was just going to say, you talked about the opportunities that Spaceborne Computer provides, and Mark mentioned our colleagues at the ISS National Lab.
You know, the space station has been declared a national laboratory, and so about half of the capabilities we have for doing research are apportioned to the national lab, so that commercial entities, so that HPE, can do these sorts of projects, and universities and other government agencies can access station. And then NASA can focus on those things we want to do purely to push our exploration programs. So the opportunities to take advantage of that are there. Mark is opening up the door for a lot of opportunities, but others can just Google "ISS National Laboratory" and find some information on how to get in the way Mark did originally, using the ISS National Lab to maybe get a good experiment up there. >> Well, it's just astounding to see the progress that this industry has made when you go back and look at the early days of supercomputing; to imagine that computers can actually be spaceborne is just tremendous. Not only the impacts that it can have on space exploration, but also on society in general; Mark and I talked about that. Guys, thanks so much for coming on theCUBE and celebrating Exascale Day and helping expand the community. Great work, and thank you very much for all that you guys do. >> Thank you very much for having me, and everybody out there, let's get to exascale as quick as we can. Appreciate everything you all are doing. >> Let's do it. >> I've got a similar story. Humanity saw the first trillion calculations per second, like I said, in 1997, and it was over 100 racks of computer equipment. Well, Spaceborne-1 is less than a fourth of a rack, only 20 years later. So I'm going to be celebrating Exascale Day in anticipation of exascale computers on Earth, and, soon following within 20-plus years, in the national lab that exists on station, and on Mars. >> That's awesome. Mark, thank you for that. And thank you for watching, everybody. We're celebrating Exascale Day with the community, the supercomputing community, on theCUBE. Right back.

Published Date : Oct 16 2020



Phil Bullinger, Western Digital | CUBE Conversation, August 2020


 

>> Announcer: From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a Cube conversation. >> Hey welcome back everybody, Jeff Frick here with theCUBE. We are in our Palo Alto studios, COVID is still going on, so all of the interviews continue to be remote, but we're excited to have a Cube alumni, he hasn't been on for a long time, and this guy has been in the weeds of the storage industry for a very, very long time, and we're happy to have him on and get an update, because there continues to be a lot of exciting developments. He's Phil Bullinger, he is the SVP and general manager, data center business unit, at Western Digital, joining us, I think, from Colorado. So Phil, great to see you, how's the weather in Colorado today? >> Hi Jeff, it's great to be here. Well, it's a hot, dry summer here, I'm sure like a lot of places. But yeah, enjoying the summer through these unusual times. >> It is unusual times, but fortunately there's great things like the internet and heavy duty compute and storage out there so we can get together this way. So let's jump into it. You've been in the business a long time, you've been at Western Digital, you were at EMC, you worked on Isilon, and you were at storage companies before that. And you've seen kind of this never-ending up-and-to-the-right slope that we see kind of ad nauseam in terms of the amount of storage demands. It's not going anywhere but up, and increased complexity in terms of unstructured data, sources of data, speed of data, you know, the kind of classic big V's of big data. So I wonder, before we jump into specifics, if you can kind of share your perspective, 'cause you've been kind of sitting in the catbird seat, and Western Digital's a really unique company; you not only have solutions, but you also have media that feeds other people's solutions.
So you guys are really seeing it all, and ultimately all this compute's got to put this data somewhere, and a whole lot of it's sitting on Western Digital. >> Yeah, it's a great intro there. Yeah, it's been interesting, through my career, I've seen a lot of advances in storage technology. Speeds and feeds, like we often say, but the advancement through mechanical innovation, electrical innovation, chemistry, physics, just the relentless growth of data has been driven in many ways by the relentless acceleration and innovation of our ability to store that data, and that's been a very virtuous cycle through what, for me, has been 30 years in enterprise storage. There are some really interesting changes going on though, I think. If you think about it, in a relatively short amount of time, data has gone from this artifact of our digital lives to the very engine that's driving the global economy. Our jobs, our relationships, our health, our security, they all kind of depend on data now, and for most companies, kind of irrespective of size, how you use data, how you store it, how you monetize it, how you use it to make better decisions to improve products and services, it becomes not just a matter of whether your company's going to thrive or not, but in many industries it's almost an existential question; is your company going to be around in the future, and it depends on how well you're using data. So this drive to capitalize on the value of data is pretty significant. >> It's a really interesting topic, we've had a number of conversations around trying to get a book value of data, if you will, and I think there's a lot of conversations, whether it's an accounting kind of way, or finance, or kind of goodwill, of how do you value this data?
But I think we see it intrinsically in a lot of the big companies that are really data based, like the Facebooks and the Amazons and the Netflixes and the Googles, and those types of companies where it's really easy to see, and if you see the valuation that they have, compared to their book value of assets, it's really baked into there. So it's fundamental to going forward, and then we have this thing called COVID hit, which I'm sure you've seen all the memes on social media. What drove your digital transformation, the CEO, the CMO, the board, or COVID-19? And it became this light switch moment where your opportunities to think about it are no more; you've got to jump in with both feet, and it's really interesting, to your point, that it's the ability to store this and think about it now differently, as an asset driving business value, versus a cost that IT has to accommodate to put this stuff somewhere. So it's a really different kind of a mind shift, and it really changes the investment equation for companies like Western Digital about how people should invest in higher performance and higher capacity, and more unified, kind of democratizing the accessibility of that data to a much greater set of people, with tools that can now start making much more business-line and in-line decisions, than just the data scientists kind of on Mahogany Row. >> Yeah, as you mentioned, Jeff, here at Western Digital we have such a unique kind of perch in the industry to see all the dynamics in the OEM space and the hyperscale space and the channel, really across all the global economies, about this growth of data. I have worked at several companies and have been familiar with what I would have called big data projects and fleets in the past. But at Western Digital, you have to move the decimal point quite a few digits to the right to get the perspective that we have on just the volume of data that the world is just relentlessly, insatiably consuming.
Just a couple examples: for our drive projects we're working on now, our capacity enterprise drive projects, you know, we used to do business case analysis and look at their lifecycle capacities, and we measured them in exabytes. Not anymore; now we're talking about zettabytes. We're actually measuring capacity enterprise drive families in terms of how many zettabytes they're going to ship in their lifecycle. If we look at just the consumption of this data, the last 12 months of industry TAM for capacity enterprise compared to the 12 months prior to that, that annual growth rate was north of 60%. And so it's rare to see industries that are growing at that pace. And so the world is just consuming immense amounts of data, and as you mentioned, the COVID dynamics have been both an accelerant in some areas, as well as headwinds in others, but it's certainly accelerated digital transformation. I think a lot of companies were talking about digital transformation and hybrid models, and COVID has really accelerated that, and it's certainly driving, continues to drive, just this relentless need to store and access and take advantage of data. >> Yeah, well Phil, in advance of this interview, I pulled up the old chart with all the different bytes, kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes, and zettabytes, and just per the Wikipedia page, what is a zettabyte? It's as much information as there are grains of sand on all the world's beaches. For one zettabyte. You're talking about thinking in terms of those units, I mean, that is just mind boggling to think that that is the scale at which we're operating. >> It's really hard to get your head wrapped around a zettabyte of storage, and I think a lot of the industry thinks when we say zettabyte scale era that it's just a buzzword, but I'm here to say it's a real thing. We're measuring projects in terms of zettabytes now. >> That's amazing. Well, let's jump into some of the technology.
So I've been fortunate enough here at theCUBE to be there at a couple of major announcements along the way. We talked before we turned the cameras on, the helium announcement and having the hard drive sit in the fish bowl to get all types of interesting benefits from this less dense air that is helium versus oxygen. I was down at the MAMR and HAMR announcement, which was pretty interesting; big heavy technology moves there, to again increase the capacity of hard drive-based systems. You guys are doing a lot of stuff on RISC-V, which I know is an Open source project, so you guys have a lot of things happening, but now there's this new thing, this new thing called zoned storage. So first off, before we get into it, why do we need zoned storage, and really what does it now bring to the table in terms of a capability? >> Yeah, great question, Jeff. So why now, right? Because as I mentioned, I've been in storage for quite some time. In the last, let's just say in the last decade, we've seen the advent of the hyperscale model, and certainly a whole nother explosion level of data, and just the velocity with which the hyperscalers can create and consume and process and monetize data. And of course with that has also come a lot of innovation, frankly, in the compute space around how to process that data, moving from what was just a general purpose CPU model to GPUs and DPUs, and so we've seen a lot of innovation on that side. But frankly, on the storage side, we haven't seen much change at all in terms of how operating systems, applications, file systems, how they actually use the storage or communicate with the storage. And sure, we've seen advances in storage capacities; hard drives have gone from two to four, to eight, to 10, to 14, 16, and now our leading 18 and 20 terabyte hard drives. And similarly, on the SSD side, now we're dealing with capacities of seven, and 15, and 30 terabytes. So things have gotten larger, as you'd expect.
And some interfaces have improved; I think NVME, which we'll talk about, has been a nice advance in the industry. It's really now brought a very modern, scalable, low latency, multi-threaded interface to NAND flash, to take advantage of the inherent performance of transistor-based persistent storage. But really, when you think about it, it hasn't changed a lot. But what has changed is workloads. One thing that definitely has evolved in the space of the last decade or so is this: the thing that's driving a lot of this explosion of data in the industry is workloads that I would characterize as sequential in nature; they're serially captured and written. They also have a very consistent life cycle, so you would write them in a big chunk, you would read them maybe in smaller pieces, but the lifecycle of that data we can treat more as a chunk of data. But the problem is applications, operating systems, file systems continue to interface with storage using paradigms that are many decades old. The old 512-byte or even 4K sector size constructs were developed in the hard drive industry just as convenient paradigms to structure what is an unstructured sea of magnetic grains into something structured that can be used to store and access data. But the reality is, when we talk about SSDs, structure really matters, and so what has changed in the industry is the workloads are driving very, very fresh looks at how more intelligence can be applied to that application-OS-storage device interface to drive much greater efficiency. >> Right, so there's two things going on here that I want to drill down on. On one hand, you talked about kind of the introduction of NAND flash, and treating it, generically, like you did a regular hard drive. But you could get away with it and you could do some things because the interface wasn't taking full advantage of the speed that the NAND was capable of.
But NVME has changed that, and has now forced kind of getting rid of some of those inefficient processes that you could live with, so it's just kind of a classic next-level step-up in capabilities. One is you get the better media, and you just kind of plug it into the old way. Now actually you're starting to put in processes that take full advantage of the speed that that flash has. And I think obviously prices have come down dramatically since the first introduction, where before it was always kind of cordoned off for super high end, super low latency, super high value apps; it just continues to spread and proliferate throughout the data center. So what did NVME force you to think about in terms of maximizing the return on the NAND flash? >> Yeah, NVME, which we've been involved in standardizing, I think it's been a very successful effort, but we have to remember NVME is about a decade old, or even more when the original work started around defining this interface, but it's been very successful. The NVME standards body is a very productive cross-company effort; it's really driven a significant change, and what we see now is the rapid adoption of NVME in all data center architectures, whether it's very large hyperscale to classic on-prem enterprise to even smaller applications; it's just a very efficient interface mechanism for connecting SSDs into a server. So we continue to see evolution of NVME, which is great, and we'll talk about ZNS today as one of those evolutions. We're also very keenly interested in the NVME protocol over fabrics, and so one of the things that Western Digital has been talking about a lot lately is incorporating NVME over fabrics as a mechanism for connecting shared storage into multiple host architectures.
We think this is a very attractive way to build shared storage architectures of the future that are scalable, that are composable, that really have a lot more agility with respect to rack-level infrastructure and applying that infrastructure to applications. >> Right, now one thing that might strike some people as kind of counterintuitive is that within zoned storage, in zoning off parts of the media and thinking of the data also kind of in these big chunks, it feels contrary to the kind of atomization that we're seeing in the rest of the data center, right? So smaller units of compute, smaller units of storage, so that you can assemble and disassemble them in different quantities as needed. So what was the special attribute that you had to think about, and actually come back and provide a benefit, in actually kind of re-chunking, if you will, in these zones, versus trying to get as atomic as possible? >> Yeah, it's a great question, Jeff, and I think it's maybe not intuitive in terms of why zoned storage actually creates a more efficient storage paradigm when you're storing stuff essentially in larger blocks of data, but this is really where the intersection of structure and workload and sort of the nature of the data all come together. If you turn back the clock maybe four or five years, when host-managed SMR hard drives first emerged on the scene, this was really taking advantage of the fact that the write head on a hard disk drive is larger than the read head, or the read head can be much smaller, and so the notion of overlapping or shingling the data on the drive, giving the read head a smaller target to read, but the writer a larger write pad to write the data, what we found was it increases areal density significantly. And so that was really the emergence of this notion of sequentially written larger blocks of data actually being much more efficiently stored when you think about physically how it's being stored.
What's very new now and really gaining a lot of traction is the SSD corollary to SMR on the hard drive side: the ZNS specification, which very similarly is where you divide up the namespace of an SSD into fixed-size zones, and those zones are written sequentially, but now those zones are intimately tied to the underlying physical architecture of the NAND itself; the dies, the planes, the read pages, the erase pages. So that, in treating data as a block, you're actually eliminating a lot of the complexity and the work that an SSD has to do to emulate a legacy hard drive, and in doing so, you're increasing performance and endurance and the predictable performance of the device.
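The zone model Phil describes — a namespace carved into fixed-size zones, each written strictly sequentially and reset as a unit — can be sketched in a few lines. This is a toy illustration, not the actual NVMe ZNS command set; the `Zone` class and its methods are invented for the sketch:

```python
class Zone:
    """Toy model of a ZNS zone: sequential writes only, reset as a unit."""

    def __init__(self, size_blocks: int):
        self.size = size_blocks
        self.write_pointer = 0  # next block that may be written

    def write(self, n_blocks: int) -> None:
        # ZNS forbids writes anywhere but at the write pointer, and the
        # pointer only moves forward -- this constraint is what lets the
        # device skip hard-drive emulation for sequential workloads.
        if self.write_pointer + n_blocks > self.size:
            raise ValueError("zone full: reset required before rewriting")
        self.write_pointer += n_blocks

    def reset(self) -> None:
        # The whole zone is invalidated at once (it maps to NAND erase
        # boundaries), so no partially valid data needs relocating.
        self.write_pointer = 0

zone = Zone(size_blocks=1024)
zone.write(512)
zone.write(512)   # zone is now full
zone.reset()      # the data's lifecycle ends for the whole zone at once
```

Because the host only ever appends at the write pointer and invalidates a whole zone at once, the device never has to relocate partially valid data, which is exactly why device-side garbage collection and most over-provisioning can disappear.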
Today with our 18, 20 terabyte offerings that's on the order of just over 10%, but that delta is going to increase significantly going forward to 20% or more. And when you think about a hyperscale customer that has not hundreds or thousands of racks, but tens of thousands of racks. A 10 or 20% improvement in effective capacity is a tremendous TCO benefit, and the reason we do that is obvious. I mean, the economic paradigm that drives large at-scale data centers is total custom ownership, both acquisition costs and operating costs. And if you can put more storage in a square tile of data center space, you're going to generally use less power, you're going to run it more efficiently, you're actually, from an acquisition cost, you're getting a more efficient purchase of that capacity. And in doing that, our innovation, we benefit from it and our customers benefit from it. So the value proposition for zoned storage in capacity enterprise HDV is very clear, it's additional capacity. The exciting thing is, in the SSD side of things, or ZNS, it actually opens up even more value proposition for the customer. Because SSDs have had to emulate hard drives, there's been a lot of inefficiency and complexity inside an enterprise SSD dealing with things like garbage collection and right amplification reducing the endurance of the device. You have to over-provision, you have to insert as much as 20, 25, even 28% additional man bits inside the device just to allow for that extra space, that working space to deal with delete of data that are smaller than the block erase that the device supports. So you have to do a lot of reading and writing of data and cleaning up. It creates for a very complex environment. ZNS by mapping the zoned size with the physical structure of the SSD essentially eliminates garbage collection, it reduces over-provisioning by as much as 10x. And so if you were over provisioning by 20 or 25% on an enterprise SSD, and a ZNS SSD, that can be one or two percent. 
The other thing to keep in mind is that enterprise SSDs typically incorporate DRAM, and that DRAM is used to help manage all those dynamics that I just mentioned. But with a much simpler structure, where the pointers to the data can be managed without all the DRAM, we can actually reduce the amount of DRAM in an enterprise SSD by as much as 8x. And if you think about the bill of materials of an enterprise SSD, DRAM is number two on the list in terms of the most expensive BOM components. So ZNS SSDs actually have a significant customer total cost of ownership impact. It's an exciting standard, and now that we have the standard ratified through the NVME working group, it can really accelerate the development of the software ecosystem around it. >> Right, so let's shift gears and talk a little bit less about the tech and more about the customers and the implementation of this. So you talked kind of generally, but are there certain types of workloads that you're seeing in the marketplace where this is a better fit, or is it just really the big heavy lifts where they just need more, and this is better? And then secondly, within these hyperscale companies, as well as just regular enterprises that are also seeing their data demands grow dramatically, are you seeing that this is a solution that they want to bring in for kind of the marginal, kind of next data center, extension of their data center, or their next cloud region? Or are they doing lift and shift and ripping stuff out? Or do they have enough data growth organically that there's plenty of new stuff that they can put in these new systems? >> Yeah, I love that. The large customers don't rip and shift; they ride their assets for a long lifecycle, 'cause with the relentless growth of data, you're primarily investing to handle what's coming in over the transom. But we're seeing solid adoption. And in SMR, you know, we've been working on that for a number of years.
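Phil's DRAM point above follows from mapping granularity: a conventional flash translation layer keeps a page-granularity logical-to-physical table, while a zone-aware design only needs per-zone state. A rough sketch; the 8 TB capacity, 4 KiB pages, 4-byte entries, and 1 GiB zones are all illustrative assumptions, not figures from the interview:

```python
def map_size_bytes(capacity_bytes: int, granularity_bytes: int,
                   entry_bytes: int = 4) -> int:
    """DRAM needed for a flat logical-to-physical map at a given mapping
    granularity (a 4-byte entry per mapped unit is a common rule of thumb)."""
    return (capacity_bytes // granularity_bytes) * entry_bytes

TB = 10**12
page_map = map_size_bytes(8 * TB, 4096)   # conventional 4 KiB page-level map
zone_map = map_size_bytes(8 * TB, 2**30)  # zone-level map with 1 GiB zones

print(f"page-level map: ~{page_map / 2**30:.1f} GiB of DRAM")
print(f"zone-level map: ~{zone_map / 2**10:.0f} KiB of DRAM")
```

The gap between a gibibyte-scale page map and a kilobyte-scale zone map is several orders of magnitude, which is more than enough headroom to support the kind of DRAM reductions Phil describes (real controllers keep other state in DRAM too, so the achievable saving is smaller than this idealized ratio).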
We've got significant interest and investment, co-investment, our engineering and our customers' engineering, adapting the application environments to take advantage of SMR. The great thing is, now that we've got the ZNS standard ratified in the NVME working group, we've got a very similar, and all approved now, situation: we've got SMR standards that have been approved for some time in the SATA and SCSI standards, and now we've got the same thing in the NVME standard. And the great thing is, once a company goes through the lift, so to speak, to adapt an application, file system, operating system, ecosystem to zoned storage, it pretty much works seamlessly between HDD and SSD, and so it's not an incremental investment when you're switching technologies. Obviously the early adopters of these technologies are going to be the large companies who design their own infrastructure, who have mega fleets of racks of infrastructure, where these efficiencies really, really make a difference in terms of how they can monetize that data and how they compete against the landscape of competitors they have. For companies that are totally reliant on kind of off-the-shelf standard applications, that adoption curve is going to be longer, of course, because there are some software changes that you need to adapt to enable zoned storage. One of the things Western Digital has done and taken the lead on is creating a landing page for the industry with zonedstorage.io. It's a webpage that's actually an area where many companies can contribute Open source tools, code, validation environments, technical documentation. It's not a marketing website, it's really a website built to host actual Open source content that companies can use and leverage and contribute to, to accelerate the engineering work to adapt software stacks to zoned storage devices, and to share those things.
>> Let me just follow up on that, 'cause, again, you've been around for a while, and I want to get your perspective on the power of Open source. It used to be the best secrets, the best IP, were closely guarded and held inside, and now really we're in an age where that's not necessarily the case. And the brilliant minds and use cases and people out there, just by definition, there are more groups of engineers, more engineers, outside your building than inside your building, and how that's really changed kind of the strategy in terms of development, when you can leverage Open source. >> Yeah, Open source clearly has accelerated innovation across the industry in so many ways, and it's the paradigm around which companies have built business models and innovated on top of it. I think it's always important as a company to understand what value-add you're bringing, and what value-add the customers want to pay for. What unmet needs of your customers are you trying to solve for, and what's the best mechanism to do that? And do you want to spend your R&D recreating things, or leveraging what's available and innovating on top of it? It's all about ecosystem. I mean, the days where a single company could vertically integrate, top to bottom, a complete end solution, those are few and far between. I think it's about collaboration and building ecosystems and operating within those. >> Yeah, it's such an interesting change, and one more thing, again, to get your perspective: you run the data center group, but there's this little thing happening out there that we see growing, IOT, the industrial internet of things, and edge computing, as we try to move more compute and storage and power kind of outside the pristine world of the data center, and out towards where this data is being collected and processed, when you've got latency issues and all kinds of reasons to start to shift the balance of where the compute is, where the storage is, and the reliance on the network.
So when you look back from the storage perspective, in your history in this industry, and you start to see basically everything is now going to be connected, generating data, and a lot of it is even Open source. I talked to somebody the other day doing kind of open-source computer vision on surveillance video. So the amount of stuff coming off of these machines is growing in crazy ways. At the same time, it can't all be processed at the data center; it can't all be kind of shipped back, and then have a decision made, and then ship that information back out. So when you sit back and look at Edge from your kind of historical perspective, what goes through your mind, what gets you excited, what are some opportunities that you see that maybe the layman is not paying close enough attention to? >> Yeah, it's really an exciting time in storage. I get asked that question from time to time, having been in storage for more than 30 years, you know, what was the most interesting time? And there's been a lot of them, but I wouldn't trade today's environment for any other, in terms of just the velocity with which data is evolving, and how it's being used, and where it's being used. A TCO equation may describe what a data center looks like, but data locality will determine where it's located, and we're excited about the Edge opportunity. We see that as a pretty significant, meaningful part of the TAM as we look out three to five years. Certainly 5G is driving much of that; I think just any time you speed up the connected fabric, you're going to increase storage and increase the processing of data. So the Edge opportunity is very interesting to us. We think a lot of it is driven by low latency workloads, so the concept of NVME is very appropriate for that, we think; in general, SSDs deployed in Edge data centers, defined as anywhere from a meter to a few kilometers from the source of the data. We think that's going to be a very strong paradigm.
The workloads you mentioned, especially IOT, just machine-generated data in general, now, I believe, have eclipsed human-generated data in terms of just the amount of data stored, and so we think that curve is just going to keep going in terms of machine-generated data. Much of that data is so well suited for zoned storage, because it's sequentially captured and written, and it has a very consistent and homogeneous lifecycle associated with it. So we think what's going on with zoned storage in general, and ZNS and SMR specifically, are well suited for where a lot of the data growth is happening. And certainly we're going to see a lot of that at the Edge. >> Well, Phil, it's always great to talk to somebody who's been in the same industry for 30 years and is excited about today and the future, and as excited as they have been throughout their whole careers. So that really bodes well for you, bodes well for Western Digital, and we'll just keep hoping the smart people that you guys have over there keep working on the software and the physics and the mechanical engineering, and keep moving this stuff along. It's really just amazing and just relentless. >> Yeah, it is relentless. What's exciting to me in particular, Jeff, is we've driven storage advancements largely through, as I said, a number of engineering disciplines, and those are still going to be important going forward: the chemistry, the physics, the electrical, the hardware capabilities. But I think, as widely recognized in the industry, it's a diminishing curve. I mean, the amount of energy, the amount of engineering effort, investment, the cost and complexity of these products to get to that next capacity step, is getting more difficult, not less. And so things like zoned storage, where we now bring intelligent data placement to this paradigm, is what I think makes this current juncture that we're at very exciting. >> Right, right, well, it's applied AI, right?
Ultimately you're going to have more and more compute power driving the storage process and how that stuff is managed. As more cycles become available and they're cheaper, and ultimately compute gets cheaper and cheaper, as you said, you guys just keep finding new ways to move the curve in. And we didn't even get into the totally new material science, which is also coming down the pike at some point in time. >> Yeah, very exciting times. >> It's been great to catch up with you, I really enjoy the Western Digital story; I've been fortunate to sit in on a couple chapters, so again, congrats to you and we'll continue to watch and look forward to our next update. Hopefully it won't be another four years. >> Okay, thanks Jeff, I really appreciate the time. >> All right, thanks a lot. All right, he's Phil, I'm Jeff, you're watching theCUBE. Thanks for watching, we'll see you next time.

Published Date : Aug 25 2020



Dan Hubbard, Lacework | Cloud Native Insights


 

>> Narrator: From theCUBE Studios in Palo Alto in Boston, connecting with thought leaders around the globe, these are Cloud Native Insights. >> Hi, I'm Stu Miniman the host of cloud native insights. And when we started this weekly program, we look at Cloud Native and you know, what does that mean? And of course, one of the most important topics in IT coming into 2020 was security. And once the global pandemic hit, security went from the top issue to oh my gosh, it's even more important. I've said a few times on the program while most people are working from home, it did not mean that the bad actors went home, we've actually seen an increase in the need for security. So really happy to be able to dig in and talk about what is Cloud Native security, and what should that mean to users? And to help me dig into this important topic, happy to welcome back to the program one of our CUBE alumni Dan Hubbard, he is the CEO of Lacework. Dan thanks so much for joining us. >> Thanks Stu. Happy to be here. >> Alright, so we don't want to argue too much on the Cloud Native term, I agree with you and your team. It's a term that like cloud before, it doesn't necessarily have a lot of meaning. But when we talk about modernization, we talked about customers leveraging the opportunity in innovation and cloud security of course is super important. You know most of us probably remember back, you go back a few years and it's like, "Oh well I adopt cloud. "It's secure, right? "I mean, it should just be built into my platform. "And I should have to think about that." Well, I don't think there's anybody out there at least hopefully there's not anybody out there that thinks that anything that I go to will just be inherently fully secure. So give us a little bit if you would, you know where you see us here in 2020 security's a complex landscape. What are you seeing? 
>> Yeah, so you know a lot of people as you said, used to talk about what's called the shared responsibility model, which was the cloud provider is responsible for a bunch of things. Like the physical access to the data center, the network, the hypervisor and you know that the core file system and operating system and then you're responsible for everything else that you could configure. But there's something that's not talked about as much. And that's kind of the shared irresponsibility model that's happening within companies where developers are saying they're not responsible for security saying that they're moving too fast. And so what we are seeing is that you know, as people migrate to the cloud or of course are born in the cloud, this notion of DevSecOps, or you know SecDevOps whatever you want to call it, is really about the architecture and the organization. It's not just about technology, and it's not just about people. And it's more about layer seven and eight, than it is about layer one to three. And so there's a bunch of trends that we're seeing in successful companies and customers and prospects will be seeing the market around how do they get to that level of cooperation between the security and the developers in the operation teams? >> Yeah Dan, first of all fully agree with what you're saying. I know when I go to like serverless.com they've got everybody chanting that security is everyone's responsibility. You know I think back to DevOps as a trend, when I read the Phoenix project it was, oh hey, the security is not something that you do bolt on, we're looking at after it's something that you need to shift into everyone thinking about it. Security is just going to be baked in along the process all the way. So the DevOps fail us when it comes to security, why do we need DevSecOps? You know why are you know as you say seven and eight the you know, political and organizational challenges still so much of an issue you know, decades into this discussion? 
>> Yeah. You know I think there's a few moving parts here and kind of post COVID is even more interesting is that companies have incredibly strategic initiatives to build applications that are core to their business. And in post COVID it's almost existential to their business. If you think of you know, markets like retail and hospitality and restaurants you know, they have to figure out how to digitize and how to deliver their business without potentially physical you know, access to two locations. So as that speed has happened, some of the safety has been left behind. And it's easy to say you have to kind of you know, one of our mantras is to run with speed and safety. But it's kind of hard to run with scissors you know, and be safe at the same time. So some of it is just speed. And the other is that unfortunately, the security people in many ways and the security products and a lot of the security solutions that are out there, the incumbents if you will, are trying to deliver their current solution in a cloud way. So they're doing sometimes it's called Cloud built or you know what I call Cloud washing and they're delivering a system that's not applicable to the modern infrastructure in the modern way that developers are building. So then you have a clash between the teams of like, "Hey I want to do this." And then I'd be like, "No you can't do that get out of our way. "This is strategic to the business." So a lot of it has just been you know, kind of combination of all those factors. >> Alright so Dan, we'll go back to Cloud Native security, you talked about sometimes people are Cloud washing, or they're just taking what they had putting it in the cloud. Sometimes it's just, oh hey we've got a SaaS model on this. Other times I hear cloud native security, and it just means hey I've got some hooks into Containers or Kubernetes. What does modern security look like? Help us understand a little bit. 
You mentioned some of the you know, legacy vendors what they're doing. I see lots of new security startups, some in you know specifically in that, you know, Kubernetes space. There's already been some acquisitions there. So you know, what do you see out there? You know what's good, what's bad in the trends that you're seeing? >> Yeah so I think the one thing that we really believe is that this is such a large problem that you have to be 100% focused on it. You know if you're doing this, you know, securing your infrastructure and securing your modern applications, and doing other parts of the business whether it's you know securing the endpoints of the laptops of the company and the firewall and authentication and all kinds of other things you have competing interests. So focus is pretty key. And it's obviously a very large addressable problem. What the market is telling us is a few things. The first one is that automation is critical. They may not have as many people to solve the problem. And the problem set is moving at such a scale that it's very, very hard to keep up. So a lot of people ask me you know, what do I worry about? You know, how do I stay awake at night? Or how do I get to sleep? And really the things I'm worried most about in the way where I spend most of my time on the product side is about how fast are builders building? Not necessarily about the bad guys. Now the bad guys are coming and they're doing all kinds of innovative and interesting things. But usually it starts off with the good guys and how they're deploying and how they're building. And you know, the cloud providers literally are releasing API's and new acronyms almost weekly it seems. So like new technology is being created such a scale. So automation the ability to adapt to that is one key message that we hear from the customers. The other is that it has to solve or go across multiple categories. So although things like Kubernetes and Containers are very popular today. 
The cloud security stack and its challenges are much more complex than that. You've got infrastructure as code, you've got serverless, you've got kind of fragmented workloads, whether some are Containers, some are VMs, maybe some are AMIs and then some are Kubernetes. So you've got a very fragmented world out there, and all of it needs to be secured. And then the last one, probably the most consistent theme we're hearing, is that as DevOps becomes involved, because they know the application and the stack much better than security, it has to fit into your modern workflow of DevOps. So that means you know, deep integrations into Jira and Slack and PagerDuty and New Relic and Datadog are a lot more important than integrating with your you know, Palo Alto firewall and your Cisco IDS system and your endpoint antivirus. So those are the real key trends that we're seeing from the customers. >> Yeah Dan, you bring up a really important point, leveraging automation. I'm wondering what you're hearing from customers, because there definitely is a little bit of concern, especially if you take something like security and say, okay well, automation. Is that something that I'm just going to let the system do it? Or is it giving me to getting me to a certain point that then a human makes the final decision and enacts what's going to happen there? Where are we along that journey? >> Yeah, so I think of automation in two lenses. The first lens is efficacy, which is you know do I have to write rules? And do I have to tune, train and alter the system over time? Or can it do that on my behalf? Or is there a combination of both? So the notion of people writing rules and building rules is very, very hard in this world because things are moving so quickly. The threat surface and the threat attacks are just changing. 
And typically what happens when you write rules is they're either too narrow and you miss things, or they're too broad and you just get way too much noise. So there's automating the efficacy of the system. That's one that's really critical. The other one that is becoming more important is what in the past was called enforcement. And this is how do I automate a response to your efficacy. And in this scenario it's very, very early days. Some vendors have come out and said you know, we can do full remediation and blocking. And typically what happens is the DevOps team kind of gives the Heisman to the security team and says, "No, you're not doing that." You know, these are my production servers, and my infrastructure that's you know running our business, you can't block anything without us knowing about it. So I think we're really early. I believe that you know we're going to move to a world that's more about orchestration and automation, where there's a set of parameters where you can orchestrate certain things, or maybe an ops assist mode. You know for example, we have some customers that will send our alerts to Slack, then they have a Slack bot and they say, "Okay, is it okay that Bob just opened an S3 bucket in this region, yes or no?" No, and then it runs a serverless function and closes it. So there's kind of what we call driver assist mode versus, you know, full no-one-behind-the-steering-wheel today. But I think it's going to mature over time.
Hopefully you know that's something you see out there and want to understand, you know, how the security industry in general, and maybe Lacework specifically, is helping customers get their arms a little bit more around that multi cloud challenge if you will? >> Yeah. So I totally agree. I think we have this Silicon Valley, West Coast bias that the world is all, you know, great. And it's this utopia of Kubernetes, modern infrastructure, everything runs up and down, and it's all, you know, super easy. The reality is much different. Even the most sophisticated sets of infrastructure at the most sophisticated customers are very fragmented and diverse. The other challenge that security runs into is that a lot of traditional security mindsets are all about point in time. And they're really all about inventory. So you know, you used to be able to ask a security person, how many servers do you have? Where are they? What are they doing? They'd say, "Oh, you know we have 10 racks with 42 servers in each rack. And here are our IP addresses." Nowadays, the answer is kind of like, "I don't know, what time is it? You know, how busy is the service?" It's very ephemeral. So you have to have a system which can adapt to the ephemeral nature of everything. So you know, in the past it was really difficult to spin up, say, 10,000 servers in an Asia data center for four hours to do research. Security would probably know if that was happening; they would know through a number of different ways: it would require a big change control window, it would be really hard, they'd have to ship the units, bake them in, et cetera. Nowadays that's like three lines of code. So the security people have to know and get visibility into the changes, and have an engine which can determine those changes and what the risk profile of those changes is, in near real time. 
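Dan's point about ephemeral infrastructure defeating point-in-time inventory can be sketched in a few lines. This is an illustrative toy, not anything from Lacework; the event log, hostnames and timestamps are invented for the example:

```python
# Toy illustration (not Lacework code): why a point-in-time inventory
# misses ephemeral infrastructure. A batch of research servers is spun
# up for four hours and is gone before the audit snapshot is taken.

events = [
    ("09:00", "launch", "research-asia-001"),
    ("09:00", "launch", "research-asia-002"),
    ("13:00", "terminate", "research-asia-001"),
    ("13:00", "terminate", "research-asia-002"),
]

def snapshot_at(t, events):
    """Point-in-time inventory: which hosts exist at time t?"""
    live = set()
    for ts, kind, host in events:
        if ts > t:
            break
        if kind == "launch":
            live.add(host)
        else:
            live.discard(host)
    return live

def ever_ran(events):
    """Change-driven view: every host that existed at any point."""
    return {host for _, kind, host in events if kind == "launch"}

print(snapshot_at("14:00", events))  # set() -- the audit sees nothing
print(sorted(ever_ran(events)))      # both research servers show up
```

A security engine that only takes the 14:00 snapshot reports an empty fleet; one that watches the change stream sees both servers and can assess their risk while they were live.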
Yeah, what we've seen is the monitoring companies out there now talking all about observability. It's real time, it's streaming. You know it reminds me of, you know, my physics. So, you know, Heisenberg's uncertainty principle: when you try to measure something, you already can't, because it's already changed. So what does that mean-- >> Dan: Yeah. >> You know, what does security look like in my, you know, real time, serverless, ever changing world? You know, how is it that we are going to be able to stay secure? >> Yeah, so I think there are some really positive trends. The first one is that this is kind of a reboot. So this is kind of a restart. You know there are things we've learned in the past that we can bring forward, but it's also an opportunity to kind of clean the slate and think about how we can rebuild the infrastructure. The first kind of key one is that over time, security in the traditional data center started understanding less and less about the application. What they did was they built this big fortress around it, some called it defense in depth, you know, the Security Onion, whatever you want to call it, you know, the M&M'S. But they were really lacking in the understanding of the application. So now security really has to understand the application because that's the core of what's important. And that allows them to be smarter about what are the changes in their environment, and if those are good, bad or indifferent. The other thing that I think is interesting is that compliance was kind of a dirty word that no one really wanted to talk about. It was kind of this boring thing where auditors would show up once every six months, go through a very complex checklist and say you're okay. Now compliance is actually very sophisticated. And the ability to look at your configuration in near real time and understand if you are compliant or following best practices is real. And we do that for our customers all the time. 
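The near-real-time compliance evaluation and drift detection Dan describes can be sketched roughly as follows. The rule names and configuration fields here are invented for illustration; a real product's checks would map to standards such as the CIS benchmarks:

```python
# Illustrative sketch (assumed rule names and config shape): evaluate
# resource configurations against a rule set and report which rules
# newly fail between two evaluations -- i.e., compliance drift.

RULES = {
    "s3_public_access_blocked": lambda cfg: cfg.get("block_public_access", False),
    "encryption_at_rest": lambda cfg: cfg.get("encrypted", False),
}

def evaluate(resources):
    """Map each resource name to the rules it currently violates."""
    return {
        name: sorted(rule for rule, check in RULES.items() if not check(cfg))
        for name, cfg in resources.items()
    }

def drift(previous, current):
    """Rules that fail now but passed before -- drifting out of compliance."""
    out = {}
    for name, failing in current.items():
        new_failures = sorted(set(failing) - set(previous.get(name, [])))
        if new_failures:
            out[name] = new_failures
    return out

before = evaluate({"logs-bucket": {"block_public_access": True, "encrypted": True}})
after = evaluate({"logs-bucket": {"block_public_access": False, "encrypted": True}})
print(drift(before, after))  # {'logs-bucket': ['s3_public_access_blocked']}
```

Running `evaluate` continuously against a stream of configuration changes, rather than at a six-month audit, is what turns the checklist into the minute-level drift signal described above.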
You know we can tell them how they're doing against the compliance standard within a you know, a minute timeframe. And we can tell that they're drifting in and out of that. And the last one and the one that I think most are excited about is really the journey towards least privileges and minimizing the scope of your attack surface within your developers and their access in your infrastructure. Now it's... We're pretty far from there, it's an easy thing to say it's a pretty hard thing to do. But getting towards and driving towards that journey of least privilege I think is where most people are looking to go. >> Alright Dan, I want to go back to something that we talked about early in the conversation, that relationship with the cloud providers themselves, so you know talking AWS, Azure, Google Cloud and the like. How should customers be thinking about how they manage security, dealing with them dealing with companies like Lacework and the ecosystem you mentioned in companies like Datadog and the New Relic? You know how do they sort through and manage how they can maintain those relationships? >> So there's kind of the layer eight relationships, of course which are starting you know in particular with the cloud providers, it's a lot more about bottoms up relationships and very technical understanding of product and features, than it is about being on the golf course, and you know eating steak dinners. And that's very different you know, security and buying IT infrastructure was very relationship driven in the past. Now you really especially with SaaS and subscriptions, you're really proving out your technology every day. You know I say kind of trust is built on consistent positive results over time. So you really have to have trust within your solution and within that service and that trust is built on obviously a lot of that go to market business side. 
But more often than not it's now being built on the ability for that solution to get better over time because it's a subscription. You know how do you deliver more features and increase value to the customer as you do more things over time? So that's really, really important. The other one is like, how do I integrate the technology together? And I believe it's more important for us to integrate our stack with the cloud provider with the adjacent spaces like APM and metrics and monitoring and with open source, because open source really is a core component to this. So how do we have the API's and integrations and the hooks and the visibility into all of those is really, really important for our customers in the market? >> Well Dan as I said at the beginning, security is such an important topic to everyone out there. You know we've seen from practitioners we talked to for the last few years not only is it a top issue it's a board level discussion for pretty much every company out there. So I want to give you the final word as to in today's you know modern era, what advice do you give to users out there to make sure that they are staying as secure as possible? >> Yeah so you know first and foremost, people often say, "Hey you know, when we build our business, "you know, it'd be a good problem to start have to worry "about customers and you know, "all kinds of people using the service. "And you know, we'll worry about security then." And it's easy lip service to say start it as early as possible. The reality is sometimes it's hard to do that. You've got all kinds of competing interests, you're trying to build a business and an application and everything else depending obviously, the maturity of your organization. I would say that this is a great time to kind of crawl, walk, run. And you don't have to think about it. If you're building in the cloud you don't have to think of the end game you know right away, you can kind of stair step into that. 
So you know my suggestion to people that are moving into the cloud is really think about compliance and configuration best practices first and visibility, and then start thinking of the more complex things like triage alerts and how does that fit into my workflow? How do I look at breaches down the line? Now for the more mature orgs that are taking, you know an application or a new application or Stack and just dropping it in, those are the ones that should really think about how do I fit security into this new world order? And how do I make it as part of the design process? And it's not about how do I take my existing security stack and move it over? That's like taking, you know a centralized application moving to the cloud and calling it cloud. You know if you're going to build in the cloud, you have to secure it the same way that you're building it in a modern way. So really think about you know, modern, you know new generation vendors and solutions and a combination of kind of your provider, maybe some open source and then a service, of course like Lacework. >> Alright well Dan Hubbard, thank you so much for helping us dig into this important topic Cloud Native security, pleasure talking with you. >> Thank you. Have a great day. >> And I'm Stu Miniman your hosts for Cloud Native Insights and looking forward to hearing more of your Cloud Native Insights in the future. (upbeat music)

Published Date : Jul 24 2020



Vertica Big Data Conference Keynote


 

>> Joy: Welcome to the Virtual Big Data Conference. Vertica is so excited to host this event. I'm Joy King, and I'll be your host for today's Big Data Conference Keynote Session. It's my honor and my genuine pleasure to lead Vertica's product and go-to-market strategy. And I'm so lucky to have a passionate and committed team who turned our Vertica BDC event, into a virtual event in a very short amount of time. I want to thank the thousands of people, and yes, that's our true number who have registered to attend this virtual event. We were determined to balance your health, safety and your peace of mind with the excitement of the Vertica BDC. This is a very unique event. Because as I hope you all know, we focus on engineering and architecture, best practice sharing and customer stories that will educate and inspire everyone. I also want to thank our top sponsors for the virtual BDC, Arrow, and Pure Storage. Our partnerships are so important to us and to everyone in the audience. Because together, we get things done faster and better. Now for today's keynote, you'll hear from three very important and energizing speakers. First, Colin Mahony, our SVP and General Manager for Vertica, will talk about the market trends that Vertica is betting on to win for our customers. And he'll share the exciting news about our Vertica 10 announcement and how this will benefit our customers. Then you'll hear from Amy Fowler, VP of strategy and solutions for FlashBlade at Pure Storage. Our partnership with Pure Storage is truly unique in the industry, because together modern infrastructure from Pure powers modern analytics from Vertica. And then you'll hear from John Yovanovich, Director of IT at AT&T, who will tell you about the Pure Vertica Symphony that plays live every day at AT&T. Here we go, Colin, over to you. >> Colin: Well, thanks a lot joy. And, I want to echo Joy's thanks to our sponsors, and so many of you who have helped make this happen. 
This is not an easy time for anyone. We were certainly looking forward to getting together in person in Boston during the Vertica Big Data Conference and Winning with Data. But I think all of you and our team have done a great job, scrambling and putting together a terrific virtual event. So really appreciate your time. I also want to remind people that we will make both the slides and the full recording available after this. So for any of those who weren't able to join live, that is still going to be available. Well, things have been pretty exciting here. And in the analytic space in general, certainly for Vertica, there's a lot happening. There are a lot of problems to solve, a lot of opportunities to make things better, and a lot of data that can really make every business stronger, more efficient, and frankly, more differentiated. For Vertica, though, we know that focusing on the challenges that we can directly address with our platform, and our people, and where we can actually make the biggest difference is where we ought to be putting our energy and our resources. I think one of the things that has made Vertica so strong over the years is our ability to focus on those areas where we can make a great difference. So for us as we look at the market, and we look at where we play, there are really three recent and some not so recent, but certainly picking up a lot of the market trends that have become critical for every industry that wants to Win Big With Data. We've heard this loud and clear from our customers and from the analysts that cover the market. If I were to summarize these three areas, this really is the core focus for us right now. We know that there's massive data growth. And if we can unify the data silos so that people can really take advantage of that data, we can make a huge difference. We know that public clouds offer tremendous advantages, but we also know that balance and flexibility is critical. 
And we all need the benefits of machine learning, from all the model types up to full data science; we all need the benefits that they can bring to every single use case, but only if they can really be operationalized at scale, accurately and in real time. And the power of Vertica is, of course, how we're able to bring so many of these things together. Let me talk a little bit more about some of these trends. So one of the first industry trends that we've all been following, probably now for over the last decade, is Hadoop and specifically HDFS. So many companies have invested time, money and, more importantly, people in leveraging the opportunity that HDFS brought to the market. HDFS is really part of a much broader storage disruption that we'll talk a little bit more about, more broadly than HDFS. But HDFS itself was really designed for petabytes of data, leveraging low cost commodity hardware and the ability to capture a wide variety of data formats, from a wide variety of data sources and applications. And I think what people really wanted was to store that data before having to define exactly what structures they should go into. So over the last decade or so, the focus for most organizations has been figuring out how to capture, store and frankly manage that data. And as a platform to do that, I think Hadoop was pretty good. It certainly changed the way that a lot of enterprises think about their data and where it's locked up. In parallel with Hadoop, particularly over the last five years, Cloud Object Storage has also given every organization another option for collecting, storing and managing even more data. That has led to a huge growth in data storage, obviously, up on public clouds like Amazon and their S3, Google Cloud Storage and Azure Blob Storage, just to name a few. And then when you consider regional and local object storage offered by cloud vendors all over the world, the explosion of data leveraging this type of object storage is very real. 
And I think, as I mentioned, it's just part of this broader storage disruption that's been going on. But with all this growth in the data, in all these new places to put this data, every organization we talk to is facing even more challenges now around the data silo. Sure the data silos certainly getting bigger. And hopefully they're getting cheaper per bit. But as I said, the focus has really been on collecting, storing and managing the data. But between the new data lakes and many different cloud object storage combined with all sorts of data types from the complexity of managing all this, getting that business value has been very limited. This actually takes me to big bet number one for Team Vertica, which is to unify the data. Our goal, and some of the announcements we have made today plus roadmap announcements I'll share with you throughout this presentation. Our goal is to ensure that all the time, money and effort that has gone into storing that data, all the data turns into business value. So how are we going to do that? With a unified analytics platform that analyzes the data wherever it is HDFS, Cloud Object Storage, External tables in an any format ORC, Parquet, JSON, and of course, our own Native Roth Vertica format. Analyze the data in the right place in the right format, using a single unified tool. This is something that Vertica has always been committed to, and you'll see in some of our announcements today, we're just doubling down on that commitment. Let's talk a little bit more about the public cloud. This is certainly the second trend. It's the second wave maybe of data disruption with object storage. And there's a lot of advantages when it comes to public cloud. There's no question that the public clouds give rapid access to compute storage with the added benefit of eliminating data center maintenance that so many companies, want to get out of themselves. But maybe the biggest advantage that I see is the architectural innovation. 
The public clouds have introduced so many methodologies around how to provision quickly, separating compute and storage and really dialing in the exact needs on demand, as you change workloads. When public clouds began, it made a lot of sense for the cloud providers and their customers to charge and pay for compute and storage in the ratio that each use case demanded. And I think you're seeing that trend proliferate all over the place, not just up in public cloud. That architecture itself is really becoming the next generation architecture for on-premise data centers as well. But there are a lot of concerns. I think we're all aware of them. They're out there. Many times, for different workloads, there are higher costs. Especially for some of the workloads that are being run through analytics, which tend to run all the time. Just like some of the silo challenges that companies are facing with HDFS, data lakes and cloud storage, the public clouds have similar types of silo challenges as well. Initially, there was a belief that the clouds were cheaper than data centers, and for certain elastic workloads, that is the case. But when you add in all the costs, I don't think that's true across the board overall. Even to the point where a lot of the cloud vendors aren't just charging lower costs anymore. We hear from a lot of customers that they don't really want to tether themselves to any one cloud because of some of those uncertainties. Of course, security and privacy are a concern. We hear a lot of concerns with regards to cloud, and even some SaaS vendors, around shared data catalogs across all the customers and not enough separation. But security concerns are out there, you can read about them. I'm not going to jump on that bandwagon. But we hear about them.
And then, of course, I think one of the things we hear the most from our customers is that each cloud stack is starting to feel even more locked in than the traditional data warehouse appliance. And as everybody knows, the industry has been running away from appliances as fast as it can. And so they're not eager to get locked into another, quote, unquote, virtual appliance, if you will, up in the cloud. They really want to make sure they have flexibility in which clouds they're going to today, tomorrow and in the future. And frankly, we hear from a lot of our customers that they're very interested in eventually mixing and matching compute from one cloud with, say, storage from another cloud, which I think is something that we'll hear a lot more about. And so for us, that's why we've got our big bet number two. We love the cloud. We love the public cloud. We love the private clouds, on-premise, and other hosting providers. But our passion and commitment is for Vertica to be able to run in any of the clouds that our customers choose, and make it portable across those clouds. We have supported on-premises and all public clouds for years. And today, we have announced even more support for Vertica in Eon Mode, the deployment option that leverages the separation of compute from storage, with even more deployment choices, which I'm going to also touch more on as we go. So super excited about our big bet number two. And finally, as I mentioned, for all the hype that there is around machine learning, I actually think that most importantly, this third trend that Team Vertica is determined to address is the need to bring business critical analytics, machine learning and data science projects into production. For so many years, there just wasn't enough data available to justify the investment in machine learning. Also, processing power was expensive, and storage was prohibitively expensive.
But to train and score and evaluate all the different models to unlock the full power of predictive analytics was tough. Today you have those massive data volumes. You have the relatively cheap processing power and storage to make that dream a reality. And if you think about this, I mean with all the data that's available to every company, the real need is to operationalize the speed and the scale of machine learning so that these organizations can actually take advantage of it where they need to. I mean, we've seen this for years with Vertica, going back to some of the most advanced gaming companies in the early days; they were incorporating this with live data directly into their gaming experiences. Well, every organization wants to do that now. And accuracy, repeatability and real-time action are all key to separating the leaders from the rest of the pack in every industry when it comes to machine learning. But if you look at a lot of these projects, the reality is that there's a ton of buzz, there's a ton of hype spanning every acronym that you can imagine. But most companies are struggling, due to the separate teams, different tools, silos and the limitations that many platforms are facing: driving down-sampling to get a small subset of the data to try to create a model that then doesn't apply, or compromising accuracy and making it virtually impossible to replicate models and understand decisions. And if there's one thing that we've learned when it comes to data, it's prescriptive data at the atomic level, being able to show an "N of one" as we refer to it, meaning individually tailored data. No matter what it is, healthcare, entertainment experiences like gaming or other, being able to get at the granular data to make these decisions and do that scoring applies to machine learning just as much as it applies to giving somebody a next-best-offer. But the opportunity has never been greater.
The need is to integrate this end-to-end workflow and support the right tools without compromising on that accuracy. Think about it as no down-sampling, using all the data; it really is key to machine learning success. Which is why it should be no surprise that the third big bet from Vertica is one that we've actually been working on for years. And we're so proud to be where we are today, helping the data disruptors across the world operationalize machine learning. This big bet has the potential to truly unlock the potential of machine learning. And today, we're announcing some very important new capabilities specifically focused on unifying the work being done by the data science community, with their preferred tools and platforms, and the volume of data and performance at scale available in Vertica. Our strategy has been very consistent over the last several years. As I said in the beginning, we haven't deviated from our strategy. Of course, there's always things that we add. Most of the time, it's customer driven, it's based on what our customers are asking us to do. But I think we've also done a great job not trying to be all things to all people. Especially as these hype cycles flare up around us, we absolutely love participating in these different areas without getting completely distracted. I mean, there's a variety of query tools and data warehouses and analytics platforms in the market. We all know that. There are tools and platforms that are offered by the public cloud vendors, and by other vendors that support one or two specific clouds. There are appliance vendors, who I was referring to earlier, who can deliver packaged data warehouse offerings for private data centers. And there's a ton of popular machine learning tools, languages and other kits. But Vertica is the only advanced analytics platform that can do all this, that can bring it together. We can analyze the data wherever it is, in HDFS, S3 Object Storage, or Vertica itself.
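The "no down-sampling" point above can be made concrete with a tiny, purely illustrative sketch (this is synthetic data and plain Python, not Vertica code): when positives are rare, a naive small sample can miss them entirely, so a model trained on it never sees the class it is supposed to predict.

```python
# Illustrative only: how down-sampling can hide rare events.
# Synthetic dataset -- exactly 1 positive ("fraud") per 1,000 rows.
events = [1 if i % 1000 == 0 else 0 for i in range(100_000)]  # 100 positives total

full_rate = sum(events) / len(events)      # rate measured over ALL the data

sample = events[1:301]                     # a naive 300-row sample
sample_rate = sum(sample) / len(sample)    # the rare class vanished entirely

print(f"full-data rate: {full_rate:.4%}")  # 0.1000%
print(f"sampled rate:   {sample_rate:.4%}")  # 0.0000% -- model never sees a positive
```

The numbers are made up, but the failure mode is exactly the one described: a model built on the subset "then doesn't apply" to the full population.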
Natively, we support multiple clouds and on-premise deployments. And maybe most importantly, we offer that choice of deployment modes to allow our customers to choose the architecture that works for them right now. It also gives them the option to change, move and evolve over time. And Vertica is the only analytics database with end-to-end machine learning that can truly operationalize ML at scale. And I know it's a mouthful. But it is not easy to do all these things. It is one of the things that highly differentiates Vertica from the rest of the pack. It is also why our customers, all of you, continue to bet on us and see the value that we are delivering and we will continue to deliver. Here's a couple of examples of some of our customers who are powered by Vertica. It's the scale of data. It's the millisecond response times. Performance and scale have always been a huge part of what we have been about, though not the only thing. I think of the functionality, all the capabilities that we add to the platform, the ease of use, the flexibility, obviously with the deployment. But look at some of the numbers that are under these customers on this slide. And I've shared a lot of different stories about these customers, which, by the way, still amaze me every time I talk to one and get the updates. You can see the power and the difference that Vertica is making. Equally important, if you look at a lot of these customers, they are the epitome of being able to deploy Vertica in a lot of different environments. Many of the customers on this slide are not using Vertica just on-premise or just in the cloud. They're using it in a hybrid way. They're using it in multiple different clouds. And again, we've been with them on that journey throughout, which is what has made this product and frankly, our roadmap and our vision exactly what it is. It's been quite a journey. And that journey continues now with the Vertica 10 release.
The Vertica 10 release is obviously a massive release for us. But if you look back, you can see that we're building on that native columnar architecture that started a long time ago, obviously, with the C-Store paper. We built it to leverage commodity hardware, because it was an architecture that was never tightly integrated with any specific underlying infrastructure. I still remember hearing the initial pitch from Mike Stonebraker about the vision of Vertica as a software-only solution and the importance of separating the company from hardware innovation. And at the time, Mike basically said to me, "there's so much R&D and innovation that's going to happen in hardware, we shouldn't bake hardware into our solution. We should do it in software, and we'll be able to take advantage of that hardware." And that is exactly what has happened. But one of the most recent innovations that we embraced with hardware is certainly that separation of compute and storage. As I said previously, the public cloud providers offered this next generation architecture really to ensure that they can provide the customers exactly what they needed, more compute or more storage, and charge for each, respectively. The separation of compute from storage is a major milestone in data center architectures. If you think about it, it's really not only a public cloud innovation, though. It fundamentally redefines the next generation data architecture for on-premise and for pretty much every way people are thinking about computing today. And that goes for software too. Object storage is an example of a cost-effective means for storing data. And even more importantly, separating compute from storage for analytic workloads has a lot of advantages. Including the opportunity to manage much more dynamic, flexible workloads. And more importantly, truly isolate those workloads from others.
And by the way, once you start having something that can truly isolate workloads, then you can have the conversations around autonomic computing, around setting up some nodes, some compute resources, on the data that won't affect any of the other workloads, to do some things on their own, maybe some self-analytics by the system, etc. A lot of things that many of you know we've already been exploring in terms of our own system data in the product. But it was May 2018, believe it or not, it seems like a long time ago, where we first announced Eon Mode. And I want to make something very clear, actually, about Eon Mode. It's a mode, it's a deployment option for Vertica customers. And I think this is another huge benefit that we don't talk about enough. But unlike a lot of vendors in the market who will ding you and charge you for every single add-on, you name it, you get this with the Vertica product. If you continue to pay support and maintenance, this comes with the upgrade. This comes as part of the new release. So any customer who owns or buys Vertica has the ability to set up either Enterprise Mode or Eon Mode, which is a question I know that comes up sometimes. Our first announcement of Eon was obviously for AWS customers, including The Trade Desk and AT&T, most of whom will be speaking here later at the Virtual Big Data Conference. They saw a huge opportunity. Eon Mode not only allowed Vertica to scale elastically with that specific compute and storage that was needed, but it really dramatically simplified database operations, including things like workload balancing, node recovery, compute provisioning, etc. So one of the most popular functions is that ability to isolate the workloads and really allocate those resources without negatively affecting others. And even though traditional data warehouses, including Vertica Enterprise Mode, have been able to do lots of different workload isolation, it's never been as strong as Eon Mode.
Well, it certainly didn't take long for our customers to see that value across the board with Eon Mode, not just up in the cloud. In partnership with one of our most valued partners and a platinum sponsor here, who Joy mentioned at the beginning, we announced Vertica Eon Mode for Pure Storage FlashBlade in September 2019. And again, just to be clear, this is not a new product, it's one Vertica with yet more deployment options. With Pure Storage, Vertica in Eon Mode is not limited in any way by variable cloud network latency. The performance is actually amazing when you take the benefits of separating compute from storage and you run it with a Pure environment on-premise. Vertica in Eon Mode has a super smart cache layer that we call the depot. It's a big part of our secret sauce around Eon Mode. And combined with the power and performance of Pure's FlashBlade, Vertica became the industry's first advanced analytics platform that actually separates compute and storage for on-premises data centers. Something that a lot of our customers are already benefiting from, and we're super excited about it. But as I said, this is a journey. We don't stop, we're not going to stop. Our customers need the flexibility of multiple public clouds. So today with Vertica 10, we're super proud and excited to announce support for Vertica in Eon Mode on Google Cloud. This gives our customers the ability to use their Vertica licenses on Amazon AWS, on-premise with Pure Storage, and on Google Cloud. Now, we were talking about HDFS, and a lot of our customers who have invested quite a bit in HDFS, especially as a place to store data, have been pushing us to support Eon Mode with HDFS. So as part of Vertica 10, we are also announcing support for Vertica in Eon Mode using HDFS as the communal storage.
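The depot described above is a cache layer between the compute nodes and communal storage. The toy sketch below illustrates only the general caching idea, keep hot data local, fall back to communal storage on a miss, with a simple LRU policy; the class and names are invented for illustration and are in no way Vertica's actual implementation.

```python
from collections import OrderedDict

class ToyDepot:
    """Toy LRU cache standing in for the *idea* of a depot: serve repeat
    reads locally, fetch from communal storage (S3/HDFS) on a miss.
    Invented for illustration; not Vertica's real depot."""
    def __init__(self, capacity, communal):
        self.capacity = capacity
        self.communal = communal           # a dict standing in for S3/HDFS
        self.cache = OrderedDict()
        self.misses = 0

    def read(self, shard):
        if shard in self.cache:
            self.cache.move_to_end(shard)  # mark as most recently used
            return self.cache[shard]
        self.misses += 1                   # slow path: go to communal storage
        data = self.communal[shard]
        self.cache[shard] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used entry
        return data

communal = {f"shard{i}": f"data{i}" for i in range(10)}
depot = ToyDepot(capacity=3, communal=communal)
for s in ["shard1", "shard2", "shard1", "shard3", "shard1"]:
    depot.read(s)
print(depot.misses)  # 3 -- the repeat reads of shard1 were served locally
```

The point of the design is exactly what the talk claims: repeat reads of hot data never pay the communal-storage latency, which is why compute can be separated from storage without giving up performance.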
Vertica's own ROS format data can be stored in HDFS, and actually the full functionality of Vertica, its complete analytics, geospatial, pattern matching, time series, machine learning, everything that we have in there, can be applied to this data. And on the same HDFS nodes, Vertica can actually also analyze data in ORC or Parquet format, using External tables. We can also execute joins between the ROS data and the data the External tables hold, which powers a much more comprehensive view. So again, it's that flexibility to be able to support our customers wherever they need us to support them, on whatever platform they have. Vertica 10 gives us a lot more ways that we can deploy Eon Mode in various environments for our customers. It allows them to take advantage of Vertica in Eon Mode and the power that it brings, with that separation, with that workload isolation, on whichever platform they are most comfortable with. Now, there's a lot that has come in Vertica 10. I'm definitely not going to be able to cover everything. But we also introduced complex types, as an example. And complex data types fit very well into Eon as well, in this separation. They significantly reduce the data pipeline and the cost of moving data around, provide much better support for unstructured data, which a lot of our customers mix with structured data, of course, and they leverage the columnar execution that Vertica provides. So you get complex data types in Vertica now, a lot more data, stronger performance. It goes great with the announcement that we made with the broader Eon Mode. Let's talk a little bit more about machine learning. We've actually been doing work in and around machine learning, with various regressions and a whole bunch of other algorithms, for several years. We saw the huge advantage that MPP offered, not just as a SQL engine or as a database, but for ML as well.
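The pattern described above, one query that joins natively managed data against an external file, can be sketched in miniature with the standard library. To be clear, this uses sqlite3 purely as a stand-in: the `sales` table plays the role of native columnar data and the CSV plays the role of a Parquet/ORC file behind an external table; none of this is Vertica syntax, and the table and column names are invented.

```python
import csv, io, sqlite3

con = sqlite3.connect(":memory:")

# Stands in for natively managed warehouse data.
con.execute("CREATE TABLE sales (sku TEXT, qty INT)")
con.executemany("INSERT INTO sales VALUES (?, ?)", [("a1", 3), ("b2", 5)])

# Stands in for an external table over a file sitting in HDFS or S3.
external_csv = "sku,price\na1,10\nb2,4\n"
rows = list(csv.DictReader(io.StringIO(external_csv)))
con.execute("CREATE TABLE ext_prices (sku TEXT, price INT)")
con.executemany("INSERT INTO ext_prices VALUES (?, ?)",
                [(r["sku"], int(r["price"])) for r in rows])

# One query joining the "native" and "external" data -- the pattern the talk describes.
revenue = con.execute("""
    SELECT s.sku, s.qty * p.price
    FROM sales s JOIN ext_prices p ON s.sku = p.sku
    ORDER BY s.sku
""").fetchall()
print(revenue)  # [('a1', 30), ('b2', 20)]
```

The value of the pattern is that the external data never has to be copied into the warehouse before it can participate in analytics.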
It didn't take us long to realize that there's a lot more to operationalizing machine learning than just those algorithms. It's data preparation, it's the model training. It's the scoring, the shaping, the evaluation. That is so much of what machine learning and frankly, data science is about. You know, everybody always wants to jump to the sexy algorithm, and we handle those other tasks very, very well. It makes Vertica a terrific platform to do that. A lot of work in data science and machine learning is done in other tools. I had mentioned that there's just so many tools out there. We want people to be able to take advantage of all that. We never believed we were going to be the best algorithm company or come up with the best models for people to use. So with Vertica 10, we support PMML. We can now import and export PMML models. It's a huge step for us around operationalizing machine learning projects for our customers. Allowing the models to get built outside of Vertica, yet be imported in and then applied to that full scale of data, with all the performance that you would expect from Vertica. We are also integrating more tightly with Python. As many of you know, we've been doing a lot of open source projects with the community, driven by many of our customers, like Uber. And so now with Python, we've integrated with TensorFlow, allowing data scientists to build models in their preferred language, to take advantage of TensorFlow, but again, to store and deploy those models at scale with Vertica. I think both these announcements are proof of our big bet number three, and really our commitment to supporting innovation throughout the community by operationalizing ML with that accuracy, performance and scale of Vertica for our customers. Again, there's a lot of steps when it comes to the workflow of machine learning. These are some of them that you can see on the slide, and it's definitely not linear either. We see this as a circle.
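PMML, the interchange format mentioned above, is an XML standard from the Data Mining Group: a model trained in one tool is serialized as XML and any PMML consumer can score with it. Here's a toy sketch of that round trip using only the standard library; the hand-written regression model below is invented for illustration (real PMML files come out of tools like scikit-learn or R exporters, and this says nothing about Vertica's specific import functions).

```python
import xml.etree.ElementTree as ET

# A minimal, hand-written PMML-style document: a linear regression
# y = 1.5 + 2.0 * x. Illustrative only -- real PMML is produced by
# exporting a trained model from an ML tool.
pmml = """<PMML version="4.4" xmlns="http://www.dmg.org/PMML-4_4">
  <DataDictionary numberOfFields="2">
    <DataField name="x" optype="continuous" dataType="double"/>
    <DataField name="y" optype="continuous" dataType="double"/>
  </DataDictionary>
  <RegressionModel functionName="regression">
    <RegressionTable intercept="1.5">
      <NumericPredictor name="x" coefficient="2.0"/>
    </RegressionTable>
  </RegressionModel>
</PMML>"""

# A consumer parses the XML and recovers the model parameters...
ns = {"p": "http://www.dmg.org/PMML-4_4"}
root = ET.fromstring(pmml)
table = root.find(".//p:RegressionTable", ns)
intercept = float(table.get("intercept"))
coef = float(table.find("p:NumericPredictor", ns).get("coefficient"))

# ...and can then score rows without the original training tool.
x = 4.0
print(intercept + coef * x)  # 9.5
```

That decoupling is the whole point of the announcement: the model gets built wherever the data scientist is comfortable, and scoring happens at full data scale inside the database.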
And companies that do it well just continue to learn, they continue to rescore, they continue to redeploy, and they want to operationalize all that within a single platform that can take advantage of all those capabilities. And that is the platform, with a very robust ecosystem, that Vertica has always been committed to as an organization and will continue to be. This graphic, many of you have seen it evolve over the years. Frankly, if we put everything and everyone on here, it wouldn't fit on a slide. But it will absolutely continue to evolve and grow as we support our customers where they need the support most. So, again, being able to deploy everywhere, being able to take advantage of Vertica, not just as a business analyst or a business user, but as a data scientist or as an operational or BI person. We want Vertica to be leveraged and used by the broader organization. So I think it's fair to say, and I encourage everybody to learn more about Vertica 10, because I'm just highlighting some of the bigger aspects of it. But we talked about those three market trends. The need to unify the silos, the need for hybrid multiple cloud deployment options, the need to operationalize business critical machine learning projects. Vertica 10 has absolutely delivered on those. But again, we are not going to stop. It is our job not to, and this is how Team Vertica thrives. I always joke that the next release is the best release. And, of course, even after Vertica 10, that is also true, although Vertica 10 is pretty awesome. But, you know, from the first line of code, we've always been focused on performance and scale, right. And like any really strong data platform, the optimizer and the execution engine are the two core pieces of that. Beyond Vertica 10, some of the big things that we're already working on: a next generation execution engine. We're already actually seeing incredible early performance from this.
And this is just one example of how important it is for an organization like Vertica to constantly go back and re-innovate. Every single release, we do the sit-ups and crunches on our performance and scale. How do we improve? And there's so many parts of the core server, there's so many parts of our broader ecosystem. We are constantly looking at how we can go back to all the code lines that we have and make them better in the current environment. And it's not an easy thing to do when you're doing that, and you're also expanding into new environments to take advantage of the different deployments, which is a great segue to this slide. Because if you think about today, we're obviously already available with Eon Mode on Amazon AWS and Pure, and actually MinIO as well. As I talked about, in Vertica 10 we're adding Google and HDFS. And coming next, obviously, Microsoft Azure and Alibaba Cloud. So being able to expand into more of these environments is really important for the Vertica team and how we go forward. And it's not just running in these clouds; for us, we want it to be a SaaS-like experience in all these clouds. We want you to be able to deploy Vertica in 15 minutes or less on these clouds. You can also consume Vertica in a lot of different ways on these clouds. As an example, on Amazon, Vertica by the Hour. So for us, it's not just about running, it's about taking advantage of the ecosystems that all these cloud providers offer, and really optimizing the Vertica experience as part of them. Optimization around automation, around self-service capabilities, extending our management console. We now have products like the Vertica Advisor Tool, which our Customer Success Team has created to actually use our own smarts in Vertica to take data that customers give to us and help them automatically tune their environment.
You can imagine that we're taking that to the next level, in a lot of different endeavors that we're doing around how Vertica as a product can actually be smarter because we all know that simplicity is key. There just aren't enough people in the world who are good at managing data and taking it to the next level. And of course, other things that we all hear about, whether it's Kubernetes and containerization. You can imagine that that probably works very well with the Eon Mode and separating compute and storage. But innovation happens everywhere. We innovate around our community documentation. Many of you have taken advantage of the Vertica Academy. The numbers there are through the roof in terms of the number of people coming in and certifying on it. So there's a lot of things that are within the core products. There's a lot of activity and action beyond the core products that we're taking advantage of. And let's not forget why we're here, right? It's easy to talk about a platform, a data platform, it's easy to jump into all the functionality, the analytics, the flexibility, how we can offer it. But at the end of the day, somebody, a person, she's got to take advantage of this data, she's got to be able to take this data and use this information to make a critical business decision. And that doesn't happen unless we explore lots of different and frankly, new ways to get that predictive analytics UI and interface beyond just the standard BI tools in front of her at the right time. And so there's a lot of activity, I'll tease you with that going on in this organization right now about how we can do that and deliver that for our customers. We're in a great position to be able to see exactly how this data is consumed and used and start with this core platform that we have to go out. Look, I know, the plan wasn't to do this as a virtual BDC. But I really appreciate you tuning in. Really appreciate your support. 
I think if there's any silver lining to us not being able to do this in person, it's the fact that the reach has actually gone significantly higher than what we would have been able to do in person in Boston. We're certainly looking forward to doing a Big Data Conference in the future. But if I could leave you with anything, know this: since that first release of Vertica, and our very first customers, we have been very consistent. We respect all the innovation around us, whether it's open source or not. We understand the market trends. We embrace those new ideas and technologies, and for us, true north, and the most important thing, is: what does our customer need to do? What problem are they trying to solve? And how do we use the advantages that we have without disrupting our customers? We know that you depend on us to deliver that unified analytics strategy, and we will deliver that performance and scale, not only today, but tomorrow and for years to come. We've added a lot of great features to Vertica. I think we've said no to a lot of things, frankly, that we just knew we wouldn't be the best company to deliver. When we say we're going to do things, we do them. Vertica 10 is a perfect example of so many of those things that we have heard loud and clear from you, our customers, and we have delivered. I am incredibly proud of this team across the board. I think the culture of Vertica, a customer-first culture, jumping in to help our customers win no matter what, is also something that sets us massively apart. I hear horror stories about support experiences with other organizations. And people always seem to be amazed at Team Vertica's willingness to jump in, or their aptitude for certain technical capabilities, or understanding the business. And I think sometimes we take that for granted. But that is the team that we have as Team Vertica. We are incredibly excited about Vertica 10. I think you're going to love the Virtual Big Data Conference this year.
I encourage you to tune in. Maybe one other benefit is, I know some people were worried about not being able to see different sessions because they were going to overlap with each other. Well now, even if you can't do it live, you'll be able to do those sessions on demand. Please enjoy the Vertica Big Data Conference here in 2020. Please, you and your families and your co-workers, be safe during these times. I know we will get through it. And analytics is probably going to help with a lot of that, and we already know it is helping in many different ways. So believe in the data, believe in data's ability to change the world for the better. And thank you for your time. And with that, I am delighted to now introduce Micro Focus CEO Stephen Murdoch to the Vertica Big Data Virtual Conference. Thank you Stephen. >> Stephen: Hi, everyone, my name is Stephen Murdoch. I have the pleasure and privilege of being the Chief Executive Officer here at Micro Focus. Please let me add my welcome to the Big Data Conference, and also my thanks for your support as we've had to pivot to this being a virtual rather than a physical conference. It's amazing how quickly we all reset to a new normal. I certainly didn't expect to be addressing you from my study. Vertica is an incredibly important part of the Micro Focus family. It is key to our goal of trying to enable and help customers become much more data-driven across all of their IT operations. Vertica 10 is a huge step forward, we believe. It allows for multi-cloud innovation and genuinely hybrid deployments, lets enterprises begin to leverage machine learning properly, and also opens up the opportunity to unify currently siloed lakes of information. We operate in a very noisy, very competitive market, and there are people in that market who can do some of those things. The reason we are so excited about Vertica is we genuinely believe that we are the best at doing all of those things.
And that's why we've announced publicly, and are executing internally, incremental investment into Vertica. That investment is targeted at accelerating the roadmaps that already exist, and getting that innovation into your hands faster. The idea is that speed is key. It's not a question of if companies have to become data driven organizations, it's a question of when. So that speed now is really important. And that's why we believe that the Big Data Conference gives a great opportunity for you to accelerate your own plans. You will have the opportunity to talk to some of our best architects, some of the best development brains that we have. But more importantly, you'll also get to hear from some of our phenomenal Vertica customers. You'll hear from Uber, from the Trade Desk, from Philips, and from AT&T, as well as many, many others. And just hearing how those customers are using the power of Vertica to accelerate their own businesses, I think, is the highlight. And I encourage you to use this opportunity to its fullest. Let me close by again saying thank you. We genuinely hope that you get as much from this virtual conference as you could have from a physical conference. And we look forward to your engagement, and we look forward to hearing your feedback. With that, thank you very much. >> Joy: Thank you so much, Stephen, for joining us for the Vertica Big Data Conference. Your support and enthusiasm for Vertica is so clear, and it makes a big difference. Now, I'm delighted to introduce Amy Fowler, the VP of Strategy and Solutions for FlashBlade at Pure Storage, who is one of our BDC Platinum Sponsors, and one of our most valued partners. It was a proud moment for me when we announced Vertica in Eon Mode for Pure Storage FlashBlade, and we became the first analytics data warehouse that separates compute from storage for on-premise data centers. Thank you so much, Amy, for joining us. Let's get started. >> Amy: Well, thank you, Joy, so much for having us.
And thank you all for joining us today, virtually, as we may all be. So, as we just heard from Colin Mahony, there are some really interesting trends that are happening right now in the big data analytics market. From the end of the Hadoop hype cycle, to the new cloud reality, and even the opportunity to help the many data science and machine learning projects move from labs to production. So let's talk about these trends in the context of infrastructure. And in particular, look at why a modern storage platform is relevant as organizations take on the challenges and opportunities associated with these trends. The reality is that the Hadoop hype cycle left a lot of data in HDFS data lakes, or reservoirs, or swamps, depending upon the level of the data hygiene, but without the ability to get the value that was promised from Hadoop as a platform rather than a distributed file store. And when we combine that data with the massive volume of data in Cloud Object Storage, we find ourselves with a lot of data and a lot of silos, but without a way to unify that data and find value in it. Now when you look at the infrastructure data lakes are traditionally built on, it is often direct attached storage, or DAS. The approach that Hadoop took when it entered the market was primarily bound by the limits of networking and storage technologies: one gig ethernet and slower spinning disk. But today, those barriers do not exist. And all-flash storage has fundamentally transformed how data is accessed, managed and leveraged. The need for local data storage for significant volumes of data has been largely mitigated by the performance increases afforded by all-flash. At the same time, organizations can achieve superior economies of scale with that segregation of compute and storage. Compute and storage don't always scale in lockstep. Would you want to add an engine to the train every time you add another boxcar? Probably not.
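The engine-and-boxcar point above can be put in back-of-envelope numbers. The prices and node sizes below are entirely invented for illustration; only the shape of the comparison matters: when compute and storage are coupled, a storage-heavy workload is forced to buy compute it doesn't need.

```python
# Illustrative cost model -- all prices and ratios are made up.
COMPUTE_NODE = 100  # $/month per compute node (the "engine")
STORAGE_TB = 20     # $/month per TB (the "boxcar")

def coupled(tb_needed, tb_per_node=10):
    """Classic shared-nothing sizing: adding storage forces adding compute."""
    nodes = -(-tb_needed // tb_per_node)  # ceiling division
    return nodes * COMPUTE_NODE + tb_needed * STORAGE_TB

def separated(tb_needed, nodes_needed):
    """Separated sizing: compute and storage are chosen independently."""
    return nodes_needed * COMPUTE_NODE + tb_needed * STORAGE_TB

# Workload: 100 TB of mostly cold data, but only 3 nodes' worth of query demand.
print(coupled(100))       # 3000 -- forced to 10 nodes just to hold the data
print(separated(100, 3))  # 2300 -- pay for the 3 nodes actually needed
```

The gap widens as the data grows colder relative to the query load, which is exactly the economy-of-scale argument being made for segregating compute from storage.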
But from a Pure Storage perspective, FlashBlade is uniquely architected to allow customers to achieve better resource utilization for compute and storage, while at the same time reducing the complexity that has arisen from the siloed nature of the original big data solutions. The second and equally important recent trend we see is something I'll call cloud reality. The public clouds made a lot of promises, and some of those promises were delivered. But cloud economics, especially usage-based pricing and elastic scaling without the control that many companies need to manage the financial impact, is causing a lot of issues. In addition, the risk of vendor lock-in, from data egress charges to integrated software stacks that can't be moved or deployed on-premise, is causing a lot of organizations to back off an all-in cloud strategy and move toward hybrid deployments. Which is kind of funny, in a way, because it wasn't that long ago that there was a lot of talk about no more data centers. For example, one large retailer, I won't name them, but I'll admit they're my favorite, told us several years ago that they were completely done with on-prem storage infrastructure, because they were going 100% to the cloud. But they just deployed FlashBlade for their data pipelines, because they need predictable performance at scale, and the all-cloud TCO just didn't add up. Now, that being said, while there are certainly challenges with the public cloud, it has also brought some things to the table that we see most organizations wanting. First of all, in a lot of cases applications have been built to leverage object storage platforms like S3. So they need that object protocol, but they may also need it to be fast. A fast object store may have been an oxymoron only a few years ago, but this is an area of the market where Pure and FlashBlade have really taken a leadership position. 
Second, regardless of where the data is physically stored, organizations want the best elements of a cloud experience. And for us, that means two main things. Number one is simplicity and ease of use. If you need a bunch of storage experts to run the system, that should be considered a bug. The other big one is the consumption model: the ability to pay for what you need when you need it, and seamlessly grow your environment over time, totally nondestructively. This is actually pretty huge, and something that a lot of vendors try to solve for with finance programs. But no finance program can address the pain of a forklift upgrade when you need to move to next-gen hardware. To scale nondestructively over long periods of time, five to 10 years plus, crucial architectural decisions need to be made at the outset. Plus, you need the ability to pay as you use it, and we offer something for FlashBlade called Pure as a Service, which delivers exactly that. The third cloud characteristic that many organizations want is the option for hybrid, even if that is just a DR site in the cloud. In our case, that means supporting replication to S3 at AWS. And the final trend, which to me represents the biggest opportunity for all of us, is the need to help the many data science and machine learning projects move from labs to production. This means bringing all the machine learning functions and model training to the data, rather than moving samples or segments of data to separate platforms. As we all know, machine learning needs a ton of data for accuracy, and there is just too much data to retrieve from the cloud for every training job. At the same time, predictive analytics without accuracy is not going to deliver the business advantage that everyone is seeking. You can visualize data analytics as it is traditionally deployed as being on a continuum, with the thing we've been doing the longest, data warehousing, on one end, and AI on the other end. 
But the way this manifests in most environments is a series of silos that get built up, so data is duplicated across all kinds of bespoke analytics and AI environments and infrastructure. This creates an expensive and complex environment. Historically, there was no other way to do it, because some level of performance is always table stakes, and each of these parts of the data pipeline has a different workload profile. A single platform that could deliver the multi-dimensional performance this diverse set of applications requires just didn't exist three years ago. And that's why the application vendors pointed you towards bespoke things like the DAS environments we talked about earlier. The fact that better options exist today is why we're seeing them move towards supporting this disaggregation of compute and storage. And when it comes to a platform that is a better option, one with a modern architecture that can address the diverse performance requirements of this continuum and allow organizations to bring the model to the data instead of creating separate silos, that's exactly what FlashBlade is built for. Small files, large files, high throughput, low latency, and scale to petabytes, importantly, in a single namespace. That is what we're focused on delivering for our customers. At Pure, we talk about it in the context of the modern data experience, because at the end of the day, that's what it's really all about: the experience for the teams in your organization. And together, Pure Storage and Vertica have delivered that experience to a wide range of customers. 
From a SaaS analytics company, which uses Vertica on FlashBlade to authenticate the quality of digital media in real time, to a multinational car company, which uses Vertica on FlashBlade to make thousands of decisions per second for autonomous cars, to a healthcare organization, which uses Vertica on FlashBlade to enable healthcare providers to make real-time decisions that impact lives. And I'm sure you're all looking forward to hearing from John Yovanovich from AT&T, to hear how he's been doing this with Vertica and FlashBlade as well. He's coming up soon. We have been really excited to build this partnership with Vertica, and we're proud to provide the only on-premise storage platform validated with Vertica Eon Mode, and to deliver this modern data experience to our customers together. Thank you all so much for joining us today. >> Joy: Amy, thank you so much for your time and your insights. Modern infrastructure is key to modern analytics, especially as organizations leverage next-generation data center architectures and object storage for their on-premise data centers. Now, I'm delighted to introduce our last speaker in our Vertica Big Data Conference Keynote, John Yovanovich, Director of IT for AT&T. Vertica is so proud to serve AT&T, and especially proud of the harmonious impact we are having in partnership with Pure Storage. John, welcome to the Virtual Vertica BDC. >> John: Thank you, Joy. It's a pleasure to be here, and I'm excited to go through this presentation today, and in a unique fashion today, because as I was thinking through how I wanted to present the partnership that we have formed together between Pure Storage, Vertica, and AT&T, I wanted to emphasize how well we all work together, and how these three components have really driven home my desire for a harmonious, to use your word, relationship. So, I'm going to move forward here. The theme of today's presentation is the Pure Vertica Symphony, live at AT&T. 
And if anybody is a Westworld fan, you can appreciate the sheet music on the right-hand side. What I'm going to highlight here, in a musical fashion, is how we at AT&T leverage these technologies to save money, to deliver a more efficient platform, and to make our customers happier overall. Looking back, as recently as just a few years ago here at AT&T, I realized that we had many musicians to help the company. Or maybe you might want to call them data scientists, or data analysts; for the theme, we'll stay with musicians. None of them were singing or playing from the same hymn book or sheet music. And so what we had was many organizations chasing a similar dream, but not exactly the same dream. The best way to describe that, and I think this might resonate with a lot of people in your organizations, is this: how many organizations are chasing a customer 360 view in your company? Well, I can tell you that I have at least four in my company, and I'm sure there are many that I don't know of. That is our problem, because what we see is a repetitive sourcing of data. We see a repetitive copying of data. And there's just so much money being spent. This is where I asked Pure Storage and Vertica to help me solve that problem with their technologies. What I also noticed was that there was no coordination between these departments. In fact, if you look here, nobody really wants to play with finance. Sales, marketing, and care, sure, they all copied each other's data, but they didn't actually communicate with each other as they were copying the data. So the data became replicated and out of sync. This is a challenge throughout, not just my company, but all companies across the world. And that is, the more we replicate the data, the more problems we have chasing or conquering the goal of a single version of truth. 
In fact, I kid that at AT&T we have actually adopted a multiple-versions-of-truth theory, which is not where we want to be, but it is where we are. But we are conquering that with the synergies between Pure Storage and Vertica. This is what it leaves us with, and this is where we are challenged: each one of our siloed business units had their own storage, their own dedicated storage. Some of them had more money than others, so they bought more storage. Some of them anticipated storing more data than they really did. Others are running out of space but can't add any more, because their budgets haven't been replenished. So if you look at it from this side view here, we have a limited amount of compute, fixed compute, dedicated to each one of these silos, and that's because of wanting to own your own. The other part is that you are either limited on space or wasting space, depending on where you are in the organization. So the synergies aren't just about the data, but actually the compute and the storage, and I wanted to tackle that challenge as well. I was tackling the data, I was tackling the storage, and I was tackling the compute, all at the same time. So my ask across the company was: can we all just please play together? And to do that, I knew that I wasn't going to tackle this by getting everybody in the same room and getting them to agree that we needed one account table, because they would argue about whose account table is the best account table. But I knew that if I brought the account tables together, they would soon see that they had so much redundancy that I could start retiring data sources. I also knew that if I brought all the compute together, they would all be happy, but I didn't want them to trample on each other. And in fact, that was one of the things that all the business units really enjoyed. 
They enjoy the silo of having their own compute, and more or less being able to control their own destiny. Well, Vertica's subclustering allows just that. This is exactly what I was hoping for, and I'm glad they came through. And finally, how did I solve the problem of the single account table? Well, you don't need dedicated storage when you can separate compute and storage, as Vertica in Eon Mode does, and we store the data on FlashBlades, which you see on the left and right-hand sides of our container, which I can describe in a moment. Okay, so what we have here is a container full of compute, with all the Vertica nodes sitting in the middle, and two loader subclusters, as we'll call them, sitting on the sides, which are dedicated to just putting data onto the FlashBlades sitting on both ends of the container. Now today, I have two dedicated, and dedicated might not be the right word, but two storage racks, one on the left and one on the right, and I treat them as separate storage racks. They could be one, but I created them separately for disaster recovery purposes, so things keep working in case a rack were to go down. That being said, there's no reason why I won't probably add a couple more here in the future, so I can just have, say, a five to 10 petabyte storage setup, and I'll have my DR in another container, because the DR shouldn't be in the same container; I'll DR outside of this container. So I got them all together, I leveraged subclustering, and I leveraged the separation of storage and compute. I was able to convince many of my clients that they didn't need their own account table, that they were better off having one. I reduced latency, and I reduced our data quality issues, AKA ticketing. I was able to expand, and I was able to leverage elasticity within this cluster. As you can see, there are racks and racks of compute. 
We set up what we'll call the fixed capacity that each of the business units needed, and then I'm able to ramp up and release the compute that's necessary for each one of my clients based on their workloads throughout the day. And so while some of the instruments have more or less dedicated themselves, all the others are free for anybody to use. So in essence, what I have is a concert hall with a lot of seats available. If I want to run a 10-chair symphony or an 80-chair symphony, I'm able to do that. And all the while, I can do the same with my loader nodes. I can expand my loader nodes to have their own recital all to themselves, and not compete with any of the workloads of the other clusters. What does that change for our organization? Well, it really changes the way our database administrators actually do their jobs. This has been a big transformation for them. They have actually become data conductors. Maybe you might even call them composers, which is interesting, because what I've asked them to do is morph into less technology and more workload analysis. And in doing so, we're able to write auto-detect scripts that watch the queues, watch the workloads, so that we can help ramp up and trim down the cluster and subclusters as necessary. It has been an exciting transformation for our DBAs, who I now need to classify as something maybe like DCAs. I don't know, I have to work with HR on that, but I think it's an exciting future for their careers. And if we bring it all together, our clusters start looking like this, where everything is moving in harmony, we have lots of seats open for extra musicians, and we are able to emulate a cloud experience on-prem. And so, I want you to sit back and enjoy the Pure Vertica Symphony, live at AT&T. 
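The "auto-detect scripts" John describes, watching queues and ramping subclusters up or trimming them down, can be sketched as a simple policy loop. Everything below is a hypothetical illustration: the thresholds, the doubling-and-halving policy, and the hooks are assumptions, not AT&T's actual scripts or a Vertica API.

```python
# Hypothetical sketch of a queue-watching autoscaling policy for
# per-subcluster compute, in the spirit of the scripts John describes.
# A real deployment would feed this from the database's workload queues
# and act on the result with the platform's own management tooling.

SCALE_UP_DEPTH = 10   # queued queries at which we add seats (assumed)
SCALE_DOWN_DEPTH = 2  # queued queries below which we release seats

def plan_scaling(queue_depths, current_nodes, min_nodes=3, max_nodes=16):
    """Given {subcluster: queued_queries} and {subcluster: node_count},
    return the desired node count for each subcluster."""
    desired = {}
    for sub, depth in queue_depths.items():
        nodes = current_nodes[sub]
        if depth >= SCALE_UP_DEPTH and nodes < max_nodes:
            desired[sub] = min(nodes * 2, max_nodes)   # ramp up
        elif depth <= SCALE_DOWN_DEPTH and nodes > min_nodes:
            desired[sub] = max(nodes // 2, min_nodes)  # trim down
        else:
            desired[sub] = nodes                       # hold steady
    return desired

# Marketing is backed up, finance is idle, care is steady.
plan = plan_scaling({"marketing": 14, "finance": 1, "care": 5},
                    {"marketing": 4, "finance": 8, "care": 6})
```

The released seats go back to the concert hall: a subcluster that halves from eight nodes to four frees four chairs for whichever business unit's symphony needs them next.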
(soft music) >> Joy: Thank you so much, John, for an informative and very creative look at the benefits that AT&T is getting from its Pure Vertica symphony. I really do like the idea of engaging HR to change the title to Data Conductor. That's fantastic. I've always believed that music brings people together, and now it's clear that analytics at AT&T is part of that musical advantage. So, now it's time for a short break, and we'll be back for our breakout sessions, beginning at 12 pm Eastern Daylight Time. We have some really exciting sessions planned later today, and then again, as you can see, on Wednesday. Now, because all of you are already logged in and listening to this keynote, you already know the steps to continue to participate in the sessions that are listed here and on the previous slide. In addition, everyone received an email yesterday and today, and you'll get another one tomorrow, outlining the simple steps to register, log in, and choose your sessions. If you have any questions, check out the emails or go to www.vertica.com/bdc2020 for the logistics information. There are a lot of choices, and that's always a good thing. Don't worry if you want to attend more than one session, or can't listen to the live sessions due to your timezone. All the sessions, including the Q&A sections, will be available on demand, and everyone will have access to the recordings, as well as even more pre-recorded sessions that we'll post to the BDC website. Now, I do want to leave you with two other important sites. First, our Vertica Academy. Vertica Academy is available to everyone, and there's a variety of very technical, self-paced, on-demand training, virtual instructor-led workshops, and Vertica Essentials Certification. And it's all free, because we believe that Vertica expertise helps everyone accelerate their Vertica projects and the advantage that those projects deliver. 
Now, if you have questions or want to engage with our Vertica engineering team, we're waiting for you on the Vertica forum. We'll answer any questions or discuss any ideas that you might have. Thank you again for joining the Vertica Big Data Conference Keynote Session. Enjoy the rest of the BDC, because there's a lot more to come.

Published Date : Mar 30 2020

Ed Walsh, IBM | CUBE Conversation February 2020


 

(upbeat music) >> From the SiliconANGLE Media office in Boston, Massachusetts, it's theCUBE. Now here's your host, Dave Vellante. >> Hello everyone, and welcome to this exclusive CUBE conversation. Here's the setup. The storage industry has been drowning in complexity for years. Companies like Pure Storage and Nutanix, you know, reached escape velocity last decade, primarily because they really understood how to deliver great products that were simpler to use. But as we enter the 2020s, virtually every player in the storage business is trying to simplify its portfolio. And the mandate is coming from customers that are under huge pressure to operationalize and bring to market their major digital initiatives. They simply can't spend time managing infrastructure the way they used to. They have to reallocate resources up the stack, so to speak, to more strategic efforts. Now, as you know, post the acquisition of EMC by Dell, we have followed closely and been reporting on their efforts to manage the simplification of the storage portfolio under the leadership of Jeff Clark. IBM is one of those leading companies, along with Dell EMC, NetApp, and HPE, that are under tremendous pressure to continue to simplify their respective portfolios. IBM as a company has declared the dawn of a new era. They call it Chapter II of Digital and AI, where, the company claims, it's all about scaling and moving from experimentation to transformation. Chapter II, I will tell you, unquestionably is not about humans managing complex storage infrastructure. Under the leadership of General Manager Ed Walsh, the company's storage division has aligned with this Chapter II vision, and theCUBE has been able to secure an exclusive interview with Ed, who joins me today. Great to see you, my friend. >> Thanks very much for having me. >> So, you're very welcome. And you heard my narrative. How did we get here? How did the industry get so complex? 
I like the way you kicked it off, because I think you nailed it. It's just how the storage industry has always been. There was a reason for it twenty years ago, but it's almost run its course, and I can tell you what we're now seeing. There's always been a difference between high end solution sets and low end solution sets. In fact, they're different: there's custom silicon on the high end. Think about EMC Symmetrix back in the day; it was the ultimate custom hardware and software combination. And then the low end storage, well, it didn't have any of that. And then there's a mid tier. Everything is based upon that. You think about the right availability, the right price point, the right feature function. It made sense that you had to have that unique thing. So what's happened is, we're all doing sustaining innovation, so we're all coming out with the next high end array for you. EMC's next one is, hashtag, next generation storage, right, midrange. So they're going to redo their midrange, and then the low end, but they never come together, and this is where the complexity is, you're nailing it. No one is a high end or a low end shop; they basically use it all. But what they're having to do is manage and understand each one of those platforms: how to maintain it, it's kind of specialized; how to report on it; how to automate it, the automation requirements are different for each, with a different API to actually automate it. Now, the minute you say, now help me modernize that and bring me to a hybrid multi-cloud, you're doing kind of a complex thing over multiple arrays, and against different platforms, which are all completely different. And the key thing is, in the past it made sense to have high end silicon with high end software, and a different low end, and basically, because of some of the innovation we've driven, you no longer have to do that. 
There's one platform that allows you to meet those different requirements, and dramatically simplify what you're doing for enterprises. >> So, we're going to talk a little bit more about what you guys are announcing. But how do you know when you get there, to this land of simple? >> One, it's hard to get there; we can talk about that too. But when a client... so we just had a call this morning with the board advisors for our storage division. They're kind of the biggest of the big, more on the high end side, just so you know the sample size. But literally, in the discussion we were talking about the platform simplification, how you get to hybrid cloud, and what we're going to do with cyber incident response types of capabilities for resiliency. And literally on the call they were already emailing their team, saying we need to do something more strategic, we need to look at this holistically. They love the simplicity. Everything we just went through, they can't do anymore. Especially in Chapter II, it's about modernizing your existing mission critical enterprises, and then putting them in the context of hybrid multi-cloud. That's hard; you can't do it with all these different platforms, so they're looking to spend less. Like you said, to get their teams to do up-stack things, they definitely don't want to be managing different disparate storage organizations. They want to move forward and use that freed-up resource to do other things. So when I see big companies literally jumping at it, and giving the example, you know, I want to talk about the cyber resiliency thing, I've had four of those conversations this week. That's exactly what we need to have done. So I haven't had a conversation yet where clients aren't actually excited about this, and it's actually pretty straightforward. 
Why do you think it took you so long? You kind of mentioned it's hard. >> So, transformations are never easy, and typically whoever is the transformation engine, gets shot in the back of the head, right. So it's really hard to get teams to do something different. So imagine every platform, EMC has nine now, right. So it is through acquisition of others, you have VP's, you know. VP of development, offering and maybe sales, and then you have whole teams, where you have founders you've acquired. So you have real people, that they love their platform, and there's no way they're going to give it up. They always come up with the next generation, and how it's going to solve all ills, but it's a people transformation. How do you get we're going to take three and say, hey, it's one platform. Now to do that it's a operational transformation challenge. It's actually driving the strategy, you don't do it in matter of a week, there's development to make sure that you can actually meet all the different use cases, that will take you literally years to do, and have a new platform. But, I think it's just hard to do. Now, anyone that's going to do that, let's say you know EMC or HP wants to do it. They're going to have to do the same thing we did, which is going to take them years of development. But also, it's managing that transition and the people involved, or the founders you've acquired, or it just it's amazing. In fact, it's the most wonderful part of my job is dealing with people, but it can frustrate you. >> So we've seen this over the years, look at NetApp, right with waffle, it was one size fits all for years, but they just couldn't cover all markets. And then they were faced with TAM expansion, of course now the portfolio expands. Do you think -- >> And now they have three and -- >> And David Scott at HPE, Storage VP at the time used to talk about how complex EMC's portfolio was, and you see HPE has to expand the portfolio. >> We all did, including IBM. 
>> Do you think Pure will have to face the same sort of -- >> We are seeing Pure with three, right. And that's without the file, so I'm just talking about what we do for physical, virtual, and container workloads and cloud. If you start going to what we're going to scale up to object we all have our own there too. And I'm not even counting the three to get to that. So you see Pure doing the exact the same thing, because they are trying to expand their TAM. And you have to do some basic innovation to have a platform actually meet the requirements, of the high end requirements, the mid range, and the entry level requirements. It's not just saying, I'm going to have one, you're actually have to do a lot of development to do it. >> All right, let's get to the news. What are you guys announcing? >> So basically, we're announcing a brand new, a dramatic simplification of our distributed storage. So, everything for non-Z. If you're doing physical boxes, bare metal, Linux. You're doing virtual environments, VMware environments, hyper-V, Power VM, or if you're doing container workloads or into the cloud. Our platforms are now one. One software, one API to manage. But we're going to actually, we're going to do simplification without compromise. We're going to give you want you need. You're going to need an entry level packaging, midrange and high end, but it's going to be one software allows you to meet every single price requirement and functionality. And we'll be able to do some surprises on the upside for what we're bringing out to you, because we believe in value in automation. We can up the value we bring to our clients, but also dramatically take out the cost complexity. But one thing we're getting rid of, is saying the need, the requirement to have a different hardware software platform for high end, midrange and low end. It's one hardware and software platform that gets you across all those. And that's where you get a dramatic simplification. >> So same OS? 
>> Same OS? >> Normally, you'd do, you'd optimize the code for the high end, midrange and low end. Why are you able to address all three with one OS? How are you able to do that? >> It took us three and half years, it was actually, I will talk about a couple innovation pieces. So, on the high end you have customized silicon, we did, everyone does, we had a Texas Memory Systems acquisition. It was the flash drawer 2U, about 375 TB, uncompressed de dup, pretty big chunky, you had to buy big chunks. So it was on the high end. >> That was the unit of granularity, right. >> But it gave you great value, but also you had great performance, latency better than you get in NVMe today, before NVMe. But you get inline compression, encryption, so it was wonderful. But it was really ultra high end. What we did was we took that great custom silicon, and we actually made it onto what it looks like a custom, or to be a standard NVMe SSD. So you take a Samsung NVMe, or a WD and you compare it to what we call our flash core module. They look the same and they go interchangeably into the NVMe standard slot. But what's in there is the same silicon, that was on this ultra high end box. So we can give the high end, exactly what we've did before. Ultra low latency, better than NVMe, but also you can get inline compression de dup and the were leveling, and the stuff that you expect in the custom silicon level. But we can take this same NVMe drive and we can put it in our lowest end model. Average sale price $15,000. Allows you to literally, no compromise on the high end, but have unbelievable surprises on the midrange and the low end, where now we can get the latency and the performance and all those benefits, to be honest on a much lower box. >> Same functionality? >> Same functionality, so you lose nothing. Now that took a lot of work, that wasn't easy. You're talking about people, there was roadmaps that had to be changed. 
We had to know that we were going to do that, and stick to our guns. But that'll be one. The other thing is, you know, you're going to get some things on the upside that you're not expecting, right. Because it's custom silicon, I might have a unique price performance, but also cost advantages, so I'm going to have the best price performance or density across the whole product line. But also, I'm going to do things like, on the high end you're used to unbelievable operational resiliency. Two-site, three-site, HyperSwap, you know, two boxes that act like one: have a whole outage, or a site outage, and you don't really miss a transaction, or multi-sites. We're going to be able to do that on the low end and the midrange as well. Cyber resiliency is a big deal. So I talked about operational resiliency; it's very different coming back when it's cyber. But cyber incident response becomes key, so we're going to give you special capabilities there which are not available from anyone in the industry. And is cyber incident response only a high end thing, or a low end thing? No, it's everywhere. So I think we're going to shock on the upside. A lot of it was the development to make sure the code stack, but also the hardware, lets us say no compromise if you want entry-level. I'm going to meet anyone at that level. In fact, because of the features of it, I'm able to compete at an unfair level against everyone on the low end. So you say midrange and high end, but you're not losing anything, because you're still getting the custom silicon. >> So let's come back to the cyber piece, what exactly is that? >> All right, so, listen, this is not for data breaches. If a data breach happens, they steal your database or they steal your customer names, and you have to report it, you know, you have to let people know. But typically nobody calls the storage guy and says hey, solve it; it was stolen at a different level. 
Now the ones that don't hit the media, but happen all the time, actually more frequently, and definitely get called down to the operations team and the storage team, are cyber or malicious code. They've locked up your system. Now they didn't steal data, so it's not something you have to report. So what happens is the call comes down, and you don't know when they got you. So it's an iterative process, you have to literally find the box, bring it up, maybe it's Wednesday's copy, oh, bring it up, give it to the application group, nope, it's there. Bring up Tuesday's... it's an iterative process. >> It's like drilling for oil 100 years ago, nope, not it, drill another hole. >> So what happens is, if it's cyber, without the right tools you use your backup. One of our board advisors, literally a major bank, said, I had four of those, I'll give you one. It took me 33 hours to bring back a box. It was a large database, 30 TB, 33 hours. Now why did he use backup, why didn't he use his primary storage and DR copies of everything? Well, they didn't have the right tool sets. So what we were able to do is... tape is great for that air gap, but it takes time to restore and come back up and running. The modern data protection we have, like Veeam or Cohesity, allows the recovery to be faster, because you're mounting backup copies faster. But the fastest is your primary snapshots and your replicated DR snapshots, if you can leverage those. The reason people don't leverage them, and we came upon this almost accidentally: we were seeing our services brethren from IBM, IBM SO, or outsourcing, GTS, when they did have a hit. And what they want to do is bring up your snapshots, but if you bring up a snapshot and you're not really careful, you start crashing production workloads, because it looks like the VM that just came up. So you need to have... and we're providing the software that allows you to visualize what your recovery points are.
It allows you to orchestrate bringing up environments, but more importantly, orchestrate into a fenced network environment, so it's not going to step on production workloads, and address this. It allows you to do that, and provide a URL to the different business users, so they can come and say yes, it's there, or it's not. So even if you don't use this software before the incident, it gives you visibility, orchestration, and then more importantly a fence, a safe fenced network, a sandbox, to bring these up quickly, check them out, and easily promote to production. >> So that's your safe zone? >> Safe zone, but it's not just that. You know, you start bringing up snapshots, it's not like a DR case where you're bringing things up; you have to be really smart, because you're bringing it up and checking it out. So without that, they don't want to trust using the snapshots, so they just don't use primary storage. With it, it becomes the first thing you do. Because you hope you got it within a week, or a week and a half, in your snapshots. And if it's been in the environment for ninety days, now you're going to tape. Now if you do this, if you put this software in place before an incident, now you get more value, you can do orchestrated DR testing. Because we're doing this orchestrated bring-up of application sets, it's not a VM, it's sets of VMs. Fenced network, bring it up, does it work? You can use it for test/dev data, you can use it for automated DR. But even if you don't set it up, we're going to make it available so you can actually come back from these cyber incidents much faster. >> And this is the capability that I get on primary storage. Because everybody's targeting, you know, the backup corpus for ransomware and things of that nature. This is primary storage. >> And we do put it on our backups. So our backup allows you to do the exact same thing and do the bootable copies. And so if you have our backup product, you could already do this on primary.
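As an editorial aside, the iterative "bring up Wednesday's copy, nope, bring up Tuesday's" hunt for the last clean recovery point described above can be sketched as a search over snapshot timestamps. This is a conceptual model only, not IBM's actual software: `is_clean` stands in for the fenced-sandbox validation the application team performs, and the sketch assumes a single infection point, so the clean/dirty boundary is monotonic.

```python
# Conceptual sketch (not a real product API): locate the last clean
# snapshot after a ransomware hit by bisecting over snapshot history.
# Each candidate would be cloned into a fenced (isolated) network so it
# cannot collide with production VMs, then handed to the application
# team to validate; is_clean() stands in for that check.

def find_last_clean(snapshots, is_clean):
    """snapshots: list of snapshot IDs, oldest first.
    Returns the newest snapshot that passes validation, or None.
    Assumes one infection point: clean, clean, ..., dirty, dirty."""
    lo, hi, best = 0, len(snapshots) - 1, None
    while lo <= hi:
        mid = (lo + hi) // 2
        if is_clean(snapshots[mid]):
            best = snapshots[mid]
            lo = mid + 1          # infection is later; search newer half
        else:
            hi = mid - 1          # already infected; search older half
    return best

# Example: hourly snapshots, infection introduced at index 5.
snaps = [f"snap-{i:02d}" for i in range(8)]
clean = lambda s: int(s.split("-")[1]) < 5
print(find_last_clean(snaps, clean))  # snap-04
```

The bisection turns the linear "try each day" process into a logarithmic number of sandbox bring-ups, which matters when each validation takes hours.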
But what we're saying is, regardless of who you're using, we're still saying you need to do backup, you need to air gap your backup, 'cause you know WannaCry was in the environment for 90 days, and your snapshots are only for a week or two. So the fact of the matter is that you need it, but in this case, even if you're using the other guys, we're going to give it to you just for this tool set. >> How does immutability factor in? I know, for instance, at AWS re:Invent they announced an immutability capability. I think IBM may have that, because of the acquisition that you made years ago, Cleversafe, that was fundamental to their architecture. Is that a way to combat ransomware? >> So immutable obviously means no changes. Ransomware and, you know, malware typically is either encrypting or deleting things. Encrypting is what they do, but they have the key, so... the fact of the matter is that they're deleting things. So if it's immutable, then you can't change it. Now if you own the right controls, you can delete it, but you can't change it, and they can't encrypt it on you. That becomes critical. So what you're looking for is, for instance, all of our flash systems allow you to do these snapshots, local or remote, that let you go to immutable copies, either in Amazon, we support that, or locally on our object storage, or in IBM's cloud. So the different platforms have this immutability that our software allows you to integrate with. So I think immutability is kind of critical.
But we're seeing the next phase, Chapter II, is looking at the 80% of your key workloads, your mission critical workloads, and basically how you transfer those in. So basically, as you look at your Chapter II, you're going to do the modernization, and you might move those into the cloud. So if you're going to move into the cloud, you might say, I'd like to modernize my storage, free my team up, because it's simple, I don't have to do a lot of things. You need to simplify so you can modernize, modernize so you can transform. But a customer might say, I'm going to be in the cloud in 18 months, so I don't want to modernize my storage. So what we have is, of course, you can buy things, you can lease things, we have a utility model that is great for three to five years. But we now have a subscription model, which, think of it as just cloud pricing. No long term commitment. Use what you use, up and down, and if it goes to zero, call us, we'll pick it up, and there's no expense to you. So, no long term commitments, and returns. So in 14 months: I've done my modernization, you've helped me free up my team, let me go. And then we'll come and pick it up, and your bill stops that day.
And if we can give them a flexible way, say it's cheaper than using cloud storage like Amazon's or IBM Cloud's, but you can use it on-prem, free you up, and then at any time just return it, that's a big value. People say, you know what, you're right, I'm going to go do that. You're able to give me cloud-based pricing, down to zero when I'm done with it. Now I can use that to free up my team, that's the value equation. I don't think it's for everyone. But for a segment of the market, I think it's critical. And I think IBM's kind of perfectly positioned to do it, with a balance sheet to help clients out. >> So how do you feel about this? Obviously, you've put a lot of work into it. You seem pretty excited. Do you feel as though this is going to help re-energize your business, your customer base, and how do you think competitors are going to respond? >> Good question. So, I think simplification, especially when we talk about the value equation... I think I can add more value to you, Mr. Customer. I can bring things you're not expecting, right, and we'll get to this cyber piece in a second, that would be one of the things they would not expect. And reduce the costs and complexity. So we've already done this a couple of times; we did it with our mainframe storage launch in the fall. It's, bar none, the best box for that workload. Lowest latency, most integration, pervasive encryption, encryption in flight. But also, we took it from nine variants to two. Because we could. We go, why did you need all those? Well, there were reasons for it in the past, but no longer. We also got rid of all the hard disk drives. We also added a little non-volatile cache and allowed you to get rid of all those battery backups. All these custom things that you used to have on this high end box. And now it's dramatically simpler, better. And by the way, no one asked, hey, where did my other seven variants go.
It was simpler, it was better, faster, and then it was the best launch we've had in the history of the product line. I think we can add better value and simplify for our clients. So that's what we'll do. You asked about how people respond. Listen, they're going to have to go through the same thing we did, right. A product line has people behind it, or a founder behind it, and it's really hard. You mentioned a couple, they're acquiring companies. I think they're going to have to go through this, it's a transformational journey that they'll have to go through. It's not as simple as doing a PowerPoint. I couldn't come to you and say, I can simplify without compromise, I can help you on the low end, the midrange, the high end with the same platform, unless I did a lot of fundamental design work to make sure I could do that. Flash core modules being one of them, right. So I think it's going to be hard. It'll be interesting; well, they're going to have to go through the same thing I did, how about that. >> Usually when you make a major release like that, you're able to claim Top Gun, at least for a while, with things like latency, and bandwidth, and IOPs, and performance. Are you able to make that claim? >> So, basically you saw it in the launch today. You saw the latency, which is one, because we're bringing custom silicon down. On latency, I'll give you Pure, bragging on their website: their lowest latency, which by the way is pretty good, you know, it's gonna be 150 microseconds, pretty good bragging rights, but that's on the X90 using storage class memory. We're at 70 microseconds, so literally we are 2x faster on latency, how fast you can respond to something. But we can do it not only on our high end box, we can also do it on our average sale price $15,000 box, because I'm bringing that silicon up and down. So we can do the latency. Now EMC, their highest end PowerMax box, two big chassis put together, that can do 100 microseconds.
Again, we're still at 70 microseconds, so we're 30% faster. And that epitomizes the high end custom silicon and software. So latency, we've got it. IOPs: look at the biggest, baddest two boxes from EMC, they'll do, you know, 15 million IOPs on their website. We'll do 18 million IOPs, but instead of two racks, it's 8U. It is 12x better IOPs per rack space, if you want to look at it that way. Throughput: it's all about the journey to the cloud and building for the business, everyone's trying to do this, and throughput in analytics becomes everything, and you can do analytics on everything. Your DBAs are going to run analytics, so throughput matters. Ours, for every one of our boxes, which you can kind of add up and cluster out, is 45 Gb/s. Pure, for instance, their bragging rights is 18, and they can't cluster anymore. So what we're able to do, on any of these, and most of those are high end, but I'll say, I can do the same thing up and down my line, because of where I'm bringing the custom silicon. So on bragging rights, and that's just kind of website, big bragging rights, I think we've got it cold, and if you look at price performance, and just overall price per capacity, we're in line to be the most cost effective across everyone. >> Yeah, up and down the line, it's very interesting, it's kind of unique. >> And then you mentioned resiliency, I'll tell you, that's the hottest thing. You mentioned the cyber incident response, that is something that we did on the mainframe. So, in the last mainframe cycle, we allowed you to do the same thing, and it literally drove all the demand for the product set. It's already the number one thing people want to talk about, because it becomes, you're right, I needed that this week, I needed it last week. So, I think that's going to really drive demand. >> What worries you? >> (laughs) On this launch, not much.
I think it's how fast and far we can get this message out. >> Wow, okay, so execution, obviously. You feel pretty confident about that, and yeah, getting the word out, letting people know. Well, congratulations, Ed. >> No, thank you very much, I appreciate it. I appreciate you coming in. And thank you for watching everybody. This is Dave Vellante for theCUBE. We'll see you next time. (upbeat music)
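As an editorial check, the rack-density claim in the interview above (15 million IOPs in two racks versus 18 million IOPs in 8U, quoted as "12x better IOPs per rack space") works out arithmetically, assuming standard 42U racks; the interview does not specify rack height, so that figure is an assumption.

```python
# Back-of-the-envelope check of the "12x better IOPs per rack space"
# claim: 15 million IOPs in two full racks vs 18 million IOPs in 8U.
# Assumes standard 42U racks (an assumption, not stated in the interview).

RACK_UNITS = 42

emc_iops_per_u = 15_000_000 / (2 * RACK_UNITS)   # two full racks
ibm_iops_per_u = 18_000_000 / 8                  # one 8U system

ratio = ibm_iops_per_u / emc_iops_per_u
print(f"{ratio:.1f}x IOPs per rack unit")  # 12.6x, consistent with "12x"
```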

Published Date : Feb 12 2020



Breaking Analysis: re:Invent 2019...of Transformation & NextGen Cloud


 

>> From the SiliconANGLE media office in Boston, Massachusetts, it's theCUBE. Now, here's your host, Dave Vellante. >> Hello, everyone, and welcome to this week's episode of theCUBE Insights, powered by ETR. In this Breaking Analysis, I want to do a quasi post-mortem on AWS re:Invent, and put the company's prospects into context using some ETR spending data. First I want to try to summarize some of the high-level things that we heard at the event. I won't go into all the announcements in any kind of great detail, there's a lot that's been written out there on what was announced, but I will touch on a few of the items that I felt were noteworthy and try to give you some of the main themes. I then want to dig into some of the spending data and share with you what's happening from a buyer's perspective in the context of budgets, and we'll specifically focus on AWS's business lines. And then I'm going to bring my colleague Stu Miniman into the conversation, and we're going to talk about AWS's hybrid strategy in some detail, and then we're going to wrap. So, the first thing that I want to do is give you a brief snapshot of the re:Invent takeaways, and I'll try to give you some commentary that you might not have heard coming out of the show. So, to summarize re:Invent: AWS is not big on rinsing and repeating, they have this culture of raising the bar, but one thing that doesn't change is this shock and awe of announcements that they do; it comes out each year, and it's obvious. It's always a big theme, and this year Andy Jassy really wanted to underscore the company's feature and functional lead relative to some of the other cloud providers. Now the overarching theme that Jassy brought home in his keynote this year is that the cloud is enabling transformation.
Not just teeny, incremental improvement, he's talking about transformation that has to start at the very top of the organization, so it's somewhat of a challenge and an appeal to enterprises generally, versus what is often a message to startups at re:Invent. And he was specifically talking to the c-suite here. Jassy didn't say this, but let me paraphrase something that John Furrier said in his analysis on theCUBE. He said if you're not born in the cloud, you basically better find the religion and get reborn, or you're going to be out of business. Now, one of the other big trends that we saw this year at re:Invent, and it's starting to come into focus, is that AWS is increasingly leveraging its acquisition of Annapurna with these new chipsets that give it higher performance and better cost structures and utilization than it can get with merchant silicon, and specifically Intel. And here's what I'll say about that. AWS is one of the largest, if not the largest, customer of Intel's in the world. But here's the thing, Intel wants a level playing field. We've seen this over the years, where it's in Intel's best interest to have that level playing field, as much as possible, in its customer base. You saw it in PCs, in servers, and now you're seeing it in cloud. The more balanced the customer base is, the better it is for Intel, because no one customer can exert undue influence and control over Intel. Intel's a consummate arms dealer, and so from AWS's perspective it makes sense to add capabilities and innovate, and vertically integrate in a way that can drive proprietary advantage that they can't necessarily get from Intel, and drive down costs. So that's kind of what's happening here. The other big thing we saw is latency, what Pat Gelsinger calls the law of physics. Well, a few years ago AWS wouldn't even acknowledge on-prem workloads, and Stu and I are going to talk about that, but it clearly sees hybrid as an opportunity now.
I'm going to talk in more detail and drill into this with Stu, but a big theme of the event was moving Outposts closer to on-prem workloads that aren't going to be moving into the cloud anytime soon. And then also the edge, as well as, for instance, Amazon's Wavelength announcement that puts Outposts into 5G networks at major carriers. Now another takeaway is that AWS is unequivocal about the right tool for the right job, and you see this really prominently in database, where I've counted at least 10 purpose-built databases in the portfolio. AWS took some really indirect shots at Oracle, maybe even direct shots at Oracle, which treats Oracle Database as a hammer and every opportunity as a nail, antithetical to AWS's philosophy. Now there were a ton of announcements around AI, and specifically the SageMaker IDE, SageMaker Studio, stood out as a way to simplify machine intelligence. This approach addresses the skillset problem. What I mean by that is the lack of data scientists to leverage AI. But one of the things that we're kind of watching here is, it's going to be interesting to see if it exacerbates the AI black box issue, making the logic behind the machines' outcomes less transparent. Now, all of this builds up to what we've been calling next-gen cloud, and we're entering a new era that goes well beyond infrastructure as a service and lift-and-shift workloads. And it really ties back to Jassy's theme of transformation, where analytics and new computing models, like serverless, are fundamental now, as is security, a topic that we've addressed in detail in prior Breaking Analysis segments. AWS even made an announcement around quantum computing as a service, they call it Braket. So those are some of the things that we were watching. All right, now let's pivot and look at some of the data.
Here's a reminder of the macro financials for AWS, we get some decent data around AWS financials, and this chart, I've showed before, but it's AWS's absolute revenue and quarterly revenue year on year with the growth rates. It's very large and it's growing, that's the bottom line, but growth is slowing, to 35% last quarter as you can see. But to iterate, or reiterate, we're looking at a roughly 36 billion dollar company growing at 35% a year, and you don't see that often. And so, this market, it still has a long way to go. Now let's look at some of the ETR tactical data on spending. Now remember, spending intentions according to ETR are reverting to pre-2018 levels, and are beginning to show signs of moderation. This chart shows spending momentum based on what ETR calls net score, and that represents the net percentage of customers that are spending more on a particular platform. Now, here's what's really interesting about this chart. It shows the net scores for AWS across a number of the company's markets, comparing the gray, which is the October '18 survey, with the blue, July '19, and the yellow, October '19. And you can see that workspaces, machine learning and AI, cloud overall, analytic databases, they're all either up or holding the same levels as a year ago, so you see AWS is bucking the trend, and even though spending on containers appears to be a little less than last year, it's holding firm from the July survey. So my point is that AWS is really bucking that trend from the overall market, and is continuing to do very very well. Now this next slide takes the same segments, and looks at what ETR refers to as market share, which is a measure of pervasiveness in the survey. So as you can see, AWS is gaining in virtually all of its segments. So even though spending overall is softening, in the marketplace AWS is doing a much better job than its peers on balance. Now, the other thing I want to address is this notion of repatriation.
I get this a lot, as I'm sure do other analysts. People say to me, "Dave, you should really look into this. We hear from a lot of customers that they moved to the cloud, and now they're moving workloads back on-prem because the cloud is so expensive." Okay, so they say "You should look into this." So this next chart really does look into this. What the chart shows, across those same offerings from AWS, so the same services, is the percent of customers that are replacing AWS, so I'm using this as a proxy for repatriation. Look at the numbers, they're low single digits. You see traditional enterprise vendors' overall business growing in the low single digits, or shrinking. AWS's defections are in the low single digits. So, okay, now look at this next chart. What about adoptions? If the cloud is slowing down, you'd expect a slowdown in new adoptions. What this data shows is the percent of customers that are responding that they're adding AWS in these segments as a new platform. So look, across the board, you're seeing increases in most of AWS's market segments. Notably, in respondents citing AWS overall, at the very rightmost bars, you are admittedly seeing some moderation relative to last year. So that's a bit of a concern and clearly something to watch, but as I showed you earlier, AWS overall, that same category, is holding firm, because existing customers are spending more. All right, so that's the data portion of the conversation, hopefully we put that repatriation stuff to bed, and I now want to bring in Stu Miniman to the conversation, and we're going to talk more about multicloud, hybrid, on-prem, we'll talk about Outposts specifically, so Stu, welcome, thank you very much for coming on. >> Thanks Dave, glad to be here with you. >> All right, so let's talk about, let's start with multicloud, and dig into the role of Kubernetes a little bit; let me sort of comment on how I think AWS looks at multicloud.
I think they look at multicloud as using multiple public clouds, and they look at on-prem as hybrid. Your thoughts on AWS's perspective on multicloud, and what's going on in the market. >> Yeah, and first of all, Dave, I'll step back for a second. You talked about how Amazon has for years taken shots at Oracle. The one that Amazon actually was taking some shots at this year was Microsoft; not only did they talk about Oracle, they talked about SQL Server customers looking to flee. And I lead with that because when you talk about hybrid cloud, Dave, if you talked to any analyst over the last three, four years and you said "Okay, what vendor is best positioned in hybrid, which cloud provider has the best solution for hybrid cloud?" Microsoft is the one that we'd say, because of their strong position in the enterprise, of course with Windows, the move to Office 365, Azure the clear number two player, and they've had Azure Stack for a number of years, and they had Azure Pack before that; they've had a number of offerings, and they just announced this year Azure Arc, so we've had at least three generations of hybrid multicloud solutions from Microsoft. Amazon has a different positioning. As we've talked about for years, Dave, not only doesn't Amazon like to use the words hybrid or multicloud, for the most part, but they do have a different viewpoint. So the partnership with VMware expanded what they're doing on hybrid, and while Andy Jassy at least acknowledges that multicloud is a thing, when he sat down with John Furrier ahead of the show, he said "Well, there might be reasons why customers, either there's a group inside that has a service that they want, that they might want to do a secondary cloud, or if I'm concerned that I might fall out of love with this primary supplier I have, I might need a second one." Andy, in not so many words, just about said, "Look, we understand multicloud is a thing."
Now, architecturally, Amazon's positioning on this is that you should use Amazon, and they should be the center of what you're doing. You talked a lot about Outposts; Outposts is critical to what Amazon is doing in this environment. >> And we're going to talk about that, but you're right, Amazon doesn't like to talk about multicloud as a term, however, and by the way, they say that multicloud is more expensive, less secure, more complicated, more costly, and that's probably true, but you're right, they are at least acknowledging it, and I would predict, just as with hybrid, which we want to talk about right now, they'll be participating in some way, shape, or form. But before we go to multicloud, or hybrid, what about Kubernetes? >> So, right, first of all, we've been at the KubeCon show for years, we've been watching Kubernetes since the early days. Kubernetes is not a magic layer, it does not automatically say "Hey, I've got my application, I can move it willy-nilly." Data gravity's really important; how I architect my microservices solution absolutely is hugely important. When I talk to my friends in the app dev world, Dave, hybrid is the way they are building things a lot. If I took some big monolithic application and I start pulling it apart, and I have that data warehouse or data store in my data center, I can't just migrate that to the cloud; David Floyer for years has been talking about the cost of migration. So microservice architecture is the way most customers are building, and a hybrid environment often is there. Multicloud, we're not doing cloud bursting, we're not just saying "Oh hey, I woke up today, and cloud A is cheaper than cloud B, let me move my workload." Absolutely, I had a great conversation with a good Amazon customer that said two years ago, when they deployed Kubernetes, they did it on Azure.
You want to know why? The Azure solution was more mature and they were doing Azure, they were doing things there. But as Amazon fully embraced Kubernetes, not just sitting on top of their solution, but launched the service, which is EKS, they looked at it, and they took an application, and they migrated it from Azure to Amazon. Now, migrating it, there's the underlying services, and everybody does things a little bit different. If you look at some of the tooling out there, a great one to look at is HashiCorp, which has some great tooling that can span across multiple clouds, but if you look at how they deploy, to Azure, to Google, to AWS, it's different, so you've got to have different code, there's different skillsets; it's not a utility with just generic compute and storage and networking underneath, you need to have specific skills there. So Kubernetes, absolutely, when I've been talking to users for the last few years and saying "Why are you using Kubernetes?" the answer is "I need that eject lever, so that if I want to leave AWS with an application, I can do that. It's not press a button and it's easy, that easy, but I know that I can move it, 'cause underneath, the pods, and the containers, and all those pieces, the core building blocks are the same. I will have to do some reconfiguration." As we know with migration, usually I can get 80 to 90 percent of the way there, and then I need to make the last minute-- >> So it's a viable hedge on your AWS strategy, okay. >> Absolutely, and I've talked to lots of customers. Amazon shows that most cloud Kubernetes solutions out there are running on Amazon, and when I go talk to customers, absolutely, a lot of the customers that are doing Kubernetes in the public cloud are doing that on Amazon, and one of the main reasons they're using it is as a hedge against being all-in on Amazon.
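Stu's point that a Kubernetes migration gets you "80 to 90 percent of the way there" can be illustrated with a toy sketch: the core of a workload spec carries between clouds unchanged, while provider-specific pieces, like load-balancer annotations and storage classes, must be redone per provider. The annotation prefixes below are real Kubernetes conventions, but the manifest and the clean portable/specific split are simplifications for illustration, not a migration tool.

```python
# Illustration of the "80-90% portable" idea: strip the cloud-specific
# pieces from a (simplified) workload spec, leaving the part you could
# carry to the next cloud as-is.

CLOUD_SPECIFIC_ANNOTATION_PREFIXES = (
    "service.beta.kubernetes.io/aws-",    # AWS load-balancer tuning
    "service.beta.kubernetes.io/azure-",  # Azure load-balancer tuning
    "cloud.google.com/",                  # GCP-specific knobs
)

def portable_core(manifest):
    """Return a copy of a manifest with provider-specific pieces removed,
    i.e. the part that migrates between clouds without rework."""
    core = {k: v for k, v in manifest.items() if k != "annotations"}
    core["annotations"] = {
        k: v for k, v in manifest.get("annotations", {}).items()
        if not k.startswith(CLOUD_SPECIFIC_ANNOTATION_PREFIXES)
    }
    # Storage classes map to provider disk types, so they must be
    # re-picked on the target cloud.
    core.pop("storageClassName", None)
    return core

# Hypothetical workload spec, for illustration only.
svc = {
    "image": "shop/api:1.4",
    "replicas": 3,
    "storageClassName": "gp3",  # an AWS EBS volume type
    "annotations": {
        "service.beta.kubernetes.io/aws-load-balancer-type": "nlb",
        "team": "payments",
    },
}
print(portable_core(svc))
```

The image, replica count, and generic metadata move as-is; the storage class and the NLB annotation are the "last 10-20%" that need per-cloud reconfiguration.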
>> All right, let's talk about Outposts, specifically as part of Amazon's hybrid strategy, and now their edge strategy as well. >> Right, so Azure Stack, which I mentioned earlier, from Microsoft, has been out there for a few years. It has not been doing phenomenally well. When I was at Microsoft Ignite this year, I heard that basically certain government agencies and service providers are using it, basically delivering Azure as a service. But Azure Stack is basically an availability zone in my data center, and Amazon looked at this and said, "That's not how we're going to build this." Outposts is an extension of your local region. So while people look at the box, and I took a picture of the box and folks were like, "Hey, whose server and what networking card, and the chipset and everything," I said, "Hold on a second. You might look at that box, and you might be able to open the door, but Amazon is going to deploy that, they're going to manage that. Really you should put a curtain in front of it and say pay no attention to what's behind here, because this is Amazon gear, it's Amazon as a service in your data center, and there are only a handful of services that are going to be there at first." If I want to even use S3, day one, with the Amazon native services, you're going to just use S3 in your local region. Well, what if I need specific latency? Well, Amazon's going to look at that and see what's available. So it is Amazon hardware, the Amazon software, the Amazon control plane, reaching into that data center, and very scalable. Amazon says over time it should be able to go to thousands of racks if you need, so absolutely that cloud experience closer to my environment, where I need certain applications, certain latency, certain pieces of data that I need to store there.
>> And we've seen Amazon dip its toe into the hybrid on-prem market with Snowball and Greengrass and stuff like that before, but this is a much bigger commitment, one might even say capitulation, to hybrid. >> Well, right, and the reason why I even say, this is hybrid, but it's all Amazon. It is not "take my private cloud and my public cloud and tie 'em together," it's not Oracle's Cloud at Customer or an IBM solution, where they're saying "I'm going to put a rack here and a rack there, and it's all going to work the same." It is the same hardware and software, but it is not all of the pieces-- >> VMware and Outposts is hybrid. >> Really interesting, Dave, as the native AWS solution was announced first here in 2019, and the VMware solution on Outposts isn't going to be available until 2020. Draw from that what you will. It's been a strong partnership, there are exabytes of data in the VMware cloud on AWS now, but yeah, it's a little bit of a-- >> Quid pro quo, I think is what you call that. >> Well, I'd say Amazon is definitely saying, "We're going to encroach a little bit on your business, and we're going to lock you into our environment, too." >> Okay, let's talk about the edge, and Outposts at the edge. They announced Wavelength, which is essentially taking Outposts and putting it into 5G networks at carriers. >> Yeah, so Outposts is this building block, and what Amazon did is they said, "This is pretty cool, we actually have our environment and we can do other things with it." So sometimes they're just taking pretty much that same block and using it for another service, so one that you didn't mention was AWS Local Zones. It is not a whole new availability zone, but it is basically extending the cloud, multi-tenant. The first one is done for the media and entertainment market in Los Angeles, and you expect more. How does Amazon get lower latency, get closer, and get specialized services? Local Zones are how they're going to do this.
The Wavelength solution is something they built specifically for the telco environment. I actually got to sit down with Verizon; this was at least an 18-month integration. Anybody that's worked in the telco space knows that it's usually not standard gear, there's NEBS certification, there's all these things, it's often even DC power. So it is leveraging Outposts, but it is not them rolling the same thing into Verizon that they did in their own environments. It's similar in how they're going to manage it, but as you said, it's going to push to the telco edge, and in a partnership with Verizon, Vodafone, SK Telecom, and some others that will be rolling out across the globe, they are going to have that 5G offering, and it's a little bit like, I actually buy it from Amazon, but you still buy the 5G from your local carrier. It's going to roll out in Chicago first, enabling all of those edge applications. >> Well, what I like about the Amazon strategy at the edge is, and I've said this before on a number of occasions on theCUBE Breaking Analysis, they're taking programmable infrastructure to the edge. The edge will be won by developers in my view, and Amazon obviously has got great developer traction. I don't see that same developer traction at HPE, or even Dell EMC proper, or even within VMware, and now they've got Pivotal, they've got an opportunity there, but they've really got a long way to go in terms of appealing to developers, whereas Amazon I think is there, obviously, today. >> Yeah, absolutely true, Dave. When we first started going to the show seven years ago, it was very much the hoodie crowd and all of those cloud-natives. Now, as you said, it's those companies that are trying to become born again in the cloud and do these environments. And I had a great conversation with Andy Jassy on air, Dave, and I said, "Do we just shrink-wrap solutions and make it easy for the enterprise to deploy, or are we doing the enterprise a disservice?"
Because if you are truly going to thrive and survive in the cloud-native era, you've got to go through a little bit of pain, and you need to have more developers. I've seen lots of stats about how fast people are hiring developers, and it's really a reversal of that old outsourcing trend: I really need IT and the business working together, being agile, and being able to respond and leverage data. >> It's that hyperscaler mentality that Jassy has, "We've got engineers, we'll spend time on creating a better mousetrap, on lowering costs," whereas the enterprise, they don't necessarily have as many resources or as many engineers running around; they'll spend money to save time, so your point about solutions I think is right on. We'll see, I mean look, never say never with Amazon. We've seen it, certainly with on-prem, hybrid, whatever you want to call it, and I think you'll see the same with multicloud, and so we watch. >> Yeah, Dave, the analogy I gave in the final wrap is "Finding the right cloud is like Goldilocks finding the perfect solution." There's one solution out there, I think it's a little too hot, and you're probably not smart enough to use it just yet. There's one solution that, yeah, absolutely, you can use all of your credits to leverage it, and it will meet you where you are, and it's great. And then you've got Amazon trying to fit everything in between, and they feel that they are just right no matter where you are on that spectrum, and that's why you get $36 billion growing at 35%, not something I've seen in the software space. >> All right, Stu, thank you for your thoughts on re:Invent, and thank you for watching this episode of theCUBE Insights, powered by ETR. This is Dave Vellante for Stu Miniman, we'll see you next time. (techno music)

Published Date : Dec 13 2019


Dan Hubbard, Lacework & Ilan Rabinovitch, Datadog | AWS re:Invent 2019


 

>> Live from Las Vegas, it's theCube, covering AWS re:Invent 2019, brought to you by Amazon Web Services, along with its ecosystem partners. >> Good afternoon. Welcome back to theCube's coverage of AWS re:Invent 19 from Las Vegas. I'm Lisa Martin. My co-host is Justin Warren, the founder and chief analyst at PivotNine. Justin, great to have you. >> Great to be here next to you in the hosting chair today. Always fun. Let's have a great conversation next, shall we? >> All right. A couple of our guests have joined Justin and me. I've got Dan Hubbard to my left, CEO of Lacework, and Ilan Rabinovitch, the VP of product at Datadog. Guys, welcome. >> Our pleasure to be here. >> Love anytime we can talk about dogs, even if there's no relation to the actual technology. Two thumbs up for me. So let's go ahead. I know that you guys have both been on before, or your companies have, but give our audience a refresher and overview. Dan, we'll start with you: Lacework, what do you guys do? >> Sure, yeah. At Lacework, we wake up every morning with a goal of trying to help our customers secure their public cloud infrastructure and any type of cloud-native technologies, such as Kubernetes, or containers, or any microservices. So we're a security company for the cloud and cloud-native technologies. >> Awesome. Ilan, give us a refresher about Datadog. >> Datadog is a monitoring and analytics platform for your modern infrastructure and applications. So microservices, containers, cloud providers like AWS; we're here at re:Invent. Our goal is to help teams collaborate and understand the health of their business and their applications and their infrastructure. >> So how do you guys work together? >> So we recently announced a partnership and an integration: the intelligence and the data on all the risks and the threats that Lacework is identifying are sent automatically inside of the Datadog platform.
So we're putting the data from our platform directly into, obviously, the monitoring and metrics platform, Datadog's. >> Yep. And so we're pulling that intelligence from Lacework into our platform, for our new security monitoring product, in addition to enriching it with metrics from our infrastructure and application monitoring. We find that a lot of times the first signs that something's going wrong might be a change in how your infrastructure or your applications are performing, or a request that came in. And so if we're able to marry the two together, it's just a much better together story. >> Give people much, much clearer insights into what's going on. Security has been a really tricky thing to solve for as long as I've been in computing, which is longer than I can remember. But walk us through, what does this extra visibility actually provide to customers? One of the big issues seems to be that security is just too hard. So how does this make security easier for customers? >> So one of the big trends that we're seeing is that security and infrastructure were, in the past, very separate groups, very siloed. Many of them didn't know each other or talk to each other. But DevOps is becoming a unifying force of data, intelligence, and infrastructure. You know, it's infrastructure as code. It's a little bit different with AWS, for example, but it still is infrastructure. And so the combination of security and infrastructure comes together.
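The point about an infrastructure symptom often being the first sign of a security problem can be sketched in a few lines. To be clear, this is not Lacework's or Datadog's actual detection logic; the baseline window, threshold factor, and sample data below are all made up, purely to illustrate why correlating the two signal streams surfaces an incident faster than either stream alone:

```python
from statistics import mean

def spike_indices(cpu_series, baseline_n=5, factor=2.0):
    """Flag samples well above a baseline learned from the first few samples.
    A naive stand-in for real anomaly detection."""
    baseline = mean(cpu_series[:baseline_n])
    return [i for i, v in enumerate(cpu_series) if v > factor * baseline]

def correlate(cpu_series, alert_times, window=2):
    """Pair each security alert with any CPU spike within +/- `window` samples."""
    spikes = spike_indices(cpu_series)
    return [(t, s) for t in alert_times for s in spikes if abs(t - s) <= window]

# Steady ~20% CPU, then a sustained jump when a hypothetical miner lands at t=8.
cpu = [20, 21, 19, 20, 22, 20, 21, 20, 95, 97, 96]
alerts = [9]  # security tool flags an unknown process at t=9

print(correlate(cpu, alerts))  # prints: [(9, 8), (9, 9), (9, 10)]
```

A real platform would use far more robust anomaly detection and stream processing, but the shape of the idea, joining security findings to metric anomalies by time, is the same.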
And so as all of these teams are taking advantage of infrastructure as code and other DevOps best practices, the security teams are looking at this and saying, how do I get earlier in the cycle? How do I make sure that code is enforcing this? Some scaling, you know, I'm scaling with automation, scaling with code rather than with people. Uh, and then as, as they start to do that, they realize that the data that's in the security silo and that's an application or infrastructure silo, uh, is actually very relevant to one another. Right? If a crypto miner shows up on your systems, the first thing it's going to do is spike your CPU. Um, the, you know, something like Lacework will also, you know, will, will detect that as well if we both look at both of those signals with detective faster. >>Yeah. So go ahead Justin. Sorry. This is a bit of it. That's the reactive side of, of security, which is, you know, there's a threat happens and you react to that, but part of DevSecOps or whichever term you want to actually use, part of that is act to actually shift left and try to get rid of these security flows before they even happen in the code, which is a lot of software development. I like to say that the first 80% of software development is putting the bugs in and the second 90% is taking them out again. So how do you help developers actually remove all of the security vulnerabilities before they even make it into production code? Yeah, >>so just like metrics and monitoring allow you to look at the quality of your infrastructure are very early in the pipeline. A security needs to go there also. Um, and it's, it's really, there is no time. It's just a continuous cycle. Um, early, what we allow you to do is to look at your configuration and check to see if your configuration is changing in a way that is leaving you at risk or an exposure. 
What's particularly interesting about this partnership is that quite often security people don't know enough about the application or the infrastructure to know if it's a risk. It's actually the dev ops people then now, so security people when when we send an alert many times to security person, they scratch their heads and go, I don't know if this is good, bad, or indifferent. The dev ops people look at it and go, Oh yeah, this is definitely okay. >>Yeah, that's the way our infrastructure should work. This is the way our application should work. Or they say, Oh no, this is a big problem. Let's get security involved. So doing that early is really critical and again, >> it's all about breaking down. I mean if dev ops was all about breaking down silos between Devin operations and and other parts of the business, dev, sec ops or secure dev ops or whatever we want to call it, is just bringing more people into the fold and helping security join that party, um, and get at things earlier in the cycle so we can catch it before it, you know, before, before there's a breach that's in the news, >>right? To be able to be predictive, which is, and then prescriptive, which is about a lot of businesses would love to be able to be, I'd like to get your opinion, Dan, on how cloud >>native cloud and the tra, the transformation of cloud technologies is changing the conversation within the customer base. One of the things Andy Jassy said yesterday is that transformation has gotta be driven from the top down like true business transformation. So that you know, a company is an Uber I's for example. Are you seeing that? Are these, are these, for example, what you're talking about with enlightening the DevOps folks in the security folks bringing them together so that they can be more collaborative? 
Are you seeing that come from more of a top down approach in terms of how do we leverage our data better, make sure that we have security and are able to securely extract insights from the data? Or is it still kind of from both ends? It depends on the, >>but he, it's, it's very diverse. Uh, what we see a lot is in large, uh, large companies that are migrating to the cloud but weren't born in the cloud. Every company they're buying is a cloud native company. So they buy these new companies and they look, everyone looks at the new company goes, wow, that's amazing. They can move so fast. They, they are, you know, super forward thinking and they're pushing code and are more efficient than us. We want to do that also. So it just kind of breeds the innovation and the speed from an M and a perspective. You know, in the, in the cloud native side, what we see is, it depends on your tenure as a company when you really want to take security seriously. You know, usually B2B companies take it more seriously in B to C for example. But it's usually, it's when your customers start asking you how secure are you, is when people start paying attention. >>We would like it to be before that. Right? And it's not always, you know, before that. Yup. I mean, I think it's from both directions. It depends on the size of the company and the culture, but you can't dictate culture. Right? So, uh, and a lot of, a lot of this, a lot of these silos and a lot of these sort of, these camps and fiefdoms that start to exist within organizations that have caused these groups to be separate. Um, they weren't necessarily top down. It's just, you know, it's a, it's human to human interactions. And so you, you, you can't just walk in and say, you must now be collaborative. Um, the executives have to beat that drum and help people understand why that's important to the business. 
But the folks on the ground have to actually want to be at one, want to be friends, want to talk, want to collaborate on projects, want to pull people in earlier. >>Um, and once they have that human connection, it's a lot more successful. So you have to do both. Yeah. Well, I mean what we're seeing is as it becomes more distributed and security is more centralized, you run to problems. So the people that are getting it right or are distributing security as close to those teams, whether it's a scrum team, a weekly get together, you know, whatever it is to get that human interaction together because you don't understand the application and what people are working on. How are you going to understand the risks and the threats in the models. So distributing it is really key and it's important those security teams understand the business requirements as well. Sometimes the most secure answer isn't necessarily the answer that actually serves their customers. Sometimes some, and sometimes app teams don't understand the trade offs that security people may understand. So it has to be, it has to be a partnership. Yep. >>You mentioned called change is probably >>harder than anything else, especially if there's a legacy organization. And Dan, to your point, a lot of the acquisitions they're doing are a cloud native companies who are presumably much fresher, maybe have a younger workforce. That's hard to do. Ultimately though, what a business needs to look at is legacy business. There's probably somebody in my rear view mirror is a lot closer than I might think that is more agile, more nimble than we are, has great technology and the aptitude and the culture to be able to move faster. How do you see some of these enterprises that you work with together? Let's put them in the context of they're an AWS customer. How are you seeing these enterprise organizations that are adopting and acquiring cloud native businesses? 
How are they able to pivot at the speed they need to use cloud technology, understand the security issues that they can remediate and really take that data to what it should be, which is a business differentiator. >>Yeah, I mean, you know, a lot of the times you run into the dev ops people say security slows us down. They're getting in our way and security says developers are insecure that, you know, we're totally gonna get breached. So, um, you know, one of our mottoes is you got to move with speed and safety. Um, as soon as you get in the way of anything. You know, typically the developer and the application's going to win. So you got to figure out where to get involved in that. And really big companies, what we've seen that are very inquisitive is they're moving the security to a central governance role, um, and maybe have tooling and uh, you know, some specialty teams and then they're distributing security baked as deep into the development infrastructure as they can. And then they have groups which kind of work together, uh, you know, broadly across that. >>So you can structurally set it up that way I think. And if you have the incentives right now, you know, nobody's looking to create a security breach, there are a vulnerability there. Gold engine engineers and your employees have your best, the company's best intentions at heart, otherwise they wouldn't, they wouldn't work, you know, work there. So they're looking to do the right thing. You just have to make it easy for them with, and some that's tooling. Some of that's culture. Some of that's just starting the conversation, not the day of the release started, you know, start it when the, when the, when the, when the first line of code is being written, what would it take for us to solve this problem in a secure fashion? And then everybody was happy to work together. They just don't want to redo things. You know, the, the, the day before the launch should have to, you know, be slowed down. 
>>Well that technical debt becomes a real problem. Right? Yeah. I think one of the great things about, uh, you know, our technical, uh, partnership and integration here is security in the past has always been just very binary. Are we insecure, secure? That's it. We're actually, there's all kinds of nuances around it and that's what lends itself to metrics. If, you know, what are our metrics? How are we doing, what's our risk? What's our exposures? Is getting better over time? Is it worse over time? So there's always the doomsday scenario, but there's also the, what's happening over time and are we getting better at what we do? And metrics really lends itself to that. And that comes right back to that, to that, uh, you know, some of dev ops philosophies of continuous improvement and continuous learning, uh, you know, bringing that into the world of security is, is just as critical. >>So you, so you mentioned, you've mentioned culture, you mentioned transformation, you mentioned metrics. So three things very close to my heart. Uh, we keep hearing this security is becoming a board level conversation. So a lot of this is very technical and, and DevSecOps is down here with the technical people, but that structure of the organization that you referred to and, and changing that structure and setting the culture that tends to come from the top level. And we heard from Andy in the keynote yesterday that that is very, very important. So what are the sorts of conversations you're having with senior management and board level from what your products do together? What does that look like from the board's perspective? So learning to manage risk, looking at how are we doing, how much of what of what you do is actually available to the board for them to make their job easier. >>I think one of the exciting trends is that compliance is cool again, right complaints. It's never a cool thing, you know, flight's kind of a boring thing. 
The auditors come in once a year, you know, you get stuck with it, and away you go. But now compliance is continuous. It's always running, and it's more about risks and exposures, and adhering to compliance via those risks and exposures. Executives get it. It's very challenging to explain things like Kubernetes and pods and nodes and all the technical acronyms and mumbo jumbo that we live in every day, you know, in this world. But compliance is real. Are we PCI, SOC 2, NIST compliant? Are we applying best standards and best practices? So the ability to pull that in, either via a metrics dashboard or through measurable things over time, I think is really key. As part of that.
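That "measurable things over time" framing reduces to something very simple: a passing-rate series over a set of posture checks. The check names and results below are invented for illustration; real frameworks like PCI, SOC 2, or NIST carry hundreds of controls, but the trend idea is the same:

```python
def daily_pass_rate(results_by_day):
    """results_by_day: one {check_name: passed} dict per day."""
    return [round(100 * sum(day.values()) / len(day)) for day in results_by_day]

def trend(rates):
    """Crude dashboard signal: compare the latest reading with the first."""
    return "improving" if rates[-1] > rates[0] else "flat or worsening"

# Three days of made-up compliance checks running continuously.
week = [
    {"s3_buckets_private": True, "mfa_enforced": False, "tls_everywhere": False},
    {"s3_buckets_private": True, "mfa_enforced": True,  "tls_everywhere": False},
    {"s3_buckets_private": True, "mfa_enforced": True,  "tls_everywhere": True},
]

rates = daily_pass_rate(week)
print(rates, trend(rates))  # prints: [33, 67, 100] improving
```

A number an executive can read off a dashboard, "are we getting better over time?", is exactly what turns an annual pass/fail audit into a continuous signal.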
You know, it used to be like, I got five racks, I got 22, you know, 2200 servers. These are the IPS and that's it. Now it's like, what time is it? I don't know what I have, you know? So I think visibility's key, you used to be able to have a server that you might've monitored throughout your tenure at a company. Now you probably can't monitor it through the tenure of your lunch. Yeah. Yeah. >>Last question for you guys is how much do you see a lift or an impact from something the capital one data >>breach that happened a few months ago? You talked about, you know, B2B being more on it in terms of B to C, but we S we see these breaches that and many generations that are alive today understand to some degree is that in terms of getting insight into where are all of our risks and vulnerabilities and needing to get that visibility on it, do you see some of these big breaches as, um, catalysts for businesses to go, Oh, we have a lot of stake here. We don't really, and try to understand what the heck's going on and what we own. >>I mean, security has a very bad reputation of fear, uncertainty and doubt. And, you know, I've been in the, in the industry for a long time. Um, that said, you know, those moments do, uh, get up very high. Um, especially somebody like capital one who, who's one of them, no one to be one of the most sophisticated cloud security organizations on the planet. Um, so it certainly piques people's interests. Um, you know, I think people get carried away maybe on the messaging side of things, but you know, in order for security market to get really big, you have to have a big it transformation trend. You have to have a very diverse attack surface and you have to have the beginnings of breach. If you don't have the beginnings of breach, you spent all your time convincing people there may be a problem. And because there is problems that are happening almost every weekend are getting published. >>Um, they know many of them are, are, are being acknowledged. 
Uh, you know, publicly it does help, you know, it definitely helps the conversation. You know, I don't think that there's a lot more, there are a lot more breaches in the news off to some extent because there's a lot more tech companies using going through these digital transmissions, having tech news. I don't know that this is cloud versus not cloud. What cloud does, however introduces new concepts and new workflows that security teams need to understand and that application teams, they understand. And so this is where the new breed of tooling and education comes in, is helping people be ready for that. Um, and yeah, of course anytime there's a headline on, you know, the big on any of the big news shows, of course the first thing we're going to do is say, well clearly there's a, they're going to bring on, they're going to bring on Dan or you know, you know, uh, one of our security experts or somebody in industry to talk about how you prevent that in the future. >>And so it, it does bring some attention in our way, but it's, uh, I think that's great. It's just finding people that what's important. And one of the conversations we have with our prospects is, uh, have you ever had a breach before? You know, they're always going to say no, of course. But then you ask, how do you know, how do you know? How do you really know that? And then let's walk through how you would actually find that out if you did know. And that's a very different conversation than, Oh, my traditional data center, I would know this way. So it's just very different. >>Interesting stuff, guys. Thank you for sharing with us and congratulations on the integration with Datadog and Lacework. We appreciate your time. Our pleasure for Justin Warren. I am Lisa Martin and you're watching the cube live from AWS, reinvent 19 from Vegas. Thanks for watching.

Published Date : Dec 4 2019



Mark Penny, University of Leicester | Commvault GO 2019


 

>>live >>from Denver, Colorado, it's theCube, covering Commvault GO 2019. Brought to you by Commvault. >>Hey, welcome to theCube. Lisa Martin in Colorado for Commvault GO '19. Stu Miniman is with me this week, and we are pleased to welcome one of Commvault's longtime customers from the University of Leicester. We have Mark Penny, systems specialist in infrastructure. Mark, welcome to theCube. >>Hi, it's good to be here. >>So you have been a Commvault customer at the university for nearly 10 years now. Just to give folks an idea, the university has 51 different academic departments, about five research institutes, cool research going on, by the way, and between staff and students about 20,000 folks, I'm sure all bringing multiple devices onto the campus. So talk to us: you came on board in 2010, it's hard to believe that was almost 10 years ago, and said, all right, guys, we really have to get a strategy around backup. Talk to us about way back then. What were you doing, what did you see as an opportunity, and what are you doing with Commvault today? >>At the time, there was a wide range of backup products in use, and there was no real assurance that we were getting backups. We had a bit of Commvault version 7 that was backing up the Windows infrastructure, there was Tivoli Storage Manager backing up a lot of the Linux, there was Amanda, an open-source tool, and then there were all sorts of scripts and things. So, for instance, VMware backups were done by creating an array snapshot with a script, mounting that snapshot into another server, and then backing that server up with Commvault, and the restore process was an absolute nightmare. It was very, very difficult, long-winded, and required a lot of time on the checks. It really was quite difficult to run, and it used a lot of staff time. As far as the corporate side was concerned, it was exclusively on tape; Tivoli Storage Manager was using disk, and Amanda was again tape, in a different, completely isolated system.
Coupled with this, there had been a lack of investment in the data centers themselves, so the network didn't really have a lot of throughput. This meant we were using private backup networks to keep backup data off the production networks, because there were real challenges around bandwidth contention and backups running over their window; you'd get a backup spilling into the working day and affecting students. So we started with a blank sheet of paper in many respects and went out to see what was available. There were the usual ones: NetBackup, obviously Commvault again, ArcServe. But what was really interesting was that deduplication was starting to come in, and at the time a new Commvault version had just been released with an absolutely killer feature for us: client-side deduplication. That meant we could get rid of most of this private backup network that was creating a lot of complexity. It also did backup to disk and backup to tape. So at that point we went in with six media agents, and we had a few hundred terabytes of disk storage. The strategy was to keep 28 days on disk, with long-term retention on tape in a tape library. We kept that going until about 2013 and then took the decision: disk was working, so let's just go disk-only and save a whole load of effort, because even with a tape library you've got to refresh the tapes and things. So it's all on disk with deduplication, and we're basically getting a 1 to 1 ratio. If you take my current figures, we have about 1.5 petabytes of front-side protected data and about 1.5 petabytes in the backup system, which, because of all the synthetic fulls and everything, gives us 12 months retention as well as the 28 days retention. It works really, really well, and that relationship, almost 1 to 1 between what's in the backup, with all the retention, and the client-side data, has been fairly consistent since we went all-disk.
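To make the client-side deduplication Mark credits here concrete, below is a minimal sketch of the idea, not Commvault's actual implementation: the client hashes its data into blocks and asks the backup target which block digests it has never seen, and only those blocks cross the wire. The tiny block size, SHA-256 choice, and in-memory store are all illustrative.

```python
import hashlib


class DedupStore:
    """Toy content-addressed block store standing in for a media agent."""

    def __init__(self):
        self.blocks = {}  # digest -> block bytes

    def missing(self, digests):
        # Keep order, drop duplicates, and return only digests never stored.
        return [d for d in dict.fromkeys(digests) if d not in self.blocks]

    def put(self, block):
        self.blocks[hashlib.sha256(block).hexdigest()] = block


def backup(data, store, block_size=4):
    """Back up `data`, sending only blocks the store has never seen.

    Returns the number of blocks actually transferred.
    """
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    digests = [hashlib.sha256(b).hexdigest() for b in blocks]
    to_send = set(store.missing(digests))  # client-side decision: what to ship
    for d, b in zip(digests, blocks):
        if d in to_send:
            store.put(b)
    return len(to_send)


store = DedupStore()
sent_first = backup(b"aaaabbbbaaaacccc", store)   # 4 blocks, 3 unique, all new
sent_second = backup(b"aaaabbbbaaaacccc", store)  # everything already stored
```

Because the second run ships nothing, repeated full backups cost almost no network traffic, which is why a dedicated backup network stops being necessary.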
>>Mark, I wonder if you'd actually step back a second and talk about the role and importance of data in your organization, because we went through a lot of the bits and bytes of what is there. But as a research organization, I expect that data is quite strategic. >>The data
How do I extract more value out of the data in the 10 years you've been working with calm Boulder? Are you seeing that journey? Is that yes, the organization's going down. >>There's almost there's actually two conflicting things here because researchers love to share their data. But some of the data sets is so big that can be quite challenging. Some of the data sets. We take other people's Day to bring it in, combining with our own to do our own modeling. Then that goes out to provide some more for somebody else on. There's also issues about where data could exist, so there's a lot of very strict controls about the N. H s data. So health data, which so n hs England that can't then go out to Scotland on Booth. Sometimes the regulatory compliance almost gets sidelines with the excitement about research on way have quite a dichotomy of making sure that where we know about the data, that the appropriate controls are there and we understand it on Hopefully, people just don't go on, put it somewhere. It's not because some of the data sets for medical research, given the data which has got personal, identifiable information in it, that then has to be stripped out. So you've got an anonymous data set which they can then work with it Z assuring that the right data used the right information to remove so that you don't inadvertently go and then expose stuff s. So it's not just pure research on it going in this silo and in this silo it's actually ensuring that you've got the right bits in the right place, and it's being handled correctly >>to talk to us about has you know, as you pointed out, this massive growth and data volumes from a university perspective, health data perspective research perspective, the files are getting bigger and bigger In the time that you've started this foundation with combo in the last 9 10 years. Tremendous changes not just and data, but talking about complaints you've now got GDP are to deal with. 
Give us a perspective and snapshot of your of your con vault implementation and how you've evolved that as all the data changes, compliance changes and converts, technology has evolved. So if you take >>where we started off, we had a few 100 petabytes of disk. It's just before we migrated. Thio on Premise three Cloud Libraries That point. I think I got 2.1 petabytes of backup. Storage on the volume of data is exponentially growing covers the resolution of the instruments increases, so you can certainly have a four fold growth data that some of those are quite interesting things. They when I first joined the great excitement with a project which has just noticed Betty Colombo, which is the Mercury a year for in space agency to Demeter Mercury and they wanted 50 terabytes and way at that time, that was actually quite a big number way. We're thinking, well, we make the split. What? We need to be careful. Yes. Okay. 50 terrorizes that over the life of project. And now that's probably just to get us going. Not much actually happened with it. And then storage system changed and they still had their 50 terabytes with almost nothing in it way then understood that the spacecraft being launched and that once it had been launched, which was earlier this year, it was going to take a couple of years before the first data came back. Because it has to go to Venus. It has to go around Venus in the wrong direction, against gravity to slow it down. Then it goes to Mercury and the rial bolt data then starts coming back in. You'd have thought going to Mercury was dead easy. You just go boom straight in. But actually, if you did that because of gravity of the sun, it would just go in. You'd never stop. Just go straight into the sun. You lose your spacecraft. >>Nobody wants >>another. Eggs are really interesting. Is artfully Have you heard of the guy? A satellite? >>Yes. >>This is the one which is mapping a 1,000,000,000 stars in the Milky Way. 
It's now gone past its primary mission, and it's got most of that data. Huge data sets on DDE That data, there's, ah, it's already being worked on, but they are the university Thio task, packaging it and cleansing it. We're going to get a set of that data we're going to host. We're currently hosting a national HPC facility, which is for space research that's being replaced with an even bigger, more powerful one. Little probably fill one of our data centers completely. It's about 40 racks worth, and that's just to process that data because there's so much information that's come from it. And it's It's the resolution. It's the speed with which it can be computed on holding so much in memory. I mean, if you take across our current HPC systems, we've got 100 terabytes of memory across two systems, and those numbers were just unthinkable even 10 years ago, a terrible of memory. >>So Mark Lease and I would like to keep you here all way to talk about space, Mark todo of our favorite topics. But before we get towards the end, but a lot of changes, that combo, it's the whole new executive team they bought Hedvig. They land lost this metallic dot io. They've got new things. It's a longtime customer. What your viewpoint on com bold today and what what you've been seeing quite interesting to >>see how convoy has evolved on dhe. These change, which should have happened between 10 and 11 when they took the decision on the next generation platform that it would be this by industry. Sand is quite an aggressive pace of service packs, which are then come out onto this schedule. And to be fair, that schedule is being stuck to waken plan ahead. We know what's happening on Dhe. It's interesting that they're both patches and the new features and stuff, and it's really great to have that line to work, too. Now, Andi way with platform now supports natively stone Much stuff. And this was actually one of the decisions which took us around using our own on Prem Estimate Cloud Library. 
We were using as you to put a tear on data off site on with All is working Great. That can we do s3 on friend on. It's supported by convoy is just a cloud library. Now, When we first started that didn't exist. Way took the decision. It will proof of concept and so on, and it all worked, and we then got high for scale as well. It's interesting to see how convoy has gone down into the appliance 11 to, because people want to have to just have a box unpack it. Implicated. If you haven't got a technical team or strong yo skills in those area, why worry about putting your own system together? Haifa scale give you back up in a vault on the partnerships with were in HP customer So way we're using Apollo's RS in storage. Andi Yeah, the Apollo is actually the platform. If we bought Heifer Scale, it would have gone on an HP Apollo as well, because of the way with agreements, we've got invited. Actually, it's quite interesting how they've gone from software. Hardware is now come in, and it's evolving into this platform with Hedvig. I mean, there was a convoy object store buried in it, but it was very discreet. No one really knew about it. You occasionally could see a term on it would appear, but it it wasn't something which they published their butt object store with the increasing data volumes. Object Store is the only way to store. There's these volumes of data in a resilient and durable way. Eso Hedvig buying that and integrating in providing a really interesting way forward. And yet, for my perspective, I'm using three. So if we had gone down the Hedvig route from my perspective, what I would like to see is I have a story policy. I click on going to point it to s three, and it goes out it provision. The bucket does the whole lot in one a couple of clicks and that's it. Job done. I don't need to go out, create the use of create the bucket, and then get one out of every little written piece in there. 
And it's that tight integration which is where I see the benefits coming in. It's giving value to the platform and giving the customer the assurance that they've configured it correctly, because the process is automated and Commvault has ensured that, every step of the way, the right decisions are being made. And with Metallic, that's what everything is about: tried and tested products with a very, very smart workflow process put around them to ensure the decisions you make are sound. You don't need to be a Commvault expert to get the outcome and get the backups. >>Excellent. Well, Mark, thank you for joining Stu and me on theCube, talking about the evolution that the University of Leicester has gone through and your thoughts on Commvault's evolution in parallel. We appreciate your time. >>For Stu Miniman, I'm Lisa Martin. You're watching theCube from Commvault GO '19.

Published Date : Oct 15 2019



James Ilari, Alectra & Stephanie Schiraldi, Alectra | Nutanix .NEXT Conference 2019


 

>> Live from Anaheim, California, it's theCube, covering Nutanix .NEXT 2019. Brought to you by Nutanix. >> Welcome back, everyone, to theCube's live coverage of Nutanix .NEXT here in Anaheim, California. I'm your host, Rebecca Knight, along with my co-host, John Furrier. We have two guests for this next segment. We have Stephanie Schiraldi, the director of operations and support for Alectra. Thank you so much for coming on theCube. And we have James Ilari, director of innovation and governance at Alectra. Thank you, James. >> Thanks for having us. >> So I want to start with you, James. Tell our viewers a little bit about Alectra, based in Ontario, for our viewers who are not familiar. What do you do? What are you about?
>> So right now we have about, I think, eleven data centers and we've been mandated to get down to two. So we're use up utilizing technology like nutanix too kind of, you know, get down and scale ability. So wait >> here for a lot of customs from nutanix around, how it's been a great system for manageability and also getting rid of some older gear, whether it's old GMC Cem Dale stuff. So we're seeing a lot of, you know, go from twenty four racks to six. This is kind of the ratios pushing stuff from eight weeks. Tow two hours, new operational benefits. How close are you guys up to that now? Because you get all this stuff you consolidating down the merger's makes a lot of sense. What's some of the operational benefits you seeing with nutanix That you could share, >> I think, is a per example that you just gave. We're working on a front office consolidation project and we're moving. We're doubling our VD i environment, and we actually just got three new nodes in a few weeks ago and it took a matter of two hours to get everything spun up and ready. So traditionally, it would take us weeks of planning and getting someone in and specialized technicians and now make a phone call a few hours and it's done. So you see, like already the benefits of you know growing are our infrastructure, and it's enabling us to merge faster with different utilities. >> I want to actually back up now and talk about the journey to Nutanix and talk about life before nutanix and now life after it. What was that what were sort of the problems that you were trying to solve? And why was Nutanix the answer >> So I could speak to that way back in twenty fifteen? We're looking at video, and we're implementing it across organization. And we're running its issues on three tier architecture where whenever there was a performance issue, we would talk to the sand guy and we'LL talk to the server guy and we talked to the networking guy. 
And although everyone's trying to help everyone sort of looking at each other, saying, Okay, where is this problem? Really, really land? And the issue with that is, as you guys know what VD I I mean, user performance and user experience is key, right? That's King. So you know, when you're trying to take away someone's physical desktop and give him a virtual desktop, they want the same or better performance. And anytime we had an issue, we had to resolve it rapidly. So when we look at everything we said, Okay, this is okay, but it's not sustainable for the scale, ability in the growth that we had, especially because with, you know, ah media environment, its scales very rapidly and If the application scares wrapped scales rapidly, you need the infrastructure to scale as rapidly as your application and perform just as good. So what happened was we looked at nutanix. We said, You know what? If we can look at a single pane of glass to figure out where any performance issues lie, that makes operations much more operations, that management administration much easier for us. And that's really where we started our journey with nutanix. We went from a three note cluster to start and we're up to fourteen nodes now, just in our VD I cluster alone. >> And what about about the future? What? What is the future hold in terms of this partnership, >> I think for us were really hoping to go to fully H V in the next six or twelve months. Uh, I know, James. We're really pushing it and trying to get that in because, you know, way want to simplify our technologies. And I think by moving to a Chevy, I think, you know, we'LL save some money. >> So what we're looking to do with Nutanix isn't you know, there's been a lot of wins for us moving to NUTANIX, especially with regards to support Support's been fantastic. 
I mean, you know, although we don't like to call support because I mean something's probably wrong way love calling you guys because every time we call support, it's, you know, everyone's always there to help. And I'm not only the support from the support team, but also through our venders or a vendor are counts, you know, I've or who we love way love the whole team because they're there for you to help me. We run into some pretty significant issues. One of the things that happened to us was we had some changing workloads in our media environment. Through no fault of nutanix is you know when when we introduce some additional workloads, we didn't anticipate some of the challenges that would come along with introducing those workloads. And what happened was we filled up our hot storage rather rapidly. Nutanix came in right away because we call them up and said, You know, we're having big performance issues. We need some help and they brought in P E O. C notes to help us get over the hump. They were there for us. I mean, within a week, they got us right back up and running and fully operational and even better performance than we had before. So until we could get our own notes procured and in house, which was fantastic, I've never seen that levels from another organization. So we love the support from Nutanix on DH. Since then, we've grown. So we've actually looked at nutanix for General server computer platform as well. And we're doing Christ Cross hyper visor Support across high provides a replication Sorry from production to D. R. So we're actually running Acropolis. Indy are running GM. Where in production. But has Stephanie alluded to? We're trying to get off of'Em were completely, you know, everyone talks about the attacks. We don't like the V attacks with Phil on a baby anywhere for something that's commodity. And we're looking to repurpose that money so we can look at other things such as you ten exciting way very much. Want to move to the cloud for D R. 
And that's sort of our direction. >> OK, so you guys have the m we're now, not you Not yet off the anywhere, but you plan to be >> playing to be Yes. >> Okay, So what's it going to look like How long is that gonna take or what is that? We're >> really hoping at the next six to twelve months. So I think we're really gonna push hard at. We've been talking to some people and it seems like it's gonna be a pretty smooth transition, So looking forward to it. And I think our team is really looking for true as well. That's >> one of the challenges right. That the team is really is one of the challenges because we've merged and there's a lot of change going on organization. It's difficult to throw more change at people, right? There's a whole human component, Teo everything that we do. So you know Well, that's why we moved GHB into d. R. To start because we said, You know what, give the operations folks time to look at it, timeto play with it, time to get familiar with it. And then we'LL make the change in production. But like we said, you know, moving over age, he's going to save us a ton of money like a ton of money that we can repurpose elsewhere to really start moving the business forward >> about operations for second. Because one of the things you told earlier is that consolidation? You're leading the project at the VD. I think we're new workloads. There's always gonna be problems. Always speed bumps and hot spots, as they say. But what has changed with the advent of software and Dev ops and automation starts to come into it. How do you see that playing out? Because you tell this is a software company. So you guys knew them when they were five years ago Now, But this is the trend in I t. Operations have clean program ability for the infrastructure. What's your view on that? What's your reaction to that? And you guys getting theirs at the goal >> that is >> like part of our road map. 
And we're gonna be working with our NUTANIX partners t build a roll map, actually, the next coming few weeks. So because we are emerging all these utilities, we'd love to get automation and orchestration, and we actually have another budget in three years. So it is on our road map. We want to get there right, because we want to have her staff work on business strategy. We don't want their fingers to keyboards. We want them actually working with the business and solution ing and not, you know, changing tapes or working on supporting a system when we don't have to do that anymore. Because now there's so it's so much simpler running any tennis environment. I know James is saying a lot of change for employees. There used to be M where Nutanix is new to a lot of them. I think they're quickly seeing the benefit of managing it because now they get to do things that are a little bit more fun than just managing an environment. >> And this is point cost to repurpose what you're paying for a commodity for free. And if you can repurpose and automata way the manual labor that's boring and repetitive, moving people to a higher value activity. >> Exactly. And we love the message we heard today about being invisible. >> Yeah, I love that >> way, Lovett. I mean, that's essentially we wanted. The business doesn't really care what you're doing behind the scenes, right? They just want their applications to work. They want everything to work seamlessly. So that's what we want to get, too. We want to get to that invisibility where we're moving the business, Ford. We're enabling them through technology, but they don't need to worry about the back end of what's actually going on. >> Stephanie, I want to ask you about both a personal and professional passion of yours, and that is about bringing more women into technology. You are a senior woman in technology, and we know we know the numbers. There is a dearth of female leaders. 
There is a dearth of underrepresented minorities, particularly in in high level management roles. So I want to hear from you both from a personal standpoint in terms of what your thoughts are on this problem and why, why we have this problem and then also what you, an elector are doing to remedy it. >> Yeah, I think you know, I'm really lucky to work at Electra because we actually have a diversion inclusion committee that I'm part of with a lot of stem organizations. But I think you know, there's all these great programs going on, and but I still don't see enough women in this in this industry, and I think a lot of it stems from you walk into a room, and if you're the only one of you it's really intimidating. So I think we really need to work on making people feel more welcome. You know, getting more women in cedar senior leadership positions and kind of bring them to events like this, gaming them on the Internet. Going to the university is going to the schools and talking to education and talking to, you know, CEOs and seals that don't have sea level women executives and saying, You know, there's a business benefit toe having diversity of all kinds in an organization, you know, you know, strength lies in differences, not in similarities. And I think we can really grow businesses and have that value if we have different types of opinions. And I think there's, you know, statistic shows when you have more diversity, your business is more successful. So I think senior leaders should pay attention and, you know, purposely try to hire more a more diverse workforce >> and what do you have anything to add to that? I mean, I know that it that it's maybe tougher for a man to weigh in on this issue, but at the same time it is one that affects all of us. >> Absolutely. And I think seventy, said it best right when you bring in, you know, multiple bill from different ethnicities from different genders. 
I mean, it's that wealth of knowledge, and everyone brings from the different experiences they have in life, and I think that's what you need. You don't want the collective all thinking the same way; you want the collective that brings diversity into your organization. And I think, you know, when I was in school, we had one woman in my entire computer engineering class, and, you know, you wanted to see that change, right? I'd love to see more of that these days, more women being in the workforce, especially within technology. >> I think that's, ah, it's fantastic for technology. >> Stephanie, what's your advice for young girls out there, maybe in high school or college, who are gravitating towards either computer science or some sort of STEM-related field, but might be intimidated? >> I think the one important thing you can do is really rely on your family and friends for encouragement, 'cause I think sometimes it is gonna be intimidating. You know, for me, I'd walk into a course and I was the only female in my computer networking class. But I had, like, my father, who always encouraged me and pushed me to say, like, don't ever be intimidated. Don't ever be scared. And you need a little bit of thick skin, because for a little bit it is going to be just you in a room. But I think the more you speak up and the more you just kind of push yourself, I think it is going to get better. And I think it's almost kind of cool when you're the only female, because you feel that pride: I want to do better. I want to do better for all of us, to say, like, we can be not just as good, but even better. >> Great, such great advice. Stephanie, James, thank you both so much for coming on. >> Thanks for having us. Pleasure talking to you. Thanks. >> I'm Rebecca Knight, for John Furrier. We will have so much more of Nutanix .NEXT coming up in just a little bit.

Published Date : May 8 2019


Dheeraj Pandey, Nutanix | Nutanix .NEXT Conference 2019


 

>> Announcer: Live, from Anaheim, California, it's theCUBE, covering Nutanix .NEXT 2019, brought to you by Nutanix. >> Welcome back, everyone to theCUBE's live coverage of Nutanix .NEXT here in Anaheim, California. I'm your host, Rebecca Knight, along with my co-host, John Furrier. We are so excited to welcome back to the program, Dheeraj Pandey, the co-founder/CEO and Chairman of Nutanix. Thank you so much for coming back on theCUBE. >> Thank you for pronouncing my name diligently. >> You are welcome. >> John: Gotta work on that. >> So, Dheeraj, it was a poignant moment in the keynote when you got up there with many of the people who were sort of employee number one, two, and three, four at Nutanix. They are the builders, the dreamers, the visionaries, the innovators, the disruptors of this company, a company that you started. So I'd love you to just start out by reflecting a little bit on your journey and sort of how Nutanix has evolved. >> Yeah, I mean it's a poignant 10 years, you know. The moment itself is poignant and it brought a lot of nostalgia, you know, for just looking at the early folks and how we had to huddle together in the smallest of technical blips that you'd find in our thesis, because our thesis was very bold. It was, like, hey, we can put a lot of hardware into your software. It's, like, the way Apple would say, we'll get rid of the camera and make it into an app. Like, what? There's no need for a camera anymore. So that's what we had to do with data center infrastructure. So, those moments are memorable, they're etched in history and my memory, and every time you get a tough moment now, we actually invoke a lot of those tough moments from the past and say, look, the more things change, the more they remain the same. >> The beautiful thing about theCUBE, is our 10th year as well, we've been following your journey as well. 
We actually have soundbites of the early interviews, and one of the things I was always impressed with you guys was you stayed the course, you didn't waver on what was fashionable at the time. HCI was an early category. You were misunderstood at the beginning and then the numbers started to show and you guys built a great business. But now, you're 10 years old, you're public. All the numbers are out there. You gotta go the next level. This is your challenge with the team. What's the focus? What's the strategy? What's the marching orders for the team now, as you go past 10 years old? You got competitive pressure. There's marketplace. The numbers are there. It's a big piece of the pie there. >> Yeah. You know, I go back to everything I just said in my last answer as well. The more things change, the more they remain the same. The friction hasn't changed. Five years ago we were a much smaller brand. We didn't have a customer base. We didn't have money in the bank and we still had to keep raising money to fund ourselves. Today, we are running this business, spending, you know, a billion dollars every year now. But it's a free cash flow neutral business, and we have told the Street that we gonna keep running it like that, but just go back to the basics. The basics of this company are what made it come to here. The same basics will need to take it from here to the next 10 years. 10 years is the new zero. I mean, I said, look, we've reset the clock and it's a very metaphorical thing to say, but it's the new zero for us, you know. So going back to the basics are the three Ds I talked about. Data, we are greater data. And we continue to be amazing at data. Reliable, highly available, high performance data management. A greater design. 
You know, just making things simple, and we're really, really, really good at delivery, and when we suck at it, we go and improve and are very resilient in delivering things, you know. So whenever some things falter within our customer success, customer service, the way we're delivering things with, you know, software and subscription, I think nobody can touch us in these three Ds. >> As you guys have proven, a great loyalty customer base, very loyal on the product. As you have to go multi-cloud, as the Enterprise gets modernized, this is a big part of your current business. What are some of the things that you're looking at, in terms of these new products? Because you don't want to open the door up for either a competitor or a misfire on you guys. You gotta continue to provide product leadership.
>> One of the things that you guys have a good customer reaction to is the simplicity and how you can integrate well and reduce all these manual tasks, which is, people talk about automation and everything, but you guys have customers saying, "I went from 24 racks to six. "I now run everything with the push of a button. "Not there yet with the one-click but pretty close." That sounds like the multi-cloud game right now, where it is kinda hodge-podge. No one's actually figured out how to bring it all together and orchestrate it. >> That's the money statement, John. That's where the money is. Complexities where we go in and really figure out how to really save money for our customers, make money for our partners and make money for ourselves. >> And the partner-side, HPE, a big announcement that you guys have been part of. They're gonna be coming on today. How's that going? Give us the update on the HPE. >> You know, the energy levels are high, but there's a bell curve of people, you know. You can't have everybody really be an innovator, an early adopter. We're looking for innovators and early adopters. Some great discussions happening with HP account managers. They're our account managers of very large accounts, and the word-of-mouth has to basically play its powerful game actually. >> I wanna ask you about innovation. Earlier, on a CUBE conversation, you talked with our own John Furrier, and you said, we disrupt ourselves, but you also just talked about these products being these sort of long-term play and really thinking about what the, more of a holistic view of what the customers need. I wanna hear about the Nutanix innovation process and sort of how you have kept that culture of a tech start-up now that you are a company with a market cap in the multiple billions. >> You know, as I said before, we are like a billion dollar start-up, you know. 
And it's not easy, because everybody wants you to grow up, like, behave and grow up, and I saw one of my slides in there taking real potshots of the sand and we haven't changed much, you know. So in many ways, we're reminding everybody that it's still Day Zero and Day One. Is the great cultural gravitas that we need to keep, retained in the business, actually, in the company? You know, having the kind of humor that we had, and you know, keeping it personal and personable with everybody, as opposed to, you know, stiff upper lip, and suits and mahogany tables and corner offices. Those are things that are the antithesis of what Nutanix is. And just keeping it humble, you know. Like, the fact that even though we have layers of management in the middle, how do you go six levels deep and really have a conversation as technical as you wanted it to be and as business incisively as we want it to be? And you know, there's a lot of things you can do by going six levels deep that otherwise were not possible if you just said, look, I just talked to my next level action team, and to us, that's the engine of innovation. >> And how is your leadership changed? >> I have a new customer called Wall Street. >> That's true. >> 'Cause you know, they buy my product. It just happens to be a retail product that you folks can buy, too. It's called NTNX, the ticker. So I have Main Street customers and then I have Wall Street as a customer, and I need to figure out where to really keep them balanced, because I sell products to both of them, and it's a journey. You know, it's never easy, because there's a customer that actually wants instant success. There's another customer that says we are with you for the long haul, and what I need to find in this Wall Street customer is the ones who are actually for the long haul. My leadership, actually, is about balancing the two together. >> So let's talk about the Wall Street thing for a second, because I think that's interesting. 
You've always said to me, you're gonna play the long game and you do. We've kinda proved that, but Wall Street, they're very short sighted right? So the earnings come out, you gotta deal with the shot clock, as a public company. As you go to Wall Street, how are they looking at the long game? Because there's major examples. Microsoft stock's at an all-time high. They were in the 20s a few years ago. Cloud obviously is validated, so you got a cloud vision, this cloud marketplace. You're in the core enterprise, which has been revitalized with private cloud. Again, proves your thesis originally. So you're in good position and you got the cloud game right there. What are they missing? What's Wall Street missing? >> I think the biggest thing is that in any transformation is actually messy. Look at all the transformations in the last 20 years. The good thing is that those that took the tough call of transforming themselves, they really have done well, you know. And this is not just Microsoft alone, but Adobe, where I sit on the board. There is Autodesk and there is Parametric PTC and Cadence and many many other companies that have gone through this transition of getting out of the box to being software and subscription actually, and that's the journey that we said we couldn't punt and postpone 'cause we wanna be a hybrid cloud company. How can we not have subscription on prem? If subscription is gonna be the off prem, it has to have on prem subscription as well. And I think it requires communication, constant communication, watch, don't be stupid, with Wall Street as well. >> Well, Wall Street likes those valuations. If you look at the SaaS companies, or subscription-based companies, their valuations are really on a multiple, much higher than, >> I mean, look, valuation, to me, is not an end in itself. If you do it right by Main Street, I think this Wall Street thing will take care of itself. >> Awesome. 
On the long game with your innovation, I gotta ask you about how you're gonna look at the partnerships and integrating in, because the competitor out there in the middle of the room there is VMware and Dell Technologies. They want to go end-to-end and they want to own everything end-to-end. You guys are taking a different approach. Could you share your competitive strategy in terms of how you guys are different than that, because you're partnering? You're competing in a different way. >> Yeah, as we go into becoming a bigger company and yet, having a real child-like brain, I think it's important, really, that we are in this cooperative world and every competitor is also a company we cooperate with. Look, I mean, we run on top of VMware and more than half our customers still use VMware underneath us. We are an app on their platform. So we are a platform company. We are also an app company and our platform should run all apps and our apps should run on all platforms and that's the way we look at it. That's the reason why Microsoft is relevant again, 'cause they're still looking at, rather than a single stack strategy, how do you really look at yourselves as living two lives actually, you know? And to compete, you just have to go back to the three Ds I talked about. If you just keep doing a really good job of data, disrupting the biggest hardware players out there in data, and be really really good with design and elegance and friction-less delivery, I think we'll be in good shape. >> One of the compliments that the analysts on theCUBE always pay to you, Dheeraj, is that you have a really good sense of the wave. You really know which way the technological and economic winds are blowing. I wanna know, what do you read? Who do you talk to? What signals are you paying attention to? Or is it just this innate sense you have that the rest of us can't hope to ever achieve? >> Well, thank for that compliment, first of all. I'm honored. 
But I just have this simple mantra which is, the more things change, the more they remain the same. So I bring a lot of things from my consumer life because I read a lot about consumer life and I have a little bit of an artist in me and even though I am supposed to be a geek, I was telling somebody I was trying to recruit the other day that, look, I'm really, at heart, an artist, more so than an engineer, and I think a lot of what you see in this conference and this company and the product portfolio, it's really the empathy for the other side. You know, that really brings out a lot of the innovation, and obviously, I don't innovate alone, but the people that are with us in this company, I just try to tell them about the empathy that I invoke for everybody else and I read a lot of history, I'm a big history buff, and not just the last 30 years of IT, which I invoke a lot, but I'm deep into, like, the history of humans, you know. Like, last two weeks, I spent a lot of time reading about Neanderthals and the hybrid Neanderthals with humans, modern humans, and there's another ones that they found in these caves of Denisova. They call Denisovans, you know. So I read a lot of history and that gives me a lot of perspective and a lot of courage and I bring a lot of those things into this new life, that's again, as I said, it's the same as the old one, with some new color. >> You're an entrepreneur. That's what entrepreneurship is all about. What entrepreneurial thing are you working on right now? 'Cause I've known, You've gotta have your hands in some new things. What's the new entrepreneurial thinking or project that you're taking on? >> Well, the one that is very interesting one for operating a business is Capital Allocation, and it's a difficult one because you have to, basically, be somebody who really balances content and delivery, you know, and content is products and delivery is go to market, and when you go to market, it's marketing and sales. 
So as a company, we were tested in the last nine months to really understand Capital Allocation. I'm a big fan of the book, The Outsiders. I just read this probably a year ago, and you could see that there was some themes in The Outsiders about running the business on free cash flow, which is nothing new. It's not like Amazon invented it. They've been doing it for those 40, 50 years. Second one is Decentralized Decision Making. The third one is a really good capital allocation. So as an entrepreneur, I'm learning to actually understand what it means to decentralize decision making, and do a really good job of capital allocation, and finally, go and tell the Street about why free cash is the way to run a business as opposed to profitability and a gap way, because a lot of our dollars are sitting in the balance sheet, and they aren't in the P&L. So I think really running the business where growth matters, which is about free cash flow, about making sure that we can really create more CEOs in the company, independent decision making, and finally, this idea that you want to run this business as if it was a bunch of businesses, actually. >> Great. >> Awesome. >> One of the things you keep talking about in this interview is balance. You're balancing the needs of Main Street and Wall Street, the needs of your cloud customers, the needs of your employees, while also growing this business. How do you balance at all? As the CEO of this fast-growing company? You said you're an artist. And you read a lot of history. >> Honestly, I'm not a very balanced person. If you ask me, like, work and life, family and work, is because of my wife that I find a balance there. >> So you owe it all to her? >> Yeah, I think you can say that again, and the same thing is true for, like, one of my team members, our COO, David Sangster. He says, "Look, our health, family, and work, "in that order," and honestly, mine is in the reverse right now. 
So I need to really go and, These kind of conversations remind myself that it's important to actually have some balance. >> Great, well, Dheeraj, always a pleasure having you on theCUBE. >> Pleasure. >> I'm Rebecca Knight, for John Furrier. We'll have so much more from Nutanix next coming up on theCUBE just after this. (techno music)

Published Date : May 8 2019


Jeffrey Snover, Microsoft | Microsoft Ignite 2018


 

(electronic music) >> Live from Orlando, Florida, it's theCUBE! Covering Microsoft Ignite. Brought to you by Cohesity, and theCUBE's ecosystem partners. >> Welcome back everyone to theCUBE's live coverage of Microsoft Ignite here in Orlando, Florida. I'm your host, Rebecca Knight, along with my cohost, Stu Miniman. We're joined by Jeffrey Snover. He is the technical fellow and chief architect for Azure Storage and Cloud Edge at Microsoft. Thanks so much for coming, for returning to theCUBE, I should say, Jeffrey, you're a CUBE alum. >> Yes, I enjoyed the last time. So can't wait to do it again this time. >> Well we're excited to have you. So before the camera's were rolling, we were talking about PowerShell. You invented PowerShell. >> Yeah, I did. >> It was invented in the early 2000's, it took a few years to ship, as you said. But can you give our viewers an update of where we are? >> Yeah, you know, it's 2018, and it's never been a better time for PowerShell. You know, basically the initial mission is sort of complete. And the mission was provide sort of general purpose scripting for Windows. But now we have a new mission. And that new mission is to manage anything, anywhere. So we've taken PowerShell, we've open sourced it. It's now running, we've ported it to macOS and Linux. There's a very large list of Linux distributions that we support it on, and it runs everywhere. And so, now, you can manage from anywhere. Your Windows box, your Linux box, your Mac box, even in the browser, you can manage, and then anything. You can manage Windows, you can manage Linux, you can manage macOS. So manage anything, anywhere. Any cloud, Azure, or AWS, or Google. Any hypervisor, Hyper-V or VMware, or any physical server. It's amazing. In fact, our launch partners, when we launched this, our launch partners, VMware, Google, AWS. Not Microsoft's traditional partners. >> That's great to hear. 
It was actually, one of the critiques we had, at the key note this morning, was partnerships are critically important. But felt that Satya gave a little bit of a jab towards, the kind of, the Amazon's out there. When we talk to customers, we know it's a heterogeneous, multi-cloud world. You know, you work all over the place, with your solutions that you had. There's not, like, Azure, Azure Stack, out to The Edge. The Edge, it is early, it's going to be very heterogeneous. So connect the dots for us a little. You know, we love having the technical fellows on, as to, you go from PowerShell, to now this diverse set of solutions that you work on today. >> Yeah, exactly. So basically, from PowerShell, they asked me to be the chief architect for Windows Server. Right, because if you think about it, an operating system is largely management, right? And, so, that's what I did, resource management. And, so, I was the chief architect for that, for many years, and we decided that, as part of that, we were developing cloud-inspired infrastructure. So, basically, you know, Windows Server had grown up. You know, sort of focused in on a machine. Azure had gone and needed to build a new set of infrastructure for the cloud. And we looked at what they were doing. And they say, hey, that's some great ideas. Let's take the ideas there, and put them into the general purpose operating system. And that's what we call our software-defined data center. And the reason why we couldn't use Azure's directly is, Azure's, really, design center is very, very, very large systems. So, for instance, the storage stamp, that starts at about 10 racks. No customer wants to start with 10 racks. So we took the inspiration from them and re-implemented it. And now our systems can start with two servers. Our Azure Stack systems, well, so, then, what we decided was, hey, this is great technology. 
Let's take the great cloud-inspired infrastructure of Windows Server, and match it with the Azure services themselves. So we take Azure, put it on top of Windows Server, package it as an appliance experience, and we call that Azure Stack. And that's where I have been mostly focused for the last couple of years. >> Right, can you help us unpack a little bit. There's a lot of news today. >> Yes. >> You know, Windows 2019 was announced. I was real interested in the Data Box Edge solution, which I'm sure. >> Isn't that crazy? >> Yeah, really interesting. You're like, let's do some AI applications out at the Edge, and with the same kind of box that we can transport data. Because, I always say, you got to follow customers applications and data, and it's tough to move these things. You know, we've got physics that we still have to, you know, work on until some of these smart guys figure out how to break that. But, yeah, maybe give us a little context, as to news of the show, things your teams have been working on. >> Yeah, so the Data Box Edge, big, exciting stuff. Now, there's a couple scenarios for Data Box Edge. First is, first it's all kind of largely centered on storage and the Edge. So Storage, you've got a bunch of data in your enterprise, and you'd like it to be in Azure. One flavor of Data Box Edge is a disk. You call us up, we send you a disk, you fill up that disk, you send it back to us, it shows up in Azure. Next. >> A pretty big disk, though? >> Well, it can be a small disk. >> Oh, okay. >> Yeah, no, it can be a single SSD, okay. But then you can say, well, no, I need a bunch more. And so we send you a box, the box is over there. It's like 47 pounds, we send you this thing, it's about 100 terabytes of data. You fill that thing up, send it to us, and we upload it. Or a Data Box Heavy. Now this thing has a handle and wheels. I mean, literally, wheels, it's specially designed so that a forklift can pick this thing up, right? 
It's like, I don't know, like 400 pounds, it's crazy. And that's got about a petabyte worth of storage. Again, we ship it to you, you fill it up, ship it back to us. So that's one flavor, Data Box transport. Then there's Data Box Edge. Data Box Edge, you go to the website, say, I'd like a Data Box Edge, we send you a 1u server. You plug that in, you keep it plugged in, then you use it. How do you use it? You connect it to your Azure storage, and then all your Azure storage is available through here. And it's exposed through SMB. Later, we'll expose it through NFS and a Blob API. But, then, anything you write here is available immediately, it gets back to Azure, and, effectively, it looks like near-infinite storage. Just use it and it gets backed up, so it's amazing. Now, on that box, we're also adding the ability to say, hey, we got a bunch of compute there. You can run IoT Edge platforms. So you run the IoT Edge platform, you can run gateways, you can run Kubernetes clusters on this thing, you can run all sorts of IoT software. Including, we're integrating in brainwave technology. So, brainwave technology is, and, by the way, we'll want to talk about this a little bit, in a second. It is evidence of the largest transformation we'll see in our industry. And that is the re-integration of the industry. So, basically, what does that mean? In the past, the industry used to be, back when the big key players were digital. Remember digital, from DEC? We're all Massachusetts people. (Rebecca laughs) So, DEC was the number one employer in Massachusetts, gone. IBM dominant, much diminished, a whole bunch of people. They were dominant when the industry was vertically integrated. Vertically integrated meant all those companies designed their own silicone, they built their own boards, they built their own systems, they built their OS, they built the applications, the serviced them. Then there was the disintegration of the computer industry. 
Where, basically, we went horizontally integrated. You got your chips from Intel or Motorola. The operating system, you got from Sun or Microsoft. The applications you got from a number of different vendors. Okay, so we got horizontally integrated. What you're seeing, and what's so exciting, is a shift back to vertical integration. So Microsoft is designing its own hardware, right? We're designing our own chips. So we've designed a chip specially for AI, we call it a Brainwave chip, and that's available in the Data Box Edge. So, now, when you do this AI stuff, guess what? The processing is very different. And it can be very, very fast. So that's just one example of Microsoft's innovation in hardware. >> Wow, so, I mean. >> What do you do with that? >> One of the things that we keep hearing so much, at this conference, is that Microsoft products and services are helping individual employees tap into their own creativity, their ingenuity, and then, also, collaborate with colleagues. I'm curious about where you get your ideas, and how you actually put that into practice, as a technical fellow. >> Yeah. >> How do you think about the future, and envision these next generation technologies? >> Yeah, well, you know, it's one of those things, honestly, where your strength is your weakness, your weakness is your strength. So my weakness is, I can't deal with complexity, right. And, so, what I'm always doing is I'm taking a look at a very complex situation, and I'm saying, what's the heart of it, like, give me the heart of it. So my background's physics, right? And so, in physics, you're looking for the F equals ma. And if you have that, when you find that, then you can apply it over, and over, and over again. So I'm always looking at what are the essential things here. And so that's this, well, you see a whole bunch of confusing things, like, what's up with this? What's with this?
That idea, that there is this narrative about the reintegration of the computer industry. How very large vendors, be it Microsoft or AWS, are, because we operate at such large scales, going to be vertically integrated. We're developing our own hardware, we do our own systems, et cetera. So, I'm always looking for the simple story, and then applying it. And, it turns out, I do it pretty accurately. And it turns out, it's pretty valuable. >> Alright, so that's a good setup to talk about Azure Stack. So, the value proposition we heard, of course, is, you know, start everything in the cloud first, you know, Microsoft does Azure, and then lets you have some of those services, in the same operating model, in your data center, or in your hosting service provider environment. So, first of all, did I get that right? And, you know, give us the update on Azure Stack. I've been trying to talk to customers that are using it, talking to your partners. There is a lot of excitement around it. But, you know, proof points, early use cases, you know, where is this going to be pointing towards, where the future of the data center is? >> So, it's a great example. So what I figured out, when I thought about this, and kind of drilled in, like, what really matters here? What I realized was that the gestalt of Azure Stack is different than everything we've done in the past. And it really is an appliance, okay? So, in the past, I just had a session the other day, and people were asking, well, when is Azure Stack going to have the latest version of the operating system? I said, no, no, no, no, no. Internals are internal, it's an appliance. Azure Stack is for people who want to use a cloud, not for people who want to build it. So you shouldn't be concerned about all the internals. You just plug it in, fill out some forms, and then you use it, just start using it.
You don't care about the details of how it's all configured, you don't do the provisioning, we do all that for you. And so that's what we've done. And it turns out that that message resonates really well. Because, as you probably know, most private clouds fail. Most private clouds fail miserably. Why? And there's really two reasons. There's two flavors of failure. The first is they just never work. Now that's because, guess what, it's incredibly hard. There are so many moving pieces and, guess what, we learned that ourselves. The number of times we stepped on the rakes, and, like, how do you make all this work? There's a gazillion moving parts. So if you have a team that's failed at private cloud, they're not idiots. It's super, super, super hard. So that's one level of failure. But even those teams that got it working, they ultimately failed, as well, because of lack of usage. And the reason for that is, having done all that, they then built a snowflake cloud. And then when someone said, well, how do I use this? How do I add another NIC to a VM? The team that put it together were the only ones that could answer that. Nope, there was no ecosystem around it. So, with Azure Stack, the gestalt is, like, this is for people who want to use it, not for people who want to build it. So you just plug it in, you pick a vendor, and you pick a capacity. This vendor, four nodes; this vendor, 12 or 16 nodes. And that's it. You come in, we ask you what your IP ranges are, how do I integrate with your identity? Within a day, it's up and running, and your users are using it, really using it. Like, that's craziness. And then, well, what does it mean to use it? Like, oh, hey, how do I add a NIC to a VM? It's Azure, so how does Azure do it? I have an entire Azure ecosystem. There's documentation, there's training, there's videos, there's conferences. You can go out and say, I'd like to hire someone with Azure skills, and get someone, and then they're productive that day.
Or, and here's the best part, you can put on your resume, I have Azure skills, and you knock on 10 doors, and nine of them are going to say, come talk to me. So, that was the heart of it. And, again, it goes back to your question of, like, the value, or what does a technical fellow do. It's to figure out what really matters. And then say, we're all in on that. There was a lot of skepticism, a lot of customers like, I must have my security agent on there. It's like, well, no, then you're not a good candidate. What do you mean? I say, well, look, we're not going to do this. And they say, well, you'll never be able to sell to anyone in my industry. I said, no, you're wrong. They say, what do you mean, I'm wrong? I say, well, let me prove it to you, do you own a SAN? They say, well, of course we own a SAN. I said, I know you own a SAN. Let me ask you this, a SAN is a general purpose server with a general purpose operating system. So do you put your security and management agents on there? And they said, no, we're not allowed to. I said, right, and that's the way Azure Stack is. It's a sealed appliance. We take care of that responsibility for you. And it's worked out very, very well. >> Alright, you've got me thinking. One of the things we want to do is, we want to simplify the environment. That's been the problem we've had in IT, for a long time, is it's this heterogeneous mess. Every group did their own thing. I worry a multi-cloud world has gotten us into more silos. Because I've got lots of SaaS providers, I've got multiple cloud providers, and, boy, maybe when I get to the Edge, every customer is going to have multiple Edge applications, and they're going to be different, so, you know. How do you simplify this, over time, for customers? Or do we? >> Here's the hard story, back to getting at the heart of it. Look, one of the benefits of having done this a while, is I've stepped on a lot of these rakes.
You're looking at one of the biggest, earliest adopters of cross-platform GUI frameworks. And, every time, there is this, oh, there's multiple platforms? People say, oh, that's a problem, I want a technology that allows me to bridge all of those things. And it sounds so attractive, and generates a lot of early traction, and then it turns out. I was working with this cross-platform framework. I wrote it, and it worked on Macs and Windows. Except, I couldn't cut and paste. I couldn't print, I couldn't do anything. And so what happens is it's so attractive, blah, blah, blah. And then you find out. And when the platforms aren't very sophisticated, the gap between what these cross-platform things do and the platform is not so much, so it's like, eh, it's better to do this. But, over time, the platform just grows and grows and grows. So the hard message is, people should pick. People should pick. Now, one of the benefits of Azure, as a great choice, is that, with the other guys, you are locked to a vendor. Right, there is exactly one provider of those APIs. With Azure, you can get an implementation of Azure from Microsoft, the Azure Public Cloud. Or you can get an implementation from one of our hardware vendors, running Azure Stack. They provide that to you. Or you can get it from a service provider. So, you buy into these APIs, you optimize around that, but then you can still choose your vendor. You know, hey, what's your price for this? What's your price for that, what can you give me? With the other guys, they're going to give you what they give you, and that's your deal. (Rebecca laughs) >> That's a good note to end on. Thank you so much, Jeffrey, for coming on theCUBE again. It was great talking to you. >> Oh, that was fast. (Rebecca laughs) Enjoyed it, this was great. >> Great. I'm Rebecca Knight, for Stu Miniman, stay tuned to theCUBE. We will have more from Microsoft Ignite in just a little bit. (electronic music)

Published Date : Sep 24 2018


Chhandomay Mandal, Dell EMC | VMworld 2018


 

(upbeat music) >> Live from Las Vegas, it's theCUBE! Covering VMworld 2018. Brought to you by VMware, and its ecosystem partners. >> Hey, welcome back to theCUBE! Our continuing coverage at VMworld 2018, I'm Lisa Martin with my co-host John Troyer. We're very excited to welcome back to theCUBE one of our alumni, Chhandomay Mandal, the director of product marketing at Dell EMC. Chhandomay, it's great to talk to you again! >> Thank you, nice to be here. >> We just seem to do this circuit in Las Vegas. >> Yeah. (laughing) >> So, loads of people here. We last got to speak four months ago at Dell Technologies World; thematically, that event was about making IT transformation real, making digital transformation real, security transformation real. Let's talk about IT transformation. Yesterday, Pat Gelsinger talked about, you know, how essential it is that customers transform IT; it's an enabler of digital transformation. Let's talk about what Dell EMC is continuing to help customers do, to transform their IT so they can really get on that successful journey to digital transformation. >> Yes, digital transformation is key in this digital economy, in order to thrive in this new world, right? And digital transformation is fueled by IT transformation. For us, IT transformation means modernizing the underlying infrastructure, so that it can deliver on scale, performance, availability, cost-effectiveness. Customers can also automate a lot of the manual processes, and streamline the operations, the net result being freeing up resources and, kind of like, delivering the transformation for not only application processes, but also businesses in general. So, with our portfolio, we are helping customers on this journey, and since we talked at Dell Technologies World, it is going great, we are seeing a lot of adoption of this portfolio. >> Chhandomay, I love, you know, you work on high-end storage, right? Which is.
>> Which means that these are business-critical applications that you are supporting. >> Absolutely. >> And that means that they're, in some ways, some of the most interesting, right? And the deepest and most important, when you're talking digital transformation. But it comes down to, you know, as you say, efficiency and how the IT department is running. In the olden days, you'd get a VMAX, and you'd have an admin, and there's a lot of knobs and adjustments and tuning, and you have to keep that machine running smoothly because they're supporting the enterprise. Now, the new next-generation PowerMax, you know, tell us a little about that. What I'm really impressed with is all the automation, and all the efficiency, that goes into that platform. >> Absolutely. Absolutely. So, PowerMax is our latest flagship high-end product. It's an end-to-end NVMe-designed platform, designed to deliver, like, the highest level of performance. Not just performance, but the highest level of efficiency, as well as all the trusted data services that are synonymous with VMAX. Not to mention the six-nines of availability; all that goodness of the previous generations carried over. But the key thing is, with PowerMax, what we have done is, if I need to boil it down into three things: this is a very powerful platform, it's simple, and it's trusted. So now, when I talk about very powerful, obviously performance is part and parcel. It is actually the fastest storage array. 10 million IOPS, 150 gigabytes per second, >> It's a maniac, it's a, it's a screamer, it's amazing. >> Et cetera, et cetera, et cetera. >> Yeah yeah, yeah. >> But, like, that's kind of table stakes and bread and butter for us. Now, what I want to highlight is how simple the platform has become. We have a built-in machine learning engine within the platform.
And now, instead of, like, "I need this much capacity and this much performance," you can actually provision storage based on the service levels that you need to give your customers. And we, underneath, will take care of, like, whatever it means for any workloads you are running. And how are we doing it? So, for example, today, right? Most of the applications are still, like, business applications: Oracle, SAP, you name it. But, with the digital transformation, a lot of the modern, analytics-heavy applications are also coming in, right? So, if I were to break it up, it would be, say, like, 80-20: 80% business, 20% modern applications. Now, we are seeing the modern applications getting adopted, like, higher and higher and-- >> It's going to flip, right? At some point. >> Yes. Like, in three to five years, the ratio will be opposite. Now, if you are buying an array like PowerMax today, how can we deliver the performance you need for the business applications of today, while taking care of the analytics-heavy applications of tomorrow, at the same time meeting your service levels all the way through? And that's where the machine learning engine comes in. It, like, takes 40 million data sets in real time. It makes six billion decisions per day, and, essentially, it figures out, from the patterns in the data, how to optimize where to place the load, without the administrators having to, like, tune anything. So it's, like, extremely simple. Completely automated, thanks to the AI and ML engine. >> Taking advantage of those superpowers, AI, ML, that Pat. >> Yes. >> Talked about yesterday. So you talked about it's efficient, it's fast, trusted. Speaking of trust, Rackspace, long-time partner of Dell EMC and VMware, we actually spoke with them yesterday; Dell EMC, and PowerMax particularly, have been really kind of foundational to enabling Rackspace to really accelerate their business, in terms of IT transformation.
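The service-level provisioning model being described, where the caller states a service level instead of tuning capacity and performance knobs, can be illustrated with a toy placement function that picks the cheapest media tier whose latency still meets the target. This is an invented sketch; the real PowerMax engine is proprietary, and the tier names, latencies, and targets below are assumptions for illustration only.

```python
# Toy service-level-driven placement (illustration only; names and
# thresholds are invented, not the actual PowerMax policy engine).
# The caller states a service level; the system picks the media.

SERVICE_LEVELS = {            # target response time in milliseconds
    "Diamond": 0.6,
    "Gold": 1.0,
    "Bronze": 4.0,
}

MEDIA = [                     # (tier name, typical latency ms), fastest first
    ("NVMe flash", 0.3),
    ("SAS flash", 0.8),
    ("Capacity tier", 3.0),
]

def place(workload, service_level):
    """Pick the slowest (cheapest) tier that still meets the latency target."""
    target = SERVICE_LEVELS[service_level]
    chosen = MEDIA[0][0]      # fall back to the fastest tier
    for tier, latency in MEDIA:
        if latency <= target:
            chosen = tier     # keep taking cheaper tiers that still qualify
    return chosen

print(place("oracle-db", "Diamond"))   # NVMe flash
print(place("sap-app", "Gold"))        # SAS flash
print(place("archive", "Bronze"))      # Capacity tier
```

In the real array this decision is continuous and data-driven rather than a one-shot table lookup, but the inversion of the interface, declaring an outcome instead of configuring media, is the point of the passage.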
Talk to us about that in terms of them as a customer. >> So, nice that you bring up Rackspace; they got a shout-out from Pat yesterday as the leading multi-cloud provider in the managed space, right. Now, if you look at Rackspace, they have, like, 100,000-plus customers, all with various types of needs. Now, with a platform like PowerMax, they are able to simplify their IT environment, with a lot of consolidation happening on that dense platform. So they can reduce the footprint a lot: less power, cooling. At the end of the day, they're minimizing their operational expenses, simplifying how they manage their infrastructure, monitor their infrastructure. It becomes, kind of like, invisible, or self-driving storage. Like, you really, like, don't worry about it. You worry about the business value and innovations that IT can bring for your digital transformation, while the array, kind of like, does its own work. A lot of work, make no mistake about it. But everything is, kind of like, hidden from the admin perspective. Whether you are running Oracle or Splunk, it figures out, like, what to do. Not only, like, maintaining the service levels, but as the technology evolves and you bring in not just NVMe drives, but next-generation storage class memory, it is going to automate and do the placement by itself. >> Yeah, that's huge, right? Because that's where you free up that time and resources, and brain power, frankly, for your IT group then to be able to work on more strategic projects than tuning this particular data store and LUN or whatever for Splunk, et cetera, right? You've got, again, kind of self-driving storage, there. I also, Chhandomay, I also wanted to talk about the other kind of high-end array in Dell EMC's portfolio, XtremIO. And that, you know, all-flash, you can talk a little about that, but, you know, what are the use cases there, and when should people be looking at that?
And what kind of, what's new in that world? >> Sure. So, PowerMax is the flagship high-end product that has, like, evolved over 30 years, 1,000-plus patents, right? Whereas, if you contrast it, XtremIO is a purpose-built, all-flash array, designed from the ground up to take advantage of the flash media. Now, it delivers very high performance with consistently low latency. But the key innovation there is the way it does inline, all-the-time data services. Especially the data reduction; the content-aware, in-memory metadata helps deliver a new class of copy services, and then, I mean, it scales modularly, scale up and scale out. So, the use cases where XtremIO is very efficient is where you have a lot of common data, for example VDI; we can offer, like, very high data reduction ratios, reducing your footprint for a VDI-type environment. The other use case is agile copy data management. So, for example, like, for every database, there are probably like eight to 10 copies at a minimum. Now with XtremIO, you can actually use those copies same as the production platform, and run workloads on them. Like, whether it's your BI workload, or, like, reporting, test/dev, sandboxing. All of those things can be run on the same platform, and, like, the array will be able to deliver without breaking a sweat. >> And as I said, you're doing copy data management sort of thing? >> Yes. >> Yeah, okay that's great. >> Yes, yes, yes. >> Yeah, that's. >> So, customer examples, you know how much I love that. You talked about this really strong example with PowerMax and Rackspace. Give us a great example of a customer using XtremIO X2 that's really enabled with these superpowers to grow their businesses. >> Sure, so what better example can there be? The customer, in this case, will be, guess what? >> VMware.
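The inline data reduction being described, fingerprinting content in memory and storing each unique block exactly once, can be shown with a toy deduplication routine. This is a conceptual sketch, not XtremIO's implementation; the tiny block size and the ratio math are illustrative only.

```python
# Toy inline deduplication: blocks are fingerprinted, duplicates are
# stored once, and the data reduction ratio falls out of the bookkeeping.
# (Illustration of the idea only, not any array's real data path.)
import hashlib

BLOCK = 4                      # tiny block size for the demo

def ingest(data):
    store = {}                 # fingerprint -> unique block (the "backend")
    logical = 0                # bytes the host thinks it wrote
    for i in range(0, len(data), BLOCK):
        chunk = data[i:i + BLOCK]
        logical += len(chunk)
        fp = hashlib.sha256(chunk).hexdigest()
        store.setdefault(fp, chunk)      # physically write only new content
    physical = sum(len(c) for c in store.values())
    return logical, physical

# Ten identical VDI-like images dedupe down to a single stored copy.
logical, physical = ingest(b"ABCDEFGH" * 10)
print(logical, physical, logical / physical)   # 80 8 10.0
```

This is why highly redundant workloads like VDI show such dramatic reduction ratios: the logical capacity grows with every clone, while the physical capacity only grows with unique content.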
(laughing) >> So, VMware's IT cloud infrastructure team is using XtremIO X2 for their virtualized SAP HANA environment. And there are several other workloads in the pipeline. But what I want to highlight is, like, what and how they are doing it. So they have their production environment, they are leveraging replication technologies to a DR tier, and then from that tier, they are making copies; on those copies they are applying the patches, sandboxing, all those things. An exact replica of the production environment. And then, like, when they are done, they are rolling it back out to the production. And the entire workflow is kind of, like, automated, tested, and a great example of, like, how they are doing it. But it's not just the copy data management, there are other aspects to it. So, for example, the performance. Now, they started with, like, a two-terabyte VM, and they tried to clone it both on the traditional storage and XtremIO. With the traditional storage, it took like 2 1/2 hours. With XtremIO, it was done in like 90 seconds. >> So from two hours to 90 seconds. >> Seconds. >> Is dramatic. >> And, like, they ran the data reduction numbers as well. So, for VMware's entire ESX production environment, this is like 1.2 petabytes of storage. Now, with XtremIO data reduction technology, they can see that it will be reduced to, like, 240 terabytes worth of storage. So, essentially, from three rows of storage, it would be reduced to three racks of XtremIO. So, you can see, there are savings, like, all over the place. Like, I mean, footprint, power, cooling, management, all of those things. So, that would be my best example of, like, how XtremIO X2 is being used in a transformative way in the IT environment.
>> Yeah, of course. >> Facilitators. >> Yeah, yeah. Like, we are contributing a lot in that. And, I mean, at the end of the day, this is, like, what digital transformation is about, right? So, like, absolutely, yes. >> That's great. Chhandomay, I mean, I would love to have a problem. I would love to have a problem that required running, you know, hot on XtremIO, because I think those are super interesting problems. And the fact that you can, you know, actually turn those huge data sets into something that's actually manageable, and, I can envision three racks, I can't really envision half a data center's worth of spinning disks, so, that's amazing. I love the engineering that goes into these high-end systems that you, on your, on the team, there. >> Yeah, so the one other thing I wanted to mention was the Future-Proof Loyalty Program. >> Yeah, we've heard a little bit about that, tell us. >> Yes, so, this essentially gives our customers three things: one is peace of mind. You know, like, what you are getting, there are no surprises. The second thing is investment protection. And then the third would be, like, (mumbles). So, there are, like, several components to it. And, like, it is not only for XtremIO or PowerMax, it's pretty much for the portfolio; there is a list of what is part of it, and it's continually growing. Now, for XtremIO and PowerMax purposes, the important things are, like, a three-year warranty, and then, like, tiered pricing, so they know, like, exactly what they are going to pay for support today, as well as when maintenance renewal comes up. Then, (mumbles) migrations. So, back from exchange, right? Like, with XtremIO to the next-generation, PowerMax to PowerMax dot next, like, they are covered with non-disruptive migration plans, storage efficiencies. And the last two things that we added: we have announced that it is cloud-enabled.
And cloud consumption models, so, like, I mean, as Michael says, cloud is not a place, it's an operating model. So even with XtremIO and PowerMax, customers can pay for what they're using; it's called Flex On Demand. And when they use the buffer space, they pay for that. And then with CloudIQ, we can monitor the storage arrays from the cloud. It's the storage analytics, so it's cloud-enabled as well. So it covered pretty much, like, all of the things Pat talked about yesterday. >> Fantastic, well I'm going to go out on a limb. Yesterday, I asked a number of folks, I asked Scott Delandy, what would you describe as the superpower of certain technologies. And what I'm getting from this is trust. Like, the Trustinator, so, maybe that? Can you make a sticker by the time we get to Dell Technologies World next year? >> Oh yeah, absolutely, yeah. >> Chhandomay, awesome. Great to have you back on theCUBE. >> Thank you. >> Thank you so much for sharing all the excitement, what's going on. We'll talk to you next time. We want to thank you for watching theCUBE. For John Troyer, my co-host, I'm Lisa Martin. We are live at VMworld, day two, from the Mandalay Bay, Las Vegas. Stick around, John and I will be right back with our next guest. (upbeat music)

Published Date : Aug 28 2018


Carl Jaspersohn & Jason O'Brien, Boston Architectural College | WTG Transform 2018


 

>> From Boston, Massachusetts, it's theCUBE! Covering WTG Transform 2018. Brought to you by Winslow Technology Group. >> Welcome back, I'm Stu Miniman, and you're watching theCUBE at WTG Transform 2018. Happy to welcome to the program two gentlemen from the Boston Architectural College. To my left is Carl Jaspersohn, who is the systems administrator, and to his left is Jason O'Brien, who's the director of IT. Gentlemen, thanks so much for joining us. >> Thank you for having us. >> All right, so, Jason, why don't we start with you. Help us power up this conversation, tell us a little bit about the college. >> So, Boston Architectural College, we started in the late 1800s. It's a small design school, and we offer programs in landscape, interior, and traditional architecture. >> Yeah, so I love that. Talk to us a little bit more about, you know, the charter of the school, and how IT fits into that. >> So, it is the mission of the school to provide excellent education to a diverse population. Technology factors in as very important, and over the ten years that Carl and I have been at the school, technology use has increased immensely. Our students are using it more and more every year, and meeting those needs has become, you know, difficult. It's a challenge we strive to meet every year. >> Well, design thinking is so important these days. I studied engineering as an undergrad, in which I've learned more about design. One of my favorite authors, who I interviewed about a month ago, Walter Isaacson, you know, the ones he studies are the ones that can take that design thinking and technology and bring them together. Carl, bring us up to speed from the IT standpoint. You know, how big of a team do you have, what are you involved with? As you said, things have been changing over the last few years. >> Yeah, so, I mean, we've got Jason; in addition to running the department, he runs our online learning system. I'm responsible for all the back-end infrastructure: servers, networking, backup, virtualization. We
recently hired a junior systems administrator to help me out. We've got a web guy, we've got a DBA. The woodshop is under IT, because we have a fabrication guy, so 3D printing, laser cutting. We have the help desk, and the help desk manager, who also does our purchasing, and she and I will take escalations. So there's not a lot of crossover, you know, skill crossover in the group, but we manage to keep everything going. >> Yeah, but as you said, you know, woodworking, not something you think of as, you know, an IT thing. IT and OT are really converging a lot. When you talk about manufacturing, as you know, we talk about sensors and IoT, it's hitting everywhere. >> Yeah, for us, you know, 3D printing and laser cutting, and we also have a CNC router, they all started as experiments at the school and have turned into a major factor for our students. It's a resource that they demand, and they're increasing use every single year, and how we meet those demands is becoming tricky to accomplish. You know, we're in the Back Bay, real estate is very expensive, and we have to make our space do amazing things. >> Jason, that's a great point. I mean, I've talked to lots of higher education, and even you talk to the K through 12, it was, you know, mobility has had a huge impact, you know, therefore stresses and strains on wireless. You know, how do I get devices into the classroom, how do I manage it? I had a gentleman from BU who was here at the show last year, we were talking a lot about MOOCs. So, you know, that role of IT, it's expanding, but luckily they're throwing way more money at you, I'm sure. >> Well, we've been flat headcount over the last eight years. We lost someone last year and gained someone this year, so, you know, we basically have to do more with less every year, like most IT departments. So, you know, we redesign our spaces periodically to meet our students' needs, you know, turning what was just computer labs into more flexible space where
students are can move the tables around and you the computers are available sometimes there we have high end alien wares in a in a cabinet they pull out news or they can use it to make models we have they can put up their designs on a 3d TV they're using VR headsets to walk around their own designs it's really fascinating where the technologies okay I wish we could spend more time anywhere in VR stuff and everything like that our production crews gamers my son's into this stuff but but Karl I'm hearing things like space constrained we need to do more with less we need to simplify this environment wow that seems like a really good set up for kind of infrastructure modernization so how long have you guys been there about 10 years right yeah so it's a change don't want one in ten years so walk us back 10 years ago and give us that point when you went to modernize yeah well when we started there's no virtualization 3 server racks in a room in the basement for 10 years that we've been there there's been water in that room twice so that always gave us the warm fuzzies you're saying it wasn't water cooling I mean no we tried for that but it didn't you know it didn't work out last year we moved to Colo facility in Summerville so and by the time we did that move yeah we did we started virtualization with VMware like three five within a year or two of me starting and the racks got you know less and less full and now in the fall we rolled out VX rail and we're in a single rack in a data center and there's I think three physical servers in that rack that aren't the VX rail at this point so it's it's consolidation power savings stuffs in a much better physical location than it used to be moving that server room out we were able to free up that space for you know the students to be able to have it's a it's a meditation space now so it's it's been really interesting kind of going through all that great what I wanted you know we don't have a ton of time but let's talk about that 
VX rail was your team were you looking for HCI was it you know just time for a server refresh you know what what kind of led to that was there a specific application that you started with so this event two years ago we saw Brian from bu give this presentation on their tan and that really turns us on to the whole hyper-converged option we we worked with Winslow we actually talked to another vendor and we looked at Nutanix we looked at pivot three we looked at rolling our own you know visa non FX 2 and after kind of comparing everything and seeing the pros and cons VX rail made the most sense from management perspective and a price perspective our old cluster was coming up on the five-year mark things were going out of warranty we had ecologic sand with 7200 rpm drives one gig I scuzzy just flow for most of its life we were just doing lightweight servers and applications two years ago we needed to virtualize our database server and we threw her Knicks in there with 800 gig on VM e drives and that was a great stopgap but you know we we needed something more permanent more robust - that's how we got to be X ray from a management standpoint the hyper-converged model gave us more flexibility it's easier to expand and since we're small we're not talking about you know racks and racks working together ryote you started with just three hosts so from a overview standpoint it's easy for us as we grow to just add another node and we get the compute we get the storage and we get the memory all at once as an expansion so it's the model is just fantastic for our workload that we put on it we've got like 70 servers in there the only stuff that's not in there yet is our student file server and exchange and they're going in there in the next six months yeah yeah good great and that's so so it sounds like you're real happy with the solution you've been with Dell for four years so from an Operations standpoint was there you know a lot of steep learning curve or was this pretty 
straightforward and very easy I mean I was I was already really familiar with the VMware piece going into this so that you know that wasn't a big deal we were already on Ruby sphere 6 and we started in the it's row of B so 6px role manager is it's kind of a stupid easy interface you know you can go in you can see are there alerts is there an update you know can it see my hardware is all that good there's not a whole lot to learn from there if we were doing V San on our own my understanding is that some a lot more complicated to stand up once you have it going you're good until you try to make a change so the VX rail manager extract abstracts all that away and just kind of gives you the the VMware experience that you're used to yeah any commentary on the economic service you know we actually found it was very interesting because our original assessment of our own needs were there was no way we could afford all flash and we started we focused exclusively on hybrid solutions and after a certain point we saw I think a presentation from Rick on the external platform and we saw the VX rail as inline dedupe and compression with the all flash and we thought wait maybe we could make this work with all flash and so we actually had a very slight reduction in RAW storage in our new platform but the percentage that we're actually consuming is far less than on our old platform simply because of those gains and it is the performance is far far faster and it's a we've just been very pleased with the implementation from a cost perspective the all-flash VX rail came in under the hybrid pivot 3 and the hybrid Nutanix products so you know we it was a huge win from that perspective we were shocked we could be able to do it thrilled with it ok final word it sounds like you're real happy with the solution when it smoothly operates well economics were good what final takeaways would you give for your peers I mean I'd say the implementation was you know the VX rail platform the the 
installation is as advertised it was it's basically a wizard that walks you through the installation process the very few minor issues we encountered the winslow team and the is EMC no support support people had no problem solving for us it was really a pretty easy migration to the new platform and we were able to do it with essentially zero downtime yeah awesome well gentlemen thanks so much for joining that's the promise is to get that easy button for IT HD I definitely helping to move in that direction next time we'll get to talk a little bit more about cloud and everything like that be back with lots more coverage here from wtg transform 2018 I'm Stu minimun thanks for watching the Q

Published Date : Jun 16 2018


Matt Burr, Pure Storage & Rob Ober, NVIDIA | Pure Storage Accelerate 2018


 

>> Announcer: Live from the Bill Graham Auditorium in San Francisco, it's theCUBE! Covering Pure Storage Accelerate 2018, brought to you by Pure Storage. >> Welcome back to theCUBE's continuing coverage of Pure Storage Accelerate 2018, I'm Lisa Martin, sporting the clong, and apparently this symbol actually has a name, the clong, I learned that in the last half an hour. I know, who knew? >> Really? >> Yes! Is that a C or a K? >> Is that a Prince orientation or, what is that? >> Yes, I'm formerly known as. >> Nice. >> Who of course played at this venue, as did Roger Daltrey, and The Who. >> And I might have been staff for one of those shows. >> You could have been, yeah, could I show you to your seat? >> Maybe you're performing later. You might not even know this. We have a couple of guests joining us. We've got Matt Burr, the GM of FlashBlade, and Rob Ober, the Chief Platform Architect at NVIDIA. Guys, welcome to theCUBE. >> Hi. >> Thank you. >> Dave: Thanks for coming on. >> So, lots of excitement going on this morning. You guys announced Pure and NVIDIA just a couple of months ago, a partnership with AIRI. Talk to us about AIRI, what is it? How is it going to help organizations in any industry really democratize AI? >> Well, AIRI is something that we announced, the AIRI Mini, today here at Accelerate 2018. AIRI was originally announced at GTC, NVIDIA's GPU Technology Conference, back in March, and what it is, essentially, is it brings NVIDIA's DGX servers, connected with either Arista or Cisco switches, down to the Pure Storage FlashBlade. So this is something that sits in less than half a rack in the data center and replaces something that was probably 25 or 50 racks of compute and storage, so I think Rob and I like to talk about it as kind of a great leap forward in terms of compute potential. >> Absolutely, yeah. It's an AI supercomputer in a half rack.
>> So one of the things we saw this morning during the general session, Charlie talked about, and I think Matt gave, kind of a really brief history of the last 10 to 20 years in storage. Why is modern external storage essential for AI? >> Well, Rob, you want that one, or you want me to take it? Coming from the non-storage guy, maybe? (both laugh) >> Go ahead. >> So, when you look at the structure of GPUs, and servers in general, we're talking about massively parallel compute, right? We're now taking not just tens of thousands of cores but even more cores, and we're actually finding a path for them to communicate with storage that is also massively parallel. Storage has traditionally been something that's been kind of serial in nature. Legacy storage has always waited for the next operation to happen. You actually want things that are parallel so that you can have parallel processing both at the compute tier and at the storage tier. But you need to have big network bandwidth, which was what Charlie was alluding to, when Charlie said-- >> Lisa: You like his stool? >> When Charlie was talking about one of the legs of his stool: 10 years ago we were at 10-gig networks, and the emergence of 100-gig networks has really made the data flow possible. >> So I wonder if we can unpack that. We talked a little bit to Rob Lee about this, the infrastructure for AI, and I wonder if we can go deeper. So take the three legs of the stool, and you can imagine this massively parallel compute-storage-networking grid, if you will, one of our guys calls it uni-grid, not crazy about the name, but this idea of alternative processing, which is your business, really spanning this scaled-out architecture, not trying to stuff as much function on a die as possible, really is taking hold. How does that infrastructure for AI evolve from an architect's perspective?
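The serial-versus-parallel storage point made above can be shown with a toy sketch (purely illustrative, not anything from the interview or from FlashBlade itself; the 50 ms `read_chunk` delay is a made-up stand-in for per-request I/O latency):

```python
import concurrent.futures
import time

def read_chunk(chunk_id: int) -> int:
    # Stand-in for one storage read; the sleep models I/O latency.
    time.sleep(0.05)
    return chunk_id

CHUNKS = list(range(8))

def serial_read():
    # "Legacy" style: each read waits for the previous one to finish.
    return [read_chunk(c) for c in CHUNKS]

def parallel_read(workers: int = 8):
    # Parallel style: all reads are in flight at once.
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(read_chunk, CHUNKS))

start = time.perf_counter()
serial_read()
serial_elapsed = time.perf_counter() - start  # latencies add up (~0.4 s here)

start = time.perf_counter()
parallel_read()
parallel_elapsed = time.perf_counter() - start  # latencies overlap (~0.05 s here)
```

The point of the sketch is only that overlapping many requests hides per-request latency, which is why a massively parallel compute tier wants a storage tier (and network) that can keep many operations in flight at once.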
>> The overall infrastructure? I mean, it is incredibly data intensive. I mean, a typical training set is terabytes, in the extreme it's petabytes, for a single run, and you will typically go through that data set again and again and again in a training run, and so you have one massive set that needs to go to multiple compute engines. And the reason it's multiple compute engines is people are discovering that as they scale up the infrastructure, you actually get pretty much linear improvements, and you get a time-to-solution benefit. Some of the large data centers will run a training run for literally a month, and if you start scaling it out, even on these incredibly powerful things, you can bring time to solution down, you can have meaningful results much more quickly. >> And maybe you can give us a sense of a practical application of that. >> Yeah, there's a large hedge fund based in the U.K. called Man AHL. They're a systems-based quantitative trading firm, and what that means is, humans really aren't doing a lot of the trading, machines are doing the vast majority if not all of the trading. What the humans are doing is they're essentially quantitative analysts. The number of simulations that they can run is directly correlated to the number of trades that their machines can make. And so the more simulations you can run, the more trades you can make. The shorter your simulation time is, the more simulations you can run. So we're talking, in a sort of meta context, about a concept that applies to everything from retail and understanding, if you're a grocery store, what products are not on my shelves at a given time. In healthcare, discovering new forms of pathologies for cancer treatments. Financial services we touched on, but even broader, right down into manufacturing, right?
Looking at, what are my defect rates on my lines? If it used to take me a week to understand the efficiency of my assembly line, and I can get that down to four hours and make adjustments in real time, that's more than just productivity, it's progress. >> Okay, so I wonder if we can talk about how you guys see AI emerging in the marketplace. You just gave an example. We were talking earlier again to Rob Lee about this: it seems today to be applied in narrow use cases, and maybe that's going to be the norm, whether it's autonomous vehicles or facial recognition, natural language processing. How do you guys see that playing out? Will it be this kind of ubiquitous horizontal layer, or do you think the adoption is going to remain along those sort of individual lines, if you will? >> At the extreme, like when you really look out at the future, let me start by saying that my background is processor architecture. I've worked in computer science; the whole thing is to understand problems and create the platforms for those things. What really excited me and motivated me about AI deep learning is that it is changing computer science. It's just turning it on its head. Instead of explicitly programming, it's now implicitly programming, based on the data you feed it. And this changes everything, and it can be applied to almost any use case. So I think that eventually it's going to be applied in almost any area that we use computing today. >> Dave: So another way of asking that question is, how far can we take machine intelligence? And your answer is pretty far, pretty far.
So as a processor architect, obviously this is very memory intensive. I was at the Micron financial analyst meeting earlier this week, listening to what they were saying about these emerging memories: you've got DRAM, and obviously you have Flash, and people are excited about 3D XPoint, somebody mentioned 3D XPoint on the stage today. What do you see there in terms of memory architectures and how they're evolving, and what do you need as a systems architect? >> I need it all. (all talking at once) No, if I could build a GPU with more than a terabyte per second of bandwidth and more than a terabyte of capacity, I could use it today. I can't build that, I can't build that yet. But I need, it's a different stool: I need teraflops, I need memory bandwidth, and I need memory capacity. And really we just push to the limit. Different types of neural nets, different types of problems, will stress different things. They'll stress the capacity, the bandwidth, or the actual compute. >> This makes the data warehousing problem seem trivial, but do you see, you know what I mean? Data warehousing was always a chase, chasing the chips, a snake swallowing a basketball I called it. But do you see a day that these problems are going to be solved architecturally, with all the talk about Moore's Law moderating, or is this going to be a perpetual race that we're never going to get to the end of? >> So let me put things in perspective first. It's easy to forget that the big bang moment for AI and deep learning was the summer of 2012, so slightly less than six years ago. That's when AlexNet hit the scene, and people went wow, this is a whole new approach, this is amazing. So a little less than six years in. I mean, it is a very young area, it is in incredible growth, the change in state of the art is literally month by month right now. So it's going to continue on for a while, and we're just going to keep growing and evolving.
Maybe five years, maybe 10 years, things will stabilize, but it's an exciting time right now. >> Very hard to predict, isn't it? >> It is. >> I mean, who would've thought that Alexa would be such a dominant factor in voice recognition, or that a bunch of cats on the internet would lead to facial recognition. I wonder if you guys can comment, right? I mean. >> Strange beginnings. (all laughing) >> And I wonder if I can ask you guys about the black box challenge. I've heard some companies talk about how we're going to white-box everything, make it open. But the black box problem, meaning if I have to describe, and we may have talked about this, how I know that it's a dog: I struggle to do that, but a machine can do that. I don't know how it does it, it probably can't tell me how it does it, but it knows, with a high degree of accuracy. Is that black box phenomenon a problem, or do we just have to get over it? >> Up to you. >> I don't think it's a problem. I know that for mathematician friends of mine, it drives them crazy, because they can't tell you why it's working. So it's an intellectual problem that people just need to get over. But it's the way our brains work, right? And our brains work pretty well. There are certain areas, I think, where for a while there will be certain laws in place where if you can't prove the exact algorithm, you can't use it, but by and large, I think the industry's going to get over it pretty fast. >> I would totally agree, yeah. >> You guys are optimists about the future. I mean, you're not up there talking about how jobs are going to go away; that's not something that you guys are worried about, and generally, we're not either. However, machine intelligence, AI, whatever you want to call it, is very disruptive. There's no question about it. So I've got to ask you guys a few fun questions.
Do you think large retail stores are going to, I mean nothing goes all the way to the extreme, but do you think they'll generally go away? >> Do I think large retail stores will generally go away? When I think about retail, I think about grocery stores, and the things that are going to go away: I'd like to see standing in line go away. I would like my customer experience to get better. I don't believe that 10 years from now we're all going to live inside our houses and communicate over the internet and text, and half of that be with chatbots; I just don't believe that's going to happen. I think the Amazon effect has a long way to go. I just ordered a pool thermometer from Amazon the other day, right? I'm getting old, I ordered readers from Amazon the other day, right? So I kind of think it's that spur-of-the-moment item that you're going to buy. Because even in my own personal habits, I'm not buying shoes and returning them, and waiting through five to ten cycles to get there. You still want that experience of going to the store. Where I think retail will improve is understanding that I'm on my way to their store, and improving the experience once I get there. So I think you'll see the Amazon effect continue to happen, but what you'll see is technology being employed to reach a place where my end-user experience improves such that I want to continue to go there. >> Do you think owning your own vehicle, and driving your own vehicle, will be the exception rather than the norm? >> It pains me to say this, 'cause I love driving, but I think you're right. I mean it's going to take a while, it's going to take a long time, but I think inevitably it's just too convenient, things are too congested, and by freeing up autonomous cars, things that'll go park themselves, whatever, I think it's inevitable. >> Will machines make better diagnoses than doctors? >> Matt: Oh I mean, that's not even a question. Absolutely.
>> Do you think banks, traditional banks, will control of the payment systems? >> That's a good one, I haven't thought about-- >> Yeah, I'm not sure that's an AI related thing, maybe more of a block chain thing, but, it's possible. >> Block chain and AI, kind of cousins. >> Yeah, they are, they are actually. >> I fear a world though where we actually end up like WALLE in the movie and everybody's on these like floating chez lounges. >> Yeah lets not go there. >> Eating and drinking. No but I'm just wondering, you talked about, Matt, in terms of the number of, the different types of industries that really can verge in here. Do you see maybe the consumer world with our expectation that we can order anything on Amazon from a thermometer to a pair of glasses to shoes, as driving other industries to kind of follow what we as consumers have come to expect? >> Absolutely no question. I mean that is, consumer drives everything, right? All flash arrays were driven by you have your phone there, right? The consumerization of that device was what drove Toshiba and all the other fad manufacturers to build more NAM flash, which is what commoditized NAM flash, which what brought us faster systems, these things all build on each other, and from a consumer perspective, there are so many things that are inefficient in our world today, right? Like lets just think about your last call center experience. If you're the normal human being-- >> I prefer not to, but okay. >> Yeah you said it, you prefer not to, right? My next comment was going to be, most people's call center experiences aren't that good. 
But what if the call center technology had the ability to analyze your voice and understand your intonation and your inflection, and that call center employee was being given information to react to what you were saying on the call, such that they either immediately escalated that call without you asking, or they were sent down a decision path which brought you to a resolution, because we know that 62% of the time, if we offer this person a free month of this, that person is going to go away a happy customer and rate this call 10 out of 10. That is the type of thing that's going to improve with voice recognition, and all of the voice analysis, and all this. >> And that really gets into how far we can take machine intelligence: the things that humans can do that machines can't, and that list changes every year. The gap gets narrower and narrower, and that's a great example. >> And I think one of the things, going back to whether stores will continue being there or not: one of the biggest benefits of AI is recommendation, right? So you can consider it usurious maybe, or on the other hand it's great service, where something like an Amazon is able to say, I've learned about you, I've learned about what people are looking for, and you're asking for this, but I would suggest something else, and you look at that and you go, "Yeah, that's exactly what I'm looking for". I think that's really where, in the sales cycle, it gets up there. >> Can machines stop fake news? That's what I want to know. >> Probably. >> Lisa: To be continued. >> People are working on that. >> They are. There's a lot, I mean-- >> That's a big use case. >> It is not a solved problem, but there's a lot of energy going into that. >> I'd take that before I take the floating WALL-E chaise lounges, right? Deal. >> What if it was just for you?
What if it was just a floating chaise lounge, and it wasn't everybody, then it would be alright, right? >> Not for me. (both laughing) >> Matt and Rob, thanks so much for stopping by and sharing some of your insights, and we should have a great rest of the day at the conference. >> Great, thank you very much. Thanks for having us. >> For Dave Vellante, I'm Lisa Martin, we're live at Pure Storage Accelerate 2018 at the Bill Graham Civic Auditorium. Stick around, we'll be right back after a break with our next guest. (electronic music)

Published Date : May 23 2018


Caitlin Gordon, Dell EMC | Dell Technologies World 2018


 

>> Announcer: Live from Las Vegas, it's the Cube. Covering Dell Technologies World 2018. Brought to you by Dell EMC and its ecosystem partners. >> Well, welcome back. Glad to have you live here on the Cube as we continue our coverage of Dell Technologies World 2018. We are live in Las Vegas, in the Sands Exposition Center. I'm with Keith Townsend, who had a heck of a night last night. Just a good chicken-and-waffle Las Vegas night. >> You know what? One o'clock in the morning, it's chicken and waffles here at the Grand Lux, with the view of the Venetian. I have to eat at the Palazzo because the one in the Venetian closes at 11. >> Oh my, well, you know how to live. You know how to live. And I've always said that about you. (laughs) It's a pleasure to welcome, as our first guest of the day, Caitlin Gordon, who is the Director of Storage Marketing at Dell EMC. Good afternoon, Caitlin. Thanks for joining us. >> Thank you so much for having me. >> John: A Cube vet, right? You're a Cube veteran. >> I mean, this is three, so is that like, am I over the hump as a veteran? >> John: Oh absolutely. >> All right, then yes, I'm in. >> You deserve a varsity letter now. >> Aw, do I get a letter jacket too? >> Well, we'll work on that later. We'll give you a Cube sticker for now, how about that? >> Okay, I'll take a sticker. >> All right, so you've, I would say, given birth, you've launched a brand new product today, PowerMax. Tell us all about that. First off, paint us the big picture, and we'll drill down a little bit and find out what's so new about this. >> Yeah, absolutely. So, hot off the presses. Announced just two hours ago in the keynote this morning. So PowerMax is, really, the future of storage. The way we're talking about it, it is fast. It is smart and it's efficient. So we could kind of go through each one of those, but the headline here: this is modern tier zero storage.
It's designed for traditional applications of today, but also next gen applications like real-time analytics. We have some metrics that show us that up to 70% of companies are going to have these mission-critical, real-time analytic workloads. And they're going to need a platform to support those and why shouldn't it be the same platform that they already have for those traditional workloads. >> So let's just go back. What makes it smarter? And what makes it more efficient? You know, what makes it faster? >> Caitlin: Can we start with fast? >> Yeah sure. >> Okay, that's my favorite one. So fast. I've got some good hero numbers for ya. So we'll start there. 10 million IOPS. That makes it the world's fastest storage array. Full stop. No caveats to that. 150 gigabytes a second throughput. We've got under 300 microseconds latency. That's up to 50% faster than what we already have with VMAX All Flash. So that's great. That's wicked fast, as Bob said, right? But how do we actually do that is a little bit more interesting. So the architecture behind that, it is a multi-controller, scale out architecture. Okay, that's good. That's check. You had a good start with that. But the next thing we did is we built that with end-to-end NVME. So end-to-end NVME means it's NVME-based drives, flash drives now, SCM drives, next generation media coming soon. It's also NVME over Fabric ready. So we're going to have a non-disruptive upgrade in the very near future to add support for NVME over Fabric. So that means you can get all the way from server across the network, to your storage array with NVME. It's really NVME done right. >> So let's talk about today. We've NVME, Fabric ready, which I love NVME over Fabric. Connectivity getting 10 million IOPS to the server in order to take care of that. What are the practical use cases for that much performance? What type of workloads are we seeing? 
>> Where we see this going in is to data centers where they want to consolidate all of their workloads, all of their practices, all of their processes, on a single platform. 10 million IOPS means you will never have to think about if that array can support that workload. You will be able to support everything. And again, traditional apps, but also these emerging apps, but also mainframe. IBM i, file, all on the same system. >> So can we talk about that as opposed to, let's say you even compare it to another Dell family technology. We just had the team Sean Amay and his VMware customer talking about SAP HANA on XtremIO. XtremIO is really great for one-to-one application mapping, so that's as SAP HANA. So are you telling me that PowerMax is positioned that I can run SAP HANA and in addition to my other data center workloads and get similar performance? >> Absolutely, it is the massive consolidator. It's kind of an app hoarder. You can put anything on it that you've got. And it's block, it's file, and then it's also got support for mainframe and IBM i, which there's still a significant amount of that out there. >> So that's an interesting thing. You're having all of these traditional data services. Usually when we see tier zero type of arrays, Dell EMC had one just last year, there's no services because you just, it's either go really fast or moderately fast and data services. How do you guys do that? >> Yeah well the benefit of where we're coming from is that we built this on the platform of the flagship storage array that's been leading the industry for decades. So what we did is we took the foundation of what we had with VMAX, and we built from that this end-to-end NVME PowerMax. So you get all of that best-in-class hardware, that optimized software, but it comes with all the data services. So you get six nines availability, best-in-class data protection, resiliency, everything that you'd need, so you never have to worry. 
So this is truly built for your mission-critical applications. >> Yeah, so really interesting speeds and feeds. Let's talk about managing this box. VMAX has come a long way from the Symmetrix days, so much easier to manage. However, we're worried today about data tiering, moving workloads from one area to another. These analytics workloads move fast. How does PowerMax help with day two operations? >> So you've heard the mention of autonomous infrastructure, right? Really PowerMax is autonomous storage. So what it has is a built-in, real-time machine learning engine. And that's designed to use pattern recognition. It actually looks at the IOs and it can determine in sub-millisecond time what data is hot, what data should be living where, which data should be compressed. It can optimize the data placement. It can optimize the data reduction. And we see this as a critical enabler to actually leveraging next-generation media in the most effective way. We see some folks out there talking about SCM and using it more as a cache. We're going to have SCM in the array, side-by-side with Flash. Now we know that the price point on that when it comes out the door is going to be more than Flash. So how do you cost-effectively use that? You have a machine learning engine that can analyze that data set and automatically place the data on that when it gets hot or before it even gets hot, and then move it off when it needs to. So you can put in just as much as you need and no more than that. >> So let's talk about scale. You know I'm a typical storage admin. I have my spreadsheet. I know what LUNs I map to what data and to what application. And I've statically managed this for the past 15 years. And it's served me well. How much better is PowerMax than my storage admin ways? I can move two or three data sets a day from cache to Flash. >> Really what this enables from a storage administrator perspective, you can focus on much more strategic initiatives.
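The IO pattern recognition Caitlin describes, watching which extents run hot and promoting them to faster media before demoting them again, can be sketched as a toy heuristic. This is purely an illustrative sketch; the window size, tier names, capacities, and class names are invented for the example and are not PowerMax internals:

```python
from collections import Counter, deque

class ToyTieringEngine:
    """Toy hot-data detector: counts recent accesses per extent and
    promotes the busiest extents to the fast tier (here called "scm")."""

    def __init__(self, window=1000, fast_capacity=2):
        self.window = deque(maxlen=window)  # sliding window of recent IOs
        self.fast_capacity = fast_capacity  # how many extents fit on fast media

    def record_io(self, extent_id):
        self.window.append(extent_id)

    def placement(self):
        """Return {extent_id: tier} for every extent seen in the window."""
        counts = Counter(self.window)
        hot = {e for e, _ in counts.most_common(self.fast_capacity)}
        return {e: ("scm" if e in hot else "flash") for e in counts}

engine = ToyTieringEngine(window=100, fast_capacity=1)
for _ in range(90):
    engine.record_io("extent-7")   # heavily hammered extent
for _ in range(10):
    engine.record_io("extent-3")   # lightly touched extent

tiers = engine.placement()
print(tiers["extent-7"], tiers["extent-3"])  # scm flash
```

In a real array, a decision like this runs continuously in the data path, which is why the engine has to work in sub-millisecond time.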
You don't have to do the day-to-day management. You don't have to worry about what data's going where. You don't have to worry about how much of the different media types you've put into that array. You just deploy it and it manages itself. You can focus on more tasks. The other part I wanted to mention is the fact that you heard Jeff mention this morning that we have Cloud.IQ in the portfolio. Cloud.IQ we're going to be bringing across the entire storage portfolio, including to PowerMax. So that will also really enable this Cloud-based monitoring and predictive analytics to really take that to the next level as well. Simplify that even more. >> You know, I'd like to step back to the journey. More or less. When you start out on a project like this and you're reinventing, right, in a way. Do you set, how do you set the specs? You just rattled off a really impressive array of capability. >> Caitlin: Yeah. >> Was that the initial goal line or how was that process, how do you manage that? How do you set those kinds of goals? And how do you get your teams to realize that kind of potential, and some people might look at you a little cross-eyed and say, are you kidding? >> Caitlin: Right, right. >> How are we going to get there? I don't know. (laughs) >> We always shoot for the moon. >> John: Right. >> So we always, this type of product takes well over a year to get into market. So you saw PowerMax Bob on stage there talking about it. So his team is the one that really brings this to market. They developed those requirements two years ago. And they were really looking to make sure that at this time, as soon as the technology curve is ready on NVME, we were there, right? So this is just shipping with enterprise-class, dual-port NVME drives. Those were not ready until right now. Right, those boxes start shipping next week. They are ready next week, right? So we're at the cutting edge of that. And that takes an extraordinary world-class engineering team.
A product management team that understands our customers' requirements that we have today, 'cause we have thousands of customers, but more importantly is looking to what's also coming in the future. And then at some point in the process things do fall off, right? So we have even more coming in future releases as well. >> So let's talk connectivity into the box. How do I connect to this? Is this iSCSI, is this fiber channel? What connectivity-- >> So this is definitely fiber channel. And so our NVME over Fabric will be supported over fiber channel with this array. But we find with our install base, with our VMAX install base especially, they're very heavily invested in fiber channel today. So right now that's where we're still focused. 'Cause that's going to enable the most people to leverage it as quickly as possible. We're obviously looking at when it makes sense to have an IP-based protocol supported as well. >> So this storage is expensive on the back end. Talk to me about data efficiency. Dedup, are we coming out with that? 'Cause a lot of these tier zero solutions don't have dedup out of the box. >> Or they have it, but if you use it you can't actually get the performance that you paid for, right? >> There's no point in turning it on. >> Yeah, it's like yeah, we checked the box, but there's really no point. Yeah, so VMAX had compression, and what we've done with PowerMax is we now have inline deduplication and compression. The secret to that is that it's hardware-assisted. So it's designed so that card actually will take the data in, it'll compress the data, and it also passes out the hashes you need for dedup. So that it's inline, it will not have a performance impact on the system. It can also be turned on and off by application and it can give you up to five-to-one data reduction. And you can leverage it with all your data services. Some competitive arrays, if you want to use encryption, sorry you can't actually use dedup.
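The hash-assisted inline dedup Caitlin outlines, fingerprint each incoming block and store only the unique ones, can be illustrated with a toy content-addressed store. A sketch only; a real array does this in dedicated hardware, alongside compression and with hash-collision handling, none of which is shown here:

```python
import hashlib

class ToyDedupStore:
    """Toy inline dedup: each unique block is stored once, keyed by its
    content hash; logical writes just record a reference to that hash."""

    def __init__(self):
        self.blocks = {}   # hash -> block bytes (the "physical" storage)
        self.volume = []   # ordered logical writes, each a hash reference

    def write(self, block: bytes):
        digest = hashlib.sha256(block).hexdigest()
        self.blocks.setdefault(digest, block)  # store only if new content
        self.volume.append(digest)

    def data_reduction_ratio(self):
        logical = len(self.volume)
        physical = len(self.blocks)
        return logical / physical if physical else 0.0

store = ToyDedupStore()
for _ in range(4):
    store.write(b"A" * 4096)   # four identical blocks dedup to one copy
store.write(b"B" * 4096)       # one unique block

print(store.data_reduction_ratio())  # 5 logical / 2 physical = 2.5
```

Turning dedup on or off per application, as Caitlin notes, then just means choosing which volumes route their writes through a path like this.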
The way we've implemented it, you can actually do both the data reduction and the data services you need, especially encryption. >> So before we say goodbye, I'm just, I'm curious, when you see something like this get launched, right. Huge project. Year-long as you've been saying. And even further back in the making. Just from a personal standpoint, you get pumped? Are you, I would imagine-- >> Caitlin: I got to tell ya-- >> This is the end of a really long road for you. >> For the marketing team, we've been working on this for months. It is the best product I've ever launched. It's the best team I've ever worked with. From the past two days since I landed here to getting that keynote out the door, there has been so much adrenaline built up that we're just so excited to get this out there and share it with customers. >> And what's this done to the bar in your mind? Because you were here, now you're here. But tell me about this. What have you jumped over in your mind? >> We have set a very high bar. I'm not really sure what we're going to do at this point, right? From a product standpoint it is in a class by itself. There is just nothing else like it. And from an overall what the team has delivered, from engineering all the way to my team, what we've brought together, what we've gotten from the executive, we've never done anything like it before. So we've set a high bar for ourselves, but we've jumped over some high bars before. So we've got some other plans in the future. >> I'm sorry go ahead. >> Let's not end the conversation too quickly. >> All right, all right, sure, all right. >> There is some-- >> He's got some burning questions. >> Yeah, I have burning, this is a big product. So I still have a lot of questions from a customer perspective. Let's talk data protection. You can't have mission-critical, all this consolidation, without data protection. >> Caitlin: Absolutely. >> What are the data protection features of the PowerMax?
>> I'm so glad you asked. I spent a decade in data protection. It is a passionate topic of mine, right? So you look at data protection and kind of think of it as layered. Within the array, we have very efficient snapshot technology. You can take as many snaps as you need. Very, very efficient to take those. They don't take any extra space when you make those copies. >> Then can I use those as tertiary copies to actually perform, to point to workloads such as refreshes, QA, dev, et cetera? >> Yeah, absolutely. You can mount those snapshots and leverage those for any type of use case. So it's not just for data protection. It's absolutely for active use as well. So that's kind of the on-the-array level, and then the next level out is okay, how do I make a copy of that off the array? So the first one would be, well, do that to another PowerMax. So as you probably know, the VMAX really pioneered the entire primary storage replication concept. So we have certainly async if you have a longer distance, but also synchronous replication, and also Metro, if you have that truly active-active use case. So, truly the gold standard in replication technologies. And our customers, it's one of the number one reasons why they say there is no other platform on the planet that they would ever use. And then, you go to the next level of, we're really talking about backup. We have built in to PowerMax the capabilities to do a direct backup from PowerMax to a Data Domain. And that gets you that second protection copy also on protection storage. So you have those multiple layers of protection. All the copies across all of the different places to ensure that you have that operational recovery, disaster recovery in that array, and that the data's accessible at all times no matter what the scenario. >> So let's talk about what else we see.
When we look at it, we go into our data center and you see a VMAX array, there's a big box with cabinets of shelves, and you're thinking, wow, this thing is rock solid. Look at the PowerMax. That thing is what about a six-- >> Caitlin: I think it's pretty cute, right? >> Yeah it's pretty cute. I love, that's a pretty array. (laughs) >> Yeah. >> You have one over there. So when you see a VMAX, it just gives you this feeling of comfort. PowerMax, let's talk about resiliency. Do we still have that same VMAX, rock solid, you go into a data center and you see two VMAX, and you're thinking this company's never going to go down. >> Caitlin: Right. >> What about PowerMax? >> Guess what? It is the same system. It's just a lot more compact. We have people consolidating from either VMAXs or competitive arrays, but they're in four racks and they come down into maybe half a rack. But you have all the same operating system, all the same data services, so you have non-disruptive upgrades. If you have to do a code upgrade, you don't have to do rolling reboots of all the controllers. You can just upgrade the whole array at the same time. We have component-level fault isolation. So if a component fails, the whole controller doesn't go down. All you lose is that one little component on there until you're able to swap that out. So you have all of the resiliency, that over-six-nines availability, built into this array. Just like you did with the ones that used to be taking up a bit more floor tile space. The PowerMax has about 40% lower power consumption than you have with VMAX All Flash 'cause it can be supported in such a small footprint. >> So are we going to see PowerMax in converged system configurations? >> Yeah, absolutely. So if you're familiar with the VxBlock 1000, which we launched back in February, it will be available in a VxBlock 1000. And of course the big news on that is you have the flexibility to really choose any array.
So it could be an X2 and a PowerMax in a VxBlock 1000. >> So that's curious. What is the, now that we have PowerMax, where's the position of the VMAX 250? >> So the, I'm glad you asked, 'cause it's an important thing to remember. VMAX All Flash is absolutely still around and we expect people to buy it for a good amount of time. The main reason being that the applications, the workloads, the customers, the data centers, that are buying these arrays, they have a very strict qualification policy. They take six, nine months, sometimes a year, to really qualify, even a new operating system. >> Keith: Right. >> Let alone a new platform. So we absolutely will be selling a lot of VMAX All Flash for the foreseeable future. >> Well, Caitlin, it's been a long time in the making, right? >> Absolutely. >> Huge day for you. >> Yes. >> So congratulations on that. >> Thank you, thank you. >> Great to have you here on the Cube. And best of luck, I'm sure, well you don't need it. Like I said, superior product, great start. And I wish you all the best down the road. >> Thank you. Hope to see you guys again soon. >> Caitlin Gordon. Now that'd be four. >> Yes, it'd be four. >> We'd love to have you back. Caitlin Gordon joining us from Dell EMC. PowerMax, the big launch coming just a couple hours ago here at Dell Technologies World 2018. Back with more live coverage here on the Cube after this short time out. (upbeat music)

Published Date : May 1 2018



Dustin Kirkland, Canonical | KubeCon 2017


 

>> Announcer: Live from Austin, Texas, it's theCUBE. Covering KubeCon and CloudNativeCon 2017. Brought to you by: Red Hat, the Linux Foundation, and theCUBE's ecosystem partners. >> Hey, welcome back everyone. And we're live here in Austin, Texas. This is theCUBE's exclusive coverage of the Cloud Native conference and KubeCon, the Kubernetes conference. This is for the Linux Foundation. This is theCUBE. I'm John Furrier, the co-founder of SiliconANGLE Media. My co-host, Stu Miniman. Our next guest is Dustin Kirkland, Vice President of Product at Canonical, the company behind Ubuntu. Welcome to theCUBE. >> Thank you, John. >> So you're the product guy. You get the keys to the kingdom, as they would say in the product circles. Man, what a best time to be-- >> Dustin: They always say that. I don't think I've heard that one. >> Well, the product guys are, well all the action's happening on the product side. >> Dustin: We're right in the middle of it. >> 'Cause you got to have a road map. You got to have a 20-mile stare on the next horizon while you go up into the pasture and deliver value, but you always got to be watching for it, always making decisions on what to do, when to ship product. Now you got the Cloud, things are happening at a very accelerated rate. And then you got to bring it out to the customers. >> That's right. >> You're livin' on both sides of the world. You got to look inside, you got to look outside. >> All three. There's the marketing angle too, which is what we're doing here right now. So there's engineering, sales, and this is the marketing. >> Alright so where are we with this? Because now you guys have always been on the front lines of open source. Great track record. Everyone knows the history there. What are the new things? What's the big aha moment at this event, the largest they've ever had. They're not even three years old. Why is this happening? >> I love seeing these events in my hometown Austin, Texas. So I hope we keep coming back.
The aha moment is how application development is fundamentally changing. Cloud Native is the title of the Cloud Native Computing Foundation and the CloudNativeCon conference here. What does Cloud Native mean? It's a different form of writing applications. Just before we were talking about systems programming, right? That's not exactly Cloud Native. Cloud Native programming is writing to APIs that are Cloud-exposed APIs, integrating with software as a service. Creating applications that have no intelligence, whatsoever, about what's underneath them, right? But taking advantage of that in all the ways that you would want and expect in a modern application. Fault tolerance, automatic updates, hyper security. Just security, security, security. That is the aha moment. The way applications are being developed is fundamentally changing. >> Interesting perspective we had on earlier. Lew Tucker from Cisco, (mumbles) in the (mumbles) History Museum, CTO at Cisco, and we have Kelsey Hightower, co-chair for this conference and also very active in the community. Yet, in the perspective, and I'll oversimplify and generalize it, but basically it was: Hey, that's been going on for 30 years, it's just different now. Tell us the old way and new way. Because the old way, you were kind of describing it, you're going to build your own stuff, full stack, building all parts of the stack and do a lot of stuff that you didn't want to do. And now you have more, especially time on your hands if DevOps and infrastructure as code starts to happen. But it doesn't mean that networking goes away, doesn't mean storage goes away, but some new lines are forming. Describe that dynamic of what's new, and in the new way what changes from the old way? >> Virtualization has brought about a different way of thinking about resources. Be those compute resources, chopping CPUs up into virtual CPUs, that's KVM, VMware. You mentioned network and storage.
Now we virtualized both of those into software-defined storage and software-defined networking, right? We have things like OpenStack that brings that all together from an infrastructure perspective. And we now have Kubernetes that brings that to bear from an application perspective. Kubernetes helps you think about applications in a different way. I said that paradigm has changed. It's Kubernetes that helps implement that paradigm. So that developers can write an application to a container orchestrator like Kubernetes and take advantage of many of the advances we've made below that layer in the operating system and in the Cloud itself. So from that perspective the game has changed and the way you write your application is not the same as the monolithic app we might have written on an IBM or a traditional system. >> Dustin, you say monolithic app versus oh my gosh the multi-layered cake that we have today. We were talking about the keynote this morning where CNCF went from four projects to 14 projects, you got Kubernetes, you got things like Istio on top. Help us tease that out a little bit. What are the ones that, where's Canonical engaged? What are you hearing from customers? What are they excited about? What are they still looking for? >> In a somewhat self-serving way, I'll use this opportunity to explain exactly what we do in helping build that layered cake. It starts with the OS. We provide a great operating system, Ubuntu, that every developer would certainly know and understand and appreciate. That's the kernel, that's the systemd, that's the hypervisor, that's all the storage and drivers that make an operating system work well on hardware. Lots of hardware: IBM, Dell, HP, Intel, all the rest. As well as in virtual machines, the public Clouds, Microsoft, Amazon, Google, VMware and others. So, we take care of that operating system perspective. Within the CNCF and within the Kubernetes ecosystem, it really starts with the Kubernetes distribution.
So we provide a Kubernetes distribution, we call it Canonical's Distribution of Kubernetes, CDK, which is open source Kubernetes with security patches applied. That's it. No special sauce, no extra proprietary extensions. It is open source Kubernetes. The reference platform for open source Kubernetes, 100% conformant. Now, once you have Kubernetes, as you say, "What are you hearing from customers?" We hear a lot of customers who want a Kubernetes. Once they have a Kubernetes, the next question is: "Now what do I do with it?" If they have applications that their developers have been writing to Google's Kubernetes Engine, GKE, or Amazon's Kubernetes Engine, the new one announced last week at re:Invent, AKS. Or Microsoft's Kubernetes Engine, Microsoft-- >> Microsoft's AKS, Amazon's EKS. A lot of TLAs out there, always. >> Thank you for the TLA dissection. If you've written the applications already, having your own Kubernetes is great, because then your applications simply port and run on that. And we help customers get there. However, if you haven't written your first application, that's where actually, most of the industry is today. They want a Kubernetes, but they're not sure why. So, to that end, we're helping bring some of the interesting workloads that exist, open source workloads, and putting those on top of Canonical Kubernetes. Yesterday, we press released a new product from Canonical, launched in conjunction with our partners at Rancher Labs, which is the Cloud Native platform. The Cloud Native platform is Ubuntu plus Kubernetes plus Rancher. That combination, we've heard from customers and from users of Ubuntu inside and out. Everyone's interested in a developer workflow that includes open-source Ubuntu, open-source Kubernetes and open-source Rancher, which really accelerates the velocity of development. And that end solution provides exactly that and it helps populate that Kubernetes with really interesting workloads.
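Dustin's portability point, that an application written to conformant open source Kubernetes ports unchanged across CDK, GKE, AKS, and EKS, comes down to targeting only the standard API objects. A minimal sketch of such a vendor-neutral Deployment manifest, built as the dict you would serialize to YAML; the app name and image are made-up placeholders:

```python
def make_deployment(name: str, image: str, replicas: int = 3) -> dict:
    """Build a plain Deployment manifest using only standard apps/v1
    fields, so it applies unchanged to any conformant cluster."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [
                        {"name": name, "image": image,
                         "ports": [{"containerPort": 8080}]}
                    ]
                },
            },
        },
    }

manifest = make_deployment("demo-api", "example/demo-api:1.0")
print(manifest["kind"], manifest["spec"]["replicas"])  # Deployment 3
```

Because only standard `apps/v1` fields are used, the serialized manifest should be accepted by any conformant cluster, which is exactly the portability being described.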
>> Dustin, so we know Sheng, Shannon and the team, they know a thing or two about building stacks with open source. We've talked with you many times, OpenStack. Give us a little bit of compare and contrast, what we've been doing with OpenStack with Canonical, very heavily involved, doing great there, versus the Cloud Native stacking. >> If you know Shannon and Sheng, I think you can understand and appreciate why Mark, myself and the rest of the Canonical team are really excited about this partnership. We really see eye-to-eye on open source principles first. Deliver great open source experiences first. And then taking that to market with a product that revolves around support. Ultimately, developer adoption up front is what's important, and some of those developer applications will make their way into production in a mission-critical sense. Which opens up support opportunities for both of us. And we certainly see eye-to-eye from that perspective. What we bring to bear is the Ubuntu ecosystem of developers. The Ubuntu OpenStack infrastructure as a service, where we've seen many of the world's largest organizations deploying their OpenStacks. Doing so on Ubuntu and with Ubuntu OpenStacks. With the launch of Kubernetes and Canonical Kubernetes, many of those same organizations are running their own Kubernetes alongside OpenStack. Or, in some cases, on top of OpenStack. In a very few cases, instead of OpenStack, in very special cases, often at the Edge or in certain tiny Cloud or micro Cloud scenarios. In all of these we see Rancher as a really, really good partner in helping to accelerate that developer workflow. Enabling developers to write code, commit code to a GitHub repository, with full GitHub integration. Authenticate against an Active Directory with full RBAC controls. Everything that you would need in an enterprise to bring that application to bear from concept, to development, to test, into production, and then the life cycle, once it gains its own life in production.
>> What about the impact of customers? So, I'm an IT guy or I'm an architect and man, all this new stuff's comin' at me. I love my open source, I'm happy with my space. I don't want to touch it, don't want to break it, but I want to innovate. This whole world can be a little bit noisy and new to them. How do you have that conversation with that potential customer or customer where you say, look, we can get there. Use your app team, here's what you want to shape up to be, here's service meshes and pluggable, whoa, pluggable (mumbles)! So, again, how do you simplify that when you have conversations? What's the narrative? What's the conversation like? >> Usually our introduction into the organization of a Fortune 500 company is by the developers inside of that company who already know Ubuntu. Who already have some experience with Kubernetes or have some experience with Rancher or any of those other-- >> So it's a bottoms up? >> Yeah, it's bottoms up. Absolutely, absolutely. The developer network around Ubuntu is far bigger than the organization that is Canonical. So that helps us with the intro. Once we're in there, and the developers write those first few apps, we do get the introductions to their IT director who then wants that comfy blanket. Customer support, maybe 24 by seven-- >> What's the experience like? Is it like going to the airport, go through TSA, and you got to take your shoes off, take your belt off. What kind of inspection, what kind of is the culture, because they want to move fast, but they got to be sure. There's always been the challenge when you have the internal advocate saying, "Look, if we want to go this way, this is going to be more the reality for companies." Developers are now major influencers. Not just some, here's the product, we made a decision and they ship it to 'em. It's shifted.
>> If there's one thing that I've learned in this sort of product management assignment, I'm an engineer by trade, but as a product manager now for almost five years, it's that you really have to look at the different verticals, and some verticals move at vastly different paces than other verticals. When we are in the telco space, we're in RFIs and RFQs, a request for information or a request for a quote, that may last months, nine months. And then go through entering into a procurement process that may last another nine months. And we're talking about 18 months in an industry here that is spinning up, we're talking about how fast this goes, which is vastly different than the work we do in Silicon Valley, right? With some of the largest dot-coms in the world that are built on Ubuntu, maybe on AWS or elsewhere. Their adoption curve is significantly different and the procurement angle is really different. What they're looking to buy, often on the US West Coast, is not so much support, but they're looking to guide your roadmap. We offer for customers of that size and scale a different set of products, something we call feature sponsorships, where those customers are less interested in 24 by seven telephone support and far more interested in sponsoring certain features into Ubuntu itself and helping drive the Ubuntu roadmap. We offer both of those as products, and different verticals buy in different ways. We talked to media and entertainment, and the conversation's completely different. Oil and gas, conversation's completely different. >> So what are you doing here? What's the big effort at CloudNativeCon? >> So we've got a great booth and we're talking about Ubuntu as a pretty universal platform for almost anything you're doing in the Cloud. Whether that's on-prem infrastructure as a service, OpenStack. People can pooh-pooh OpenStack and pit OpenStack versus Kubernetes against one another.
We cannot see it more differently-- >> Well no, I think it's more that it's got clarity on where the community's lines are, because apps guys are moving off OpenStack, that's natural. It's really found its home, OpenStack is very relevant, huge production flow. I talk to Jonathan Bryce about this all the time. There's no pooh-poohing OpenStack. It's not like it's hurting. Just to clarify, OpenStack is not going anywhere, it's just that there's been some comments about OpenStack refugees going to (mumbles), but they're going there anyway! Do you agree? >> Yeah I agree, and that choice is there on Ubuntu. So infrastructure as a service, OpenStack's a fantastic platform; platform as a service or Cloud Native development, Kubernetes is an excellent platform. We see those running side by side. Two racks of systems or a single rack. Half of those machines are OpenStack, half of those are Kubernetes, and the same IT department manages both. We see IT departments that are all in OpenStack. Their entire data center is OpenStack. And we see Kubernetes as one workload inside of that OpenStack. >> How do you see Kubernetes' impact on containers? A lot of people are pooh-poohing containers. But they're not going anywhere either. >> It's fundamental. >> The ecosystem's changing, certainly the roles of each part (mumbles) is exploding. How do you talk about that? What's your opinion on how containers are evolving? >> Containers are evolving, but they've been around for a very long time as well. Kubernetes has helped make containers consumable. And Docker to an extent before that, and the work we've done around Linux containers, LXC and LXD, as well. All of those technologies are fundamental to it, and it takes tight integration with the OS. >> Dustin, so I'm curious. One of the big challenges that you face is the proliferation of deployments for customers. It's not just data center or even Cloud. Edge is now a very big piece of it.
How do you think containers help enable a little bit of that Cloud Native to go there, and what kind of stresses does that put on your product organization? >> Containers are adding fuel to the fire on both the Edge and the back end Cloud. What's exciting to me about the Edge is that every Edge device, every connected device, is connected to something. What's it connected to? A Cloud somewhere. And that can be an OpenStack Cloud or a Kubernetes Cloud, that can be a public Cloud, that could be a private implementation of that Cloud. But every connected device, whether it's a car or a plane or a train or a printer or a drone, it's connected to something, it's connected to a bunch of services. We see containers being deployed on Ubuntu on those Edge devices, as the packaging format, as the application format, as the multi-tenancy layer that keeps one application from DOSing or attacking, or being protected from, another application on that Edge device. We also see containers running the microservices in the Cloud on Ubuntu there as well. The Edge to me is extremely interesting in how it ties back to the Cloud, and to be transparent here, Canonical's strategy and Canonical's play is actually quite strong here, with Ubuntu providing quite a bit of consistency across those two layers. So developers working on those applications on those devices are often sitting right next to the developers working on those applications in the Cloud, and both of them are seeing Ubuntu helping them go faster. >> Bottom line, where do you see the industry going, and how do you guys fit in over the next three years, what's your prediction? >> I'm going to go right back to what I was saying right there. That connection between the Edge and the Cloud is our angle right there, and there is nothing that's stopping that right now. >> We were just talking with Joe Beda, and our view is if it's a shoot and computing world, everything's an Edge. >> Yeah, that's right. That's exactly right.
>> (mumbles) is an Edge. A light in a house is an Edge with a processor in it. >> So I think the data centers are getting smarter. You wanted a prediction for next year: the data center is getting smarter. We're seeing autonomous data centers. We see data centers using Metal as a Service, MAAS, to automatically provision those systems and manage those systems in a way that makes hardware look like a Cloud. >> AI and IOT, certainly two topics that are really hot trends that are very relevant as changing storage and networking; those industries have to transform. Amazon's tele (mumbles), everything like Lambda and serverless, you're starting to see the infrastructure as code take shape. >> And that's what sits on top of Kubernetes. That's what's driving Kubernetes adoption, those AI machine learning artificial intelligence workloads. A lot of media and transcoding workloads are taking advantage of Kubernetes every day. >> Bottom line, that's software. Good software, smart software. Dustin, thanks so much for coming on theCube. We really appreciate it. Congratulations. Continued developer success. Good to have a great ecosystem. You guys have been successful for a very long time. As the world continues to be democratized with software, as it gets smarter, more pervasive, and Cloud computing, grid computing, Unigrid, whatever it's called, it is all done by software and the Cloud. Thanks for coming on. It's theCube live coverage from Austin, Texas, here at KubeCon and CloudNativeCon 2017. I'm John Furrier, Stu Miniman. We'll be back with more after this short break. (lively music)

Published Date : Dec 7 2017


John Lockwood, Algo Logic Systems | Super Computing 2017


 

>> Narrator: From Denver, Colorado, it's theCUBE. Covering Super Computing '17, brought to you by Intel. (electronic music) >> Hey, welcome back everybody. Jeff Frick here with theCUBE. We're at Denver, Colorado at Super Computing 2017. 12,000 people, our first trip to the show. We've been trying to come for a while, it's pretty amazing. A lot of heavy science in terms of the keynotes. All about space and looking into brain mapping, and it's heavy lifting, academics all around. We're excited to have our next guest, who's an expert, all about speed, and that's John Lockwood. He's the CEO of Algo-Logic. First off, John, great to see you. >> Yeah, thanks Jeff, glad to be here. >> Absolutely, so for folks that aren't familiar with the company, give them kind of the quick overview of Algo. >> Yes, Algo-Logic puts algorithms into logic. So our main focus is taking things that are typically done in software and putting them into FPGAs, and by doing that we make them go faster.
>> And you said one of your big customer sets is financial services and trading desks. So low latency there is critical, as millions and millions and millions if not billions of dollars. >> Right, so at Algo-Logic we have a whole product line of high-frequency trading systems. And so our Tick-To-Trade system is unique in the fact that it has a sub-microsecond trading latency, and this means going from market data that comes in, for example on CME for options and futures trading, to the time that we can place a FIX order back out to the market. All of that happens in an FPGA. That happens in under a microsecond. So under a millionth of a second, and that beats every other software system that's being used. >> Right, which is a game changer, right? Wins or losses can be made on those time frames. >> It's become a must-have. If you're trading on Wall Street or trading in Chicago and you're not trading with an FPGA, you're trading at a severe disadvantage. And so we make a product that enables all the trading firms to be playing on a fair, level playing field against the big firms. >> Right, so it's interesting, because the adoption of Flash and some of these other kind of speed accelerator technologies that have been happening over the last several years, people are kind of getting accustomed to the fact that speed is better, but often it was kind of put aside in these kind of high-value applications like financial services, and not really proliferating to a broader use of applications. I wonder if you're seeing that kind of change a little bit, where people are seeing the benefits of real time and speed beyond kind of the classic high-value applications? >> Well, I think the big change that's happened is that it's become machine-to-machine now. And so humans, for example in trading, are not part of the loop anymore, and so it's not a matter of am I faster than another person? It's am I faster than the other person's machine?
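The tick-to-trade path described above, market data in, a decision, a FIX order back out, can be contrasted with even a trivial software loop. The sketch below is an invented toy, not Algo-Logic's system: the feed format, symbols, and spread rule are assumptions; only the FIX tag numbers (35=MsgType, 55=Symbol, 54=Side, 44=Price) follow the real FIX convention.

```python
import time

# Illustrative only: a minimal software "tick-to-trade" loop -- parse a
# market-data tick, apply a trivial decision rule, format a FIX-style order.
def on_tick(tick):
    symbol, bid, ask = tick.split(",")
    bid, ask = float(bid), float(ask)
    if ask - bid > 0.25:             # toy signal: wide spread -> join the bid
        return "35=D|55={}|54=1|44={:.2f}".format(symbol, bid + 0.01)
    return None

ticks = ["ES,4500.00,4500.50", "NQ,15000.00,15000.10"] * 50_000
start = time.perf_counter()
orders = [o for t in ticks if (o := on_tick(t))]
elapsed = time.perf_counter() - start
print(orders[0])                             # 35=D|55=ES|54=1|44=4500.01
print("{:.0f} ns per tick".format(elapsed / len(ticks) * 1e9))
# Even this trivial pure-software loop costs on the order of hundreds of
# nanoseconds to microseconds per tick before any network hop -- which is
# why a full wire-to-wire response under one microsecond pushes the logic
# into an FPGA.
```

Running it shows the gap: the Python decision alone eats most of a microsecond budget before a single packet is serialized.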
And so this notion of having compute that goes fast has become suddenly, dramatically much more important, because everything now is going machine versus machine. And so if you're an ad tech advertiser, how quickly you can do an auction to place an ad matters, and if you can get a higher-value ad placed because you're able to do a couple rounds of an auction, that's worth a lot. And so, again, with Algo-Logic we make things go faster, and that time benefit means that, all things else being equal, you're the first to come to a decision. >> Right, right, and then of course the machine-to-machine obviously brings up the hottest topic that everybody loves to talk about, autonomous vehicles and networked autonomous vehicles, and just the whole IOT space with the compute moving out to the edge. So these machine-to-machine systems are only growing in importance, and really percentage of the total compute consumption by far. >> That's right, yeah. So last year at Super Computing, we demonstrated a drone, bringing in realtime data from a drone. So doing realtime data collection and doing processing with our Key Value Store. So this year, we have a machine learning application, a Markov Decision Process, where we show that we can scale out a machine learning process and teach cars how to drive in a few minutes. >> Teach them how to drive in a few minutes? >> Right. >> So that's their learning. That's not somebody programming the commands. They're actually going through a process of learning? >> Right, well so the Key Value Store is just a part of this. We're just the part of the system that makes the scale-outs that runs well in a data center. And so we're still running the Markov Decision Process in simulations in software.
So we have a couple Xeon servers that we brought with us to do the machine learning, and a data center would scale out to be dozens of racks, but even with a few machines though, for simple highway driving, what we can show is we start off with the system untrained, and in the Markov Decision Process we reward the final state of not having accidents. And so at first, the cars drive and they're bouncing into each other. It's like bumper cars, but within a few minutes, and after about 15 million simulations, which can be run that quickly, the cars start driving better than humans. And so I think that's a really phenomenal step, the fact that you're able to get to a point where you can train a system how to drive and give it 15 man-years of experience in a matter of minutes through scale-out compute systems. >> Right, 'cause then you can put in new variables, right? You can change that training and modify it over time as conditions change, throw in snow or throw in urban environments and other things. >> Absolutely, right. And so we're not pretending that our machine learning, that application we're showing here, is an end-all solution. But as you bring in other factors like pedestrians, deer, other cars running different algorithms or crazy drivers, you want to expose the system to those conditions as well. And so one of the questions that came up to us was, "What machine learning application are you running?" So we're showing all 25 cars running one machine-learned application, and that's incrementally getting better as they learn to drive, but we could also have every car running a different machine learning application and see how different AIs interact with each other. And I think that's what you're going to see on the highway as we have more self-driving cars running different algorithms; we have to make sure they all play nice with each other.
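The loop Lockwood describes, penalize the accident state, reward safe driving, run millions of short simulated episodes, can be sketched as a toy tabular Q-learning example. Everything below (the lanes, rewards, and hyperparameters) is invented for illustration; it is a single-machine sketch, not Algo-Logic's scale-out system.

```python
import random

# Toy Markov Decision Process: a 5-lane highway with one blocked lane.
# Each step the simulated car moves left, stays, or moves right; entering
# the blocked lane (or leaving the road) is the penalized "accident", and
# every safe step earns a small reward. Tabular Q-learning converges on
# collision-avoiding behavior over many short episodes.
LANES = 5
ACTIONS = (-1, 0, 1)                  # move left, stay, move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1
EPISODES, STEPS = 5000, 10

def step(lane, action, blocked):
    """Apply one action; return (new_lane, reward)."""
    new = lane + action
    if new < 0 or new >= LANES or new == blocked:
        return lane, -100.0           # accident: stay put, big penalty
    return new, 1.0                   # safe driving: small reward

def train(seed=0):
    rng = random.Random(seed)
    # Q[(lane, blocked_lane)] -> one estimated value per action
    q = {(l, b): [0.0] * len(ACTIONS)
         for l in range(LANES) for b in range(LANES)}
    for _ in range(EPISODES):
        lane, blocked = rng.randrange(LANES), rng.randrange(LANES)
        for _ in range(STEPS):
            s = (lane, blocked)
            if rng.random() < EPSILON:            # explore
                a = rng.randrange(len(ACTIONS))
            else:                                 # exploit current policy
                a = max(range(len(ACTIONS)), key=lambda i: q[s][i])
            lane, reward = step(lane, ACTIONS[a], blocked)
            # standard Q-learning update toward reward + discounted value
            q[s][a] += ALPHA * (reward + GAMMA * max(q[(lane, blocked)]) - q[s][a])
    return q

q = train()
# In lane 2 with lane 3 blocked, the learned policy never steers right:
best = max(range(len(ACTIONS)), key=lambda i: q[(2, 3)][i])
print(ACTIONS[best])
```

The interview's version distributes millions of these episodes across racks of machines backed by a shared key-value store; the update rule itself is the same shape.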
>> Right, but it's really a different way of looking at the world, right, using machine learning, machine-to-machine, versus a single person or a team of people writing a piece of software to instruct something to do something, and then you got to go back and change it. This is a much more dynamic realtime environment that we're entering into with IOT. >> Right, I mean the machine-to-human, which was kind of last year and years before, was "How do you make interactions between the computers better than humans?" But now it's about machine-to-machine, and it's "How do you make machines interact better with other machines?" And that's where it gets really competitive. I mean, you can imagine with drones for example, for applications where you have drones against drones, the drones that are faster are going to be the ones that win. >> Right, right, it's funny, we were just here last week at the commercial drone show, and it's pretty interesting how they're designing the drones now into a three-part platform. So there's the platform that flies around. There's the payload, which can be different sensors or whatever it's carrying, could be herbicide if it's an agricultural drone. And then they've opened up the SDKs, both on the control side as well as the mobile side, in terms of the controls. So it's a very interesting way that all these things now, via software, could tie together, but as you say, using machine learning you can train them to work together even better, quicker, faster. >> Right, I mean having a swarm or a cluster of these machines that work with each other, you could really do interesting things. >> Yeah, that's the whole next thing, right? Instead of one-to-one it's many-to-many. >> And then when swarms interact with other swarms, then I think that's really fascinating. >> So alright, is that what we're going to be talking about? So if we connect in 2018, what are we going to be talking about? The year's almost over.
What are your top priorities for next year? >> Our top priority is to see the FPGA play this important part. A GPU, for example, became a very big part of the super computing systems here at this conference. But the other side of heterogeneous is the FPGA, and the FPGA has seen just very minimal adoption so far. But the FPGA has the capability of providing, especially when it comes to doing network IO transactions, speeding up realtime interactions; it has an ability to change the world again for HPC. And so I'm expecting that in a couple years, at this HPC conference, what we'll be talking about for the biggest Top 500 supercomputers is how big of FPGAs they have, not how big of GPUs they have. >> All right, time will tell. Well, John, thanks for taking a few minutes out of your day and stopping by. >> Okay, thanks Jeff, great to talk to you. >> All right, he's John Lockwood, I'm Jeff Frick. You're watching theCUBE from Super Computing 2017. Thanks for watching. >> Bye. (electronic music)

Published Date : Nov 14 2017


Yaron Haviv, iguazio | BigData NYC 2017


 

>> Announcer: Live from midtown Manhattan, it's theCUBE, covering BigData New York City 2017, brought to you by SiliconANGLE Media and its ecosystem sponsors. >> Okay, welcome back everyone, we're live in New York City, this is theCUBE's coverage of BigData NYC, this is our own event, for five years now we've been running it, been at Hadoop World since 2010, it's our eighth year covering the Hadoop World, which has evolved into Strata Conference, Strata Hadoop, now called Strata Data, and of course it's bigger than just Strata, it's about big data in NYC, a lot of big players here inside theCUBE, thought leaders, entrepreneurs, and great guests. I'm John Furrier, the cohost this week with Jim Kobielus, who's the lead analyst on our BigData and our Wikibon team. Our next guest is Yaron Haviv, who's with iguazio, he's the founder and CTO, hot startup here at the show, making a lot of waves on their new platform. Welcome to theCUBE, good to see you again, congratulations. >> Yes, thanks, thanks very much. We're happy to be here again. >> You're known in theCUBE community as the guy on Twitter who's always pinging me and Dave and team, saying, "Hey, you know, you guys got to get that right." You really are one of the smartest guys on the network in our community, you're super-smart, your team has got great tech chops, and in the middle of all that is the hottest market, which is cloud native, cloud native as it relates to the integration of how apps are being built, and essentially new ways of engineering around these solutions, not just repackaging old stuff, it's really about putting things in a true cloud environment, with application development, with data at the center of it, you got a whole complex platform you've introduced. So really, really want to dig into this.
So before we get into some of my pointed questions, I know Jim's got a ton of questions, give us an update on what's going on, so you guys got some news here at the show, let's get to that first. >> So since the last time we spoke, we had tons of news. We're making revenues, we have customers, we've just recently GA'ed, we recently got significant investment from major investors, we raised about $33 million recently from companies like Verizon Ventures, Bosch, you know for IoT, Chicago Mercantile Exchange, which is Dow Jones and other properties, Dell EMC. So pretty broad. >> John: So customers, pretty much. >> Yeah, so that's the interesting thing. Usually you know investors are sort of strategic investors or partners or potential buyers, but here it's essentially our customers that it's so strategic to the business, we want to... >> Let's go with the GA of the projects, just get into what's shipping, what's available, what's the general availability, what are you now offering? >> So iguazio is trying to, you know, you alluded to cloud native and all that. Usually when you go to events like Strata and BigData it's nothing to do with cloud native, a lot of hard labor, not really continuous development and integration, it's continuous hard work. And essentially what we did, we created a data platform which is extremely fast and integrated, you know, has all the different forms of state, streaming and events and documents and tables and all that, in a very unique architecture, won't dive into that today. And on top of it we've integrated cloud services like Kubernetes and serverless functionality and others, so we can essentially create a hybrid cloud. So some of our customers, they even deploy portions as an opex-based setting in the cloud, and some portions at the edge or in the enterprise, deployed as software, or even a prepackaged appliance. So we're the only ones that provide a full hybrid experience.
>> John: Is this a SaaS product? >> So it's a software stack, and it could be delivered in three different options. One, if you don't want to mess with the hardware, you can just rent it, and it's deployed in an Equinix facility; we have very strong partnerships with them globally. If you want to have something on-prem, you can get a software reference architecture, you go and deploy it. If you're a telco or an IoT player that wants a manufacturing facility, we have a very small 2U box, four servers, four GPUs, all the analytics tech you could think of. You just put it in the factory instead of like two racks of Hadoop. >> So you're not general purpose, you're just whatever the customer wants to deploy the stack, their flexibility is on them. >> Yeah. Now it is an appliance >> You have a hosting solution? >> It is an appliance even when you deploy it on-prem, it's a bunch of Docker containers inside that you don't even touch, you don't SSH to the machine. You have APIs and you have UIs, and just like the cloud experience when you go to Amazon, you don't open the kimono, you know, you just use it. So our experience, that's what we're telling customers. No root access problems, no security problems. It's a hardened system. Give us servers, we'll deploy it, and you go through consoles and UIs. >> You don't host anything for anyone? >> We host for some customers, including >> So you do whatever the customer was interested in doing? >> Yes. (laughs) >> So you're flexible, okay. >> We just want to make money. >> You're pretty good, sticking to the product. So on the GA, so here essentially in the big data world you mentioned that there's data layers, like a data piece. So I got to ask you the question, so pretend I'm an idiot for a second, right. >> Yaron: Okay. >> Okay, yeah. >> No, you're a smart guy.
I love what you're doing, I assume you guys are super-smart, which I can say you are, but what's the problem you're solving, what's in it for me? >> Okay, so there are two problems. One is the challenge everyone wants to transform. You know there is this digital transformation mantra. And it means essentially two things. One is, I want to automate my operation environment so I can cut costs and be more competitive. The other one is I want to improve my customer engagement. You know, I want to do mobile apps which are smarter, you know get more direct content to the user, get more targeted functionality, et cetera. These are the two key challenges for every business, any industry, okay? So they go and they deploy Hadoop and Hive and all that stuff, and it takes them two years to productize it. And then they get to the data science bit. And by the time they finished they understand that this Hadoop thing can only do one thing. It's queries, and reporting and BI, and data warehousing. How do you do actionable insights from that stuff, okay? 'Cause actionable insights means I get information from the mobile app, and then I translate it into some action. I have to enrich the vectors, the machine learning, all that details. And then I need to respond. Hadoop doesn't know how to do it. So the first generation is people that pulled a lot of stuff into data lake, and started querying it and generating reports. And the boss said >> Low cost data link basically, was what you say. >> Yes, and the boss said, "Okay, what are we going to do with this report? "Is it generating any revenue to the business?" No. The only revenue generation if you take this data >> You're fired, exactly. >> No, not all fired, but now >> John: Look at the budget >> Now they're starting to buy our stuff. 
So now the point is okay, how can I put all this data in, and at the same time generate actions, and also deal with the production aspects of, I want to develop in a beta phase, I want to promote it into production. That's cloud native architectures, okay? Hadoop is not cloud. How do I take a Spark, Zeppelin, you know, a notebook, and turn it into production? There's no way to do that. >> By the way, depending on which cloud you go to, they have a different mechanism and elements for each cloud. >> Yeah, so the cloud providers do address that because they are selling the package, >> Expands all the clouds, yeah. >> Yeah, so cloud providers are starting to have their own offerings, which are all proprietary, around this is how you would, you know, forget about HDFS, we'll have S3, and we'll have Redshift for you, and we'll have Athena, and again you're starting to consume that as a service. Still doesn't address the continuous analytics challenge that people have. And if you're looking at what we've done with Grab, which is amazing, they started with using Amazon services, S3, Redshift, you know, Kinesis, all that stuff, and it took them about two hours to generate the insights. Now the problem is they want to do driver incentives in real time. So they want to incent the driver to go and make more rides or other things, so they have to analyze the event of the location of the driver, the event of the location of the customers, and just throw messages back based on analytics. So that's real time analytics, and that's not something that you can do >> They got to build that from scratch right away. I mean they can't do that with the existing. >> No, and Uber invested tons of energy around that, and they don't get the same functionality.
Another unique feature that we talk about in our PR >> This is for the use case you're talking about, this is the Grab, which is the car >> Grab is the number one ride-sharing in Asia, which is bigger than Uber in Asia, and they're using our platform. By the way, even Uber doesn't really use Hadoop, they use MemSQL for that stuff, so it's not really using open source and all that. But the point is, for example, with Uber, when they monetize the rides, they do it just based on demand, okay. And with Grab, now what they do, because of the capability that we can intersect tons of data in real time, they can also look at the weather, was there a terror attack or something like that. They don't want to raise the price >> A lot of other data points, could be traffic >> They don't want to raise the price if there was a problem, you know, and all the customers get aggravated. This is actually intersecting data in real time, and no one today can do that in real time beyond what we can do. >> A lot of people have semantic problems with real time, they don't even know what they mean by real time. >> Yaron: Yes. >> The data could be a week old, but they can get it to them in real time. >> But every decision, if you think, if you generalize around the problem, okay, and we have slides on that that I explain to customers: every time I run analytics, I need to look at four types of data. The context, the event, okay, what happened, okay. The second type of data is the previous state. Like I have a car, was it up or down, or what's the previous state of that element? The third element is the time aggregation, like, what happened in the last hour, the average temperature, the average, you know, ticker price for the stock, et cetera, okay? And the fourth thing is enriched data, like I have a car ID, but what's the make, what's the model, who's driving it right now. That's secondary data.
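The four kinds of data just listed, the event, the previous state, the time aggregation, and the enrichment, are what get stitched into one input for a decision. A minimal sketch of that stitching (all store names, fields, and values below are invented for illustration; they are not iguazio's APIs):

```python
from dataclasses import dataclass

# Illustrative only: four in-memory "stores" stand in for what would be
# four different platforms (event bus, state store, rolling aggregates,
# slow-changing reference data) in a stitched-together stack.
event = {"car_id": "car-17", "temp_c": 104.0}           # 1. the event (context)
state_store = {"car-17": {"status": "up"}}              # 2. previous state
window_store = {"car-17": {"avg_temp_1h": 96.5}}        # 3. time aggregation
reference = {"car-17": {"make": "Acme", "model": "X"}}  # 4. enrichment

@dataclass
class FeatureVector:
    car_id: str
    temp_c: float        # from the event itself
    prev_status: str     # from the state store
    avg_temp_1h: float   # from the rolling-window aggregate
    model: str           # from slow-changing reference data

def assemble(evt):
    """Stitch one feature vector from all four sources. When events live in
    Kafka, state in a key-value store, and history in Hadoop, each of these
    lookups is a hop to a different platform; a unified data platform
    collapses them into local reads."""
    cid = evt["car_id"]
    return FeatureVector(
        car_id=cid,
        temp_c=evt["temp_c"],
        prev_status=state_store[cid]["status"],
        avg_temp_1h=window_store[cid]["avg_temp_1h"],
        model=reference[cid]["model"],
    )

fv = assemble(event)
# A trivial "decision" over the assembled vector: flag a spike versus the
# one-hour average so an action (say, dispatch service) can be triggered.
print(fv.temp_c - fv.avg_temp_1h > 5.0)  # True
```

The point of the sketch is the shape of the problem, not the toy rule: one decision needs all four lookups completed within the latency budget of the incoming event.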
So every time I run a machine learning task or any decision, I have to collect all those four types of data into one vector, it's called a feature vector, and take a decision on that. You take Kafka, it's only the event part, okay; you take MemSQL, it's only the state part; you take Hadoop, it's only like historical stuff. How do you assemble and stitch a feature vector? >> Well you talked about a complex machine learning pipeline, so clearly, you're talking about a hybrid >> It's a prediction. And actions based on just dumb things, like the car broke and I need to send a garage, I don't need machine learning for that. >> So within your environment then, do you enable the machine learning models to execute across the different data platforms, of which this hybrid environment is composed, and then do you aggregate the results of those models' runs into some larger model that drives the real time decision? >> In our solution, everything is a document, so even a picture is a document, a lot of things. So you can essentially throw in a picture, run TensorFlow, embed more features into the document, and then query those features on another platform. So that's really what makes this continuous analytics extremely flexible, so that's what we give customers. The first thing is simplicity. They can now build applications, you know, we have a tier one now, automotive customer, CIO coming, meeting us. So you know when I have a project, one year, I need to have hired dozens of people, it's hugely complex, you know. Tell us what's the use case, and we'll build a prototype. >> John: All right, well I'm going to >> One week, we gave them a prototype, and he was amazed how in one week we created an application that analyzed all the streams from the data from the cars, did enrichment, did machine learning, and provided predictions. >> Well we're going to have to come in and test you on this, because I'm skeptical, but here's why. >> Everyone is.
>> We'll get to that, I mean I'm probably not skeptical, but I kind of am, because the history is pretty clear. If you look at some of the big ideas out there, like OpenStack. I mean that thing just morphed into a beast. Hadoop was a cost-of-ownership nightmare, as you mentioned early on. So people have been conceptually correct in what they were trying to do, but trying to get it done was always hard, and then it took a long time to kind of figure out the operational model. So how are you different, if I'm going to play the skeptic here? You know, I've heard this before. How are you different than, say, OpenStack or Hadoop clusters, 'cause that was a nightmare, cost of ownership, I couldn't get the type of value I needed, lost my budget. Why aren't you the same? >> Okay, that's interesting. I don't know if you know, but I ran a lot of development for OpenStack when I was at Mellanox, and Hadoop, so I patched a lot of those >> So do you agree with what I said? That that was a problem? >> They are extremely complex, yes. And I think, first, OpenStack tried to bite off too much, and it's sort of a huge tent, everyone tries to push his agenda. OpenStack is still an infrastructure layer, okay. And also Hadoop is sort of something in between an infrastructure and an application layer, but it was designed 10 years ago, where the problem that Hadoop tried to solve is how do you do web ranking, okay, on tons of batch data. And then the ecosystem evolved into real time, and streaming, and machine learning. >> A data warehousing alternative or whatever. >> So it doesn't fit the original model of batch processing, 'cause if an event comes from the car or an IoT device, and you have to do something with it, you need a table with an index. You can't just go and build a huge Parquet file. >> You know, you're talking about complexity >> John: That's why he's different. >> Go ahead.
>> So what we've done with our team, after knowing OpenStack and all those >> John: All the scar tissue. >> And all the scar tissue, and my role was also working with all the cloud service providers, so I know their internal architecture, and I worked on SAP HANA and Exadata and all those things, so we learned from the bad experiences, said let's forget about the lower layers, which is what OpenStack is trying to provide, provide you infrastructure as a service. Let's focus on the application, and build from the application all the way down to the flash, and the CPU instruction set, and the adapters, and the networking, okay. That's what's different. So what we provide is an application and service experience. We don't provide infrastructure. If you go buy VMware and Nutanix, all those offerings, you get infrastructure. Now you go and build, with dozens of DevOps guys, all the stack above. You go to Amazon, you get services. It's just that they're not the most optimized in terms of the implementation, because they also have dozens of independent projects, where each one takes a VM and starts writing some >> But they're still a good service, but you got to put it together. >> Yeah right. But also the way they implement, because in order for them to scale, they have a common layer, the foundation is VMs, and then they're starting to build up applications on top, so it's inefficient. And also a lot of it is built on a 10-year-old baseline architecture. We've designed it for a very modern architecture, it's all parallel CPUs with 30 cores, you know, flash and NVMe. And so we've avoided a lot of the hardware challenges, and serialization, and just provide an abstraction layer, pretty much like a cloud, on top. >> Now in terms of abstraction layers in the cloud, they're efficient, and provide a simplification experience for developers. Serverless computing is up and coming, it's an important approach, of course we have the public clouds from AWS and Google and IBM and Microsoft.
There is a growing range of serverless computing frameworks for prem-based deployment. I believe you are behind one. Can you talk about what you're doing at iguazio on serverless frameworks for on-prem or public? >> Yes. It's the first time I'm very active in CNCF, the Cloud Native Computing Foundation. I'm one of the authors of the serverless white paper, which tries to normalize the definitions of all the vendors and come up with a proposal for an interoperable standard. So I spent a lot of energy on that, 'cause we don't want to lock customers to an API. What's unique, by the way, about our solution, is we don't have a single proprietary API. We just emulate all the other guys' stuff. We have all the Amazon APIs for data services, like Kinesis, Dynamo, S3, et cetera. We have the open source APIs, like Kafka. So also on the serverless side, my agenda is to promote that if I'm writing to Azure or AWS or iguazio, I don't need to change my app. I can use any developer tools. So that's my effort there. And recently, a few weeks ago, we launched our open source project, which is a sort of second generation of something we had before, called Nuclio. It's designed for real time >> John: How do you spell that? >> N-U-C-L-I-O. I even have the logo >> He's got a nice slick here. >> It's really fast because it's >> John: Nuclio, so that's open source that you guys just sponsor and it's all code out in the open? >> All the code is in the open, pretty cool, has a lot of innovative ideas on how to do stream processing, 'cause the original serverless functionality was designed around web hooks and HTTP, and even many of the open source projects are really designed around HTTP serving. >> I have a question.
I'm doing research for Wikibon on the area of serverless, in fact we've recently published a report on serverless, and in terms of hybrid cloud environments, I'm not seeing yet any hybrid serverless clouds that involve public, you know, serverless like AWS Lambda, and private on-prem deployment of serverless. Do you have any customers who are doing that, or interested in hybridizing serverless across public and private? >> Of course, and we have some patents I don't want to go into, but the general idea is, what we've done in Nuclio is also the decoupling of the data from the computation, which means that things can sort of be disjoined. You can run a function in a Raspberry Pi, and the data will be in a different place, and those things can sort of move, okay. >> So the persistence has to happen outside the serverless environment, like in the application itself? >> Outside of the function; the function accesses the persistence layer through APIs, okay. And how this data persistence is materialized, that's a separate thing. So you can actually write the same function that will run against Kafka or Kinesis or a private MQ, or HTTP, without modifying the function, and ad hoc, through what we call function bindings, you define what's going to be the thing driving the data, or storing the data. So you can actually write the same function that does an ETL job from table one to table two. You don't need to put the table information in the function, which is not the thing that Lambda does. And it's about a hundred times faster than Lambda, we do 400,000 events per second in Nuclio. So if you write your serverless code in Nuclio, it's faster than writing it yourself, because of all those low-level optimizations. >> Yaron, thanks for coming on theCUBE. We want to do a deeper dive, love to have you out in Palo Alto next time you're in town. Let us know when you're in Silicon Valley for sure, we'll make sure we get you on camera for multiple sessions.
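The function-binding idea described here — one handler body, with the trigger (Kafka, Kinesis, HTTP) declared in configuration rather than in the code — follows Nuclio's documented `handler(context, event)` convention for Python. A rough sketch; the `Event` stub and the temperature rule are illustrative assumptions so the example runs outside a real Nuclio deployment:

```python
class Event:
    """Stub of the event object a Nuclio trigger would deliver (illustrative)."""
    def __init__(self, body):
        self.body = body

def handler(context, event):
    # The same function body regardless of which trigger feeds it;
    # the binding to Kafka, Kinesis, or HTTP lives in configuration,
    # not in the code. The alert rule below is a made-up example.
    record = event.body
    return {"car_id": record["car_id"], "alert": record["temp"] > 90}

print(handler(None, Event({"car_id": "car-7", "temp": 95})))
# {'car_id': 'car-7', 'alert': True}
```

Because the handler never names its data source, the same code can be redeployed against a different trigger by changing configuration alone, which is the portability point the speaker is making.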
>> And more information at re:Invent. >> Go to re:Invent. We're looking forward to seeing you there. Love the continuous analytics message, I think continuous integration is going through a massive renaissance right now, you're starting to see new approaches, and I think the things that you're doing are exactly along the lines of what the world wants, which is alternatives, innovation, and thanks for sharing on theCUBE. >> Great. >> That's great. >> This is theCUBE coverage of the hot startups here at BigData NYC, live coverage from New York, after this short break. I'm John Furrier, with Jim Kobielus, after this short break.

Published Date : Sep 27 2017


Paul Hodge, Honeywell Process Solutions | VMworld 2017


 

>> Narrator: Live from Las Vegas, it's theCUBE. Covering VMworld 2017. Brought to you by VMware and its ecosystem partners. >> Hi, I'm Stu Miniman with my co-host Keith Townsend. Happy to welcome to the program first-time guest Paul Hodge, who's the global marketing manager of Honeywell Process Solutions, thanks so much for joining us. >> Thank you Stu. >> Alright, so Paul, you have to tell us, so Honeywell's the company, I think many people are aware. The process solutions; maybe you can tell us a little bit about that part of the organization and what your role there is. >> Sure, sure. So yeah, Honeywell, multi-national conglomerate, a hundred thirty thousand people, forty billion dollars. Honeywell Process Solutions is then a subdivision within Honeywell that serves the manufacturing industry. So we go through and provide goods and services that allow people to go through and automate their plants, for things like pharmaceuticals or refining, and those types of things. >> And your role here coming to the show, you're actually a partner of VMware. >> Yes we are. >> Sometimes we've got tons of practitioners here, so tell us a little bit, you know, manufacturing, I think I know a few places where that makes a lot of sense for VMware, but tell us a little bit about the history of the partnership and your role there. >> Sure, so we've been partners with VMware since 2010. So it's been a long, long time partnership. And we've been bringing virtualization into the manufacturing industry, because we're typically quite conservative as an industry, in terms of adopting technology, so it really takes an automation leader like Honeywell to go through and drive a new technology into the industry. So we've been doing that since, yeah, 2010.
And yeah, this week we've been going through and talking about our new HCI, hyperconverged infrastructure, sort of solution that we've been doing with VMware, and along with Dell EMC, that goes through and takes that a step further into our industry. >> Wow, so that's pretty interesting, I've worked in a pharmaceuticals manufacturing organization, and automation of IT is pretty difficult because of regulatory issues, et cetera, safety. What are some of the challenges that Honeywell is addressing in automation, and specifically around VMware products? >> Sure, sure. I think the number one thing for our industry is purely simplicity. The people in our industry, they're not IT geeks, they don't have all of this knowledge, they don't have a storage administrator out there, so we have to go through and do all of that for them and take all of the complexity out of the product. So it needs to be simple, but it just needs to be reliable as well. I mean, we're dealing with your refineries and pharmaceutical plants and things like that, so things just cannot stop. So you need simplicity with the reliability and availability, and have both of them in sort of a package that's ready to go. And the other complexity is that we need to be able to deliver this anywhere around the world, and that's the other reason why it needs to be simple, because it's not just going to North America, it's going to Europe, it's going to the Middle East, it's going to all different places. >> All right, well you say simplicity, and any time we've been talking about hyperconverged infrastructure, simplicity's usually at the top of the list. >> Paul: Absolutely, it's one of the big benefits. >> It seems like a natural fit there. Maybe, what is the solution, what's it made up of, you said Dell EMC is part of it, of course VMware is part of it, how's it different from, say, the VxRail that Dell's been offering, you know, vSAN with its ten thousand customers.
What differentiates this compared to everything else that's available? >> Sure, so we're taking vSAN, which is absolutely, as you were saying, ten thousand customers out there, very mature, very reliable, and we're taking it and sort of marrying it with the Dell EMC FX2 solution, which is an extremely powerful platform and flexible platform for going through and running, sort of, vSAN on top of it. So we've taken those two best-in-breed products and we've gone through and built a reference configuration that's customized and optimized for the manufacturing industry. >> Yeah it's interesting. Keith, I remember when the FX2 launched, everybody was like, wait is this an HCI solution? Will it be there, will this be a platform for it? I don't know, is there anybody else leveraging that for this type of solution yet? >> vSAN is a very very popular sort of platform for, >> Stu: On the FX2. >> Yes, sorry, FX2, thank you. >> Yeah, I know vSAN is, but the marrying of those two together, is that a standard offering that was out there, or is that something that you've optimized? >> Certainly, I think there might be a vSAN ReadyNode version of that as well, but the reason why it's quite popular is because I can go through and have four vSAN nodes in the one FX2. So I can have a vSAN in a box with the FX2 solution, which makes it quite a nice fit. But really, the hardware platform aside, the value that Honeywell's providing is really the integration of those products. Building a reference design that's optimized for our industry, testing out all of the stack, and delivering that to market. >> So talking about building up a stack specifically for manufacturing, can you talk about, who's the end customer? Who's actually buying the solution? You say they may not have a storage administrator. Are you guys selling to IT, or manufacturing operations?
>> Mainly to the manufacturing part of the business, which is why it needs to be so simple, because those IT resources that you would normally have on the IT side of the business, they're just not there. And so we go through and sell to our customers there, refiners, pharmaceutical plants, and things like that. And typically Honeywell is the one that's then engineering the overall solution to solve the manufacturing problems, so we deliver it to our own engineers, and our own engineers then customize that to go through and solve a manufacturing problem. But in our industry there's typically quite a big separation between the IT part of the organization and the OT, if you will, part, which is why the simplicity is such a big part of what we do. >> Paul, can you expand on that at all? Something we, from the research side, have been looking at, kind of the IT, OT, what you're hearing from customers. >> Sure. So I think the main reason, traditionally, why that separation has been there is just, on the OT side, there's a very, very different need in terms of reliability and availability and criticality, and what happens if certain things just go wrong. And traditionally, those skills have been in a separate part of the organization to the IT part of the organization. So in some companies, those two worlds are absolutely converging, and IoT is certainly a big thing that is driving that convergence. But in other organizations, they are still remaining separate; it's just the cultural way that a company has gone through and run itself. So I think, whether they are merging those worlds, or whether they're staying separate, really changes on a corporation-by-corporation basis. >> Paul, let's talk a little bit more about that OT customer. One of the things that's been my experience, you walk into a manufacturing floor, you see a system there that's 15 years old, easy.
This is a tool, and that tool is just there to do a function within the manufacturing process, but with all of the malware and encryption attacks, and the prospect of shutting down an entire operation, how are you helping OT get to a point where they accept, I guess, the flexibility that's needed in an operation to support something like an FX2 on their data center floor, running vSAN? >> Sure, well, first of all, I think it is that simplicity. I mean if it's too complicated, then it just will not be accepted by somebody like that. So that simplicity, and the reliability and availability I've already spoken about there, but I think Honeywell is there as well, helping that OT side of the business to be able to go through and deploy a system of that level of complexity, because as you're saying there, it is very different in terms of the 15-year-old thing that they might be upgrading from. But it delivers just so many benefits for them. I mean going from 100 servers, which is what, say, some plants typically might go through and have, down to maybe two FX2-based clusters of our systems there, it's just a massive reduction in terms of hardware, and with each piece of hardware we remove, that's space and power and cooling and maintenance and everything like that goes away with it. >> Alright Paul, talking about different architectures, you mentioned IoT, so I have to imagine that's having a significant impact on your industry. >> It is. >> Walk us through that. What are you seeing, what is Honeywell's role there, what are your customers doing? >> It's actually an interesting area for Honeywell Process Solutions, because first of all, we've been doing IoT for 30 years, really, at Honeywell Process Solutions. >> You were the hipster IoT company. >> Really, oh that's good, that's a compliment. >> Is that what you were saying, you were IoT before it was cool, is what I understand.
(laughter) >> We've been out there doing it, I mean, our job in Honeywell Process Solutions is to take field data, marry it with an edge device, create value there, and sometimes just make that available on the internet. But getting back to your question, though, the IoT way of life is changing how we go through and do things. First of all, there's data that's out there that previously, people wouldn't have considered valuable, okay, so they're trying to extract that data, so I guess there's a wave, if you will, of people trying to get that previously non-valuable data out of the field, so that's one part there. Sometimes as well, projects are very, very geographically dispersed. Traditionally you would have had, like, a plant infrastructure, and it would've been in a self-contained area, but now it can be over a very wide geographical area. So you've got to have a controller, which potentially is on the internet, and have that be highly secure all the way back to, then, the sources that need to go through and consume that. So that's a difference in how it's going through and impacting us. But I think as well, there's, I guess, a building awareness out there in the market of trying to go through and extract more information and more intelligence out of the data that people are already getting, and driving new waves there as well. >> What are some of the lessons you're helping customers, especially OT, understand when they move from this isolated manufacturing network to this distributed network that they're extracting value from, but that's also exposing security risks, and just, you know, control risks? They're not used to operating at this multi-manufacturing-facility scale, from an IT perspective. They're in essence becoming IT. What are some of the pitfalls you're helping them to avoid? >> Sure, well I think security is a great one that you've just gone through and mentioned there.
Any data that's running the plant, first and foremost, needs to be secure in terms of going through and doing that. So I think that's one of the first things, is how you go through and design that system and make it secure. And so I think that's one of the areas there. But also, to extract data and value out of it requires infrastructure to be able to store the data, to be able to go through and allow third parties to do analytics and other types of things on top of that infrastructure. So Honeywell's doing a lot to provide that back-end infrastructure that people can go through and do data mining and do analytics on, and solve those new problems on that infrastructure. >> So, this power of the FX2, the Dell EMC reference architecture, the VMware vSAN, gives an awful lot of compute. I think competitors like AWS will come in and say, you know what, AWS Snowball Edge is designed for this big data use case, where we can ingest IoT data at the edge, do some light processing on it. What are you finding from a practical perspective, where you're seeing users say, you know, that just isn't enough, we need this power of the FX2, this Dell EMC reference architecture, and vSAN? >> Sure, sure. So I think it's serving a different market segment. So absolutely there's a market segment out there that says, I'm prepared to take my data and put it into an edge device and send it to a cloud, into AWS or wherever, okay, and that's absolutely a market segment that's out there. But there's another segment of the market, and it is quite large for manufacturing, that says, no, the data that I'm ingesting needs to stay within my corporate control, within the boundaries of the corporation, okay. And it's those types of customers there that need that on-premise compute capacity to be able to ingest that data, to be able to display it to operators, to be able to go through and solve other problems with that data; it needs to be local.
And that could just be because they don't trust it, because, remember, we lag in terms of our adoption. We're lagging as an industry in general. So I think it's a lot of those types of reasons, yeah. >> So I'm kind of curious about a practical thought process. Again, these OT folks don't look at their traditional 100 racks and say, we need to do something with this. We need to change it. If it works, why change it? Especially in manufacturing. What's the catalyst for change? >> Sure, absolutely. Well, I think in a lot of these industries, they're losing the people that run those types of plants. So I think the first catalyst for change is, I had all of this equipment that was taking all of these people to go through and maintain, and I just don't have those people anymore. I need to do more with less. So by removing those pieces of equipment, I make myself more efficient. Not only in terms of the maintenance of those pieces of equipment, but there are always ongoing changes that need to be made to these environments as well, so you need to be able to go through and deploy new virtual machines, you know, a far more agile environment. And when you're dealing with, say, physical pieces of equipment, if I wanted to deploy a new node, I would need to order that node from a supplier. I would then need to go through and commission it, install the software, and rack it, and then do all that, I mean that's months and months and months of work and effort. With a virtual machine, I just go through and deploy it, and I'm done. Yeah. >> Paul, just want to get your final take. VMworld, the show itself, kind of the experience as a partner, what's your takeaway from that?
It's been fantastic. For me, VMworld is always about the relationships and the conversations that go through and take place, whether that be with partners like VMware, or whether it be with other suppliers that I go through and do business with, and everyone's here. It's just one of the events where you're only limited by your ability to get your calendar organized and see all of the people that you want to see. That's the only limit of what you can achieve here. But it's just been a fantastic event this year, and Honeywell's been glad to be here. >> Paul Hodge, Honeywell Process Solutions. Really appreciate you joining us, and absolutely agree a thousand percent, I'm sure Keith would attest to this also, if you're not at this show, you need to be here next year. If you're at the European one, you should go over there. So many conversations; we're happy to bring you a number of them, give you just a taste or a flavor of what's been happening at VMworld 2017. Thank you for joining us for three days of programming. We're going to be wrapping up shortly, but all of it goes on the website, so check it all out. Thank you so much for watching theCUBE. (electronic music)

Published Date : Aug 30 2017
