
Search Results for Omer Asad:

Sheila Rohra & Omer Asad, HPE Storage | HPE Discover 2022


 

>> Announcer: "theCUBE" presents HPE Discover 2022. Brought to you by HPE. >> Welcome back to HPE Discover 2022. You're watching "theCUBE's" coverage. This is Day 2, Dave Vellante with John Furrier. Sheila Rohra is here. She's the Senior Vice President and GM of the Data Infrastructure Business at Hewlett Packard Enterprise, and of course, the storage division. And Omer Asad. Welcome back to "theCUBE", Omer. Senior Vice President and General Manager for Cloud Data Services, Hewlett Packard Enterprise storage. Guys, thanks for coming on. Good to see you. >> Thank you. Always a pleasure, man. >> Thank you. >> So Sheila, I'll start with you. Explain the difference. The Data Infrastructure Business and then Omer's Cloud Data Services. You first. >> Okay. So Data Infrastructure Business. So I'm responsible for the primary secondary storage. Basically, what you physically store, the data in a box, I actually own that. So I'm going to have Omer explain his business because he can explain it better than me. (laughing) Go ahead. >> So 100% right. So first, data infrastructure platforms, primary secondary storage. And then what I do from a cloud perspective is wrap up those things into offerings, block storage offerings, data protection offerings, and then put them on top of the GreenLake platform, which is the platform that Antonio and Fidelma talked about on main Keynote stage yesterday. That includes multi-tenancy, customer subscription management, sign on management, and then on top of that we build services. Services are cloud-like services, storage services or block service, data protection service, disaster recovery services. Those services are then launched on top of the platform. Some services like data protection services are software only. Some services are software plus hardware. And the hardware on the platform comes along from the primary storage business and we run the control plane for that block service on the GreenLake platform and that's the cloud service. 
>> So, I just want to clarify. So what we maybe used to know as 3PAR and Nimble and StoreOnce. Those are the products that you're responsible for? >> That is the primary storage part, right? And just to kind of show that, he and I, we do indeed work together. Right. So if you think about the 3PAR, the primary... Sorry, the Primera, the Alletras, the Nimble, right? All that, right? That's the technology that, you know, my team builds. And what Omer does with his magic is that he turns it into HPE GreenLake for storage, right? And to deliver as a service, right? And basically to create a self-service agility for the customer and also to get a very Cloud operational experience for them. >> So if I'm a customer, just so I get this right, if I'm a customer and I want Hybrid, that's what you're delivering as a Cloud service? >> Yes. >> And I don't care where the data is on-premises, in storage, or on Cloud. >> 100%. >> Is that right? >> So the way that would work is, as a customer, you would come along with the partner, because we're 100% partner-led. You'll come to the GreenLake Console. On the GreenLake Console, you will pick one of our services. Could be a data protection service, could be the block storage service. All services are hybrid in nature. Public Cloud is 100% participant in the ecosystem. You'll choose a service. Once you choose a service, you like the rate card for that service. That rate card is just like a hyperscaler rate card. IOPS, Commitment, MINCOMMIT's, whatever. Once you procure that at the price that you like with a partner, you buy the subscription. Then you go to console.greenLake.com, activate your subscription. Once the subscription is activated, if it's a service like block storage, which we talked about yesterday, service will be activated, and our supply chain will send you our platform gear, and that will get activated in your site. 
Two things, network cable, power cable, dial into the cloud, service gets activated, and you have a cloud control plane. The key difference to remember is that it is cloud-consumption model and cloud-operation model built in together. It is not your traditional as a service, which is just like hardware leasing. >> Yeah, yeah, yeah. >> That's a thing of the past. >> But this answers a question that I had, is how do you transfer or transform from a company that is, you know, selling boxes, of course, most of you are engineers are software engineers, I get that, to one that is selling services. And it sounds like the answer is you've organized, I know it's inside baseball here, but you organize so that you still have, you can build best of breed products and then you can package them into services. >> Omer: 100%. 100%. >> It's separate but complementary organization. >> So the simplest way to look at it would be, we have a platform side at the house that builds the persistence layers, the innovation, the file systems, the speeds and feeds, and then building on top of that, really, really resilient storage services. Then how the customer consumes those storage services, we've got tremendous feedback from our customers, is that the cloud-operational model has won. It's just a very, very simple way to operate it, right? So from a customer's perspective, we have completely abstracted away out hardware, which is in the back. It could be at their own data center, it could be at an MSP, or they could be using a public cloud region. But from an operational perspective, the customer gets a single pane of glass through our service console, whether they're operating stuff on-prem, or they're operating stuff in the public cloud. >> So they get storage no matter what? They want it in the cloud, they got it that way, and if they want it as a service, it just gets shipped. >> 100%. >> They plug it in and it auto configures. >> Omer: It's ready to go. >> That's right. 
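The consumption model Omer walks through, a hyperscaler-style rate card with IOPS tiers and a minimum commitment, can be sketched roughly as follows. This is an illustrative sketch only: the tier names, prices, and the `MIN_COMMIT_GIB` floor are invented for the example and are not actual HPE GreenLake pricing.

```python
# Hypothetical rate card for a block storage subscription: cost scales
# with committed capacity and an IOPS tier, with a minimum commitment
# floor (the "MINCOMMIT" Omer mentions). All tiers and prices are
# invented for illustration.

RATE_CARD = {
    "standard": 0.08,     # dollars per GiB per month (invented)
    "performance": 0.15,  # dollars per GiB per month (invented)
}

MIN_COMMIT_GIB = 1024  # billing floor: you pay for at least this much


def monthly_charge(provisioned_gib: int, tier: str = "standard") -> float:
    """Return the monthly charge, honoring the minimum commitment."""
    billable = max(provisioned_gib, MIN_COMMIT_GIB)
    return round(billable * RATE_CARD[tier], 2)
```

Once the subscription is bought at a rate like this, activation and the control plane come from the console; the rate card itself is the only thing the customer negotiates.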
And the key thing is simplicity. We want to take the headache away from our customers, we want our customers to focus on their business outcomes, and their projects, and we're simplifying it through analytics and through this unified cloud platform, right? On like how their data is managed, how they're stored, how they're secured, that's all taken care of in this operational model. >> Okay, so I have a question. So just now the edge, like take me through this. Say I'm a customer, okay I got the data saved on-premise action, cloud, love that. Great, sir. That's a value proposition. Come to HPE because we provide this easily. Yeah. But now at the edge, I want to deploy it out to some edge node. Could be a tower with Telecom, 5G or whatever, I want to box this out there, I want storage. What happens there? Just ship it out there and connects up? Does it work the same way? >> 100%. So from our infrastructure team, you'll consume one or two platforms. You'll consume either the Hyperconverged form factor, SimpliVity, or you might convert, the Converged form factor, which is proliant servers powered by Alletras. Alletra 6Ks. Either of those... But it's very different the way you would procure it. What you would procure from us is an edge service. That edge service will come configured with certain amount of compute, certain amount of storage, and a certain amount of data protection. Once you buy that on a dollars per gig per month basis, whichever rate card you prefer, storage rate card or a VMware rate card, that's all you buy. From that point on, the platform team automatically configures the back-end hardware from that attribute-based ordering and that is shipped out to your edge. Dial in the network cable, dial in the power cable, GreenLake cloud discovers it, and then you start running the- >> Self-service, configure it, it just shows up, plug it in, done. >> Omer: Self-service but partner-led. >> Yeah. >> Because we have preferred pricing for our partners. 
Our partners would come in, they will configure the subscriptions, and then we activate those customers, and then send out the hardware. So it's like a hyperscaler on-prem at-scale kind of a model. >> Yeah, I like it a lot. >> So you guys are in the data business. You run the data portion of Hewlett Packard Enterprise. I used to call it storage, even if we still call it storage but really, it's evolving into data. So what's your vision for the data business and your customer's data vision, if you will? How are you supporting that? >> Well, I want to kick it off, and then I'm going to have my friend, Omer, chime in. But the key thing is that what the first step is is that we have to create a unified platform, and in this case we're creating a unified cloud platform, right? Where there's a single pane of glass to manage all that data, right? And also leveraging lots of analytics and telemetry data that actually comes from our infosite, right? We use all that, we make it easy for the customer, and all they have to say, and they're basically given the answers to the test. "Hey, you know, you may want to increase your capacity. You may want to tweak your performance here." And all the customers are like, "Yes. No. Yes, no." Basically it, right? Accept and not accept, right? That's actually the easiest way. And again, as I said earlier, this frees up the bandwidth for the IT teams so then they actually focus more on the business side of the house, rather than figuring out how to actually manage every single step of the way of the data. >> Got it. >> So it's exactly what Sheila described, right? The way this strategy manifests itself across an operational roadmap for us is the ability to change from a storage vendor to a data services vendor, right? >> Sheila: Right. 
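The attribute-based ordering flow described above, where the customer buys an edge service by its attributes and the platform team maps those attributes to back-end hardware, might look conceptually like this. The thresholds and form-factor labels are assumptions for illustration, not the real configuration logic.

```python
# Illustrative sketch of attribute-based ordering: the order captures
# service attributes (compute, storage), and the platform resolves them
# to a shippable form factor. Thresholds are invented for the example.

def pick_edge_platform(vcpus: int, storage_tib: int) -> str:
    """Map ordered service attributes to a back-end configuration."""
    # Small-footprint edge intents land on the hyperconverged form
    # factor; larger ones get the converged stack the interview names.
    if vcpus <= 32 and storage_tib <= 20:
        return "hyperconverged (SimpliVity)"
    return "converged (ProLiant + Alletra 6K)"
```

The point of the sketch is that the customer never specifies hardware; the attributes alone drive what gets shipped.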
>> And then once we start monetizing these data services to our customers through the GreenLake platform, which gives us cloud consumption model and a cloud operational model, and then certain data services come with the platform layer, certain data services are software only. But all the services, all the data services that we provide are hybrid in nature, where we say, when you provision storage, you could provision it on-prem, or you can provision it in a hyperscaler environment. The challenge that most of our customers have come back and told us, is like, data center control planes are getting fragmented. On-premises, I mean there's no secrecy about it, right? VMware is the predominant hypervisor, and as a result of that, vCenter is the predominant configuration layer. Then there is the public cloud side, which is through either Azure, or GCP, or AWS, being one of the largest ones out there. But when the customer is dealing with data assets, the persistence layer could be anywhere, it could be in an AWS region, it could be your own data center, or it could be your MSP. But what this does is it creates an immense amount of fragmentation in the context in which the customers understand the data. Essentially, John, the customers are just trying to answer three questions: What is it that I store? How much of it do I store? Should I even be storing it in the first place? And surprisingly, those three questions just haven't been answered. And we've gotten more and more fragmented. So what we are trying to produce for our customers, is a context-aware data view, which allows the customer to understand structured and unstructured data, and the lineage of how it is stored in the organization. And essentially, the vision is around simplification and context-aware data management. 
One of the key things that makes that possible is, again, the age-old InfoSight capability that we have continued to hone and develop over time, which is now up to the stage of like 12 trillion data points coming into the system that are correlated to give that context back. 
>> So you can manage on-prem and AWS, EBS assets as well. >> And Sheila, do you still make product announcements, or does Antonio not allow that? (Omer laughing) >> Well, we make product announcements, and you're going to see our product announcements actually done through the HPE GreenLake for block storage. >> Dave: Oh, okay. >> So our announcements will be coming through that, because we do want to make it as a service. Again, we want to take all of that headache of "What configuration should I buy? How do I actually deploy it? How do I...?" We really want to take that headache away. So you're going to see more feature announcements that's going to come through this. >> So feature acceleration through GreenLake will be exposed? >> Absolutely. >> This is some cool stuff going on behind the scenes. >> Oh, there's a lot good stuff. >> Hardware still matters, you know. >> Hardware still matters. >> Does it still matter? Does hardware matter? >> Hardware still matters, but what matters more is the experience, and that's actually what we want to bring to the customer. (laughing) >> John: That's good. >> Good answer. >> Omer: 100%. (laughing) >> Guys, thanks so much- >> John: Hardware matters. >> For coming on "theCUBE". Good to see you again. >> John: We got it. >> Thanks. >> And hope the experience was good for you Sheila. >> I know, I know. Thank you. >> Omer: Pleasure as always. >> All right, keep it right there. Dave Vellante and John Furrier will be back from HPE Discover 2022. You're watching "theCUBE". (soft music)
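The "single catalog, single backup policy" idea from the backup and recovery announcement above can be sketched as one policy object applied uniformly to on-prem VMware and AWS assets. The class and field names here are hypothetical, not the actual GreenLake data protection API.

```python
# Minimal sketch of a unified protection policy: the same policy and
# catalog entry shape covers a vCenter VM, an EC2 instance, and an EBS
# volume, so there is no per-location fragmentation. All names are
# illustrative assumptions.

from dataclasses import dataclass, field


@dataclass
class ProtectionPolicy:
    name: str
    frequency_hours: int
    retention_days: int
    protected: list = field(default_factory=list)

    def protect(self, asset_type: str, asset_id: str) -> None:
        # One catalog, one entry shape, regardless of asset location.
        self.protected.append({"type": asset_type, "id": asset_id})


policy = ProtectionPolicy("gold", frequency_hours=4, retention_days=30)
policy.protect("vmware-vm", "vm-0042")    # on-prem, via vCenter
policy.protect("aws-ec2", "i-0abc123")    # public cloud instance
policy.protect("aws-ebs", "vol-0def456")  # public cloud volume
```

One policy, three asset locations, one catalog: that is the fragmentation-free framework the announcement describes.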

Published Date : Jun 29 2022



Omer Asad & Sandeep Singh, HPE | HPE Discover 2021


 

>>Welcome back to HPE Discover 2021, the virtual edition. My name is Dave Vellante and you're watching theCUBE. We're here with Omer Asad, who is the Vice President and GM of HPE's HCI, primary storage, and data management business. And Sandeep Singh, the Vice President of Marketing for HPE's storage division. Welcome, gents. Great to see you. >>Great to be here, Dave. >>It's a pleasure to be here today. >>Hey, so last month you guys made a big announcement, and now you're shining the spotlight on that here at Discover. Sandeep, maybe you can give us a quick recap. What do we need to know? >>Yeah, Dave. We announced that we're expanding HPE GreenLake by transforming HPE Storage to a cloud-native, software-defined data services business. We unveiled a new vision for data that accelerates data-driven transformation for our customers, and we introduced the data services platform that consists of two game-changing innovations. Our first announcement was Data Services Cloud Console. It's a SaaS-based console that delivers cloud operational agility, and it's designed to unify data operations through a suite of cloud data services. Our second announcement is HPE Alletra. It's cloud-native data infrastructure to power your data, edge to cloud. And it's managed natively with Data Services Cloud Console to bring that cloud operational model to our customers wherever their data lives. Together with the data services platform, HPE GreenLake brings that cloud experience to our customers' data across edge and on-premises environments, and lays the foundation for our customers to shift from managing storage to managing data. >>Well, I think it lays the foundation for the next decade. You know, when we entered this past decade, we used terms like software-led, and that sort of morphed into the software-defined data center, containers with Kubernetes. Let's zoom out for a minute. 
Omer, if we can, maybe you could describe the problems that you're trying to address with this announcement. >>Thanks, Dave. It's always a pleasure talking to you on these topics. So in my role as general manager for primary storage, I speak with hundreds of customers across the board, and I consistently hear that data is at the heart of what our customers are doing, and they're looking for a data-driven, transformative approach to their business. But as they engage on these things, there are two challenges that they consistently face. The first one is that managing storage at scale is rife with complexity. So while storage has gotten faster in the last 20 years, managing a single array, or maybe two or three arrays, has gotten simpler over time. But managing storage at scale, when you deploy fleets of storage as customers continue to gather, store, and lifecycle their data, that process is extremely frustrating for customers. IT administrators are still firefighting; they're unable to innovate for their business because now data spans all the way from edge to core to cloud. And then with the advent of public cloud, there's another dimension of multi-cloud that has been added to their data sprawl. And then secondly, what we consistently hear is that IT administrators need to shift from managing storage to managing data. What this basically means is that IT has a desire to mobilize, protect, and provision data seamlessly across its lifecycle and across the locations where it is stored. This ensures that IT leaders, and also people within the organization, understand the context of the data that they store and operate upon. Yet data management is an extremely big challenge, and it is a web of fragmented data silos across processes and infrastructure, all the way from test and dev, to administration, to production, to backup, to lifecycle data management. 
And so up till now, data management was tied up with storage management, and this needs to change for our customers, especially with the diversity of application workloads as they're growing and as customers are expanding their footprint across a multi-cloud environment. >>Just to add to Omer's response there: we recently conducted a survey that was actually done by ESG, and that was a survey of IT decision makers. And it's interesting what it showcased. 93% of the respondents indicated that storage and data management complexity is impeding their digital transformation. 95% of the respondents indicated that solving storage and data management complexity is a top-10 business initiative for them. And 94% want to bring the cloud experience on premises. >>You know, I'll chime in. I think as you guys move to the software world and container world, there's an affinity to developers. Omer, you talked about things like data protection, and we talk about security being bolted on all the time. Now it's designed in; it's done at the point of creation, not as an afterthought, and that's a big change that we see coming. Let's talk about what also needs to change as customers make the move from this idea of managing storage to managing data. Omer, maybe you can take that one. >>That's a very interesting problem, right? What are the things that have to be true in order for us to move into this new data management model? So, Dave, one of the things that the public cloud got right is the cloud operational model, which sets the standard for agility and a fast pace for our customers. In a classic IT on-prem model, if you ever wanted to stand up an application, or if you were thinking about standing up a particular workload, you're going to file a series of IT
tickets. And then you are at the mercy of whatever complex processes exist within the organization, and depending on what the level of approvals are within a particular organization, standing up a workload can take days, weeks, or even months in certain cases. So what cloud did was unlock that level of simplicity for someone that wanted to instantiate an app. This means that the provisioning of the underlying infrastructure that makes that workload possible needs to be reduced to minutes from days and weeks. So what we are intending to do over here is to bring the best of both worlds together, so that the cloud experience can be experienced everywhere with ease and simplicity, and the customers don't need to change their operating model. So it's blending the two together. And that's what we are trying to usher in, into this new era where we start to differentiate between data management and storage management as two independent things. >>Great, thank you for that, Omer. Sandeep, I wonder if you could share with the audience the vision that you guys unveiled. What does it look like? How are you making it actually substantive and real? >>Yeah, Dave, that's a great question. Across the board, it's time to reimagine data management. Everything that Omer shared, those challenges are leading to customers needing to break down the silos and complexity that plague these distributed data environments. And our vision is to deliver a new data experience that helps customers unleash the power of data. We call this vision Unified DataOps. Unified DataOps integrates data-centric policies to streamline data management, cloud-native control to bring the cloud operational model to where customers' data lives, and AI-driven insights to make the infrastructure invisible. It delivers a new data experience to simplify and bring that agility of cloud to data infrastructure, streamline data management, and help customers innovate faster than ever before. 
We're making the promise of Unified DataOps real by transforming HPE Storage to a cloud-native, software-defined data services business and introducing a data services platform that expands HPE GreenLake. >>I mean, you talk about the complexity. I look at it as you almost embracing the complexity, saying, look, it's going to keep getting more complex as the cloud expands to the edge, on-prem, cross-cloud; it gets more complex underneath. What you're doing is you're almost embracing that complexity, putting a layer over it, and hiding that complexity from the end customer, so they can spend their time doing other things. Omer, I wonder if you can talk a little bit more about the Data Services Cloud Console. Is it sort of another software layer to manage infrastructure? What exactly is it? >>It's a lot more than that, Dave, and you're 100% right: we're attempting in this release to attack that complexity head-on. So simply put, Data Services Cloud Console is a SaaS-based console that delivers the cloud operational model and cloud operational agility to our customers. It unifies data operations through a series of cloud data services that are delivered on top of this console to our customers in a continuous innovation stream. And what we have done, going back to the point that I made earlier about separating storage and data management, is put the strong suits of each of those together into this SaaS-delivered console for our customers. So we have separated data and infrastructure management away from physical hardware to provide a comprehensive and unified approach to managing data and infrastructure wherever it lives. From a customer's perspective, it could be at the edge, it could be in a colo, it could be in their data center, or it could be a bunch of data services that are deployed within the public cloud. 
So now our customers, with Data Services Cloud Console, can manage the entire lifecycle of their data, all the way from deployment to upgrading and optimizing it, from a single console, from anywhere in the world. This console is designed to streamline data management with cloud data services that enable access to data. It allows for policy-based data protection; it allows for an organization-wide search on top of your storage assets. And we deliver basically 360-degree visibility to all your data from a single console that the customer can experience from anywhere. So if you look at the journey, the way we're deciding to deliver this: in its first incarnation, Data Services Cloud Console gives you infrastructure and cloud data services to start to do data management. But this is the foundation that we are placing in front of our customers, the SaaS console through which we can touch our customers on a daily basis. And now, as our customers get access to the SaaS platform on the back end, we will continue to roll in additional services throughout the years on a true SaaS-based innovation cadence for our customers. And these services will range all the way from data protection, to multi-cloud data management, to visibility, to understanding the context of your data as it's stored across your enterprise. And in addition to that, we're offering a consistent, unified API which allows our customers to build automation against their storage infrastructure without ever worrying that, as infrastructure changes, the API endpoints are going to break for them. That is never going to happen, because they are going to be programming to a single SaaS-based API interface from now on. >>Right. And that brings in this idea of infrastructure as code, because you talk about as-a-service, you talk about GreenLake, and my question is always, okay, tell me what's behind that. 
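The automation benefit Omer describes, programming against one stable SaaS API rather than per-array interfaces, can be illustrated with a thin request builder. The base URL and resource paths below are invented placeholders, not real Data Services Cloud Console endpoints.

```python
# Sketch of why a single, stable SaaS API matters for automation:
# scripts compose requests against one endpoint shape, so they keep
# working as back-end hardware changes. Host and paths are hypothetical.

API_BASE = "https://console.example.com/api/v1"  # invented endpoint


def build_request(resource: str, action: str, body: dict) -> dict:
    """Compose a request against the unified API (not array-specific)."""
    return {
        "method": "POST",
        "url": f"{API_BASE}/{resource}/{action}",
        "json": body,
    }


# The same builder works for any resource the console exposes.
req = build_request("volumes", "create", {"name": "db-vol", "size_gib": 512})
```

Because the script only ever knows the console's interface, swapping the array behind it does not break the automation.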
And if you're talking about boxes and widgets, that's a problem. And you're not; you're talking about services and APIs and microservices, and that's really the future model. And infrastructure as code, and ultimately data as code, is really part of that. So, all right. You guys, I know some of your branding folks, you give deep thought to this. So the second part of the announcement is the new product brand. Sandeep, maybe you can talk about that a little bit. >>Sure. Ultimately, delivering the cloud operational model requires cloud-native data infrastructure that has been engineered to be natively managed from the cloud. And that's why we have also introduced HPE Alletra. Omer, can you perhaps describe HPE Alletra even more? >>Absolutely. Thank you, Sandeep. So with HPE Alletra, we're launching a new brand of cloud-native hardware infrastructure to power our customers' data all the way from the edge to the core to the cloud. The releases are smaller models for the edge, and at the same time models for the data center, and then expanding those services into the public cloud as well. All these Alletra hardware devices are cloud-native, powered by our Data Services Cloud Console. We're announcing two models with this launch. HPE Alletra 9000, this is for our mission-critical workloads. It has its history and basis in HPE Primera. It comes with a 100% availability guarantee, the first of its type in the industry, and it comes with the standard support contract; no special contract is required. And then we're also launching HPE Alletra 6000. These are based on our history of Nimble Storage systems. These are for business-critical applications, especially for the mid-range of the storage market, optimizing price, performance, and efficiency. Both of these systems are full NVMe storage, powered by our Timeless capabilities with data-in-place upgrades. 
And then they both deliver a unified infrastructure and data management experience through Data Services Cloud Console, and at the back end, a unified AIOps experience with HPE InfoSight is seamlessly blended in along with the offering for our customers. >>So this is what I was talking about before. It's sort of not your grandfather's storage business anymore. This is something that is part of that unified vision, that layer that I talked about, the APIs, the programmability. So you're reaching into new territory here. Maybe you can give us an example of how the customers experience what that looks like. >>Excellent. Love to, Dave. So essentially what we're doing is we're changing the storage experience to a true cloud operational model for our customers. These recent announcements that we just went through, they expand the cloud experience that our customers get with storage as a service with HPE GreenLake. So, a couple of examples to make this real. The first one is simplified deployment. IT no longer has to go through complex startup and deployment processes. Now, all that needs to happen is these systems are shipped and delivered to the customer's data center. Operational staff just need to rack and stack them, connect the power cable, connect the network cable, and the job is done. From that point onwards, Data Services Cloud Console takes over, where you can onboard these systems and provision them. If you have a pre-existing, organization-wide security and standard profile setup in Data Services Cloud Console, we can automatically apply those on your behalf and bring these systems online. 
From a customer's perspective, they can be anywhere in the world to onboard these systems, they could be driving in a car, they could be sitting on a beach uh And and you know, these systems are automatically on boarded through this cloud operational model which is delivered through the SAAS application for our customers. Another big example. All that I'd like to shed light on is intent based provisioning. Uh So Dave typically provisioning a workload within a data center is an extremely spreadsheet driven trial and error kind of a task. Which system do I land it on? Uh Is my existing sl is going to be affected which systems that loaded, which systems are loaded enough that I put this additional workload on it and the performance doesn't take. All of these decisions are trial and error on a constant basis with cloud data services console along with the electron new systems that are constantly in a loop back information feeding uh Typical analytics to the console. All you need to do is to describe the type of the workload and the intent of the workload in terms of block size S. L. A. That you would like to experience at that point. Data services console consults with intra site at the back end. We run through thousands of data points that are constantly being given to us by your fleet and we come back with a few recommendations. You can accept the recommendation and at that time we go ahead and fully deploy this workload on your behalf or you can specify a particular system and then we will try to enforce the S. L. A. On that system. So it completely eliminates the guesswork and the planning that you have to do in this regard. Uh And last but not the least. Uh you know, one of the most important things is, you know, upgrades has been a huge problem for our customers. Uh And typically oftentimes when you're not in this constant, you know, loop back communication with your customers. 
It often is a big challenge to identify which release or which bug fix or which update goes on to which particular machine. All of that has been completely taken away from our customers and fully automated. Uh we run thousands of signatures across are installed base. We identify which upgrades need to be curated for which machines in a fleet for a particular customer. And then if it applies to that customer we presented, and if the customer accepts it, we automatically go ahead and upgrade the system and and and last, but not the least from a global management perspective. Now, a customer has an independent data view of their data estate, independent from a storage estate. And data services. Council can blend the two to give a consistent view or you can just look at the fleet view or the data view. >>It's kind of the Holy Grail. I mean I've been in this business a long time and I think I t. People have dreamt about you know this kind of capability for for a long long time. I wonder if we could sort of stay on the customers for a moment here and and talk about what's enabled. Now everybody's talking digital transformation that I joke about the joke. Not funny. The force marched to digital with Covid uh and we really wasn't planned for but the customers really want to drive now that digital transfer some of them are on the back burner and now they're moving to the front burner. What are the outcomes that are that are enabled here? Omar. >>Excellent. So so on on a typical basis for a traditional I. T. Customer, this cloud operational model means that you know information technology staff can move a lot faster and they can be a lot more productive on the things that are directly relevant to their business. They can get up to 99% of the savings back to spend more time on strategic projects or best of all spend time with their families rather than managing and upgrading infrastructure and fleets of infrastructure. Right. 
For line of business owners, the new experience means that their data infrastructure can be presented can be provision where the self service on demand type of capability. Uh They necessarily don't have to be in the data center to be able to make those decisions. Capacity management, performance management, all of that is died in and presented to them wherever they are easy to consume SAS based models and especially for data innovators, whether it's D B A s, uh whether it's data analysts, they can start to consume infrastructure and ultimately data as a code to speed up their app development because again, the context that we're bringing forward is the context of data decoupling it from. Actually, storage management, storage management and data management are now two separate domains that can be presented through a single console to tie the end to end picture for a customer. But at the end of the day, what we have felt is that customers really really want to rely and move forward with the data management and leave infrastructure management to machine oriented task, which we have completely automated on their behalf. >>So I'm sure you've heard you got the memo about, you know, H H P going all in on as a service. Uh it's clear that the companies all in. How does this announcement fit in to that overall mission, Sandeep >>Dave. We believe the future is edge to cloud and our mission is to be the edge to cloud platform as a service company and as as HB transforms HP Green Lake is our unified cloud platform. Hp Green Link is how we deliver cloud services and agile cloud experiences to customers, applications and data across the edge to cloud. With the storage announcement that we made recently, we announced that we're expanding HB Green Lake with as a service transformation of the HPV storage business to a cloud native software defined data services business. 
And this expands storage as a service delivering full cloud experience to our customers data across edge and on prem environment across the board were committed to being a strategic partner for every one of our customers and helping them accelerate their digital transformation. >>Yeah, that's where the puck is going guys. Hey as always great conversation with with our friends from HP storage. Thanks so much for the collaboration and congratulations on the announcements and I know you're not done yet. >>Thanks. Dave. Thanks. Dave. All right. Dave. It's a pleasure to be here. >>You're very welcome. And thank you for being with us for hp. You discovered 2021. You're watching the cube, the leader digital check coverage. Keep it right there, but right back. >>Mhm. Mhm.

Published Date : Jun 23 2021



Omer Asad & Sandeep Singh | HPE Discover 2021


 

>>Welcome back to HPE Discover 2021, the virtual edition. My name is Dave Vellante and you're watching theCUBE. We're here with Omer Asad, Vice President and GM of HPE's HCI, primary storage, and data management business, and Sandeep Singh, Vice President of Marketing for the HPE Storage division. Welcome, gents. Great to see you. >>Great to be here, Dave. >>It's a pleasure to be here today. >>Hey, so last month you guys made a big announcement, and now you're shining the spotlight on it here at Discover. Sandeep, maybe you can give us a quick recap. What do we need to know? >>Yeah, Dave. We announced that we're expanding HPE GreenLake by transforming HPE Storage into a cloud-native, software-defined data services business. We unveiled a new vision for data that accelerates data-driven transformation for our customers, and we introduced the data services platform, which consists of two game-changing innovations. Our first announcement was Data Services Cloud Console. It's a SaaS-based console that delivers cloud operational agility, and it's designed to unify data operations through a suite of cloud data services. Our second announcement is HPE Alletra. It's cloud-native data infrastructure to power your data from edge to cloud, and it's managed natively with Data Services Cloud Console to bring that cloud operational model to our customers wherever their data lives. Together with the data services platform, HPE GreenLake brings that cloud experience to our customers' data across edge and on-premises environments and lays the foundation for our customers to shift from managing storage to managing data. >>Well, I think it lays the foundation for the next decade. You know, when we entered this past decade we were using terms like "software-led"; that morphed into the software-defined data center, then containers with Kubernetes. Let's zoom out for a minute.
If we can, Omer, maybe you could describe the problems that you're trying to address with this announcement. >>Thanks, Dave. It's always a pleasure talking to you on these topics. In my role as General Manager for primary storage, I speak with hundreds of customers across the board, and I consistently hear that data is at the heart of what our customers are doing and that they're looking for a data-driven, transformative approach to their business. But as they engage on these things, there are two challenges that they consistently face. The first one is that managing storage at scale is rife with complexity. While storage has gotten faster in the last 20 years, and managing a single array, or maybe two or three arrays, has gotten simpler over time, managing storage at scale, when you deploy fleets of storage as customers continue to gather, store, and lifecycle data, is extremely frustrating for customers. IT administrators are still firefighting; they're unable to innovate for their business, because data now spans all the way from edge to core to cloud, and with the advent of public cloud, another dimension of multi-cloud has been added to their data sprawl. And then secondly, what we consistently hear is that IT administrators need to shift from managing storage to managing data. What this basically means is that IT has a desire to mobilize, protect, and provision data seamlessly across its lifecycle and across the locations where it is stored. This ensures that IT leaders, and people across the organization, understand the context of the data that they store and operate upon. Yet data management is an extremely big challenge: a web of fragmented data silos across processes and infrastructure, all the way from test and dev to administration to production to backup to lifecycle data management.
And so, up till now, data management has been tied up with storage management, and this needs to change for our customers, especially with the diversity of application workloads growing and customers expanding their footprint across multi-cloud environments. >>Just to add to Omer's response there: we recently conducted a survey, done by ESG, of IT decision makers, and what it showcased is interesting. 93% of the respondents indicated that storage and data management complexity is impeding their digital transformation; 95% indicated that solving storage and data management complexity is a top-10 business initiative for them; and 94% want to bring the cloud experience on premises.
Tickets and then you're at the mercy of whatever complex processes exist within organization and and depending on what the level of approvals are within a particular organization, standing up a workload can take days, weeks or even months in certain cases. So what cloud did was they brought that level of simplicity for someone that wanted to instead she ate an app. This means that the provisioning of underlying infrastructure that makes that workload possible needs to be reduced to minutes from days and weeks. But so what we are intending to do over here is to bring the best of both worlds together so that the cloud experience can be experienced everywhere with ease and simplicity and the customers don't need to change their operating model. So it's blending the two together. And that's what we are trying to usher in into this new era where we start to differentiate between data management and storage management as two independent things. >>Great, thank you for that. Omer sometimes I wonder if you could share with the audience, you know, the vision that you guys unveiled, What does it look like? How are you making it actually substantive and and real? >>Yeah. Dave. That's also great question. Um across the board it's time to reimagine data management. Everything that homer shared. Those challenges are leading to customers needing to break down the silos and complexity that plagues these distributed data environments. And our vision is to deliver a new data experience that helps customers unleash the power of data. We call this vision unified data jobs, Unified Data Ops integrates data centric policies to streamline data management, cloud native control to bring the cloud operational model to where customers data labs and a I driven insights to make the infrastructure invisible. It delivers a new data experience to simplify and bring that agility of cloud to data infrastructure. Streamline data management and help customers innovate faster than ever before. 
We're making the promise of Unified Data Ops Real by transforming Hve storage to a cloud native software defined data services business and introducing a data services platform that expands Hve Green Lake. >>I mean, you know, you talk about the complexity, I see, I look at it as you kind of almost embracing the complexity saying, look, it's gonna keep getting more complex as the cloud expands to the edge on prem Cross cloud, it gets more complex underneath. What you're doing is you're almost embracing that complexity and putting a layer over it and hiding that complexity from from the end customer that and so they can spend their time doing other things over. I wonder if you can maybe talk a little bit more about the data services console, Is it sort of another software layer to manage infrastructure? What exactly is it? >>It's a lot more than that, Dave and you're you're 100% right. It's basically we're attempting in this release to attack that complexity head on. So simply put data services. Cloud console is a SAS based console that delivers cloud operational model and cloud operational agility uh to our customers. It unifies data operations through a series of cloud data services that are delivered on top of this console to our customers in a continuous innovation stream. Uh And what we have done is going back to the point that I made earlier separating storage and data management and putting the strong suites of each of those together into the SAS delivered console for our customers. So what we have done is we have separated data and infrastructure management away from physical hardware to provide a comprehensive and a unified approach to managing data and infrastructure wherever it lives. From a customer's perspective, it could be at the edge, it could be in a coal. Oh, it could be in their data center or it could be a bunch of data services that are deployed within the public cloud. So now our customers with data services. 
Cloud console can manage the entire life cycle of their data from all the way from deployment, upgrading and optimizing it uh from a single console from anywhere in the world. Uh This console is designed to streamline data management with cloud data services that enable access to data. It allows for policy-based data protection, it allows for an organizational wide search on top of your storage assets. And we deliver basically a 360° visibility to all your data from a single console that the customer can experience from anywhere. So, so if you look at the journey the way we're deciding to deliver this. So the first, in its first incarnation, uh Data services, Cloud console gives you infrastructure and cloud data services to start to do data management along with that. But this is that foundation that we are placing in front of our customers, the SAS console, through which we get touch our customers on a daily basis. And now as our customers get access to the SAAS platform on the back end, we will continue to roll in additional services throughout the years on a true SAS based innovation base for our customers. And and these services can will be will be ranging all the way from data protection to multiple out data management, all the way to visibility all the way to understanding the context of your data as it's stored across your enterprise. And in addition to that, we're offering a consistent revised unified Api which allows for our customers to build automation against their storage infrastructure. Without ever worrying about that. As infrastructure changes, uh, the A. P I proof points are going to break for them. That is never going to happen because they are going to be programming to a single SAS based aPI interface from now on. >>Right. And that brings in this idea of infrastructure as code because you talk about as a service to talk about Green Lake and and my question is always okay. Tell me what's behind that. 
If you're talking about boxes and widgets, that's a problem; if you're talking about services and APIs and microservices, that's really the future model, and infrastructure as code, and ultimately data as code, is really part of that. All right. So, you guys, I know your branding folks give deep thought to this. The second part of the announcement is the new product brands. Sandeep, maybe you can talk about that a little bit. >>Sure. Ultimately, delivering the cloud operational model requires cloud-native data infrastructure that has been engineered to be natively managed from the cloud. And that's why we have also introduced HPE Alletra. Omer, can you perhaps describe HPE Alletra in more detail?
And then they both deliver a unified infrastructure and data management experience through the data services, cloud console. Uh And and and at the back end unified Ai Ops experience with H P. E. Info site is seamlessly blended in along with the offering for our >>customers. So this is what I was talking about before. It's sort of not your grandfather's storage business anymore. This is this is this is something that is part of that, that unified vision, that layer that I talked about, the A. P. I. Is the program ability. So you're you're reaching into new territory here. Maybe you can give us an example of how the customers experience what that looks like. >>Excellent. Love to Dave. So essentially what we're doing is we're changing the storage experience to a true cloud operational model for our customers. These recent announcements that we just went through along with, indeed they expand the cloud experience that our customers get with storage as a service with HP Green Lake. So a couple of examples to make this real. So the first of all is simplified deployment. Uh So I t no longer has to go through complex startup and deployment processes. Now all you need to do is these systems shipped and delivered to the customer's data center. Operational staff just need to rack and stack and then leave connect the power cable, connect the network cable. And the job is done. From that point onwards, data services console takes over where you can onboard these systems, you can provision these systems if you have a pre existing organization wide security as well as standard profile setup in data services console, we can automatically apply those on your behalf and bring these systems online. From a customer's perspective, they can be anywhere in the world to onboard these systems, they could be driving in a car, they could be sitting on a beach. 
Uh And and you know, these systems are automatically on boarded through this cloud operational model which is delivered through the SAAS application for our customers. Another big example. All that I'd like to shed light on is intent based provisioning. Uh So Dave typically provisioning a workload within a data center is an extremely spreadsheet driven trial and error kind of a task. Which system do I land it on? Uh Is my existing sl is going to be affected which systems that loaded which systems are loaded enough that I put this additional workload on it and the performance doesn't take. All of these decisions are trial and error on a constant basis with cloud Data services console along with the electron new systems that are constantly in a loop back information feeding uh Typical analytics to the console. All you need to do is to describe the type of the workload and the intent of the workload in terms of block size S. L. A. That you would like to experience at that point. Data services console consults with intra site at the back end. We run through thousands of data points that are constantly being given to us by your fleet and we come back with a few recommendations. You can accept the recommendation and at that time we go ahead and fully deploy this workload on your behalf or you can specify a particular system and then people try to enforce the S. L. A. On that system. So it completely eliminates the guesswork and the planning that you have to do in this regard. Uh And last but not the least. Uh You know, one of the most important things is, you know, upgrades has been a huge problem for our customers. Uh And typically oftentimes when you're not in this constant, you know, loop back communication with your customers. It often is a big challenge to identify which release or which bug fix or which update goes on to which particular machine, all of that has been completely taken away from our customers and fully automated. 
We run thousands of signatures across our installed base, identify which upgrades need to be curated for which machines in a fleet for a particular customer, and if an upgrade applies to that customer, we present it; if the customer accepts it, we automatically go ahead and upgrade the system. And last but not least, from a global management perspective, a customer now has a view of their data estate that is independent of the storage estate, and Data Services Cloud Console can blend the two to give a consistent view, or you can just look at the fleet view or the data view.
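Omer's intent-based provisioning flow (state the workload's intent, let fleet telemetry drive a placement recommendation) can be sketched roughly as follows. The telemetry fields and the scoring rule here are invented for illustration; the actual console consults InfoSight analytics across thousands of data points.

```python
def recommend_placement(systems, workload):
    """Rank systems for a new workload by projected headroom.

    `systems` carries per-array telemetry of the kind an analytics back
    end would feed to a console; `workload` states intent (required IOPS
    and capacity) rather than naming a target array. All field names and
    the scoring rule are invented for illustration.
    """
    candidates = []
    for s in systems:
        iops_left = s["iops_capacity"] - s["iops_used"] - workload["iops"]
        gb_left = s["gb_capacity"] - s["gb_used"] - workload["gb"]
        if iops_left < 0 or gb_left < 0:
            continue  # placing it here would break existing SLAs; skip
        # Prefer the system that keeps the most balanced headroom.
        score = min(iops_left / s["iops_capacity"],
                    gb_left / s["gb_capacity"])
        candidates.append((score, s["name"]))
    return [name for _, name in sorted(candidates, reverse=True)]

systems = [
    {"name": "alletra-01", "iops_capacity": 100_000, "iops_used": 80_000,
     "gb_capacity": 50_000, "gb_used": 10_000},
    {"name": "alletra-02", "iops_capacity": 100_000, "iops_used": 30_000,
     "gb_capacity": 50_000, "gb_used": 20_000},
]
ranked = recommend_placement(systems, {"iops": 15_000, "gb": 5_000})
```

The key property Omer describes survives even in this toy version: systems that cannot absorb the workload are filtered out rather than discovered by trial and error, and the administrator only ever states intent.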
Uh They necessarily don't have to be in the data center to be able to make those decisions. Capacity management, performance management, all of that is died in and presented to them wherever they are easy to consume. SaS based models and especially for data innovators, whether it's D B A s, whether it's data analysts, they can start to consume infrastructure and ultimately data as a code to speed up their app development because again, the context that we're bringing forward is the context of data decoupling it from. Actually, storage management, storage management and data management are now two separate domains that can be presented through a single console to tie the end to end picture for a customer. But at the end of the day, what we have felt is that customers really, really want to rely and move forward with the data management and leave infrastructure management to machine oriented task, which we have completely automated on their behalf. >>So I'm sure you've heard you got the memo about, you know, H H p going all in on as a service. Uh it is clear that the companies all in. How does this announcement fit in to that overall mission? Cindy >>dave We believe the future is edge to cloud and our mission is to be the edge to cloud platform as a service company and as as HB transforms HP Green Lake is our unified cloud platform. Hp Green Link is how we deliver cloud services and agile cloud experiences to customers applications and data across the edge to cloud. With the storage announcement that we made recently, we announced that we're expanding HB Green Lake with as a service transformation of the HPV storage business to a cloud native software defined data services business. And this expands storage as a service, delivering full cloud experience to our customers data across edge and on prem environment across the board were committed to being a strategic partner for every one of our customers and helping them accelerate their digital transformation. 
>>Yeah, that's where the puck is going guys. Hey as always great conversation with with our friends from HP storage. Thanks so much for the collaboration and congratulations on the announcements and and I know you're not done yet. >>Thanks. Dave. Thanks. Dave. >>Thanks. Dave. It's a pleasure to be here. >>You're very welcome. And thank you for being with us for hp. You discovered 2021 you're watching the cube, the leader digital check coverage. Keep it right there, but right back. >>Yeah. Yeah.

Published Date : Jun 4 2021



Sandeep Singh & Omer Asad, HPE


 

(digital music) >> Hello everyone, and welcome to theCUBE, where we're covering the recent news from Hewlett Packard Enterprise: making moves in storage. And with me are Omer Asad, Vice President and General Manager for Primary Storage, HCI and Data Management at HPE, and Sandeep Singh, who's the Vice President of Storage Marketing at Hewlett Packard Enterprise. Gentlemen, welcome back to theCUBE. Great to see you both. >> Dave, it's a pleasure to be here. >> Always a pleasure talking to you, Dave, thank you so much. >> Oh, it's my pleasure. Hey, so we just watched HPE make a big announcement, and I wonder, Sandeep, if you could give us a quick recap. >> Yeah, of course, Dave. In the world of enterprise storage there hasn't been a moment like this in decades, a point at which everything is changing for data and infrastructure, and it's really coming at the nexus of data, cloud and AI. That's opening up the opportunity for customers across industries to accelerate their data-driven transformation. Building on that, we just unveiled a new vision for data that accelerates the data-driven transformation for customers edge to cloud. And to pay that off, we introduced a new data services platform that consists of two game-changing innovations. First is data services cloud console, which is a SaaS-based console that delivers cloud operational agility for customers. And it's designed to unify data operations through a suite of cloud services. The second announcement is HPE Alletra. HPE Alletra is a cloud-native data infrastructure portfolio to power your data edge to cloud. It's managed natively with data services cloud console, and it brings that cloud operational model to customers wherever their data lives. These innovations are combined with our industry-leading AIOps platform, which is HPE InfoSight, and combined, these innovations radically simplify and bring that cloud operational model to customers' data and infrastructure management. 
And it gives the opportunity for streamlining data management across the lifecycle. These innovations are making it possible for organizations across industries to unleash the power of data. >> That's kind of cool. A lot of the stuff we've been talking about for all these years is sort of this unified layer across all clouds and on-prem, with AI injected in. I can tell you're excited, and it sounds like you can't wait to get these offerings in the hands of customers, but I wonder if we could back up a minute. Omer, maybe you could describe the problem statement that you're addressing with this announcement. What are customers' real pain points? >> Excellent question, Dave. So in my role as the General Manager for Data Management and Storage here at HPE, I get the wonderful opportunity to talk to hundreds of customers in a year. And, you know, as time has progressed and the amount of data under organizations' management has continued to increase, what I have noticed is that recently there are three main themes that are continuously emerging and are now bubbling to the top. The first one is that storage infrastructure management itself is extremely complex for customers. While there have been leaps and bounds of progress in managing a single array or managing two arrays, with a lot of simplification of the UI, and maybe some modern UIs are present, as the problem starts to get to scale, as customers acquire more and more assets to store and manage their data on premises, management at scale is extremely complex. Yes, storage has gotten faster; yes, flash has had a profound effect on performance, availability and latency of access to the data; but infrastructure management and storage management as a whole have become a pain for customers, and it's a constant theme as storage lifecycle management comes up, as storage refresh comes up, and as deploying and managing storage infrastructure at scale comes up. 
So that's one of the main problems that I've been seeing as I talk to customers. Now, secondly, a lot of customers are now talking about two different elements. One is storage deployment and lifecycle management. And the second is the management of data that is stored on those storage devices. As the amount of data grows, the silos continue to grow, and a single view of lifecycle management of data, you know, customers don't get to see it. And lastly, one of the biggest things that we see is a lot of customers are now asking, how can I extract value from this data under my management? Because they can't seem to parse through the silos. So there is an incredible amount of productivity lost when it comes to data management as a whole, which is just fragmented into silos, and the same from a storage management perspective. And when you put these two together, and especially add two more elements to it, which is hybrid management of data or multicloud management of data, the silos and the sprawl just continue, and there is nothing stitching this together at scale. So these are the three main themes that constantly appear in these discussions, in spite of a lot of modern enhancements in storage. >> Well, I wonder if I could comment, guys, 'cause I've been following this industry for a number of years, and you're absolutely right, Omer. I mean, if you look at the amount of money and time and energy that's put into data architectures, people are frustrated; they're not getting enough out of it. And I'd note that, you know, the prevailing way in which we've attacked complexity historically is to build a better box. And while that system was maybe easier to manage than the predecessor systems, all it did was create another silo. And then the cloud, despite its apparent simplicity, was another disconnected silo. 
So then we threw siloed management solutions at the problem, and we're left with this collection of point solutions with data sort of trapped inside. So I wonder if you could give us your thoughts on that. And, you know, do you agree? What data do you have around this problem statement? >> Yeah, Dave, that's a great point. And actually, ESG just recently conducted a survey of over 250 IT decision makers. And that actually brings one of the perfect validations of the problems that Omer and you just articulated. What it showed is that 93% of the respondents indicated that storage and data management complexity is impeding their digital transformation. On average, the organizations have over 23 different data management tools, which just typifies and is a perfect showcase of the fragmentation and the complexity that exists in data management. And 95% of the respondents indicated that solving storage and data management complexity is a top-10 business initiative for them, and actually top five for 67% of the respondents. So it's a great validation across the board. >> Well, it's fresh in their minds too, because pre-pandemic there was probably, you know, a mixed picture, right? There was probably complacency, or we're not moving fast enough, we have other priorities, but they were forced into this. Now they know what the real problem is; it's front and center. Yeah, I like what you're putting out there in your announcement, this sort of future state that you're envisioning for customers. And I wonder if we could sort of summarize that and share with our listeners that vision that you unveiled. What does it look like, and how are you making it real? >> Yeah, overall, we feel very strongly that it's time for our customers to reimagine data management. And our vision is that customers need to break down the silos and complexity that plague their distributed data environments. 
And they need to experience a new data experience across the board that's going to help them accelerate their data-driven transformation, and we call this vision Unified DataOps. Unified DataOps integrates data-centric policies across the board to streamline data management, cloud-native control and operations to bring the agility of cloud and its operational model to wherever data lives, and AI-driven insights and intelligence to make the infrastructure invisible. It delivers a whole new experience to customers to radically simplify and bring the agility of cloud to data and data infrastructure, streamline data management, and really help customers innovate faster than ever before. And we're making the promise of Unified DataOps real by transforming the entire HPE storage business to a cloud-native, software-defined data services business, and that's through introducing a data services platform that expands HPE GreenLake. >> I mean, the key word I take away there, Sandeep, is invisible. I mean, as a customer, I want you to abstract that complexity away, that underlying infrastructure complexity; I just don't want to see it anymore. Omer, I wonder if we could start with the first part of the announcement. Maybe you can help us unpack data services cloud console. I mean, you know, people are immediately going to think it's just another software product to manage infrastructure. But to really innovate, I'm hoping that it's more than that. >> Absolutely, Dave, it's a lot more than that. What we have done fundamentally at the root of the problem is we have taken the data and infrastructure control away from the hardware, and through that, we've provided a unified approach to manage the data wherever it lives. It's a full-blown SaaS console which our customers get onto, and from there they can deploy appliances, manage appliances, lifecycle appliances, and then not only stop at that but then go ahead and start to get context around their data. 
But all of that (indistinct) available through a SaaS platform, a SaaS console. As every customer onboards themselves and their equipment and their storage infrastructure onto this console, they can go ahead and define role-based access for different parts of their organization. They can also apply role-based access to HPE GreenLake management personnel, so they can come in and perform all the operations for the customers via the same console, by just being another access control methodology in that. And then in addition to that, as you know, data mobility is extremely important to our customers. How do you make data available in different hyperscaler clouds if the customer's digital transformation requires that? So again, from that single cloud console, from that single data console, which we are naming here the data services console, customers are able to curate the data, maneuver the data, pre-position the data into different hyperscalers. But the beautiful thing is that the entire view of the storage infrastructure, the data with its context that is stored on top of that, the access control methodologies, and the management framework is operational from a single SaaS console, which the customer can decide to give access to whichever management entity or authority comes in to help them. And then what this leads us into is combining these things into a northbound API. So anybody that wants to streamline operational manageability can use these APIs to program against a single API, which will then control the entire infrastructure on behalf of the customer. So if somebody were to ask what this is: it is bringing that cloud operational model that was so desired by each one of our customers into their data centers, and this is what I call an in-place transformation of the management experience for our customers, by making it seamlessly available as a cloud operational model for their infrastructure. 
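To make the northbound-API idea concrete, here is a hedged sketch of what a single declarative request against such a unified control plane might look like, with role-based access context attached. The endpoint path, payload fields, and role names are illustrative assumptions, not the actual GreenLake or data services console API.

```python
# Hedged sketch of a "northbound API" call as described in the interview:
# one programmable interface that fronts the whole fleet. The endpoint
# path, payload fields, and role names are illustrative assumptions,
# NOT the actual HPE GreenLake / data services console API.

def build_provision_request(workload_name, capacity_gib, role,
                            target_cloud=None):
    """Assemble a single declarative request that a unified control
    plane could act on, regardless of which array or cloud fulfils it."""
    if role not in ("admin", "operator", "observer"):
        raise ValueError(f"unknown role: {role}")
    return {
        "endpoint": "/api/v1/volumes",      # hypothetical path
        "method": "POST",
        "body": {
            "name": workload_name,
            "capacity_gib": capacity_gib,
            # data mobility: optionally pre-position in a hyperscaler
            "replicate_to": target_cloud,
        },
        # role-based access control travels with the request
        "auth_context": {"role": role},
    }

req = build_provision_request("oracle-prod", 2048, "operator",
                              target_cloud="aws-us-east-1")
print(req["body"]["capacity_gib"])  # 2048
```

The point of the sketch is the shape, not the names: the caller declares intent once, and the control plane, not the caller, decides which physical infrastructure carries it out.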
>> Yeah, and you've turned that into essentially an API with a lot of automation; that's great. So, okay. So that's kind of how you're trying to change the game here; you're charting new territory. You talk to hundreds and hundreds of customers every year. I wonder if you could paint a picture from the customer perspective. How does their experience actually change? >> Right, that's a wonderful question, Dave. This allows me to break it down into bits and bytes further for you, and I love that, right? So the way you look at it is, you know, recently, if you look at storage management, as we talked about earlier, from an array perspective or maybe a two-array perspective, it has been simplified; I mean, it's a solved problem. But when you start to imagine deploying hundreds of arrays, and these are large customers, they have massive amounts of data assets, storage management hasn't scaled along as the infrastructure scales. But if you look at the consumer world, you can have hundreds of devices but the ownership model is completely (indistinct). So the inspiration for solving this problem for us actually came from the consumerization of IT, and that's a big trend over here. So now we're changing the customer's ownership model, the customer's deployment model, and the customer's data management model into a true cloud-first model. So let me give some examples of that, right? So first of all, let's talk about deployment. So previously, deployment has been a massive challenge for our customers. What does deployment in this new data services console world look like? Devices show up, you rack them up, and then you plug in the power cable, you plug in the network cable, and then you walk out of the data center. 
The data center administrator or the storage administrator will be on their iPad, on their data services console, or iPhone, or whatever the device of their choice is, and from that console, from that point on, the device will be registered and onboarded, and its initial state will be given to it from the cloud. And if the customer has some predefined states from their previous deployment model already saved with the data console, they don't even need to do that; we'll just take that and apply that state and induct the device into the fleet. That's just one example. It's extremely simple: plug in the power cable, plug in the network cable, and the data center operational manager just walks out. After that, you could be on the beach, you could be at your home, you could be driving in a car, and, well, I advise people not to fiddle with their iPhones when they're driving in a car, but still, you could do it if you want to, right? So that's just one part, from a deployment methodology perspective. Now, the second thing that, you know, Sandeep and I often bounce ideas on is provisioning of a workload. It's like a science these days. Is this array going to be able to absorb my workload? Is the latency going to go south? Does this workload's latency profile match this particular piece of device in my data center? All of this is extremely manual, and, I mean, if you talk to any of the customers or even analysts, deploying a workload is a massive challenge. It's guesswork that you have to model and, you know, basically see how it works out. With HPE InfoSight, we're collecting hundreds of millions of data points from all these devices. So now we harness that and present it back to the customer in a very simple manner, so that we can model on their behalf through the data services console, which is now workload-aware. You just describe your workload: hey, I'm going to need this many IOPS, and by the way, this happens to be my application. And that's it. 
On the back end, because we're managing your infrastructure, the cloud console understands your entire fleet. We are seeing the statistics and the telemetry coming off of your systems, and because you've now described the workload for us, we can do that matching for you. And what intent-based provisioning does is: describe your workload in two or three clicks, or maybe two or three API constructs, and we'll do the provisioning, the deployment, and bringing it up for you, on your behalf, on the right pieces of infrastructure that match it. And if you don't like our choices, you can manually change it as well. But from a provisioning perspective, something that took days can now come down to a couple of minutes of description. And lastly, then, you know, global data management, distributed infrastructure from edge to cloud, invisible upgrades, only upgrading the right amount of infrastructure that needs the upgrade; all of that just comes rolling along with it, right? So those are some of the things that this data services console, as SaaS management at scale, allows you to do. >> And actually, if I can just jump in and add a little bit to what Omer described, especially with intent-based provisioning: that's really bringing a paradigm shift to provisioning. It's shifting from LUN-centric to app-centric provisioning. And when you combine it with identity management and role-based access, what it means is that you're enabling self-service, on-demand provisioning of the underlying data infrastructure to accelerate app workload deployments. And you're eliminating guesswork and providing the ability to optimize service level objectives. >> Yeah, it sounds like you've really nailed that provisioning challenge in an elegant way. I've been saying for years, if your primary expertise is deploying logical unit numbers, you'd better find some other skills, because the day is coming that that's just going to get automated away. So that's cool. 
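The intent-based provisioning flow described above, where the customer declares IOPS and application type and the control plane picks the best-fit system from fleet telemetry, can be sketched roughly as follows. The field names and the placement rule are illustrative assumptions, not how HPE InfoSight or the data services console actually models fleets.

```python
# Hedged sketch of intent-based provisioning: the customer declares an
# intent (application, IOPS) and the control plane matches it against
# fleet telemetry. Field names and the scoring rule are illustrative
# assumptions, not the actual InfoSight / console placement logic.

def place_workload(intent, fleet):
    """Return the array with enough IOPS headroom and the lowest
    current latency, or None if nothing in the fleet fits."""
    candidates = [a for a in fleet if a["iops_free"] >= intent["iops"]]
    if not candidates:
        return None
    return min(candidates, key=lambda a: a["latency_ms"])

telemetry = [
    {"name": "array-a", "iops_free": 50_000, "latency_ms": 0.9},
    {"name": "array-b", "iops_free": 120_000, "latency_ms": 0.4},
    {"name": "array-c", "iops_free": 8_000, "latency_ms": 0.2},
]
choice = place_workload({"app": "mysql", "iops": 30_000}, telemetry)
print(choice["name"])  # array-b
```

Note the design point the interview makes: the recommendation is just that, a recommendation. A real console would surface the choice and let the operator override it manually.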
There's another issue that I'm sure you've thought about, but I wonder if you could address it. I mean, you've got the cloud, and the definition of cloud is changing: the cloud is expanding to on-prem, on-prem is expanding to the cloud. It's going out to the edge, it's going across clouds, and so, you know, security becomes a big issue. That threat surface is expanding, the operating model is changing. So how are you thinking about addressing those security concerns? >> Excellent question, Dave. So, you know, in today's modern world, almost every customer that I talk to has deployed either some sort of a cloud console with one of the hyperscalers, or, you know, buy-in for SaaS-based applications is pervasive across the customer base. And as you know, we were the first ones to introduce automatic telemetry management through HPE InfoSight. That's one of the largest storage SaaS services in production today that we operate on behalf of our customers, which has, you know, Dave, about an 85% connectivity rate. So from that perspective, keeping customers' data secure, keeping customers' telemetry information secure, we're no stranger to that. Again, we follow all the security protocols that any cloud operational SaaS service would: reverse proxy handling, firewall compliance, security audit logs that are published to our customers and to customers' chief information security officers. So all of those, what I call crossing the t's and dotting the i's, we do that with security experts and security policies, for which each of our customers has a different set of rules. And we have a proper engagement model where we go through that particular audit process with our customers. Then secondly, Dave, the data services cloud console is actually built on a fundamental cloud deployment technology that is not, sort of, that new. 
Aruba Central, which is the Aruba management console (Aruba is also an HPE company), has been deployed and is managing millions of access points in a SaaS framework for our customers. So the fundamental building blocks of the data services console, from a basic enablement perspective, come from the Aruba Central console. And what we've done is we've taken those generic cloud-based SaaS services and then built data- and storage-centric SaaS services on top of that and made them available to our customers. >> Yeah, I really like the Aruba example. You picked that up several years ago, and it's the same thing with InfoSight, the way that you bring it to other parts of the portfolio; those are really good signs to watch in successful acquisitions. All right, there's a lot here. I want to talk about the second part of the announcement. I know your branding team is serious about branding: that new product brand. Maybe you could talk about that. >> So again, delivering the cloud operational model is just the first piece, right? And now the second part of the announcement is delivering the cloud-native hardware infrastructure, which is extremely performant, to go along with this cloud operational model. So what we have done, Dave, in this announcement is we've announced HPE Alletra. This is our new brand for our cloud-native infrastructure to power your data, with appliances from the core to the edge to the cloud, right? And what it does is it takes the cloud operational model, and this hardware is powered by that; it's completely wrapped around data. And so HPE Alletra is available in two models right now: the HPE Alletra 9000, which is available for mission-critical workloads, for those high-intensity workloads with a hundred percent availability guarantee, where no failure is ever an option; and then it's also available as the HPE Alletra 6000, which is available for general-purpose, business-critical workloads, generally addressing the midrange of the storage market. 
And both of these systems are 100% NVMe, front and back. And they're powered by the same unified cloud management operational experience that the data services cloud console provides. And what it does is it allows our customers to simplify their deployment model, it simplifies their management model, and it really allows them to focus on the context, the data, and their app diversity, whereas data mobility, data connectivity, and data management in a multicloud world are completely abstracted away from them. >> Dave: Yeah. >> Sandeep: And Dave... >> Dave: Go ahead, please. >> Just to jump in: HPE Alletra, combined with data services cloud console, is delivering a cloud experience that makes deploying and scaling application workloads as simple as flipping a switch. >> Dave: Nice. >> It really does. And you know, I'm very comfortable in saying this: you know, like HPE InfoSight, we were the first in the industry to bring AI-based telemetry and support-enabled metrics (indistinct). And then here, with the data services console and the hardware that goes with it, we're completely transforming the storage ownership and storage management model. And for our customers, it's a seamless, non-disruptive upgrade, a fully data-in-place upgrade. And they transform to a cloud operational model where they can manage their infrastructure from wherever they are, through a complete consumer-grade SaaS console, which again is the first of its kind when you look at storage management, and storage management at scale. >> And I like how you're emphasizing that management layer, but underneath you've got all the modern hardware technologies too, which is important, because it's got to be, you know, a good price performer. >> Absolutely. >> So now can we bring this back again to the customers? What are the outcomes that this is going to enable for them? 
>> So I think, Dave, the first and foremost thing is, as they scale their storage infrastructure, they don't have to think. It's really as simple as, yeah, just send it to the data center, plug in the power cable, plug in the network cable, and up it comes. And from that point onwards, the lifecycle and device management aspects are completely abstracted by the data services console. All they have to focus on is: I just have new capacity available to me, and when I have an application, the system will figure out for me where it needs to be deployed. So no more need for guesswork, the Excel sheets of capacity management, you know, the chargeback models; none of that stuff is needed. And for customers that are looking to transform their applications, customers looking to refactor their applications into a hyperscaler model, or maybe transform from VMs to containers, all they need to think about and focus on is that; the data will just follow those workloads, from that perspective. >> And Dave, just to add to Omer's response here: as I speak with customers, one of the things I'm hearing from IT is that line of business really wants IT to deliver that agility of cloud, yet IT also has to deliver all of the enterprise reliability, availability, and all of the data services. And what's fantastic here is that through this cloud operational model, IT can deliver the agility that line-of-business owners are looking for. At the same time, they've been under pressure to do a lot more with less, and through this agility, IT is able to get time back, to be able to focus more on the strategic projects, and at the same time be able to get time back to spend more time with their families; that's incredibly important. >> Omer: Right. >> Well, I love the sort of mindset shift that I'm seeing from HPE. We're not talking about how much the box weighs (laughing); we're talking about the customer experience. 
And I wonder, you know, that kind of leads me, Sandeep, to how this fits in. To me, I'm seeing the transformation before our eyes, but how does it fit into HPE's overall mission? >> Well, Dave, our mission overall is to be the edge-to-cloud platform-as-a-service company, with HPE GreenLake being the key to delivering that cloud experience. And as Omer put it, to be able to deliver that cloud experience wherever the customer's data lives. And today we're advancing the HPE GreenLake as-a-service transformation of the HPE storage business to a software-defined cloud data services business overall. And for our customers, this translates to an operational and ownership experience that unleashes their agility, their data, and their innovation. So we're super excited. >> Guys, I can tell you're excited. Thanks so much for coming to theCUBE and summarizing the announcements. Congratulations, and best of luck to both of you, and to HPE and your customers. >> Thank you, Dave. It was a pleasure. (digital music)

Published Date : Apr 29 2021



F1 Racing at the Edge of Real-Time Data: Omer Asad, HPE & Matt Cadieux, Red Bull Racing


 

>>Edge computing is projected to be a multi-trillion dollar business. You know, it's hard to really pinpoint the size of this market, let alone fathom the potential of bringing software, compute, storage, AI and automation to the edge and connecting all that to clouds and on-prem systems. But what is the edge? Is it factories? Is it oil rigs, airplanes, windmills, shipping containers, buildings, homes, race cars? Well, yes, and so much more. And what about the data? For decades we've talked about the data explosion. I mean, it's mind-boggling, but guess what, we're going to look back in 10 years and laugh at what we thought was a lot of data in 2020. Perhaps the best way to think about edge is not as a place, but as a question: when is the most logical opportunity to process the data? And maybe it's the first opportunity to do so, where it can be decrypted and analyzed at very low latencies. That defines the edge. And so by locating compute as close as possible to the sources of data, to reduce latency and maximize your ability to get insights and return them to users quickly, maybe that's where the value lies. Hello everyone, and welcome to this CUBE conversation. My name is Dave Vellante, and with me to noodle on these topics is Omer Asad, VP and GM of Primary Storage and Data Management Services at HPE. Hello, Omer. Welcome to the program.
>>Hey Dave. Thank you so much. Pleasure to be here.
>>Yeah, great to see you again. So how do you see the edge in the broader market shaping up?
>>Dave, I think that's a super important question. I think your ideas are quite aligned with how we think about it. I personally think, you know, as enterprises are accelerating their digitization and asset collection and data collection, especially in a distributed enterprise, they're trying to get to their customers. They're trying to minimize the latency to their customers. So especially if you look across industries: manufacturing, which has distributed factories all over the place, is going through a lot of factory transformations where they're digitizing their factories. That means a lot more data is now being generated within their factories. A lot of robot automation is going on that requires a lot of compute power to go out to those particular factories, which are going to generate their data out there. And we've got insurance companies and banks that are acquiring and engaging more customers out at the edge. They need a lot more distributed processing out at the edge. What this is requiring, and a common consensus we've seen across analysts, is that more than 50% of an enterprise's data, especially if they operate globally around the world, is going to be generated out at the edge. What does that mean? More and more new data is generated at the edge, and it needs to be stored, it needs to be processed. Data that is not required needs to be thrown away or classified as not important. And then it needs to be moved, for DR purposes, either to a central data center or just to another site. So overall, in order to give the best possible experience for manufacturing, retail, you know, especially in distributed enterprises, people are generating more and more data-centric assets out at the edge. And that's what we see in the industry.
>>Yeah, we're definitely aligned on that. There are some great points there. And so now, okay, you think about all this diversity, what's the right architecture for these multi-site deployments, ROBO, edge? How do you look at that?
>>Oh, excellent question. So, you know, obviously every customer that we talk to wants simplicity, and, no pun intended, SimpliVity resonates with a simplistic, edge-centric architecture, right? So let's take a few examples. You've got large global retailers; they have hundreds of retail stores around the world that are generating data, that are producing data. Then you've got insurance companies, then you've got banks. So when you look at a distributed enterprise, how do you deploy in a very simple and easy manner, easy to lifecycle, easy to mobilize equipment out at the edge? What are some of the challenges that these customers deal with? You don't want to send a lot of IT staff out there, because that adds cost. You don't want to have islands of data and islands of storage in remote sites, because that adds a lot of state outside of the data center that needs to be protected. And then, last but not least, how do you push lifecycle-based applications, new applications, out at the edge in a very simple-to-deploy manner? And how do you protect all this data at the edge? So the right architecture, in my opinion, needs to be extremely simple to deploy: storage, compute and networking out towards the edge in a hyperconverged environment. So that's one thing we agree upon: it's a very simple-to-deploy model. But then comes: how do you deploy applications on top of that? How do you manage those applications? How do you back these applications up towards the data center? All of this keeping in mind that it has to be as zero-touch as possible. We at HPE believe that it needs to be extremely simple. Just give me two cables, a network cable and a power cable; tie it up, connect it to the network, push its state from the data center, and back up its state from the edge back into the data center. Extremely simple.
>>It's got to be simple, because you've got so many challenges. You've got physics that you have to deal with, latency to deal with. You've got RPO and RTO; if something goes wrong, you've got to be able to recover quickly. So that's great, thank you for that. Now, you guys have hard news. What is new from HPE in this space?
>>From a deployment perspective, you know, HPE SimpliVity is just exploding, like crazy, especially as distributed enterprises adopt it as their standardized edge architecture, right? It's an HCI box that has storage, compute and networking all in one. But now, not only can you deploy applications all from your standard vCenter interface from a data center; what we have now added is the ability to back up to the cloud, right from the edge. You can also back up all the way back to your core data center. All of the backup policies are fully automated and implemented in the distributed file system that is the heart and soul of the SimpliVity installation. In addition to that, customers now do not have to buy any third-party software; backup is fully integrated in the architecture, and it's WAN-efficient. On top of that, you can back up straight to the cloud, or you can back up to a central, high-end backup repository in your data center. And last but not least, we have a lot of customers that are pushing the limit in their application transformation. So where previously we were running VMware deployments out at the edge sites, we have now also added both stateful and stateless container orchestration, as well as data protection capabilities for containerized applications out at the edge. We have a lot of customers that are now deploying containers, rapid manufacturing containers, to process data out at remote sites. And that allows us to not only protect those stateful applications, but back them up into the central data center.
>>I saw in that chart a line on "no egress fees." That's a pain point for a lot of CEOs that I talk to; they grit their teeth at those fees. Can you comment on that?
>>Excellent, excellent question. I'm so glad you brought that up, so let me pick that up. Along with SimpliVity, you know, we have the whole GreenLake as-a-service offering as well, right? What that means, Dave, is that we can literally provide our customers edge as a service. And when you complement that with Aruba wired and wireless infrastructure that goes at the edge, and the hyperconverged infrastructure, as part of SimpliVity, that goes at the edge: you know, one of the things that was missing with cloud backups is that while backing up to the cloud is a great thing, any time you restore from the cloud, there is that egress fee, right? So as part of the GreenLake offering, we now have a cloud backup service natively offered as part of HPE, which is included in your HPE SimpliVity edge-as-a-service offering. So now not only can you back up into the cloud from your edge sites, you can also restore back without any egress fees from HPE's data protection service. Either you can restore back onto your data center, or you can restore back towards the edge site. And because the infrastructure is so easy to deploy and centrally lifecycle-manage, it's very mobile; if you want to deploy and recover to a different site, you can also do that.
>>Nice. Hey, Omer, can you double-click a little bit on some of the use cases that customers are choosing SimpliVity for, particularly at the edge, and maybe talk about why they're choosing HPE?
>>One of the major use cases that we see, Dave, is obviously easy-to-deploy and easy-to-manage in a standardized form factor, right? For example, we have a large retailer with hundreds of stores across the US. You cannot send service staff to each of these stores, and their data center is essentially just a closet for these guys, right? So how do you get a standardized deployment? A standardized deployment from the data center, which you can literally push out, where you connect a network cable and a power cable and you're up and running; and then automated backup, elimination of backup appliances and state at the edge sites, and DR from the edge sites into the data center. So that's one of the big use cases: to rapidly deploy new stores, bring them up in a standardized configuration, both from a hardware and a software perspective, and the ability to back up and recover that instantly. That's one large use case. The second use case that we see actually refers to a comment that you made in your opener, Dave, which is that a lot of these customers are generating a lot of the data at the edge. This is robotics automation that is going up in manufacturing sites; these are racing teams that are out at the edge doing post-processing of their cars' data. At the same time, there are disaster recovery use cases, where you have camp sites and local agencies that go out there for humanitarian work, and they move from one site to the other; it's a very, very mobile architecture that they need. So those are just a few cases where we're deployed. There's a lot of data collection, and there's a lot of mobility involved in these environments, so you need to be quick to set up, quick to back up, quick to recover, and essentially you're off to your next move.
>>You seem pretty pumped up about this new innovation, and why not?
>>It is, you know, especially because it has been thought through with edge in mind, and edge has to be mobile, it has to be simple. And especially as we have lived through this pandemic, which I hope we see the tail end of in 2021, or 2022 at the latest, one of the most common use cases that we saw, and this was an accidental discovery, is that a lot of the retail sites could not go out to service their stores, because, you know, mobility is limited in these strange times that we live in. So from a central data center, you're able to deploy applications and you're able to recover applications. And a lot of our customers said, "Hey, I don't have enough space in my data center to back up to. Do you have another option?" So then we rolled out this update release to SimpliVity where, from the edge site, you can now directly back up to our backup service, which is offered on a consumption basis to the customers, and they can recover that anywhere they want.
>>Fantastic. Omer, thanks so much for coming on the program today.
>>It's a pleasure, Dave. Thank you.
>>All right, awesome to see you. Now let's hear from Red Bull Racing, an HPE customer that's actually using SimpliVity at the edge.
>>The countdown really begins when the checkered flag drops on a Sunday. It's always about this race to manufacture the next designs, to make them more adapted to the next circuit and to run them. Of course, if we can't manufacture the next component in time, all that will be wasted.
>>Okay, we're back with Matt Cadieux, who is the CIO of Red Bull Racing. Matt, it's good to see you again.
>>Great to see you.
>>Hey, we're going to dig into a real-world example of using data at the edge, in near real time, to gain insights that really lead to competitive advantage. But first, Matt, tell us a little bit about Red Bull Racing and your role there.
>>Sure. So I'm the CIO at Red Bull Racing, and we're based in Milton Keynes in the UK. The main job for us is to design a race car, to manufacture the race car, and then to race it around the world. As CIO, the IT group needs to develop the applications used in design, manufacturing and racing. We also need to supply all the underlying infrastructure and also manage security. So it's a really interesting environment that's all about speed. This season we have 23 races, and we need to tear the car apart and rebuild it to a unique configuration for every individual race. We're also designing and making components targeted for races, so 20-odd immovable deadlines, and this big evolving prototype to manage with our car. But we're also improving all of our tools and methods and software that we use to design and make and race the car. We have a big can-do attitude in the company around continuous improvement, and the expectations are that we continuously make the car faster, that we're winning races, that we improve our methods in the factory and our tools. So for IT it's really unique in that we can be part of that journey and provide a better service. It's also a big challenge to provide that service and to give the business the agility it needs. So my job is really to make sure we have the right staff, the right partners, the right technical platforms, so we can live up to expectations.
>>That teardown and rebuild for 23 races: is that because each track has its own unique signature that you have to tune to, or are there other factors involved?
>>Yeah, exactly. Every track has a different shape. Some have lots of straights, some have lots of curves, and lots are in between. The track surface is very different, and the impact that has on tires, the temperature and the climate are very different. Some are hilly, some have big curves that affect the dynamics of the car. So in order to win, you need to micromanage everything and optimize it for any given race track.
>>Talk about some of the key drivers in your business, and some of the key apps that give you a competitive advantage to help you win races.
>>Yeah. So in our business, everything is all about speed. The car obviously needs to be fast, but all of our business operations need to be fast too. We need to be able to design a car, and it's all done in the virtual world, but the virtual simulations and designs need to correlate to what happens in the real world. So all of that requires a lot of expertise to develop the simulations and the algorithms, and to have all the underlying infrastructure that runs them quickly and reliably. In manufacturing, we have cost caps and financial controls by regulation; we need to be super efficient and control material and resources, so ERP and MES systems are running and helping us do that. At the race track itself, we have hundreds of decisions to make on a Friday and Saturday as we're fine-tuning the final configuration of the car, and here again we rely on simulations and analytics to help do that. And then during the race we have split seconds, literally seconds, to alter our race strategy if an event happens. So if there's an accident and the safety car comes out, or the weather changes, we revise our tactics, and we're running Monte Carlo simulations, for example, where experienced engineers use simulations to make a data-driven decision, hopefully a better one, and faster than our competitors. All of that needs IT to work at a very high level.
>>It's interesting. As a lay person, historically, when I think about technology and car racing, of course I think about the mechanical aspects of a self-propelled vehicle, the electronics and the like, but not necessarily the data. But the data's always been there, hasn't it? Maybe in the form of tribal knowledge: somebody who knows the track and where the hills are, experience and gut feel. But today you're digitizing it and processing it close to real time.
>>Exactly right. The car's instrumented with sensors; we post-process it, video, image analysis, and we're looking at our car and our competitors' cars. So there's a huge amount of very complicated models that we're using to optimize our performance and to continuously improve our car. The data, and the applications that can leverage it, are really key, and that's a critical success factor for us.
>>So let's talk about your data center at the track, if I can call it that. Paint a picture for us: what does that look like?
>>So we have to send a lot of equipment to the track, at the edge. Even though we have a really great wide-area network link back to the factory, and there are cloud resources, a lot of the tracks are very old. You don't have hardened infrastructure, you don't have ducts that protect cabling, for example, and you can lose connectivity at remote locations. So the applications we need to operate the car and to make really critical decisions all need to be at the edge, where the car operates. Historically, we had three racks of legacy equipment, and it was really hard to manage and to make changes. It was too inflexible, there were multiple panes of glass, and it was too slow; it didn't run our applications quickly. It was also too heavy and took up too much space when you're cramped into a garage with lots of environmental constraints. We had introduced hyperconvergence into the factory and seen a lot of great benefits, and when the time came to refresh our infrastructure at the track, we stepped back and said there's a lot smarter way of operating: we can get rid of all the slow, inflexible, expensive legacy and introduce hyperconvergence. And we saw really excellent benefits for doing that. We saw a 3x speed-up for a lot of our applications. So here, where we're post-processing data and we have to make decisions about race strategy, time is of the essence, and a 3x reduction in processing time really matters. We also were able to go from three racks of equipment down to two racks, and the storage efficiency of the HPE SimpliVity platform, with 20-to-1 ratios, allowed us to eliminate a rack. That actually saved a hundred thousand dollars a year in freight costs by shipping less equipment. Then there are things like backup. Mistakes happen; sometimes a user makes a mistake. For example, a race engineer could load the wrong data map into one of our simulations, and we can restore that VDI through SimpliVity backup in 90 seconds. That enables engineers to focus on the car and make better decisions without having downtime. And we send two IT guys to every race. They're managing 60 users, a really diverse environment, juggling a lot of balls, and having a simple management platform like HPE SimpliVity allows them to be very effective and to work quickly. So all of those benefits were a huge step forward relative to the legacy infrastructure that we used to run at the edge.
>>Yeah. So you had the nice Petri dish in the factory. It sounds like your number one KPI is speed, to help shave seconds of time, but also cost, and just the simplicity of setting up the infrastructure.
>>Yeah, it's speed, speed, speed. We want applications to absolutely fly, to get to actionable results quicker, to get answers from our simulations quicker. The other area where speed is really critical: our applications are also evolving prototypes, and the models are always getting bigger, the simulations are getting bigger, and they need more and more resource. Being able to spin up resource and provision things without being a bottleneck is a big challenge, and SimpliVity gives us the means of doing that.
>>So did you consider any other options, or, because you had the factory knowledge, was HCI very clearly the option? What did you look at?
>>Yeah, so we have over five years of experience in the factory, and we eliminated all of our legacy infrastructure five years ago, with the benefits I've described. At the track, we have a three-year operational life cycle for our equipment. 2017 was the last year we had legacy; as we were building for 2018, it was obvious that hyperconverged was the right technology to introduce, and we'd had years of experience in the factory already. The benefits that we see with hyperconverged actually matter even more at the edge, because our operations are so much more pressurized and time is even more of the essence. Speeding everything up at the really pointy end of our business was really critical. It was an obvious choice.
>>Why SimpliVity? Why'd you choose HPE SimpliVity?
>>Yeah. So when we first heard about hyperconverged, way back, in the factory we had a legacy infrastructure: overly complicated, too slow, too inflexible, too expensive. We stepped back and said there has to be a smarter way of operating. We went out and challenged our technology partners, and we learned about hyperconvergence, to find out whether the hype was real or not. So we undertook some PoCs and benchmarking, and the PoCs were really impressive. On all of these speed and agility benefits, we saw that HPE, for our use cases, was the clear winner in the benchmarks. So based on that, we made an initial investment in the factory: we moved about 150 VMs and 150 VDI into it. And then, as we've seen all the benefits, we've successively invested, and we now have an estate in the factory of about 800 VMs and about 400 VDI. So it's been a great platform, and it's allowed us to really push boundaries and give the business the service it expects.
>>So was the time in which you're able to go from data to insight to recommendation compressed? You kind of indicated that, but...
>>So we haul telemetry from the car and we post-process it, and that post-processing time is very time consuming. You know, we went from eight or nine minutes for some of the simulations down to just two minutes, so we saw big, big reductions in time. Ultimately, that meant an engineer could understand what the car was doing during a practice session, recommend a tweak to the configuration or setup of it, and just get more actionable insight quicker. And it ultimately helps get a better car quicker.
>>Such a great example. How are you guys feeling about the season, Matt? What's the team's sentiment?
>>Yeah, I think we're optimistic. We have a new driver lineup: we have Max Verstappen, who carries on with the team, and Sergio joins the team. So we're really excited about this year, and we want to go and win races.
>>Great. Matt, good luck this season and going forward, and thanks so much for coming back on theCUBE. Really appreciate it.
>>It's my pleasure. Great talking to you again.
>>Okay, now we're going to bring back Omer for a quick summary. So keep it right there.
>>Without having solutions from HPE, we can't drive those simulations: CFD, aerodynamics. That would undermine the simulations. Being software-defined, we can bring new apps into play; if we need new storage, networking, all of that can be highly agile. It is a hugely beneficial partnership for us. We're able to be at the cutting edge of technology in a highly pressurized environment. There is no bigger challenge than Formula One.
>>Okay, we're back with Omer. Hey, what did you think about that interview with Matt?
>>Great. I have to tell you, I'm a big Formula One fan, and they are one of my favorite customers. So, you know, obviously one of the biggest use cases, as you saw for Red Bull Racing, is trackside deployments. There are now 22 races in a season. These guys are jumping from one city to the next; they've got to pack up, move to the next city, and set up the infrastructure very, very quickly. An average Formula One car is running a thousand-plus sensors that are generating a ton of data trackside. It needs to be collected very quickly, it needs to be processed very quickly, and then sometimes, believe it or not, snapshots of this data need to be sent back to the Red Bull factory at the data center. What does this all need? It needs reliability. It needs compute power in a very short form factor. And it needs agility: quick to set up, quick to go, quick to recover. And then in post-processing, they need CPU density so they can pack more VMs out at the edge to be able to do that processing. We accomplish that for the Red Bull Racing guys with basically two SimpliVity nodes that are running trackside and moving with them from one race to the next. And every time those SimpliVity nodes connect up to the data center, over a link or a satellite, they're backing up to their data center and sending snapshots of data back, essentially making their job a whole lot easier, where they can focus on racing and not on troubleshooting virtual machines.
>>Red Bull Racing and HPE SimpliVity: great example. It's agile, it's cost efficient, and it shows a real impact. Thank you very much, Omer. I really appreciate those summary comments.
>>Thank you, Dave. Really appreciate it.
>>All right, and thank you for watching. This is Dave Vellante.

Published Date : Mar 30 2021
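Matt's description of using Monte Carlo simulation to make split-second, data-driven race-strategy calls can be sketched in miniature. The Python below is purely illustrative: the lap times, tire-degradation rate, safety-car probability, and the helper names `simulate_race` and `best_pit_lap` are all invented for this sketch, not Red Bull Racing's actual models or numbers.

```python
import random

def simulate_race(pit_lap, laps=57, base_lap=90.0, tire_deg=0.08,
                  pit_loss=21.0, safety_car_prob=0.01, seed=None):
    """One Monte Carlo rollout of total race time (seconds) for a
    given pit-stop lap. Tire wear adds time each lap since the last
    stop; a pit stop taken under a randomly appearing safety car
    costs roughly half as much. All constants are made up."""
    rng = random.Random(seed)
    total, last_stop, safety_car = 0.0, 0, False
    for lap in range(1, laps + 1):
        if rng.random() < safety_car_prob:
            safety_car = True
        total += base_lap + tire_deg * (lap - last_stop)
        if lap == pit_lap:
            total += pit_loss * (0.5 if safety_car else 1.0)
            last_stop = lap
            safety_car = False
    return total

def best_pit_lap(candidates, runs=2000):
    """Average many rollouts per candidate strategy and pick the
    fastest expected total race time."""
    avg = {p: sum(simulate_race(p, seed=i) for i in range(runs)) / runs
           for p in candidates}
    return min(avg, key=avg.get)

if __name__ == "__main__":
    print(best_pit_lap([10, 20, 30, 40]))
```

The shape of the idea is what Matt describes: run many randomized rollouts per candidate strategy, then act on the strategy with the best expected outcome, fast enough to matter during a session.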



Omer Asad, HPE ft Matt Cadieux, Red Bull Racing full v1 (UNLISTED)


 

(upbeat music) >> Edge computing is projected to be a multi-trillion dollar business. It's hard to really pinpoint the size of this market, let alone fathom the potential of bringing software, compute, storage, AI and automation to the edge and connecting all that to clouds and on-prem systems. But what is the edge? Is it factories? Is it oil rigs, airplanes, windmills, shipping containers, buildings, homes, race cars? Well, yes, and so much more. And what about the data? For decades we've talked about the data explosion. I mean, it's mind-boggling, but guess what: we're going to look back in 10 years and laugh at what we thought was a lot of data in 2020. Perhaps the best way to think about the edge is not as a place, but as the most logical opportunity to process the data, and maybe it's the first opportunity to do so, where it can be decrypted and analyzed at very low latencies. That defines the edge. And so by locating compute as close as possible to the sources of data, to reduce latency and maximize your ability to get insights and return them to users quickly, maybe that's where the value lies. Hello everyone and welcome to this CUBE Conversation. My name is Dave Vellante, and with me to noodle on these topics is Omer Asad, VP and GM of Primary Storage and Data Management Services at HPE. Hello Omer, welcome to the program. >> Thanks Dave. Thank you so much. Pleasure to be here. >> Yeah. Great to see you again. So how do you see the edge in the broader market shaping up? >> Dave, I think that's a super important question. I think your ideas are quite aligned with how we think about it. I personally think enterprises are accelerating their sort of digitization and asset collection and data collection. Typically, especially in a distributed enterprise, they're trying to get to their customers. They're trying to minimize the latency to their customers.
So especially if you look across industries, manufacturing, which has distributed factories all over the place, they are going through a lot of factory transformations where they're digitizing their factories. That means a lot more data is now being generated within their factories. A lot of robot automation is going on, and that requires a lot of compute power to go out to those particular factories, which are going to generate their data out there. We've got insurance companies and banks that are meeting, interviewing and gathering more customers out at the edge. They need a lot more distributed processing out at the edge. What this is requiring is, and what we've seen across analysts, a common consensus is that more than 50% of an enterprise's data, especially if they operate globally around the world, is going to be generated out at the edge. What does that mean? New data is generated at the edge that needs to be stored. It needs to be processed. Data which is not required needs to be thrown away or classified as not important. And then it needs to be moved for DR purposes, either to a central data center or just to another site. So overall, in order to give the best possible experience for manufacturing, retail, especially in distributed enterprises, people are generating more and more data-centric assets out at the edge. And that's what we see in the industry. >> Yeah. We're definitely aligned on that. Those are some great points, and so now, okay, you think about all this diversity: what's the right architecture for these multi-site deployments, ROBO, edge? How do you look at that? >> Oh, excellent question, Dave. Every customer that we talk to wants simplicity, and no pun intended, because SimpliVity resonates with a simple, edge-centric architecture, right? Let's take a few examples. You've got large global retailers; they have hundreds of retail stores around the world that are generating data, that are producing data.
Then you've got insurance companies, then you've got banks. So when you look at a distributed enterprise, how do you deploy in a manner that is simple and easy to deploy, easy to lifecycle, and easy to mobilize equipment out at the edge? What are some of the challenges that these customers deal with? You don't want to send a lot of IT staff out there, because that adds cost. You don't want to have islands of data and islands of storage in remote sites, because that adds a lot of state outside of the data center that needs to be protected. And then last but not least, how do you push lifecycle-based applications, new applications, out at the edge in a very simple-to-deploy manner? And how do you protect all this data at the edge? So the right architecture, in my opinion, needs to be extremely simple to deploy: storage, compute and networking out towards the edge in a hyperconverged environment. So we agree upon that. It's a very simple-to-deploy model, but then comes: how do you deploy applications on top of that? How do you manage these applications on top of that? How do you back up these applications back towards the data center? All of this keeping in mind that it has to be as zero-touch as possible. We at HPE believe that it needs to be extremely simple. Just give me two cables, a network cable and a power cable, fire it up, connect it to the network, push its state from the data center, and back up its state from the edge back into the data center. Extremely simple. >> It's got to be simple, 'cause you've got so many challenges. You've got physics that you have to deal with, you have latency to deal with. You've got RPO and RTO. What happens if something goes wrong? You've got to be able to recover quickly. So that's great. Thank you for that. Now you guys have news. What is new from HPE in this space? >> Excellent question, great.
So from a deployment perspective, HPE SimpliVity is just gaining, it's exploding like crazy, especially as distributed enterprises adopt it as their standardized edge architecture, right? It's an HCI box that has storage, compute and networking all in one. But now what we have done is, not only can you deploy applications all from your standard vCenter interface from a data center, what we have now added is the ability to back up to the cloud right from the edge. You can also back up all the way back to your core data center. All of the backup policies are fully automated and implemented in the distributed file system that is the heart and soul of the SimpliVity installation. In addition to that, customers now do not have to buy any third-party software. Backup is fully integrated in the architecture, and it's efficient. In addition to that, now you can back up straight to the cloud. You can back up to a central high-end backup repository, which is in your data center. And last but not least, we have a lot of customers that are pushing the limit in their application transformation. So not only were we previously enabling VMware deployments out at the edge sites, we have now also added both stateful and stateless container orchestration, as well as data protection capabilities for containerized applications out at the edge. So we have a lot of customers that are now deploying containers, rapidly standing up containers to process data out at remote sites. And that allows us to not only protect those stateful applications but back them up into the central data center. >> I saw in that chart there was a line: no egress fees. That's a pain point for a lot of CIOs that I talk to. They grit their teeth at those fees. So can you comment on that? >> Excellent question. I'm so glad you brought that up and picked up on that point. So along with SimpliVity, we have the whole GreenLake as-a-service offering as well, right?
So what that means, Dave, is that we can literally provide our customers edge as a service, when you complement that with Aruba wired and wireless infrastructure that goes at the edge and the hyperconverged infrastructure as part of SimpliVity that goes at the edge. One of the things that was missing with cloud backups is that every time you back up to the cloud, which is a great thing by the way, anytime you restore from the cloud, there is that egress fee, right? So as a result of that, as part of the GreenLake offering, we have a cloud backup service natively offered as part of HPE, which is included in your HPE SimpliVity edge-as-a-service offering. So now not only can you back up into the cloud from your edge sites, but you can also restore back without any egress fees from HPE's data protection service. Either you can restore it back onto your data center, or you can restore it back towards the edge site. And because the infrastructure is so easy to deploy and centrally lifecycle-manage, it's very mobile. So if you want to deploy and recover to a different site, you could also do that. >> Nice. Hey, Omer, can you double-click a little bit on some of the use cases that customers are choosing SimpliVity for, particularly at the edge, and maybe talk about why they're choosing HPE? >> Excellent question. So one of the major use cases that we see, Dave, is obviously easy to deploy and easy to manage in a standardized form factor, right? A lot of these customers, like for example, we have a large retailer with hundreds of stores across the US, right? Now you cannot send service staff to each of these stores, and their data center is essentially just a closet for these guys, right? So now how do you have a standardized deployment?
So: standardized deployment from the data center, which you can literally push out, where you can connect a network cable and a power cable and you're up and running, and then automated backup and DR from the edge sites into the data center, eliminating backup infrastructure and state at the edge. So that's one of the big use cases: to rapidly deploy new stores, bring them up in a standardized configuration, both from a hardware and a software perspective, and the ability to back up and recover that instantly. That's one large use case. The second use case that we see actually refers to a comment that you made in your opener, Dave: a lot of these customers are generating a lot of the data at the edge. This is robotics automation that is going on in manufacturing sites. There are racing teams out at the edge doing post-processing of their cars' data. At the same time, there are disaster recovery use cases, where you have camp sites and local agencies that go out there for humanity's benefit. And they move from one site to the other. It's a very, very mobile architecture that they need. So those are just a few cases where we were deployed. There was a lot of data collection and there was a lot of mobility involved in these environments, so you need to be quick to set up, quick to back up, quick to recover, and essentially you're off to your next move. >> You seem pretty pumped up about this new innovation, and why not? >> It is, especially because it has been thought through with edge in mind, and edge has to be mobile. It has to be simple. And especially as we have lived through this pandemic, which I hope we see the tail end of in at least 2021, or at least 2022. One of the most common use cases that we saw, and this was an accidental discovery: a lot of the retail sites could not go out to service their stores, because mobility is limited in these strange times that we live in. So from a central data center you're able to deploy applications. You're able to recover applications.
And a lot of our customers said, hey, I don't have enough space in my data center to back up to. Do you have another option? So then we rolled out this update release to SimpliVity where, from the edge site, you can now directly back up to our backup service, which is offered on a consumption basis to the customers, and they can recover that anywhere they want. >> Fantastic. Omer, thanks so much for coming on the program today. >> It's a pleasure, Dave. Thank you. >> All right. Awesome to see you. Now let's hear from Red Bull Racing, an HPE customer that's actually using SimpliVity at the edge. (engine revving) >> Narrator: Formula One is a constant race against time, chasing tenths of seconds. (upbeat music) >> Okay. We're back with Matt Cadieux, who is the CIO of Red Bull Racing. Matt, it's good to see you again. >> Great to see you, Dave. >> Hey, we're going to dig in to a real-world example of using data at the edge in near real time to gain insights that really lead to competitive advantage. But first, Matt, tell us a little bit about Red Bull Racing and your role there. >> Sure. So I'm the CIO at Red Bull Racing, and at Red Bull Racing we're based in Milton Keynes in the UK. And the main job for us is to design a race car, to manufacture the race car and then to race it around the world. So as CIO, the IT group needs to develop the applications used in design, manufacturing and racing. We also need to supply all the underlying infrastructure and also manage security. So it's a really interesting environment that's all about speed. So this season we have 23 races, and we need to tear the car apart and rebuild it to a unique configuration for every individual race. And we're also designing and making components targeted for specific races. So 23 immovable deadlines, this big evolving prototype to manage with our car, but we're also improving all of our tools and methods and software that we use to design, make and race the car.
So we have a big can-do attitude at the company around continuous improvement. And the expectations are that we continue to make the car faster, that we're winning races, that we improve our methods in the factory and our tools. And so for IT it's really unique in that we can be part of that journey and provide a better service. It's also a big challenge to provide that service and to give the business the agility it needs. So my job is really to make sure we have the right staff, the right partners, the right technical platforms, so we can live up to expectations. >> And Matt, that tear-down and rebuild for 23 races, is that because each track has its own unique signature that you have to tune to, or are there other factors involved? >> Yeah, exactly. Every track has a different shape. Some have lots of straights, some have lots of curves, and lots are in between. The track surface is very different, and the impact that has on tires, the temperature and the climate are very different. Some are hilly, some have big curbs that affect the dynamics of the car. So with all that, in order to win you need to micromanage everything and optimize it for any given race track. >> COVID has of course been brutal for sports. What's the status of your season? >> So this season we knew that COVID was here, and we're doing 23 races knowing we have COVID to manage. And as a premium sporting team, operating in bubbles, we've put health and safety and social distancing into our environment, and we're able to operate by doing things in a safe manner. We have some special exemptions in the UK. So for example, when people return from overseas they do not have to quarantine for two weeks, but they get tested multiple times a week and we know they're safe. So we're racing, we're dealing with all the hassle that COVID gives us.
And we are really hoping for a return to normality sooner rather than later, where we can get fans back at the track and really go racing and have the spectacle where everyone enjoys it. >> Yeah. That's awesome. So important for the fans, but also all the employees around that ecosystem. Talk about some of the key drivers in your business and some of the key apps that give you competitive advantage to help you win races. >> Yeah. So in our business, everything is all about speed. So the car obviously needs to be fast, but also all of our business operations need to be fast. We need to be able to design a car, and it's all done in the virtual world, but the virtual simulations and designs need to correlate to what happens in the real world. So all of that requires a lot of expertise to develop the simulations and the algorithms, and to have all the underlying infrastructure that runs them quickly and reliably. In manufacturing we have cost caps and financial controls by regulation. We need to be super efficient and control material and resources. So ERP and MES systems are running and helping us do that. And at the race track itself, in terms of speed, we have hundreds of decisions to make on a Friday and Saturday as we're fine-tuning the final configuration of the car. And here again we rely on simulations and analytics to help do that. And then during the race we have split seconds, literally seconds, to alter our race strategy if an event happens. So if there's an accident and the safety car comes out, or the weather changes, we revise our tactics, and we're running Monte Carlo, for example, and using experienced engineers with simulations to make a data-driven decision, and hopefully a better and faster one than our competitors. All of that needs IT to work at a very high level. >> Yeah, it's interesting.
I mean, as a layperson, historically when I think about technology in car racing, of course I think about the mechanical aspects of a self-propelled vehicle, the electronics and the like, but not necessarily the data. But the data's always been there, hasn't it? I mean, maybe in the form of tribal knowledge, if you're somebody who knows the track and where the hills are, and experience and gut feel. But today you're digitizing it and processing it, close to real time. It's amazing. >> Exactly right. Yeah. The car's instrumented with sensors, we post-process, and we're doing video image analysis, and we're looking at our car and competitors' cars. So there's a huge amount of very complicated models that we're using to optimize our performance and to continuously improve our car. Yeah, the data and the applications that leverage it are really key, and that's a critical success factor for us. >> So let's talk about your data center at the track, if you will, if I can call it that. Paint a picture for us. What does that look like? >> So we have to send a lot of equipment to the track, at the edge. And even though we have a really great wide-area network link back to the factory, and there are cloud resources, a lot of the tracks are very old. You don't have hardened infrastructure, you don't have ducts that protect cabling, for example, and you can lose connectivity to remote locations. So the applications we need to operate the car and to make really critical decisions, all of that needs to be at the edge where the car operates. So historically we had three racks of equipment, like I said, infrastructure, and it was really hard to manage, to make changes. It was too inflexible. There were multiple panes of glass, and it was too slow. It didn't run our applications quickly. It was also too heavy and took up too much space when you're cramped into a garage with lots of environmental constraints.
So we'd introduced hyperconvergence into the factory and seen a lot of great benefits. And when it came time to refresh our infrastructure at the track, we stepped back and said, there's a lot smarter way of operating. We can get rid of all the slow, inflexible, expensive legacy and introduce hyperconvergence. And we saw really excellent benefits for doing that. We saw up to a 3x speedup for a lot of our applications. So here, where we're post-processing data and we have to make decisions about race strategy, time is of the essence, and the 3x reduction in processing time really matters. We were also able to go from three racks of equipment down to two racks, and the storage efficiency of the HPE SimpliVity platform, with 20-to-one ratios, allowed us to eliminate a rack. And that actually saved $100,000 a year in freight costs by shipping less equipment. Things like backup: mistakes happen. Sometimes the user makes a mistake. So for example, a race engineer could load the wrong data map into one of our simulations, and we could restore that data through SimpliVity backup in 90 seconds. And this enables engineers to focus on the car and make better decisions without having downtime. And we send two IT guys to every race. They're managing 60 users in a really diverse environment, juggling a lot of balls, and having a simple management platform like HPE SimpliVity gives us allows them to be very effective and to work quickly. So all of those benefits were a huge step forward relative to the legacy infrastructure that we used to run at the edge. >> Yeah. So you had the nice Petri dish in the factory. So it sounds like your goals, obviously the number one KPI is speed, to help shave seconds, but also cost, just the simplicity of setting up the infrastructure. >> That's exactly right. It's speed, speed, speed. So we want applications to absolutely fly, to get to actionable results quicker, get answers from our simulations quicker.
The other area where speed's really critical is that our applications are also evolving prototypes, and the models are always getting bigger. The simulations are getting bigger and they need more and more resource, and being able to spin up resource and provision things without being a bottleneck is a big challenge, and SimpliVity gives us the means of doing that. >> So did you consider any other options, or was it because you had the factory knowledge that HCI was very clearly the option? What did you look at? >> Yeah, so we have over five years of experience in the factory, and we eliminated all of our legacy infrastructure five years ago. And the benefits I've described at the track, we saw those in the factory. At the track we have a three-year operational life cycle for our equipment. 2017 was the last year we had legacy; as we were building for 2018, it was obvious that hyperconverged was the right technology to introduce, and we'd had years of experience in the factory already. And the benefits that we see with hyperconverged actually matter even more at the edge, because our operations are so much more pressurized. Time is even more of the essence. And so speeding everything up at the really pointy end of our business was really critical. It was an obvious choice. >> Why SimpliVity? Why'd you choose HPE SimpliVity? >> Yeah. So when we first heard about hyperconvergence, way back, in the factory we had a legacy infrastructure that was overly complicated, too slow, too inflexible, too expensive. And we stepped back and said there has to be a smarter way of operating. We went out and challenged our technology partners, and we learned about hyperconvergence, to find out whether the hype was real or not. So we underwent some PoCs and benchmarking, and the PoCs were really impressive. And all these speed and agility benefits we saw, and HPE for our use cases was the clear winner in the benchmarks. So based on that we made an initial investment in the factory.
We moved about 150 VMs and 150 VDIs onto it. And then as we've seen all the benefits, we've successively invested, and we now have an estate in the factory of about 800 VMs and about 400 VDIs. So it's been a great platform, and it's allowed us to really push boundaries and give the business the service it expects. >> Awesome, fun stories. Just coming back to the metrics for a minute: so you're running Monte Carlo simulations in near real time, and essentially, if I understand it, that's what-ifs and the probability of the outcomes. And then the human's got to say, okay, do this, right? Was the time in which you were able to go from data to insight to recommendation compressed? You kind of indicated that. >> Yeah, that was accelerated. And so in that use case, what we're trying to do is predict the future. Before any event happens, you're doing what-ifs: if it were to happen, what would you probabilistically do? That simulation we've been running for a while, but it gets better and better as we get more knowledge. And so we were able to accelerate that with SimpliVity, but there are other use cases too. So we also have telemetry from the car, and we post-process it. And that reprocessing time is very time-consuming, and we went from nine, eight minutes for some of the simulations down to just two minutes. So we saw big, big reductions in time. And ultimately that meant an engineer could understand what the car was doing in a practice session, recommend a tweak to the configuration or setup of it, and just get more actionable insight quicker. And it ultimately helps get a better car quicker. >> Such a great example. How are you guys feeling about the season, Matt? What's the team's sentiment? >> I think we're optimistic. We think from our simulations that we have a great car, and we have a new driver lineup.
We have Max Verstappen, who carries on with the team, and Sergio Pérez joins the team. So we're really excited about this year, and we want to go and win races. And I think with COVID, people are just itching to get back to a little degree of normality, and going racing again, even though there are no fans, gets us into a degree of normality. >> That's great, Matt. Good luck this season and going forward, and thanks so much for coming back on theCUBE. Really appreciate it. >> It's my pleasure. Great talking to you again. >> Okay. Now we're going to bring back Omer for a quick summary. So keep it right there. >> Narrator: That's where the data comes face to face with the real world. >> Narrator: Working with Hewlett Packard Enterprise is a hugely beneficial partnership for us. We're able to be at the cutting edge of technology in a highly technical, highly stressed environment. There is no bigger challenge than Formula One. (upbeat music) >> Being in the car and driving on the limit, that is the best thing out there. >> Narrator: It's that innovation and creativity that ultimately achieves winning. >> Okay. We're back with Omer. Hey, what did you think about that interview with Matt? >> Great. I have to tell you, I'm a big Formula One fan, and they are one of my favorite customers. So obviously one of the biggest use cases, as you saw, for Red Bull Racing is trackside deployments. There are now 22 races in a season. These guys are jumping from one city to the next; they've got to pack up, move to the next city, set up the infrastructure very, very quickly. An average Formula One car is running a thousand-plus sensors that are generating a ton of data trackside that needs to be collected very quickly. It needs to be processed very quickly, and then sometimes, believe it or not, snapshots of this data need to be sent back to the Red Bull factory at the data center. What does this all need? It needs reliability.
It needs compute power in a very short form factor. And it needs agility: quick to set up, quick to go, quick to recover. And then in post-processing they need CPU density so they can pack more VMs out at the edge to be able to do that processing. And we accomplished that for the Red Bull Racing guys with basically two SimpliVity nodes that are running trackside and moving with them from one race to the next race to the next race. And every time those SimpliVity nodes connect up to the data center, connect up to a satellite link, they're backing up to their data center. They're sending snapshots of data back to the data center, essentially making their job a whole lot easier, where they can focus on racing and not on troubleshooting virtual machines. >> Red Bull Racing and HPE SimpliVity. Great example. It's agile, it's cost-efficient and it shows real impact. Thank you very much, Omer. I really appreciate those summary comments. >> Thank you, Dave. Really appreciate it. >> All right. And thank you for watching. This is Dave Vellante for theCUBE. (upbeat music)
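The race-strategy "what ifs" Matt describes, simulating a possible safety-car event many times over and comparing tactics before it happens, can be sketched as a toy Monte Carlo in Python. This is purely illustrative and not Red Bull Racing's actual model: the lap times, pit-stop losses and safety-car probability below are invented numbers chosen only to show the shape of the technique.

```python
import random

def simulate_race(laps_left, pit_now, safety_car_prob=0.15, trials=10_000):
    """Toy Monte Carlo: average total remaining race time for a tactic
    (pit immediately vs. stay out) under a random safety-car event.
    All constants are invented for illustration."""
    def one_trial():
        base_lap = 90.0      # seconds per lap on worn tires (assumed)
        fresh_gain = 0.8     # seconds/lap gained on fresh tires (assumed)
        pit_loss = 22.0      # time lost by a pit stop at racing speed (assumed)
        pit_loss_sc = 12.0   # cheaper stop while the field runs slowly (assumed)
        total = 0.0
        pitted = pit_now
        if pit_now:
            total += pit_loss
        for _ in range(laps_left):
            # spread the overall safety-car probability across the laps
            safety_car = random.random() < safety_car_prob / laps_left
            if safety_car and not pitted:
                total += pit_loss_sc  # react to the event with a cheap stop
                pitted = True
            total += base_lap - (fresh_gain if pitted else 0.0)
        return total
    return sum(one_trial() for _ in range(trials)) / trials

if __name__ == "__main__":
    random.seed(1)
    stay_out = simulate_race(20, pit_now=False)
    pit = simulate_race(20, pit_now=True)
    print(f"stay out: {stay_out:.1f}s  pit now: {pit:.1f}s")
    print("recommend:", "pit now" if pit < stay_out else "stay out")
```

The point of the exercise is the one Matt makes: the quality of the recommendation scales with how many trials you can run inside the decision window, so cutting the processing time of each batch directly buys better strategy calls.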

Published Date : Mar 5 2021



Omer Asad, HPE | HPE Discover 2020


 

>> Announcer: From around the globe, it's theCUBE, covering HPE Discover Virtual Experience. Brought to you by HPE. >> Welcome back. I'm Stu Miniman, and this is theCUBE's coverage of HPE Discover Virtual Experience. We're going to be digging into some primary storage. Happy to welcome to the program a first-time guest, Omer Asad. He's the Vice President and General Manager for both primary storage and data services with Hewlett Packard Enterprise. Omer, thanks so much for joining us. >> Thanks, Stu. Happy to be here. Thanks for the invite. >> All right, so why don't you start out and frame out for us where primary storage fits in the portfolio and your charter there. Thanks. >> Yeah. So primary storage is a combination of HPE Primera, HPE Nimble and all the associated software and data management services that go along with them. We are part of the broader HPE storage umbrella. In addition to that, we have the HPE HCI business and the HPE Complete partnerships that partner with our go-to-market partners and bring total solutions for our customers. From my perspective, I'm the general manager for Primera, Nimble and all the data management services that come along with them. So that's the primary storage portfolio, mainly centered around block services for our customers. >> Excellent. Well, Omer, you know, you've been in the storage industry for quite a while. We always know that the only constant in our industry is that things are always changing. However, here in 2020 it's a little bit more unusual than normal. Give us a little bit of insight as to, you know, how your customers are responding and how HPE is helping them during the current global pandemic. >> Obviously, you know, across the industry, across the world, it's a very difficult time, where customers are facing some challenges from our perspective.
You know, one of the biggest things we noticed in these unprecedented times is that safety is the paramount concern, for each one of our customers and for our fellow HPE workers around the globe. Access to the data center has posed some challenges for our customers, for capacity expansion purposes and for scaling up work-from-home needs. As the pandemic hit and shelter-in-place policies came into effect around the globe, access to the data center became a big problem, so a lot of vendors had to make changes and adapt their solutions. From an HPE perspective, we added a couple of policies, like a 90-day payment deferral, plus a bunch of financing capabilities, to let our customers focus on their cash flow and not worry about purchase decisions, at least from a storage perspective. In addition to that, HPE was fortunate enough to have two cloud storage services: data protection cloud services and block storage cloud services. These are cloud-based services available in conjunction with our portfolio. So one of the unique ways we were able to help our customers is that, without accessing their data center, they could shift a lot of their on-prem storage workloads, Primera snapshots, or data migrations into our cloud storage subscriptions, which we extended to them, and they were able to add just-in-time capacity to scale up their data center needs without physically entering the building. It was a very profound experience to keep our customers' operations running while we shipped capacity expansions to them as they scaled up work-from-home operations like VDI
and database scale-up, as they adapted to these uncertain times. >> Excellent. Absolutely, a spotlight has been shone on whether products and services can deliver what we need, and that flexibility you mentioned is so critically important. It's great to see things like the financial pieces to help companies through these uncertain times. So here at Discover, let's tee it up and not keep things waiting any longer. What's new for your piece of the portfolio? >> So there are a couple of new announcements that we're bringing to the market here, and one of the biggest ones I'm most excited about is autonomous operations: AIOps that we're now extending so it can actually take action for our customers. What that means is, we were first to market with AIOps with our InfoSight technology, which was built off the Nimble Storage acquisition within HPE. We then extended that to HPE 3PAR, and we're now also extending it to SimpliVity, so the footprint of this AI operations and automation capability just continues to grow. From a Primera perspective especially, we're now bringing intelligent, autonomous operations to tier zero as well, which basically means all the models and AI engines we have trained, for analytics, for workload awareness, for proactive support, and for proactive recommendations in InfoSight, are now ported into that tier of the portfolio, HPE Primera. So not only can we make recommendations on Primera, but we have also made it capable,
if the customer allows us, of going ahead and actually implementing those decisions. Primera can adjust automatically without the user having to intervene, because in tier zero applications the time available to intervene is very, very small, or non-existent. So given a certain set of parameters and a certain set of policies, HPE Primera can now execute recommendations autonomously and make real-time changes to workloads and QoS policies to keep our customers going, rather than just issuing a recommendation. Again, this is a first of its class for AI and autonomous operations: the intelligence lies not only in making recommendations, but now also in going ahead and executing those decisions from a primary storage perspective. >> Omer, with the things you were just talking about, bring us inside what's changing for the customers you're working with. Traditionally in storage, you had a storage administrator, people thinking about the speeds and feeds and all the knobs they can turn. When you start talking about autonomous and AI functions coming in, I have to expect there are different requirements from the customer and different people engaging with it. So bring us inside what you're seeing on the customer side. >> It's actually an interesting point you raise. From a customer perspective, it's always about doing more with less, and that is happening both on the training side and on the customer persona side. So simplifying the portfolio is absolutely one of the biggest asks from customers, and there's a general push towards the IT generalist from a management perspective. A lot of simplicity is desired.
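To make the "recommend versus autonomously execute" distinction Omer describes concrete, here is a minimal policy-gate sketch in Python. It is purely illustrative: the names, risk model, and thresholds are invented and do not reflect HPE's actual InfoSight implementation.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str   # e.g. "expand_cache", "rebalance_workload" (hypothetical)
    risk: str     # "low", "medium", or "high"

RISK_RANK = {"low": 0, "medium": 1, "high": 2}

def handle(rec: Recommendation, auto_execute: bool, max_risk: str = "low") -> str:
    """Execute a recommendation only when the customer has opted in and its
    risk falls within the policy threshold; otherwise surface it to the
    administrator as a recommendation instead."""
    if auto_execute and RISK_RANK[rec.risk] <= RISK_RANK[max_risk]:
        return f"executed: {rec.action}"
    return f"recommended: {rec.action}"
```

The key design point is that the same analytics pipeline can serve both modes: whether the system acts or merely advises is a customer-controlled policy, not a different engine.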
So one of the biggest things we have changed with HPE Primera is that it's the industry's first tier zero platform that gives a 100% availability guarantee, which really simplifies things from a responsibility perspective: we picked up most of the risk by giving customers that guarantee. It's the industry's first tier zero platform that is self-upgrading, self-installing, and now also autonomously executing operations on the customer's behalf. So from a monitoring perspective, from an installation perspective, and from a day-to-day operational cost perspective, it really ties into that do-more-with-less theme for customers. And then from an AIOps perspective, with predictive analytics, we were the first to bring that to market, and now we've extended it across the portfolio. On the recommendations side, not only are there proactive recommendations, but if the customer allows us, we will go ahead and execute those recommendations so that 24-by-7 mission-critical operations keep running, continuously adapting to changing conditions. And again, on the customer side, a lot more simplicity has been brought into the environment, because complete self-installing, self-automating, self-autonomous storage operations have been introduced into the tier zero environment. I think that's the biggest breakthrough: bringing that simplicity to tier zero. >> Excellent. You also mentioned that one of the things companies are leveraging now, when they need to work remotely, is the remote backup capability. Bring us the latest on what HPE is doing when it comes to cloud backup. >> So you raise an important point there.
One of the biggest things this pandemic has made IT operations staff realize is that there can be an outage of a kind where the systems might be running, but you don't have access to the data center. Shelter-in-place has been a huge learning lesson for operations teams. So, one of the things we have now introduced: HPE, with Nimble Storage earlier, was one of the first to have cloud block storage services available to customers, and we have now expanded that portfolio with HPE Cloud Volumes. When you buy HPE Primera as your tier zero offering, or HPE Nimble Storage as your midrange tier one offering, with both we now include HPE Cloud Volumes Backup services. So not only do you have access to on-prem storage, you have access to backup capabilities managed by HPE for our customers as well. In addition, the mobility technology that transfers these backups into the HPE-managed backup service is included with that piece of software. And beyond that, we have also made HPE cloud backup available to our ISV partners: whether you're Veeam or Commvault, we have plug-ins available, so our customers and our partner ecosystem can take advantage of that too. One of the biggest changes, and I want to reiterate this point, is that it's included with our portfolio from a software perspective. No physical changes need to be made at the data center, and customers can take advantage of it as soon as they start consuming the Primera or Nimble arrays along with the rest of the portfolio. >> Yeah, you know, backup to the cloud was one of the earliest cloud storage solutions that we saw.
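The snapshot-to-cloud-backup flow described above can be sketched roughly like the following toy illustration. All class and function names here are invented for the sketch; this is not HPE's Cloud Volumes Backup API.

```python
import hashlib

class CloudBackupTarget:
    """Toy stand-in for an off-site, vendor-managed backup service."""
    def __init__(self):
        self.catalog = {}

    def upload(self, snapshot_id: str, data: bytes) -> str:
        # Content-address each snapshot so a retransmitted copy is a no-op.
        digest = hashlib.sha256(data).hexdigest()
        self.catalog[snapshot_id] = digest
        return digest

def replicate(array_snapshots: dict, target: CloudBackupTarget) -> dict:
    """Ship on-prem array snapshots to the cloud target -- no one has to
    physically enter the data center for protection or capacity."""
    return {sid: target.upload(sid, blob) for sid, blob in array_snapshots.items()}
```

The point of the sketch is the operational model: the array pushes snapshots outward to a managed target, so data protection keeps working even when the building itself is inaccessible.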
It's good to hear you say that you've got integrations with partners and across your portfolio. Is there anything else you'd point out that really differentiates what HPE is doing compared to other cloud providers or other software solutions out there? >> So, a couple of things. From a data protection perspective, this entire software portfolio is bundled in. When you look at HPE Primera or HPE Nimble, one of the biggest differentiating factors is that the solution encapsulates workload awareness: whether you're running SQL, Oracle, or next-gen applications, awareness of those workloads is present inside InfoSight, and it is also present inside the arrays themselves. Along with that, the lifecycle management, data visibility, recovery capabilities, and DR capabilities, the entire ecosystem it takes to make a workload function, are built into the HPE Primera and HPE Nimble environments, with proactive support and lifecycle operational support for those workloads driven by the intelligence built in with InfoSight. So one of the largest, or the most critical, differences is that it's not a piecemeal solution: the entire portfolio, from protection to lifecycle management to DR, is completely integrated when you buy any particular aspect of the block storage. >> Excellent. Well, when we talk about primary storage, one of the big impacts on that market has been the wave of hyperconverged infrastructure. I've had conversations about everything from your GreenLake offering, a managed service with many options with HCI underneath it, and of course HPE purchased SimpliVity. Help us understand
where you think HCI fits today and how that relates to your section of the market overall. >> Absolutely. HCI has had a profound impact in simplifying the consumption of the data center. HCI, to me, is an experience: an infrastructure consumption experience in which storage, networking, and compute are abstracted out, and you consume them as virtual machine instances to simplify your operations. From an HPE perspective, HPE SimpliVity is one of our largest offerings in the portfolio, for smaller data centers, for generalists, and for edge use cases; SimpliVity is one of the preferred choices customers make. In addition to that, we've also introduced dHCI, which is disaggregated HCI. The name is a bit of a contradiction, a conversation starter, and that's why we love it. But it keeps to the nature of the HCI consumption experience: once you put the infrastructure in the closet and shut the closet door, you shouldn't be able to tell whether it's a single box running the entire stack, or disaggregated storage, networking, and compute instances running it. From our perspective, it's about the flexibility the customer has in the consumption model. If storage, networking, and compute in a single model, in a single chassis, is simplest for the customer, we offer that. But if the compute, networking, and storage need to scale independently, yet maintain the same simplicity of the consumption experience, we offer that use case as well, and that's where dHCI, based on HPE Nimble Storage with HPE ProLiant servers and Aruba networking switches, all consumed as a single software experience, comes into play. So you retain all the flexibility,
but the simplicity of hyperconverged is preserved. And then from a financial perspective, customers can buy on CapEx or OpEx, whichever they prefer; it's up to the customer. The focus is not on the hardware; it's on what the software consumption layers are, and then, from a flexibility perspective, on being able to scale storage and networking independently should the customer want that. >> Yeah. You know, without getting into too much of the naming conventions: on the research arm we had put out what we call "server SAN," which looked at the architectures the hyperscale environments were building, which were different again: you bake the scalability you need into the application, and therefore some of the underlying software scales differently. HCI, dHCI, any other prefix in there, we like to have an umbrella term rather than just a rigid bucket that you put things into. Okay, so I guess for the final takeaways, are there any other key things you want to point out from HPE Discover, any sessions or papers people should take away from this week's event? >> Obviously, autonomous operations, with InfoSight models being actually executed on on-prem storage, is one of the biggest takeaways. In addition to that, we've brought mission-critical DR to the 3PAR, Primera, and Nimble Storage platforms as well: three-data-center DR where cloud storage is also integrated as part of the DR story. So you can have synchronous replication between two sites and then a bunker site, where that third site can be an autonomous data center or cloud storage as part of that tier. In addition to that, we introduced all-NVMe Primera, and we introduced storage class memory on the Nimble Storage architectures as well.
So we're further pushing the performance envelope: HPE Primera, all-NVMe, massively parallel, all in the system, and Nimble Storage, with our cache-accelerated architecture, now using storage class memory as another tier, so we give you the performance of storage class memory at the price of all-flash arrays. Those are some of the biggest capabilities we're putting forward. And then lastly, in regards to storage automation: vVols support is now on HPE Primera, and vVols was already supported on Nimble, so combining Primera, Nimble, and 3PAR gives us one of the largest vVols installed bases out there. Last but not least, we're now introducing Google Anthos support and Kubernetes CSI-compliant container storage drivers for both HPE Nimble as well as HPE Primera. Those drivers are implemented on both platforms and available for general use, for customers who prefer to run bare metal or container-based workloads in production. >> All right, well, Omer, no shortage of updates for our audience to dig into and find out the latest on your portfolio. Thanks so much for joining us. >> Absolute pleasure to be here. Thanks so much. >> All right, stay with us for lots more coverage of the HPE Discover Virtual Experience. I'm Stu Miniman, and thank you for watching theCUBE.

Published Date : Jun 23 2020

Breaking Analysis: Answering the top 10 questions about SuperCloud


 

>> From theCUBE studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is "Breaking Analysis" with Dave Vellante. >> Welcome to this week's Wikibon, theCUBE's insights powered by ETR. As we exited the isolation economy last year, supercloud is a term that we introduced to describe something new that was happening in the world of cloud. In this Breaking Analysis, we address the 10 most frequently asked questions we get around supercloud. Okay, let's review these frequently asked questions on supercloud that we're going to try to answer today. In an industry that's full of hype and buzzwords, why the hell does anyone need a new term? Aren't hyperscalers building out superclouds? We'll try to answer why the term supercloud connotes something different from hyperscale clouds. And we'll talk about the problems that superclouds solve specifically. And we'll further define the critical aspects of a supercloud architecture. We often get asked, isn't this just multi-cloud? Well, we don't think so, and we'll explain why in this Breaking Analysis. Now in an earlier episode, we introduced the notion of superPaaS. Well, isn't a plain vanilla PaaS already a superPaaS? Again, we don't think so, and we'll explain why. Who will actually build, and who are the players currently building, superclouds? What workloads and services will run on superclouds? And number nine, what are some examples that we can share of supercloud? And finally, we'll answer what you can expect next from us on supercloud. Okay, let's get started. Why do we need another buzzword? Well, late last year, ahead of re:Invent, we were inspired by a post from Jerry Chen called "Castles in the Cloud." Now in that blog post, he introduced the idea that there were sub-markets emerging in cloud that presented opportunities for investors and entrepreneurs, and that the hyperscalers weren't going to suck all the value out of the industry.
And so we introduced this notion of supercloud to describe what we saw as a value layer emerging above the hyperscalers' CAPEX gift, as we sometimes call it. Now it turns out that we weren't the only ones using the term, as both Cornell and MIT have used the phrase in somewhat similar, but different, contexts. The point is, something new was happening in the AWS and other ecosystems. It was more than IaaS and PaaS, and wasn't just SaaS running in the cloud. It was a new architecture that integrates infrastructure, platform, and software as services to solve new problems that the cloud vendors, in our view, weren't addressing by themselves. It seemed to us that the ecosystem was pursuing opportunities across clouds that went beyond conventional implementations of multi-cloud, and we felt there was a structural change going on at the industry level that the supercloud metaphor was highlighting. So that's the background on why we felt a new catchphrase was warranted, love it or hate it. It's memorable and it's what we chose. Now to that last point about structural industry transformation: Andy Rappaport is often credited with identifying the shift from the vertically integrated IBM mainframe era to the fragmented PC microprocessor-based era in his HBR article in 1991. In fact, it was David Moschella, at the time an IDC analyst, who first introduced the concept in 1987, four years before Rappaport's article was published. Moschella saw that it was clear that Intel, Microsoft, Seagate and others would replace the system vendors, and put that forth in a graphic that looked similar to the first two on this chart. We don't have to review the shift from IBM as the center of the industry to Wintel, that's well understood. What isn't as well known or accepted is what Moschella put out in his 2018 book called "Seeing Digital," which introduced the idea of "The Matrix" that's shown on the right hand side of this chart.
Moschella posited that new services were emerging, built on top of the internet and hyperscale clouds, that would integrate other innovations and would define the next era of computing. He used the term Matrix because the conceptual depiction included not only horizontal technology rows, like the cloud and the internet, but for the first time included connected industry verticals, the columns in this chart. Moschella pointed out that whereas historically, industry verticals had a closed value chain or stack and ecosystem of R&D, production, manufacturing, and distribution, and if you were in that industry, the expertise within that vertical generally stayed within that vertical and was critical to success, because of digital and data, for the first time companies were able to traverse industries, jump across industries and compete, because data enabled them to do that. Examples: Amazon in content, payments, and groceries; Apple in payments and content; and so forth. There are many examples. Data was now this unifying enabler, and this marked a change in the structure of the technology landscape. And supercloud is meant to imply more than running in hyperscale clouds; rather, it's the combination of multiple technologies enabled by cloud scale with new industry participants from those verticals: financial services, healthcare, manufacturing, energy, media, and virtually any industry. Kind of an extension of "every company is a software company." Basically, every company now has the opportunity to build their own cloud or supercloud. And we'll come back to that. Let's first address what's different about superclouds relative to hyperscale clouds. You know, this one's pretty straightforward and obvious, I think. Hyperscale clouds are walled gardens where they want your data in their cloud and they want to keep you there.
Sure, every cloud player realizes that not all data will go to their particular cloud, so they're meeting customers where their data lives with initiatives like Amazon Outposts, Azure Arc, and Google Anthos. But at the end of the day, the more homogeneous they can make their environments, the better control, security, cost, and performance they can deliver. The more complex the environment, the more difficult it is to deliver on their brand promises, and of course, the less margin that's left for them to capture. Will the hyperscalers get more serious about cross-cloud services? Maybe, but they have plenty of work to do within their own clouds and within enabling their own ecosystems. They have a long way to go and a lot of runway. So let's talk about specifically what problems superclouds solve. We've all seen the stats from IDC or Gartner or whomever: customers on average use more than one cloud. You know, two clouds, three clouds, five clouds, 20 clouds. And we know these clouds operate in disconnected silos for the most part. And that's a problem, because each cloud requires different skills, because the development environment is different, as is the operating environment. They have different APIs, different primitives, and different management tools that are optimized for each respective hyperscale cloud. Their functions and value props don't extend to their competitors' clouds for the most part. Why would they? As a result, there's friction when moving between different clouds. It's hard to share data, it's hard to move work. It's hard to secure and govern data. It's hard to enforce organizational edicts and policies across these clouds and on-prem. Supercloud is an architecture designed to create a single environment that enables management of workloads and data across clouds, in an effort to take out complexity, accelerate application development, streamline operations, and share data safely, irrespective of location.
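As a thought experiment, that "single environment across clouds" idea reduces to a control plane that exposes one API and hides each cloud's differing primitives behind drivers. Here is a deliberately tiny sketch; the class names, bucket names, and routing rule are all made up for illustration.

```python
class S3Driver:
    """Stub for one cloud's native object-storage primitive."""
    def put(self, key: str, data: bytes) -> str:
        return f"s3://demo-bucket/{key}"

class BlobDriver:
    """Stub for another cloud's equivalent, with a different API shape."""
    def put(self, key: str, data: bytes) -> str:
        return f"https://demo.blob.core.windows.net/container/{key}"

class SupercloudStore:
    """One write path, many clouds: the control plane picks the target,
    so developers and policies see a single consistent experience."""
    def __init__(self, drivers: dict):
        self.drivers = drivers

    def put(self, key: str, data: bytes, region: str = "us") -> str:
        # Placement is a control-plane decision (here, a trivial lookup).
        return self.drivers[region].put(key, data)
```

The friction described above lives in the differences between the drivers; the supercloud's value is that callers never see those differences.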
It's pretty straightforward, but non-trivial, which is why I always ask a company's CEO and executives if stock buybacks and dividends will yield as much return as building out superclouds that solve really specific and hard problems, and create differential value. Okay, let's dig a bit more into the architectural aspects of supercloud. In other words, what are the salient attributes of supercloud? So first and foremost, a supercloud runs a set of specific services designed to solve a unique problem and it can do so in more than one cloud. Superclouds leverage the underlying cloud native tooling of a hyperscale cloud, but they're optimized for a specific objective that aligns with the problem that they're trying to solve. For example, supercloud might be optimized for lowest cost or lowest latency, or sharing data, or governing, or securing that data, or higher performance for networking, for example. But the point is, the collection of services that is being delivered is focused on a unique value proposition that is not being delivered by the hyperscalers across clouds. A supercloud abstracts the underlying and siloed primitives of the native PaaS layer from the hyperscale cloud and then using its own specific platform as a service tooling, creates a common experience across clouds for developers and users. And it does so in a most efficient manner, meaning it has the metadata knowledge and management capabilities that can optimize for latency, bandwidth, or recovery, or data sovereignty, or whatever unique value that supercloud is delivering for the specific use case in their domain. And a supercloud comprises a super PaaS capability that allows ecosystem partners through APIs to add incremental value on top of the supercloud platform to fill gaps, accelerate features, and of course innovate. 
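The "optimized for a specific objective" point can be illustrated with a placement policy: given per-cloud metrics, pick the target that best satisfies whatever the supercloud is tuned for, lowest latency, lowest cost, or something else. The metric values below are invented for the example.

```python
def place(clouds: dict, objective: str) -> str:
    """Return the cloud whose metrics best satisfy the optimization target."""
    metric = {"latency": "latency_ms", "cost": "cost_per_gb"}[objective]
    return min(clouds, key=lambda name: clouds[name][metric])

# Hypothetical per-cloud metrics for one workload.
CLOUDS = {
    "cloud-a-east": {"latency_ms": 12, "cost_per_gb": 0.023},
    "cloud-b-west": {"latency_ms": 48, "cost_per_gb": 0.020},
}
```

Two superclouds built on the same hyperscalers can make opposite placement decisions for the same workload simply because their objective functions differ, which is exactly why the collection of services, not the underlying infrastructure, defines the offering.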
The services can be infrastructure-related, they could be application services, they could be data services, security services, user services, et cetera, designed and packaged to bring unique value to customers. Again, that hyperscalers are not delivering across clouds or on-premises. Okay, so another common question we get is, isn't that just multi-cloud? And what we'd say to that is yes, but no. You can call it multi-cloud 2.0, if you want, if you want to use it, it's kind of a commonly used rubric. But as Dell's Chuck Whitten proclaimed at Dell Technologies World this year, multi-cloud by design, is different than multi-cloud by default. Meaning to date, multi-cloud has largely been a symptom of what we've called multi-vendor or of M&A, you buy a company and they happen to use Google Cloud, and so you bring it in. And when you look at most so-called, multi-cloud implementations, you see things like an on-prem stack, which is wrapped in a container and hosted on a specific cloud or increasingly a technology vendor has done the work of building a cloud native version of their stack and running it on a specific cloud. But historically, it's been a unique experience within each cloud with virtually no connection between the cloud silos. Supercloud sets out to build incremental value across clouds and above hyperscale CAPEX that goes beyond cloud compatibility within each cloud. So if you want to call it multi-cloud 2.0, that's fine, but we chose to call it supercloud. Okay, so at this point you may be asking, well isn't PaaS already a version of supercloud? And again, we would say no, that supercloud and its corresponding superPaaS layer which is a prerequisite, gives the freedom to store, process and manage, and secure, and connect islands of data across a continuum with a common experience across clouds. And the services offered are specific to that supercloud and will vary by each offering. 
OpenShift, for example, can be used to construct a superPaaS, but in and of itself isn't a superPaaS; it's generic. A superPaaS might be developed to support, for instance, ultra-low-latency database work, and taking the OpenShift example again, it's unlikely that off-the-shelf OpenShift would be used to develop such a low-latency superPaaS layer. The point is, supercloud and its inherent superPaaS will be optimized to solve specific problems, like that low-latency example for distributed databases, or fast backup and recovery for data protection and ransomware, or data sharing, or data governance: highly specific use cases that the supercloud is designed to solve for. Okay, another question we often get is, who has a supercloud today, who's building a supercloud, and who are the contenders? Well, most companies that consider themselves cloud players will, we believe, be building, or are building, superclouds. Here's a common ETR graphic that we like to show, with Net Score or spending momentum on the Y axis and overlap or pervasiveness in the ETR surveys on the X axis. And we've randomly chosen a number of players that we think are in the supercloud mix, and we've included the hyperscalers because they are enablers. Now remember, this is a spectrum of maturity, it's a maturity model, and we've added some of those industry players that we see building superclouds, like CapitalOne, Goldman Sachs, Walmart. This is in deference to Moschella's observation around The Matrix and the industry structural changes that are going on. This goes back to every company being a software company, and rather than pattern-match an outdated SaaS model, we see new industry structures emerging where software and data and tools specific to an industry will lead the next wave of innovation, and bring in new value that traditional technology companies aren't going to solve, and the hyperscalers aren't going to solve.
You know, we've talked a lot about Snowflake's data cloud as an example of supercloud. After being at Snowflake Summit, we're more convinced than ever that they're headed in this direction. VMware is clearly going after cross-cloud services you know, perhaps creating a new category. Basically, every large company we see either pursuing supercloud initiatives or thinking about it. Dell showed project Alpine at Dell Tech World, that's a supercloud. Snowflake introducing a new application development capability based on their superPaaS, our term of course, they don't use the phrase. Mongo, Couchbase, Nutanix, Pure Storage, Veeam, CrowdStrike, Okta, Zscaler. Yeah, all of those guys. Yes, Cisco and HPE. Even though on theCUBE at HPE Discover, Fidelma Russo said on theCUBE, she wasn't a fan of cloaking mechanisms, but then we talked to HPE's Head of Storage Services, Omer Asad is clearly headed in the direction that we would consider supercloud. Again, those cross-cloud services, of course, their emphasis is connecting as well on-prem. That single experience, which traditionally has not existed with multi-cloud or hybrid. And we're seeing the emergence of companies, smaller companies like Aviatrix and Starburst, and Clumio and others that are building versions of superclouds that solve for a specific problem for their customers. Even ISVs like Adobe, ADP, we've talked to UiPath. They seem to be looking at new ways to go beyond the SaaS model and add value within their cloud ecosystem specifically, around data as part of their and their customers digital transformations. So yeah, pretty much every tech vendor with any size or momentum and new industry players are coming out of hiding, and competing. Building superclouds that look a lot like Moschella's Matrix, with machine intelligence and blockchains, and virtual realities, and gaming, all enabled by the internet and hyperscale cloud CAPEX. So it's moving fast and it's the future in our opinion. 
So don't get too caught up in the past or you'll be left behind. Okay, what about examples? We've given a number in the past, but let's try to be a little bit more specific. Here are a few we've selected and we're going to answer the two questions in one section here. What workloads and services will run in superclouds and what are some examples? Let's start with analytics. Our favorite example is Snowflake, it's one of the furthest along with its data cloud, in our view. It's a supercloud optimized for data sharing and governance, query performance, and security, and ecosystem enablement. When you do things inside of that data cloud, what we call a super data cloud. Again, our term, not theirs. You can do things that you could not do in a single cloud. You can't do this with Redshift, You can't do this with SQL server and they're bringing new data types now with merging analytics or at least accommodate analytics and transaction type data, and bringing open source tooling with things like Apache Iceberg. And so it ticks the boxes we laid out earlier. I would say that a company like Databricks is also in that mix doing it, coming at it from a data science perspective, trying to create that consistent experience for data scientists and data engineering across clouds. Converge databases, running transaction and analytic workloads is another example. Take a look at what Couchbase is doing with Capella and how it's enabling stretching the cloud to the edge with ARM-based platforms and optimizing for low latency across clouds, and even out to the edge. Document database workloads, look at MongoDB, a very developer-friendly platform that with the Atlas is moving toward a supercloud model running document databases very, very efficiently. How about general purpose workloads? This is where VMware comes into to play. Very clearly, there's a need to create a common operating environment across clouds and on-prem, and out to the edge. And I say VMware is hard at work on that. 
Managing and moving workloads, and balancing workloads, and being able to recover very quickly across clouds for everyday applications. Network routing, take a look at what Aviatrix is doing across clouds, industry workloads. We see CapitalOne, it announced its cost optimization platform for Snowflake, piggybacking on Snowflake supercloud or super data cloud. And in our view, it's very clearly going to go after other markets is going to test it out with Snowflake, running, optimizing on AWS and it's going to expand to other clouds as Snowflake's business and those other clouds grows. Walmart working with Microsoft to create an on-premed Azure experience that's seamless. Yes, that counts, on-prem counts. If you can create that seamless and continuous experience, identical experience from on-prem to a hyperscale cloud, we would include that as a supercloud. You know, we've written about what Goldman is doing. Again, connecting its on-prem data and software tooling, and other capabilities to AWS for scale. And we can bet dollars to donuts that Oracle will be building a supercloud in healthcare with its Cerner acquisition. Supercloud is everywhere you look. So I'm sorry, naysayers it's happening all around us. So what's next? Well, with all the industry buzz and debate about the future, John Furrier and I, have decided to host an event in Palo Alto, we're motivated and inspired to further this conversation. And we welcome all points of view, positive, negative, multi-cloud, supercloud, hypercloud, all welcome. So theCUBE on Supercloud is coming on August 9th, out of our Palo Alto studios, we'll be running a live program on the topic. We've reached out to a number of industry participants, VMware, Snowflake, Confluent, Sky High Security, Gee Rittenhouse's new company, HashiCorp, CloudFlare. We've hit up Red Hat and we expect many of these folks will be in our studios on August 9th. 
And we've invited a number of industry participants as well that we're excited to have on. From industry, from financial services, from healthcare, from retail, we're inviting analysts, thought leaders, investors. We're going to have more detail in the coming weeks, but for now, if you're interested, please reach out to me or John with how you think you can advance the discussion and we'll see if we can fit you in. So mark your calendars, stay tuned for more information. Okay, that's it for today. Thanks to Alex Myerson who handles production and manages the podcast for Breaking Analysis. And I want to thank Kristen Martin and Cheryl Knight, they help get the word out on social and in our newsletters. And Rob Hof is our editor in chief over at SiliconANGLE, who does a lot of editing and appreciate you posting on SiliconANGLE, Rob. Thanks to all of you. Remember, all these episodes are available as podcasts wherever you listen. All you got to do is search Breaking Analysis podcast. It publish each week on wikibon.com and siliconangle.com. You can email me directly at david.vellante@siliconangle.com or DM me @DVellante, or comment on my LinkedIn post. And please do check out ETR.ai for the best survey data. And the enterprise tech business will be at AWS NYC Summit next Tuesday, July 12th. So if you're there, please do stop by and say hello to theCUBE, it's at the Javits Center. This is Dave Vellante for theCUBE insights powered by ETR. Thanks for watching. And we'll see you next time on "Breaking Analysis." (bright music)

Published Date : Jul 9 2022

SUMMARY :

From the theCUBE studios and how it's enabling stretching the cloud

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Alex MyersonPERSON

0.99+

SeagateORGANIZATION

0.99+

MicrosoftORGANIZATION

0.99+

Dave VellantePERSON

0.99+

1987DATE

0.99+

Andy RappaportPERSON

0.99+

David MoschellaPERSON

0.99+

WalmartORGANIZATION

0.99+

Jerry ChenPERSON

0.99+

IntelORGANIZATION

0.99+

Chuck WhittenPERSON

0.99+

Cheryl KnightPERSON

0.99+

Rob HofPERSON

0.99+

1991DATE

0.99+

August 9thDATE

0.99+

AmazonORGANIZATION

0.99+

HPEORGANIZATION

0.99+

Palo AltoLOCATION

0.99+

JohnPERSON

0.99+

MoschellaPERSON

0.99+

OracleORGANIZATION

0.99+

CiscoORGANIZATION

0.99+

IBMORGANIZATION

0.99+

20 cloudsQUANTITY

0.99+

StarburstORGANIZATION

0.99+

Goldman SachsORGANIZATION

0.99+

DellORGANIZATION

0.99+

Fidelma RussoPERSON

0.99+

2018DATE

0.99+

two questionsQUANTITY

0.99+

AppleORGANIZATION

0.99+

AWSORGANIZATION

0.99+

AviatrixORGANIZATION

0.99+

Omer AsadPERSON

0.99+

Sky High SecurityORGANIZATION

0.99+

DatabricksORGANIZATION

0.99+

ConfluentORGANIZATION

0.99+

WintelORGANIZATION

0.99+

NutanixORGANIZATION

0.99+

CapitalOneORGANIZATION

0.99+

CouchbaseORGANIZATION

0.99+

HashiCorpORGANIZATION

0.99+

five cloudsQUANTITY

0.99+

Kristen MartinPERSON

0.99+

last yearDATE

0.99+

david.vellante@siliconangle.comOTHER

0.99+

two cloudsQUANTITY

0.99+

RobPERSON

0.99+

SnowflakeORGANIZATION

0.99+

MongoORGANIZATION

0.99+

Pure StorageORGANIZATION

0.99+

each cloudQUANTITY

0.99+

VeeamORGANIZATION

0.99+

John FurrierPERSON

0.99+

GartnerORGANIZATION

0.99+

VMwareORGANIZATION

0.99+

first twoQUANTITY

0.99+

ClumioORGANIZATION

0.99+

CrowdStrikeORGANIZATION

0.99+

OktaORGANIZATION

0.99+

three cloudsQUANTITY

0.99+

MITORGANIZATION

0.99+

Javits CenterLOCATION

0.99+

first timeQUANTITY

0.99+

ZscalerORGANIZATION

0.99+

RappaportPERSON

0.99+

MoschellaORGANIZATION

0.99+

each weekQUANTITY

0.99+

late last yearDATE

0.99+

UiPathORGANIZATION

0.99+

10 most frequently asked questionsQUANTITY

0.99+

CloudFlareORGANIZATION

0.99+

IDCORGANIZATION

0.99+

one sectionQUANTITY

0.99+

SiliconANGLEORGANIZATION

0.98+

Seeing DigitalTITLE

0.98+

eachQUANTITY

0.98+

firstQUANTITY

0.98+

bothQUANTITY

0.98+

AdobeORGANIZATION

0.98+

more than one cloudQUANTITY

0.98+

each offeringQUANTITY

0.98+

Breaking Analysis: Answering the top 10 questions about supercloud


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is "Breaking Analysis" with Dave Vellante. >> Welcome to this week's Wikibon CUBE Insights powered by ETR. As we exited the isolation economy last year, Supercloud is a term that we introduced to describe something new that was happening in the world of cloud. In this "Breaking Analysis," we address the 10 most frequently asked questions we get around Supercloud. Okay, let's review these frequently asked questions on Supercloud that we're going to try to answer today. In an industry that's full of hype and buzzwords, why the hell does anyone need a new term? Aren't hyperscalers building out Superclouds? We'll try to answer why the term Supercloud connotes something different from hyperscale clouds. And we'll talk about the problems that Superclouds solve specifically, and we'll further define the critical aspects of a Supercloud architecture. We often get asked, "Isn't this just multi-cloud?" Well, we don't think so, and we'll explain why in this "Breaking Analysis." Now, in an earlier episode, we introduced the notion of super PaaS. Well, isn't a plain vanilla PaaS already a super PaaS? Again, we don't think so, and we'll explain why. Who will actually build and who are the players currently building Superclouds? What workloads and services will run on Superclouds? And 8A, or number nine, what are some examples of Supercloud that we can share? And finally, we'll answer what you can expect next from us on Supercloud. Okay, let's get started. Why do we need another buzzword? Well, late last year ahead of re:Invent, we were inspired by a post from Jerry Chen called "Castles in the Cloud." Now, in that blog post, he introduced the idea that there were submarkets emerging in cloud that presented opportunities for investors and entrepreneurs. That the cloud, that the hyperscalers, weren't going to suck all the value out of the industry. 
And so we introduced this notion of Supercloud to describe what we saw as a value layer emerging above the hyperscalers' CAPEX gift, as we sometimes call it. Now, it turns out that we weren't the only ones using the term, as both Cornell and MIT have used the phrase in somewhat similar, but different contexts. The point is, something new was happening in the AWS and other ecosystems. It was more than IaaS and PaaS, and wasn't just SaaS running in the cloud. It was a new architecture that integrates infrastructure, platform and software as services, to solve new problems that the cloud vendors, in our view, weren't addressing by themselves. It seemed to us that the ecosystem was pursuing opportunities across clouds that went beyond conventional implementations of multi-cloud. And we felt there was a structural change going on at the industry level that the Supercloud metaphor was highlighting. So that's the background on why we felt a new catchphrase was warranted. Love it or hate it, it's memorable and it's what we chose. Now, to that last point about structural industry transformation. Andy Rappaport is often credited with identifying the shift from the vertically integrated IBM mainframe era to the fragmented, PC microprocessor-based era in his HBR article in 1991. In fact, it was David Moschella, who at the time was an IDC analyst, who first introduced the concept in 1987, four years before Rappaport's article was published. Moschella saw that it was clear that Intel, Microsoft, Seagate and others would replace the system vendors, and he put that forth in a graphic that looked similar to the first two on this chart. We don't have to review the shift from IBM as the center of the industry to Wintel. That's well understood. What isn't as well known or accepted is what Moschella put out in his 2018 book called "Seeing Digital," which introduced the idea of the matrix that's shown on the right hand side of this chart. 
Moschella posited that new services were emerging, built on top of the internet and hyperscale clouds, that would integrate other innovations and would define the next era of computing. He used the term matrix because the conceptual depiction included not only horizontal technology rows, like the cloud and the internet, but for the first time included connected industry verticals, the columns in this chart. Moschella pointed out that, whereas historically, industry verticals had a closed value chain, or stack, and an ecosystem of R&D, production, manufacturing and distribution. And if you were in that industry, the expertise within that vertical generally stayed within that vertical and was critical to success. But because of digital and data, for the first time, companies were able to traverse industries, jump across industries and compete, because data enabled them to do that. Examples: Amazon in content, payments, groceries; Apple in payments and content; and so forth. There are many examples. Data was now this unifying enabler, and this marked a change in the structure of the technology landscape. And Supercloud is meant to imply more than running in hyperscale clouds. Rather, it's the combination of multiple technologies, enabled by cloud scale, with new industry participants from those verticals; financial services, healthcare, manufacturing, energy, media, and virtually any industry. Kind of an extension of "every company is a software company." Basically, every company now has the opportunity to build their own cloud or Supercloud. And we'll come back to that. Let's first address what's different about Superclouds relative to hyperscale clouds. Now, this one's pretty straightforward and obvious, I think. Hyperscale clouds, they're walled gardens where they want your data in their cloud and they want to keep you there. Sure, every cloud player realizes that not all data will go to their particular cloud. 
So they're meeting customers where their data lives with initiatives like AWS Outposts, Azure Arc and Google Anthos. But at the end of the day, the more homogeneous they can make their environments, the better control, security, costs, and performance they can deliver. The more complex the environment, the more difficult it is to deliver on their brand promises. And, of course, the less margin that's left for them to capture. Will the hyperscalers get more serious about cross cloud services? Maybe, but they have plenty of work to do within their own clouds and within enabling their own ecosystems. They have a long way to go, a lot of runway. So let's talk about specifically what problems Superclouds solve. We've all seen the stats from IDC or Gartner or whomever, that customers on average use more than one cloud, two clouds, three clouds, five clouds, 20 clouds. And we know these clouds operate in disconnected silos for the most part. And that's a problem, because each cloud requires different skills, because the development environment is different, as is the operating environment. They have different APIs, different primitives, and different management tools that are optimized for each respective hyperscale cloud. Their functions and value props don't extend to their competitors' clouds for the most part. Why would they? As a result, there's friction when moving between different clouds. It's hard to share data. It's hard to move work. It's hard to secure and govern data. It's hard to enforce organizational edicts and policies across these clouds and on-prem. Supercloud is an architecture designed to create a single environment that enables management of workloads and data across clouds in an effort to take out complexity, accelerate application development, streamline operations, and share data safely, irrespective of location. 
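The "single environment across divergent clouds" idea above can be sketched in a few lines of Python. This is purely illustrative: the classes below are in-memory stand-ins, not real cloud SDKs, and all names are hypothetical. The point is only that each cloud exposes its own verbs for the same primitive, and an abstraction layer can hide that divergence behind one interface.

```python
from abc import ABC, abstractmethod

# Two toy stand-ins for cloud storage primitives. The deliberately divergent
# method names mirror the per-cloud API friction described above; neither
# class models a real SDK.

class AwsLikeStore:
    def __init__(self):
        self._objects = {}

    def put_object(self, key, body):        # one cloud's verb...
        self._objects[key] = body

    def get_object(self, key):
        return self._objects[key]


class AzureLikeStore:
    def __init__(self):
        self._blobs = {}

    def upload_blob(self, name, data):      # ...another cloud's verb
        self._blobs[name] = data

    def download_blob(self, name):
        return self._blobs[name]


class StorageService(ABC):
    """The common layer: one interface regardless of which cloud backs it."""

    @abstractmethod
    def write(self, key: str, value: str) -> None: ...

    @abstractmethod
    def read(self, key: str) -> str: ...


class AwsAdapter(StorageService):
    def __init__(self, native: AwsLikeStore):
        self._native = native

    def write(self, key, value):
        self._native.put_object(key, value)

    def read(self, key):
        return self._native.get_object(key)


class AzureAdapter(StorageService):
    def __init__(self, native: AzureLikeStore):
        self._native = native

    def write(self, key, value):
        self._native.upload_blob(key, value)

    def read(self, key):
        return self._native.download_blob(key)


def replicate(key: str, value: str, targets: list) -> None:
    # One call against the common layer fans out to every cloud,
    # irrespective of location.
    for target in targets:
        target.write(key, value)
```

Calling `replicate` once with an `AwsAdapter` and an `AzureAdapter` in the target list writes through both back ends with a single, identical call path, which is the "common experience" the episode describes, stripped down to its smallest possible form.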
It's pretty straightforward, but non-trivial, which is why I always ask a company's CEO and executives if stock buybacks and dividends will yield as much return as building out Superclouds that solve really specific and hard problems and create differential value. Okay, let's dig a bit more into the architectural aspects of Supercloud. In other words, what are the salient attributes of Supercloud? So, first and foremost, a Supercloud runs a set of specific services designed to solve a unique problem, and it can do so in more than one cloud. Superclouds leverage the underlying cloud native tooling of a hyperscale cloud, but they're optimized for a specific objective that aligns with the problem that they're trying to solve. For example, a Supercloud might be optimized for lowest cost or lowest latency, or sharing data, or governing or securing that data, or higher performance for networking. But the point is, the collection of services that is being delivered is focused on a unique value proposition that is not being delivered by the hyperscalers across clouds. A Supercloud abstracts the underlying and siloed primitives of the native PaaS layer from the hyperscale cloud, and then, using its own specific platform-as-a-service tooling, creates a common experience across clouds for developers and users. And it does so in the most efficient manner, meaning it has the metadata knowledge and management capabilities that can optimize for latency, bandwidth, or recovery, or data sovereignty, or whatever unique value that Supercloud is delivering for the specific use case in their domain. And a Supercloud comprises a super PaaS capability that allows ecosystem partners, through APIs, to add incremental value on top of the Supercloud platform to fill gaps, accelerate features, and of course, innovate. 
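The super PaaS attribute described above, partners adding incremental value through published APIs, can also be reduced to a minimal sketch. Everything here is hypothetical (the class, the registration API, and the example service are illustrative only, not any vendor's actual platform); it just shows the shape of the idea: the platform owner exposes a small API surface, and ecosystem partners plug services into it without touching the platform core.

```python
# Hypothetical sketch of a super PaaS extension point: the platform exposes
# a registration API, and partners contribute services on top of it.

class SuperPaasPlatform:
    def __init__(self):
        self._services = {}

    def register(self, name, handler):
        # Partners fill gaps or add features through this published API,
        # without modifying the platform itself.
        self._services[name] = handler

    def invoke(self, name, payload):
        return self._services[name](payload)


def mask_email(address: str) -> str:
    # An example partner-supplied data-governance service:
    # hide the local part of an email address.
    local, domain = address.split("@")
    return local[0] + "***@" + domain


platform = SuperPaasPlatform()
platform.register("mask_email", mask_email)
```

A second partner could register, say, a tokenization or audit service alongside `mask_email`, which is the "fill gaps, accelerate features, innovate" dynamic in miniature.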
The services can be infrastructure related, they could be application services, they could be data services, security services, user services, et cetera, designed and packaged to bring unique value to customers, value that, again, the hyperscalers are not delivering across clouds or on premises. Okay, so another common question we get is, "Isn't that just multi-cloud?" And what we'd say to that is, "Yes, but no." You can call it multi-cloud 2.0 if you want; it's kind of a commonly used rubric. But as Dell's Chuck Whitten proclaimed at Dell Technologies World this year, multi-cloud by design is different than multi-cloud by default. Meaning, to date, multi-cloud has largely been a symptom of what we've called multi-vendor, or of M&A. You buy a company and they happen to use Google Cloud, and so you bring it in. And when you look at most so-called multi-cloud implementations, you see things like an on-prem stack, which is wrapped in a container and hosted on a specific cloud. Or increasingly, a technology vendor has done the work of building a cloud native version of their stack and running it on a specific cloud. But historically, it's been a unique experience within each cloud, with virtually no connection between the cloud silos. Supercloud sets out to build incremental value across clouds, and above hyperscale CAPEX, that goes beyond cloud compatibility within each cloud. So, if you want to call it multi-cloud 2.0, that's fine, but we chose to call it Supercloud. Okay, so at this point you may be asking, "Well, isn't PaaS already a version of Supercloud?" And again, we would say no. Supercloud and its corresponding super PaaS layer, which is a prerequisite, give the freedom to store, process, manage, secure, and connect islands of data across a continuum with a common experience across clouds. And the services offered are specific to that Supercloud and will vary by each offering. 
OpenShift, for example, can be used to construct a super PaaS, but in and of itself, isn't a super PaaS; it's generic. A super PaaS might be developed to support, for instance, ultra low latency database work. Again, taking the OpenShift example, it's unlikely that off-the-shelf OpenShift would be used to develop such a low latency super PaaS layer for ultra low latency database work. The point is, Supercloud and its inherent super PaaS will be optimized to solve specific problems, like that low latency example for distributed databases, or fast backup and recovery for data protection and ransomware, or data sharing, or data governance. Highly specific use cases that the Supercloud is designed to solve for. Okay, another question we often get is, "Who has a Supercloud today, who's building a Supercloud, and who are the contenders?" Well, most companies that consider themselves cloud players will, we believe, be building or are building Superclouds. Here's a common ETR graphic that we like to show, with net score or spending momentum on the Y axis, and overlap or pervasiveness in the ETR surveys on the X axis. And we've randomly chosen a number of players that we think are in the Supercloud mix. And we've included the hyperscalers because they are enablers. Now, remember, this is a spectrum of maturity. It's a maturity model. And we've added some of those industry players that we see building Superclouds, like Capital One, Goldman Sachs, Walmart. This is in deference to Moschella's observation around the matrix and the industry structural changes that are going on. This goes back to every company being a software company. And rather than pattern match an outdated SaaS model, we see new industry structures emerging where software and data and tools specific to an industry will lead the next wave of innovation and bring in new value that traditional technology companies aren't going to solve. And the hyperscalers aren't going to solve. 
We've talked a lot about Snowflake's data cloud as an example of Supercloud. After being at Snowflake Summit, we're more convinced than ever that they're headed in this direction. VMware is clearly going after cross cloud services, perhaps creating a new category. Basically, every large company we see is either pursuing Supercloud initiatives or thinking about it. Dell showed Project Alpine at Dell Tech World. That's a Supercloud. Snowflake is introducing a new application development capability based on their super PaaS, our term, of course. They don't use the phrase. Mongo, Couchbase, Nutanix, Pure Storage, Veeam, CrowdStrike, Okta, Zscaler. Yeah, all of those guys. Yes, Cisco and HPE. Even though at HPE Discover, Fidelma Russo said on theCUBE she wasn't a fan of cloaking mechanisms. (Dave laughing) But then we talked to HPE's head of storage services, Omer Asad, and he's clearly headed in the direction that we would consider Supercloud. Again, those cross cloud services, of course; their emphasis is connecting on-prem as well. That single experience, which traditionally has not existed with multi-cloud or hybrid. And we're seeing the emergence of smaller companies like Aviatrix and Starburst and Clumio and others that are building versions of Superclouds that solve for a specific problem for their customers. Even ISVs like Adobe, ADP, and UiPath, which we've talked to, seem to be looking at new ways to go beyond the SaaS model and add value within their cloud ecosystems, specifically around data, as part of their and their customers' digital transformations. So yeah, pretty much every tech vendor with any size or momentum, and new industry players, are coming out of hiding and competing, building Superclouds that look a lot like Moschella's matrix, with machine intelligence and blockchains and virtual realities and gaming, all enabled by the internet and hyperscale cloud CAPEX. So it's moving fast, and it's the future, in our opinion. 
So don't get too caught up in the past or you'll be left behind. Okay, what about examples? We've given a number in the past, but let's try to be a little bit more specific. Here are a few we've selected, and we're going to answer the two questions in one section here: what workloads and services will run in Superclouds, and what are some examples? Let's start with analytics. Our favorite example is Snowflake. It's one of the furthest along with its data cloud, in our view. It's a Supercloud optimized for data sharing and governance, and query performance, and security, and ecosystem enablement. When you do things inside of that data cloud, what we call a super data cloud, again, our term, not theirs, you can do things that you could not do in a single cloud. You can't do this with Redshift. You can't do this with SQL Server. And they're bringing new data types now that merge, or at least accommodate, analytics and transaction type data, and bringing open source tooling with things like Apache Iceberg. And so, it ticks the boxes we laid out earlier. I would say that a company like Databricks is also in that mix, coming at it from a data science perspective, trying to create that consistent experience for data scientists and data engineering across clouds. Converged databases, running transaction and analytic workloads, is another example. Take a look at what Couchbase is doing with Capella and how it's enabling stretching the cloud to the edge with Arm-based platforms and optimizing for low latency across clouds, and even out to the edge. Document database workloads? Look at MongoDB. A very developer-friendly platform that, with Atlas, is moving toward a Supercloud model, running document databases very, very efficiently. How about general purpose workloads? This is where VMware comes into play. Very clearly, there's a need to create a common operating environment across clouds and on-prem and out to the edge. 
And VMware is hard at work on that: managing and moving workloads, balancing workloads, and being able to recover very quickly across clouds for everyday applications. Network routing? Take a look at what Aviatrix is doing across clouds. Industry workloads? We see Capital One. It announced its cost optimization platform for Snowflake, piggybacking on Snowflake's Supercloud, or super data cloud. And in our view, it's very clearly going to go after other markets. It's going to test it out with Snowflake, optimizing on AWS, and it's going to expand to other clouds as Snowflake's business and those other clouds grow. Walmart is working with Microsoft to create an on-prem Azure experience that's seamless. Yes, that counts; on-prem counts. If you can create that seamless and continuous experience, an identical experience from on-prem to a hyperscale cloud, we would include that as a Supercloud. We've written about what Goldman is doing. Again, connecting its on-prem data and software tooling, and other capabilities, to AWS for scale. And you can bet dollars to donuts that Oracle will be building a Supercloud in healthcare with its Cerner acquisition. Supercloud is everywhere you look. So I'm sorry, naysayers, it's happening all around us. So what's next? Well, with all the industry buzz and debate about the future, John Furrier and I have decided to host an event in Palo Alto. We're motivated and inspired to further this conversation. And we welcome all points of view: positive, negative, multi-cloud, Supercloud, HyperCloud, all welcome. So theCUBE on Supercloud is coming on August 9th out of our Palo Alto studios. We'll be running a live program on the topic. We've reached out to a number of industry participants: VMware, Snowflake, Confluent, Skyhigh Security (Gee Rittenhouse's new company), HashiCorp, CloudFlare. We've hit up Red Hat, and we expect many of these folks will be in our studios on August 9th. 
And we've invited a number of industry participants as well that we're excited to have on. From industry, from financial services, from healthcare, from retail, we're inviting analysts, thought leaders, investors. We're going to have more detail in the coming weeks, but for now, if you're interested, please reach out to me or John with how you think you can advance the discussion, and we'll see if we can fit you in. So mark your calendars, stay tuned for more information. Okay, that's it for today. Thanks to Alex Myerson, who handles production and manages the podcast for "Breaking Analysis." And I want to thank Kristen Martin and Cheryl Knight. They help get the word out on social and in our newsletters. And Rob Hof is our editor in chief over at SiliconANGLE, who does a lot of editing, and I appreciate you posting on SiliconANGLE, Rob. Thanks to all of you. Remember, all these episodes are available as podcasts wherever you listen. All you've got to do is search "Breaking Analysis podcast." I publish each week on wikibon.com and siliconangle.com. Or you can email me directly at david.vellante@siliconangle.com. Or DM me @DVellante, or comment on my LinkedIn post. And please do check out etr.ai for the best survey data in the enterprise tech business. We'll be at AWS NYC Summit next Tuesday, July 12th. So if you're there, please do stop by and say hello to theCUBE. It's at the Javits Center. This is Dave Vellante for theCUBE Insights, powered by ETR. Thanks for watching. And we'll see you next time on "Breaking Analysis." (slow music)

Published Date : Jul 8 2022
