Prem Balasubramanian and Suresh Mothikuru | Hitachi Vantara: Build Your Cloud Center of Excellence


 

(soothing music) >> Hey everyone, welcome to this event, "Build Your Cloud Center of Excellence." I'm your host, Lisa Martin. In the next 15 minutes or so my guest and I are going to be talking about redefining cloud operations and application modernization for customers, and specifically how partners are helping to speed up that process. As you saw in our first two segments, we talked about problems enterprises are facing with cloud operations. We talked about redefining cloud operations as well to solve these problems. This segment is going to be focusing on how Hitachi Vantara's partners are really helping to speed up that process. We've got Johnson Controls here to talk about their partnership with Hitachi Vantara. Please welcome both of my guests: Prem Balasubramanian is with us, SVP and CTO, Digital Solutions at Hitachi Vantara. And Suresh Mothikuru, SVP, Customer Success, Platform Engineering and Reliability Engineering from Johnson Controls. Gentlemen, welcome to the program, great to have you.

>> Thank you.

>> Thank you, Lisa.

>> First question is to both of you, and Suresh, we'll start with you. We want to understand, you know, the cloud operations landscape is increasingly complex. We've talked a lot about that in this program. Talk to us, Suresh, about some of the biggest challenges and pain points that you've faced with respect to that.

>> Thank you. I think it's a great question. I mean, cloud has evolved a lot in the last 10 years. You know, when we were talking about a single cloud, whether it's Azure or AWS or GCP, that was complex enough. Now we are talking about multi-cloud and hybrid, and you look at Johnson Controls, we have Azure, we have AWS, we have GCP, we have Alibaba, and we also support on-prem. So the architecture has become very, very complex, and the complexity has grown so much that we are now thinking about whether we should be cloud native or cloud agnostic. So I think, I mean, sometimes it's hard to even explain the complexity, because people think, oh, "When you go to cloud, everything is simplified." Cloud does give you a lot of simplicity, but it also really brings a lot more complexity along with it. And then the next one, which is pretty important, is, you know, generally when you look at cloud services, you have plenty of services that are offered within a cloud, 100, 150 services, 200 services. Even within those companies, you take AWS, they might not know, an individual resource might not know about all the services, you see. That's a big challenge for us as a customer, to really understand each of the services that is provided in these, you know, clouds, well, doesn't matter which one that is. And the third one is pretty big, at least at the CTO, the CIO, and the senior leadership level, is cost. Cost is a major factor because cloud, you know, will eat you up if you cannot manage it, if you don't have a good cloud governance process, because every minute you are in it, it's burning cash. So I think if you ask me, these are the three major things that I am facing day to day, and that's where I use my partners, which I'll touch on down the line.

>> Perfect, we'll talk about that. So Prem, I imagine that these problems are not unique to Johnson Controls, or JCI, as you may hear us refer to it. Talk to me, Prem, about some of the other challenges that you're seeing within the customer landscape.

>> So, yeah, I agree, Lisa, these are not very specific to JCI, but there are specific issues in JCI, right?
So the way we think about these is, there is a common issue when people go to the cloud, and there are very specific and unique issues for businesses, right? So JCI, and we will talk about this in the episode as we move forward, I think Suresh and his team have done some phenomenal work around how to manage this complexity. But there are customers who have a less complex cloud, which is, they don't go to Alibaba, they don't have a footprint in all three clouds. So their multi-cloud footprint could be a bit more manageable, but they still struggle with a lot of the same problems around cost, around security, around talent. Talent is a big thing, right? And in Suresh's case I think it's slightly more exacerbated, because every cloud provider, be it AWS, GCP, or Azure, brings in hundreds of services, and there is nobody, including many of us, right? We learn every day nowadays, right? It's not that there is one service integrator who knows it all, though technically people can claim that as a part of sales. But in reality all of us are continuing to learn in this landscape. And if you put all of this equation together with multiple clouds, the complexity just starts to exponentially grow. And that's exactly what I think JCI is experiencing and Suresh's team has been experiencing, and we've been working together. But the common problems are around security, talent, and cost management of this, right? Those are my three things. And one last thing that I would love to say before we move away from this question is, if you think about cloud operations as a concept, it's evolving over the last few years, and I have touched upon this in the previous episode as well, Lisa, right? If you take architectures, we've gone into microservices, we've gone into all these serverless architectures, all the fancy things that we want. That helps us go to market faster, be more competitive as a business. But that's not simplified stuff, right? That's complicated stuff. It's a lot more distributed. Second, again, we've advanced and created more modern infrastructure, because all of what we are talking about is platform as a service, services on the cloud that we are consuming, right? In the same case with development, we've moved into a DevOps model. We kind of click a button, put some code in a repository, the code starts to run in production within a minute, everything else is automated. But then when we get to operations, we are still stuck in a very old way of looking at cloud as an infrastructure, right? So you've got an infra team, you've got an app team, you've got an incident management team, you've got a SOC, a NOC, everything. But again, Suresh can talk about this more, because they are making significant strides in thinking about this as a single workload, and how do I apply engineering to go manage this? Because a lot of it is codified, right? So automation. Anyway, so that's kind of where the complexity is and how we are thinking, including JCI as a partner, about taming that complexity as we move forward.

>> Suresh, let's talk about taming that complexity. You guys have both done a great job of articulating the ostensible challenges that are there with cloud, especially the multi-cloud environments that you're living in. But Suresh, talk about the partnership with Hitachi Vantara. How is it helping to dial down some of those inherent complexities?

>> I mean, I always, you know, I think I've said this to Prem multiple times. I treat my partners as my internal, you know, employees.
I look at Prem as my coworker or my peer. The reason for that is I want Prem to have the same vested interest as a partner in my success, or JCI's success, and vice versa, isn't it? I think that's how we operate and that's how we have been operating. And I would like to thank Prem and Hitachi Vantara for what has really been an amazing partnership. And as he was saying, we have taken a completely holistic approach to how we want to really be in the market and play in the market to our customers. So if you look at my jacket, it talks about the OpenBlue platform. This is what JCI is building, we are building this OpenBlue digital platform. And within that, my team, along with Prem's, or Hitachi's, we have built what we call Polaris. It's a technical platform where our apps can run. And this platform is automated end-to-end from a platform engineering standpoint. We stood up a platform engineering organization, a reliability engineering organization, as well as a support organization where Hitachi played a role. As I said previously, you know, for me to scale, I'm not going to really have the talent and the knowledge of every function that I'm looking at. And Hitachi, not only did they bring the talent, but they also brought what he was talking about, Harc. You know, they have set up a lot and now we can leverage it. And they also came up with some really interesting concepts. I went and met them in India. They came up with this concept called IPL. Okay, what is that? They really challenged all their employees that are working for JCI to come up with innovative ideas to solve problems proactively, which is self-healing. You know, how do you do that? So I think partners, you know, if they become really vested in your interests, they can do wonders for you. And I think in this case Hitachi is really working very well for us, and in many aspects. And I'm leveraging them... You started with support, now I'm leveraging them in the automation, the platform engineering, as well as in the reliability engineering, and then even in the engineering spaces. And like that, they are my end-to-end partner right now.

>> So you're really taking that holistic approach that you talked about, and it sounds like it's a very collaborative, two-way-street partnership. Prem, I want to go back to, Suresh mentioned Harc. Talk a little bit about what Harc is and then how partners fit into Hitachi's Harc strategy.

>> Great, so let me spend like a few seconds on what Harc is. Lisa, again, I know we've been using the term. Harc stands for Hitachi Application Reliability Centers. Now, the reason we thought about Harc was, like I said in the beginning of this segment, there is an evolution from an architecture standpoint to be more modern, microservices, serverless, reactive architecture, so on and so forth. There is an evolution in your development methodology from Waterfall to agile, to DevOps, to lean agile, to path program, whatever, right? Extreme programming, so on and so forth. There is an evolution in the space of infrastructure, from a point where you were buying these huge, humongous servers and putting them in your data center, to a point where people don't even see servers anymore, right? You buy it by a click of a button, you don't know the size of it. All you know is, it's (indistinct) whatever that name means. Let's go provision it on the fly, get going, get your work done, right?
While all of this has advanced, when you think about operations, people have been solving the problem the way they've been solving it 20 years back, right? That's the issue. And Harc was conceived exactly to fix that particular problem, to think about a modern way of operating a modern workload, right? That's exactly what Harc is. So it brings together the finest engineering talent. The teams are trained in specific ways of working. We've invested in and implemented some of the IP, we work with the best-of-breed partner ecosystem, and I'll talk about that in a minute. And we've got these facilities in Dallas, and I am talking from my office in Dallas, which is a Harc facility in the US from where we deliver for our customers. And then back in Hyderabad, we've got one more that we opened, and these are facilities from where we deliver Harc services for our customers as well, right? And then we are expanding it into Japan and Portugal as we move into '23. That's kind of the plan that we are thinking through. However, that's what Harc is, Lisa, right? That's our solution to this cloud complexity problem. Right?

>> Got it, and it sounds like it's going quite global, which is fantastic. So Suresh, I want to have you expand a bit on the partnership, the partner ecosystem, and the role that it plays. You talked about it a little bit, but what role does the partner ecosystem play in really helping JCI to dial down some of those challenges and the inherent complexities that we talked about?

>> Yeah, sure. I think partners play a major role, and JCI is very, very good at it. I mean, I joined JCI 18 months ago, and JCI leverages partners pretty extensively. As I said, I leverage Hitachi for my, you know, A group and the (indistinct) space and the cloud operations space, and they're my primary partner. But at the same time, we leverage many other partners. Well, you know, Accenture, SCL, and even on the tooling side we use Datadog and (indistinct). All these guys are major partners of ours, because the way we like to pick partners is based on our vision and where we want to go, and we pick the right partner who's going to really, you know, make you successful by investing their resources in you. And what I mean by that is, when you have a partner, the partner knows exactly what kind of skillset is needed for this customer, for them to really be successful. As I said earlier, we cannot really get all the skillsets that we need, so we rely on the partners, and partners bring the right skillset, and they can scale. I can tell Prem tomorrow, "Hey, I need two parts by next week," and I guarantee he's going to bring two parts to me. So they let you scale, they let you move fast. And I'm a big believer, in today's day and age, in getting things done fast and being more agile. I'm not worried about failure, but for me moving fast is very, very important. And partners really do a very good job bringing that. But I think they also really make you think, isn't it? Because one thing I like about partners, they make you innovate, whether they know it or not, but they do, because, you know, they will come and ask you questions about, "Hey, tell me why you are doing this. Can I review your architecture?" You know, and then they will try to really say, "I don't think this is going to work." Because they work with so many different clients, not just JCI, they bring all that expertise, and that's what I look for from them, you know, not just, you know, "Do a T&M job for me. I ask you to do this, go..." They just bring more than that. That's how I pick my partners.
And that's how, you know, Hitachi Vantara is definitely a good partner in that sense, because they bring a lot more innovation to the table, and I appreciate that.

>> It sounds like, it sounds like a flywheel of innovation.

>> Yeah.

>> I love that. Last question for both of you, as we're almost out of time here. Prem, I want to go back to you. So I'm a partner, I'm planning on redefining CloudOps at my company. What are the two things you want me to remember from Hitachi Vantara's perspective?

>> So before I get to that question, Lisa, the partners that we work with are slightly different from, again, there are some similar partners and there are some different partners, right? For example, we pick and choose, especially in the Harc space, we pick and choose partners that are more future focused, right? We don't care if they are huge companies or small companies. We go after companies that are future focused, that are really, really nimble and can change for our customers' needs, because it's not our need, right? When I pick partners for Harc, my ultimate endeavor is to ensure, in this case because we've got (indistinct) JCI on, we are able to operate (indistinct) with a level of satisfaction above and beyond what they're expecting from us. And whatever I don't have, I need to get from my partners so that I bring this solution to Suresh, as opposed to bringing a whole lot of people and making them stand in front of Suresh. So that's how I think about partners. What do I want them to do? And we've always done this, so we do workshops with our partners. We don't just go buy tools. When we say we are partnering with X, Y, Z, we do workshops with them and we say, this is how we are thinking. Either you build it into your roadmap, that helps us leverage you, continue to leverage you. And we do have minimal investments where we fix gaps. We're building some utilities for us to deliver the best service to our customers. And our intention is not to build a product to compete with our partner. Our intention is to just fill the white space until they go build it into their product suite, and then we can leverage it for our customers. So always think about end customers and how we can make it easy for them. Because for all the tool vendors out there seeing this and wanting to partner with Hitachi, the biggest thing is tool sprawl, especially on the cloud, is very real. For every problem on the cloud, I have a billion tools that are being thrown at me, as Suresh, if I'm putting in my installation, and it's not easy at all. It's so confusing.

>> Yeah.

>> So that's what we want. We want people to simplify that landscape for our end customers, and we are looking at partners that are thinking through the simplification, not just making money.

>> That makes perfect sense. There really is a very strong symbiosis, it sounds like, in the partner ecosystem. And there's a lot of enablement that goes on back and forth, it sounds like, as well, which is really, to your point, all about the end customers and what they're expecting. Suresh, the last question for you is the same one: if I'm a partner, what are the things that you want me to consider as I'm planning to redefine CloudOps at my company?

>> I'll keep it simple. In my view, I mean, we've touched upon it in multiple facets in this interview, but three things. First and foremost, reliability.
You know, in today's day and age my product has to be reliable, available, and, you know, make sure that the customer's happy with what they're really dealing with, number one. Number two, my product has to be secure. Security is super, super important, okay? And number three, I need to really make sure my customers are getting the value, so I keep my cost low. So these three are what I would focus on and what I expect from my partners.

>> Great advice, guys. Thank you so much for talking through this with me and really showing the audience how strong the partnership is between Hitachi Vantara and JCI and what you're doing together. We'll have to talk to you again to see where things go, but we really appreciate your insights and your perspectives. Thank you.

>> Thank you, Lisa.

>> Thanks, Lisa, thanks for having us.

>> My pleasure. For my guests, I'm Lisa Martin. Thank you so much for watching. (soothing music)

Published Date: March 2, 2023

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
SureshPERSON

0.99+

HitachiORGANIZATION

0.99+

Lisa MartinPERSON

0.99+

Suresh MothikuruPERSON

0.99+

JapanLOCATION

0.99+

Prem BalasubramanianPERSON

0.99+

JCIORGANIZATION

0.99+

LisaPERSON

0.99+

HarcORGANIZATION

0.99+

Johnson ControlsORGANIZATION

0.99+

DallasLOCATION

0.99+

IndiaLOCATION

0.99+

AlibabaORGANIZATION

0.99+

HyderabadLOCATION

0.99+

Hitachi VantaraORGANIZATION

0.99+

Johnson ControlsORGANIZATION

0.99+

PortugalLOCATION

0.99+

USLOCATION

0.99+

SCLORGANIZATION

0.99+

AccentureORGANIZATION

0.99+

bothQUANTITY

0.99+

AWSORGANIZATION

0.99+

two partsQUANTITY

0.99+

150 servicesQUANTITY

0.99+

SecondQUANTITY

0.99+

FirstQUANTITY

0.99+

next weekDATE

0.99+

200 servicesQUANTITY

0.99+

First questionQUANTITY

0.99+

PremPERSON

0.99+

tomorrowDATE

0.99+

PolarisORGANIZATION

0.99+

T&MORGANIZATION

0.99+

hundreds of servicesQUANTITY

0.99+

three thingsQUANTITY

0.98+

threeQUANTITY

0.98+

agileTITLE

0.98+

Prem Balasubramanian and Suresh Mothikuru | Hitachi Vantara: Build Your Cloud Center of Excellence


 

(soothing music) >> Hey everyone, welcome to this event, "Build Your Cloud Center of Excellence." I'm your host, Lisa Martin. In the next 15 minutes or so my guest and I are going to be talking about redefining cloud operations, an application modernization for customers, and specifically how partners are helping to speed up that process. As you saw on our first two segments, we talked about problems enterprises are facing with cloud operations. We talked about redefining cloud operations as well to solve these problems. This segment is going to be focusing on how Hitachi Vantara's partners are really helping to speed up that process. We've got Johnson Controls here to talk about their partnership with Hitachi Vantara. Please welcome both of my guests, Prem Balasubramanian is with us, SVP and CTO Digital Solutions at Hitachi Vantara. And Suresh Mothikuru, SVP Customer Success Platform Engineering and Reliability Engineering from Johnson Controls. Gentlemen, welcome to the program, great to have you. >> Thank. >> Thank you, Lisa. >> First question is to both of you and Suresh, we'll start with you. We want to understand, you know, the cloud operations landscape is increasingly complex. We've talked a lot about that in this program. Talk to us, Suresh, about some of the biggest challenges and pin points that you faced with respect to that. >> Thank you. I think it's a great question. I mean, cloud has evolved a lot in the last 10 years. You know, when we were talking about a single cloud whether it's Azure or AWS and GCP, and that was complex enough. Now we are talking about multi-cloud and hybrid and you look at Johnson Controls, we have Azure we have AWS, we have GCP, we have Alibaba and we also support on-prem. So the architecture has become very, very complex and the complexity has grown so much that we are now thinking about whether we should be cloud native or cloud agnostic. So I think, I mean, sometimes it's hard to even explain the complexity because people think, oh, "When you go to cloud, everything is simplified." Cloud does give you a lot of simplicity, but it also really brings a lot more complexity along with it. So, and then next one is pretty important is, you know, generally when you look at cloud services, you have plenty of services that are offered within a cloud, 100, 150 services, 200 services. Even within those companies, you take AWS they might not know, an individual resource might not know about all the services we see. That's a big challenge for us as a customer to really understand each of the service that is provided in these, you know, clouds, well, doesn't matter which one that is. And the third one is pretty big, at least at the CTO the CIO, and the senior leadership level, is cost. Cost is a major factor because cloud, you know, will eat you up if you cannot manage it. If you don't have a good cloud governance process it because every minute you are in it, it's burning cash. So I think if you ask me, these are the three major things that I am facing day to day and that's where I use my partners, which I'll touch base down the line. >> Perfect, we'll talk about that. So Prem, I imagine that these problems are not unique to Johnson Controls or JCI, as you may hear us refer to it. Talk to me Prem about some of the other challenges that you're seeing within the customer landscape. >> So, yeah, I agree, Lisa, these are not very specific to JCI, but there are specific issues in JCI, right? 
So the way we think about these are, there is a common issue when people go to the cloud and there are very specific and unique issues for businesses, right? So JCI, and we will talk about this in the episode as we move forward. I think Suresh and his team have done some phenomenal step around how to manage this complexity. But there are customers who have a lesser complex cloud which is, they don't go to Alibaba, they don't have footprint in all three clouds. So their multi-cloud footprint could be a bit more manageable, but still struggle with a lot of the same problems around cost, around security, around talent. Talent is a big thing, right? And in Suresh's case I think it's slightly more exasperated because every cloud provider Be it AWS, JCP, or Azure brings in hundreds of services and there is nobody, including many of us, right? We learn every day, nowadays, right? It's not that there is one service integrator who knows all, while technically people can claim as a part of sales. But in reality all of us are continuing to learn in this landscape. And if you put all of this equation together with multiple clouds the complexity just starts to exponentially grow. And that's exactly what I think JCI is experiencing and Suresh's team has been experiencing, and we've been working together. But the common problems are around security talent and cost management of this, right? Those are my three things. And one last thing that I would love to say before we move away from this question is, if you think about cloud operations as a concept that's evolving over the last few years, and I have touched upon this in the previous episode as well, Lisa, right? If you take architectures, we've gone into microservices, we've gone into all these server-less architectures all the fancy things that we want. That helps us go to market faster, be more competent to as a business. But that's not simplified stuff, right? That's complicated stuff. It's a lot more distributed. Second, again, we've advanced and created more modern infrastructure because all of what we are talking is platform as a service, services on the cloud that we are consuming, right? In the same case with development we've moved into a DevOps model. We kind of click a button put some code in a repository, the code starts to run in production within a minute, everything else is automated. But then when we get to operations we are still stuck in a very old way of looking at cloud as an infrastructure, right? So you've got an infra team, you've got an app team, you've got an incident management team, you've got a soft knock, everything. But again, so Suresh can talk about this more because they are making significant strides in thinking about this as a single workload, and how do I apply engineering to go manage this? Because a lot of it is codified, right? So automation. Anyway, so that's kind of where the complexity is and how we are thinking, including JCI as a partner thinking about taming that complexity as we move forward. >> Suresh, let's talk about that taming the complexity. You guys have both done a great job of articulating the ostensible challenges that are there with cloud, especially multi-cloud environments that you're living in. But Suresh, talk about the partnership with Hitachi Vantara. How is it helping to dial down some of those inherent complexities? >> I mean, I always, you know, I think I've said this to Prem multiple times. I treat my partners as my internal, you know, employees. 
I look at Prem as my coworker or my peers. So the reason for that is I want Prem to have the same vested interest as a partner in my success or JCI success and vice versa, isn't it? I think that's how we operate and that's how we have been operating. And I think I would like to thank Prem and Hitachi Vantara for that really been an amazing partnership. And as he was saying, we have taken a completely holistic approach to how we want to really be in the market and play in the market to our customers. So if you look at my jacket it talks about OpenBlue platform. This is what JCI is building, that we are building this OpenBlue digital platform. And within that, my team, along with Prem's or Hitachi's, we have built what we call as Polaris. It's a technical platform where our apps can run. And this platform is automated end-to-end from a platform engineering standpoint. We stood up a platform engineering organization, a reliability engineering organization, as well as a support organization where Hitachi played a role. As I said previously, you know, for me to scale I'm not going to really have the talent and the knowledge of every function that I'm looking at. And Hitachi, not only they brought the talent but they also brought what he was talking about, Harc. You know, they have set up a lot and now we can leverage it. And they also came up with some really interesting concepts. I went and met them in India. They came up with this concept called IPL. Okay, what is that? They really challenged all their employees that's working for GCI to come up with innovative ideas to solve problems proactively, which is self-healing. You know, how you do that? So I think partners, you know, if they become really vested in your interests, they can do wonders for you. And I think in this case Hitachi is really working very well for us and in many aspects. And I'm leveraging them... You started with support, now I'm leveraging them in the automation, the platform engineering, as well as in the reliability engineering and then in even in the engineering spaces. And that like, they are my end-to-end partner right now? >> So you're really taking that holistic approach that you talked about and it sounds like it's a very collaborative two-way street partnership. Prem, I want to go back to, Suresh mentioned Harc. Talk a little bit about what Harc is and then how partners fit into Hitachi's Harc strategy. >> Great, so let me spend like a few seconds on what Harc is. Lisa, again, I know we've been using the term. Harc stands for Hitachi application reliability sectors. Now the reason we thought about Harc was, like I said in the beginning of this segment, there is an illusion from an architecture standpoint to be more modern, microservices, server-less, reactive architecture, so on and so forth. There is an illusion in your development methodology from Waterfall to agile, to DevOps to lean, agile to path program, whatever, right? Extreme program, so on and so forth. There is an evolution in the space of infrastructure from a point where you were buying these huge humongous servers and putting it in your data center to a point where people don't even see servers anymore, right? You buy it, by a click of a button you don't know the size of it. All you know is a, it's (indistinct) whatever that name means. Let's go provision it on the fly, get go, get your work done, right? 
When all of this is advanced when you think about operations people have been solving the problem the way they've been solving it 20 years back, right? That's the issue. And Harc was conceived exactly to fix that particular problem, to think about a modern way of operating a modern workload, right? That's exactly what Harc. So it brings together finest engineering talent. So the teams are trained in specific ways of working. We've invested and implemented some of the IP, we work with the best of the breed partner ecosystem, and I'll talk about that in a minute. And we've got these facilities in Dallas and I am talking from my office in Dallas, which is a Harc facility in the US from where we deliver for our customers. And then back in Hyderabad, we've got one more that we opened and these are facilities from where we deliver Harc services for our customers as well, right? And then we are expanding it in Japan and Portugal as we move into 23. That's kind of the plan that we are thinking through. However, that's what Harc is, Lisa, right? That's our solution to this cloud complexity problem. Right? >> Got it, and it sounds like it's going quite global, which is fantastic. So Suresh, I want to have you expand a bit on the partnership, the partner ecosystem and the role that it plays. You talked about it a little bit but what role does the partner ecosystem play in really helping JCI to dial down some of those challenges and the inherent complexities that we talked about? >> Yeah, sure. I think partners play a major role and JCI is very, very good at it. I mean, I've joined JCI 18 months ago, JCI leverages partners pretty extensively. As I said, I leverage Hitachi for my, you know, A group and the (indistinct) space and the cloud operations space, and they're my primary partner. But at the same time, we leverage many other partners. Well, you know, Accenture, SCL, and even on the tooling side we use Datadog and (indistinct). All these guys are major partners of our because the way we like to pick partners is based on our vision and where we want to go. And pick the right partner who's going to really, you know make you successful by investing their resources in you. And what I mean by that is when you have a partner, partner knows exactly what kind of skillset is needed for this customer, for them to really be successful. As I said earlier, we cannot really get all the skillset that we need, we rely on the partners and partners bring the the right skillset, they can scale. I can tell Prem tomorrow, "Hey, I need two parts by next week", and I guarantee it he's going to bring two parts to me. So they let you scale, they let you move fast. And I'm a big believer, in today's day and age, to get things done fast and be more agile. I'm not worried about failure, but for me moving fast is very, very important. And partners really do a very good job bringing that. But I think then they also really make you think, isn't it? Because one thing I like about partners they make you innovate whether they know it or not but they do because, you know, they will come and ask you questions about, "Hey, tell me why you are doing this. Can I review your architecture?" You know, and then they will try to really say I don't think this is going to work. Because they work with so many different clients, not JCI, they bring all that expertise and that's what I look from them, you know, just not, you know, do a T&M job for me. I ask you to do this go... They just bring more than that. That's how I pick my partners. 
And that's how, you know, Hitachi's Vantara is definitely one of a good partner from that sense because they bring a lot more innovation to the table and I appreciate about that. >> It sounds like, it sounds like a flywheel of innovation. >> Yeah. >> I love that. Last question for both of you, which we're almost out of time here, Prem, I want to go back to you. So I'm a partner, I'm planning on redefining CloudOps at my company. What are the two things you want me to remember from Hitachi Vantara's perspective? >> So before I get to that question, Lisa, the partners that we work with are slightly different from from the partners that, again, there are some similar partners. There are some different partners, right? For example, we pick and choose especially in the Harc space, we pick and choose partners that are more future focused, right? We don't care if they are huge companies or small companies. We go after companies that are future focused that are really, really nimble and can change for our customers need because it's not our need, right? When I pick partners for Harc my ultimate endeavor is to ensure, in this case because we've got (indistinct) GCI on, we are able to operate (indistinct) with the level of satisfaction above and beyond that they're expecting from us. And whatever I don't have I need to get from my partners so that I bring this solution to Suresh. As opposed to bringing a whole lot of people and making them stand in front of Suresh. So that's how I think about partners. What do I want them to do from, and we've always done this so we do workshops with our partners. We just don't go by tools. When we say we are partnering with X, Y, Z, we do workshops with them and we say, this is how we are thinking. Either you build it in your roadmap that helps us leverage you, continue to leverage you. And we do have minimal investments where we fix gaps. We're building some utilities for us to deliver the best service to our customers. And our intention is not to build a product to compete with our partner. Our intention is to just fill the wide space until they go build it into their product suite that we can then leverage it for our customers. So always think about end customers and how can we make it easy for them? Because for all the tool vendors out there seeing this and wanting to partner with Hitachi the biggest thing is tools sprawl, especially on the cloud is very real. For every problem on the cloud. I have a billion tools that are being thrown at me as Suresh if I'm putting my installation and it's not easy at all. It's so confusing. >> Yeah. >> So that's what we want. We want people to simplify that landscape for our end customers, and we are looking at partners that are thinking through the simplification not just making money. >> That makes perfect sense. There really is a very strong symbiosis it sounds like, in the partner ecosystem. And there's a lot of enablement that goes on back and forth it sounds like as well, which is really, to your point it's all about the end customers and what they're expecting. Suresh, last question for you is which is the same one, if I'm a partner what are the things that you want me to consider as I'm planning to redefine CloudOps at my company? >> I'll keep it simple. In my view, I mean, we've touched upon it in multiple facets in this interview about that, the three things. First and foremost, reliability. 
You know, in today's day and age my products has to be reliable, available and, you know, make sure that the customer's happy with what they're really dealing with, number one. Number two, my product has to be secure. Security is super, super important, okay? And number three, I need to really make sure my customers are getting the value so I keep my cost low. So these three is what I would focus and what I expect from my partners. >> Great advice, guys. Thank you so much for talking through this with me and really showing the audience how strong the partnership is between Hitachi Vantara and JCI. What you're doing together, we'll have to talk to you again to see where things go but we really appreciate your insights and your perspectives. Thank you. >> Thank you, Lisa. >> Thanks Lisa, thanks for having us. >> My pleasure. For my guests, I'm Lisa Martin. Thank you so much for watching. (soothing music)

Published Date : Feb 27 2023

SUMMARY :

In the next 15 minutes or so and pin points that you all the services we see. Talk to me Prem about some of the other in the episode as we move forward. that taming the complexity. and play in the market to our customers. that you talked about and it sounds Now the reason we thought about Harc was, and the inherent complexities But at the same time, we like a flywheel of innovation. What are the two things you want me especially in the Harc space, we pick for our end customers, and we are looking it sounds like, in the partner ecosystem. make sure that the customer's happy showing the audience how Thank you so much for watching.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
SureshPERSON

0.99+

HitachiORGANIZATION

0.99+

Lisa MartinPERSON

0.99+

Suresh MothikuruPERSON

0.99+

JapanLOCATION

0.99+

Prem BalasubramanianPERSON

0.99+

JCIORGANIZATION

0.99+

LisaPERSON

0.99+

HarcORGANIZATION

0.99+

Johnson ControlsORGANIZATION

0.99+

DallasLOCATION

0.99+

IndiaLOCATION

0.99+

AlibabaORGANIZATION

0.99+

HyderabadLOCATION

0.99+

Hitachi VantaraORGANIZATION

0.99+

Johnson ControlsORGANIZATION

0.99+

PortugalLOCATION

0.99+

USLOCATION

0.99+

SCLORGANIZATION

0.99+

AccentureORGANIZATION

0.99+

bothQUANTITY

0.99+

AWSORGANIZATION

0.99+

two partsQUANTITY

0.99+

150 servicesQUANTITY

0.99+

SecondQUANTITY

0.99+

FirstQUANTITY

0.99+

next weekDATE

0.99+

200 servicesQUANTITY

0.99+

First questionQUANTITY

0.99+

PremPERSON

0.99+

tomorrowDATE

0.99+

PolarisORGANIZATION

0.99+

T&MORGANIZATION

0.99+

hundreds of servicesQUANTITY

0.99+

three thingsQUANTITY

0.98+

threeQUANTITY

0.98+

agileTITLE

0.98+

Prem Balasubramanian & Suresh Mothikuru


 

(soothing music) >> Hey everyone, welcome to this event, "Build Your Cloud Center of Excellence." I'm your host, Lisa Martin. In the next 15 minutes or so my guest and I are going to be talking about redefining cloud operations, an application modernization for customers, and specifically how partners are helping to speed up that process. As you saw on our first two segments, we talked about problems enterprises are facing with cloud operations. We talked about redefining cloud operations as well to solve these problems. This segment is going to be focusing on how Hitachi Vantara's partners are really helping to speed up that process. We've got Johnson Controls here to talk about their partnership with Hitachi Vantara. Please welcome both of my guests, Prem Balasubramanian is with us, SVP and CTO Digital Solutions at Hitachi Vantara. And Suresh Mothikuru, SVP Customer Success Platform Engineering and Reliability Engineering from Johnson Controls. Gentlemen, welcome to the program, great to have you. >> Thank. >> Thank you, Lisa. >> First question is to both of you and Suresh, we'll start with you. We want to understand, you know, the cloud operations landscape is increasingly complex. We've talked a lot about that in this program. Talk to us, Suresh, about some of the biggest challenges and pin points that you faced with respect to that. >> Thank you. I think it's a great question. I mean, cloud has evolved a lot in the last 10 years. You know, when we were talking about a single cloud whether it's Azure or AWS and GCP, and that was complex enough. Now we are talking about multi-cloud and hybrid and you look at Johnson Controls, we have Azure we have AWS, we have GCP, we have Alibaba and we also support on-prem. So the architecture has become very, very complex and the complexity has grown so much that we are now thinking about whether we should be cloud native or cloud agnostic. So I think, I mean, sometimes it's hard to even explain the complexity because people think, oh, "When you go to cloud, everything is simplified." Cloud does give you a lot of simplicity, but it also really brings a lot more complexity along with it. So, and then next one is pretty important is, you know, generally when you look at cloud services, you have plenty of services that are offered within a cloud, 100, 150 services, 200 services. Even within those companies, you take AWS they might not know, an individual resource might not know about all the services we see. That's a big challenge for us as a customer to really understand each of the service that is provided in these, you know, clouds, well, doesn't matter which one that is. And the third one is pretty big, at least at the CTO the CIO, and the senior leadership level, is cost. Cost is a major factor because cloud, you know, will eat you up if you cannot manage it. If you don't have a good cloud governance process it because every minute you are in it, it's burning cash. So I think if you ask me, these are the three major things that I am facing day to day and that's where I use my partners, which I'll touch base down the line. >> Perfect, we'll talk about that. So Prem, I imagine that these problems are not unique to Johnson Controls or JCI, as you may hear us refer to it. Talk to me Prem about some of the other challenges that you're seeing within the customer landscape. >> So, yeah, I agree, Lisa, these are not very specific to JCI, but there are specific issues in JCI, right? 
So the way we think about these are, there is a common issue when people go to the cloud and there are very specific and unique issues for businesses, right? So JCI, and we will talk about this in the episode as we move forward. I think Suresh and his team have done some phenomenal step around how to manage this complexity. But there are customers who have a lesser complex cloud which is, they don't go to Alibaba, they don't have footprint in all three clouds. So their multi-cloud footprint could be a bit more manageable, but still struggle with a lot of the same problems around cost, around security, around talent. Talent is a big thing, right? And in Suresh's case I think it's slightly more exasperated because every cloud provider Be it AWS, JCP, or Azure brings in hundreds of services and there is nobody, including many of us, right? We learn every day, nowadays, right? It's not that there is one service integrator who knows all, while technically people can claim as a part of sales. But in reality all of us are continuing to learn in this landscape. And if you put all of this equation together with multiple clouds the complexity just starts to exponentially grow. And that's exactly what I think JCI is experiencing and Suresh's team has been experiencing, and we've been working together. But the common problems are around security talent and cost management of this, right? Those are my three things. And one last thing that I would love to say before we move away from this question is, if you think about cloud operations as a concept that's evolving over the last few years, and I have touched upon this in the previous episode as well, Lisa, right? If you take architectures, we've gone into microservices, we've gone into all these server-less architectures all the fancy things that we want. That helps us go to market faster, be more competent to as a business. But that's not simplified stuff, right? That's complicated stuff. It's a lot more distributed. Second, again, we've advanced and created more modern infrastructure because all of what we are talking is platform as a service, services on the cloud that we are consuming, right? In the same case with development we've moved into a DevOps model. We kind of click a button put some code in a repository, the code starts to run in production within a minute, everything else is automated. But then when we get to operations we are still stuck in a very old way of looking at cloud as an infrastructure, right? So you've got an infra team, you've got an app team, you've got an incident management team, you've got a soft knock, everything. But again, so Suresh can talk about this more because they are making significant strides in thinking about this as a single workload, and how do I apply engineering to go manage this? Because a lot of it is codified, right? So automation. Anyway, so that's kind of where the complexity is and how we are thinking, including JCI as a partner thinking about taming that complexity as we move forward. >> Suresh, let's talk about that taming the complexity. You guys have both done a great job of articulating the ostensible challenges that are there with cloud, especially multi-cloud environments that you're living in. But Suresh, talk about the partnership with Hitachi Vantara. How is it helping to dial down some of those inherent complexities? >> I mean, I always, you know, I think I've said this to Prem multiple times. I treat my partners as my internal, you know, employees. 
I look at Prem as my coworker or my peers. So the reason for that is I want Prem to have the same vested interest as a partner in my success or JCI success and vice versa, isn't it? I think that's how we operate and that's how we have been operating. And I think I would like to thank Prem and Hitachi Vantara for that really been an amazing partnership. And as he was saying, we have taken a completely holistic approach to how we want to really be in the market and play in the market to our customers. So if you look at my jacket it talks about OpenBlue platform. This is what JCI is building, that we are building this OpenBlue digital platform. And within that, my team, along with Prem's or Hitachi's, we have built what we call as Polaris. It's a technical platform where our apps can run. And this platform is automated end-to-end from a platform engineering standpoint. We stood up a platform engineering organization, a reliability engineering organization, as well as a support organization where Hitachi played a role. As I said previously, you know, for me to scale I'm not going to really have the talent and the knowledge of every function that I'm looking at. And Hitachi, not only they brought the talent but they also brought what he was talking about, Harc. You know, they have set up a lot and now we can leverage it. And they also came up with some really interesting concepts. I went and met them in India. They came up with this concept called IPL. Okay, what is that? They really challenged all their employees that's working for GCI to come up with innovative ideas to solve problems proactively, which is self-healing. You know, how you do that? So I think partners, you know, if they become really vested in your interests, they can do wonders for you. And I think in this case Hitachi is really working very well for us and in many aspects. And I'm leveraging them... You started with support, now I'm leveraging them in the automation, the platform engineering, as well as in the reliability engineering and then in even in the engineering spaces. And that like, they are my end-to-end partner right now? >> So you're really taking that holistic approach that you talked about and it sounds like it's a very collaborative two-way street partnership. Prem, I want to go back to, Suresh mentioned Harc. Talk a little bit about what Harc is and then how partners fit into Hitachi's Harc strategy. >> Great, so let me spend like a few seconds on what Harc is. Lisa, again, I know we've been using the term. Harc stands for Hitachi application reliability sectors. Now the reason we thought about Harc was, like I said in the beginning of this segment, there is an illusion from an architecture standpoint to be more modern, microservices, server-less, reactive architecture, so on and so forth. There is an illusion in your development methodology from Waterfall to agile, to DevOps to lean, agile to path program, whatever, right? Extreme program, so on and so forth. There is an evolution in the space of infrastructure from a point where you were buying these huge humongous servers and putting it in your data center to a point where people don't even see servers anymore, right? You buy it, by a click of a button you don't know the size of it. All you know is a, it's (indistinct) whatever that name means. Let's go provision it on the fly, get go, get your work done, right? 
When all of this is advanced when you think about operations people have been solving the problem the way they've been solving it 20 years back, right? That's the issue. And Harc was conceived exactly to fix that particular problem, to think about a modern way of operating a modern workload, right? That's exactly what Harc. So it brings together finest engineering talent. So the teams are trained in specific ways of working. We've invested and implemented some of the IP, we work with the best of the breed partner ecosystem, and I'll talk about that in a minute. And we've got these facilities in Dallas and I am talking from my office in Dallas, which is a Harc facility in the US from where we deliver for our customers. And then back in Hyderabad, we've got one more that we opened and these are facilities from where we deliver Harc services for our customers as well, right? And then we are expanding it in Japan and Portugal as we move into 23. That's kind of the plan that we are thinking through. However, that's what Harc is, Lisa, right? That's our solution to this cloud complexity problem. Right? >> Got it, and it sounds like it's going quite global, which is fantastic. So Suresh, I want to have you expand a bit on the partnership, the partner ecosystem and the role that it plays. You talked about it a little bit but what role does the partner ecosystem play in really helping JCI to dial down some of those challenges and the inherent complexities that we talked about? >> Yeah, sure. I think partners play a major role and JCI is very, very good at it. I mean, I've joined JCI 18 months ago, JCI leverages partners pretty extensively. As I said, I leverage Hitachi for my, you know, A group and the (indistinct) space and the cloud operations space, and they're my primary partner. But at the same time, we leverage many other partners. Well, you know, Accenture, SCL, and even on the tooling side we use Datadog and (indistinct). All these guys are major partners of our because the way we like to pick partners is based on our vision and where we want to go. And pick the right partner who's going to really, you know make you successful by investing their resources in you. And what I mean by that is when you have a partner, partner knows exactly what kind of skillset is needed for this customer, for them to really be successful. As I said earlier, we cannot really get all the skillset that we need, we rely on the partners and partners bring the the right skillset, they can scale. I can tell Prem tomorrow, "Hey, I need two parts by next week", and I guarantee it he's going to bring two parts to me. So they let you scale, they let you move fast. And I'm a big believer, in today's day and age, to get things done fast and be more agile. I'm not worried about failure, but for me moving fast is very, very important. And partners really do a very good job bringing that. But I think then they also really make you think, isn't it? Because one thing I like about partners they make you innovate whether they know it or not but they do because, you know, they will come and ask you questions about, "Hey, tell me why you are doing this. Can I review your architecture?" You know, and then they will try to really say I don't think this is going to work. Because they work with so many different clients, not JCI, they bring all that expertise and that's what I look from them, you know, just not, you know, do a T&M job for me. I ask you to do this go... They just bring more than that. That's how I pick my partners. 
And that's how, you know, Hitachi's Vantara is definitely one of a good partner from that sense because they bring a lot more innovation to the table and I appreciate about that. >> It sounds like, it sounds like a flywheel of innovation. >> Yeah. >> I love that. Last question for both of you, which we're almost out of time here, Prem, I want to go back to you. So I'm a partner, I'm planning on redefining CloudOps at my company. What are the two things you want me to remember from Hitachi Vantara's perspective? >> So before I get to that question, Lisa, the partners that we work with are slightly different from from the partners that, again, there are some similar partners. There are some different partners, right? For example, we pick and choose especially in the Harc space, we pick and choose partners that are more future focused, right? We don't care if they are huge companies or small companies. We go after companies that are future focused that are really, really nimble and can change for our customers need because it's not our need, right? When I pick partners for Harc my ultimate endeavor is to ensure, in this case because we've got (indistinct) GCI on, we are able to operate (indistinct) with the level of satisfaction above and beyond that they're expecting from us. And whatever I don't have I need to get from my partners so that I bring this solution to Suresh. As opposed to bringing a whole lot of people and making them stand in front of Suresh. So that's how I think about partners. What do I want them to do from, and we've always done this so we do workshops with our partners. We just don't go by tools. When we say we are partnering with X, Y, Z, we do workshops with them and we say, this is how we are thinking. Either you build it in your roadmap that helps us leverage you, continue to leverage you. And we do have minimal investments where we fix gaps. We're building some utilities for us to deliver the best service to our customers. And our intention is not to build a product to compete with our partner. Our intention is to just fill the wide space until they go build it into their product suite that we can then leverage it for our customers. So always think about end customers and how can we make it easy for them? Because for all the tool vendors out there seeing this and wanting to partner with Hitachi the biggest thing is tools sprawl, especially on the cloud is very real. For every problem on the cloud. I have a billion tools that are being thrown at me as Suresh if I'm putting my installation and it's not easy at all. It's so confusing. >> Yeah. >> So that's what we want. We want people to simplify that landscape for our end customers, and we are looking at partners that are thinking through the simplification not just making money. >> That makes perfect sense. There really is a very strong symbiosis it sounds like, in the partner ecosystem. And there's a lot of enablement that goes on back and forth it sounds like as well, which is really, to your point it's all about the end customers and what they're expecting. Suresh, last question for you is which is the same one, if I'm a partner what are the things that you want me to consider as I'm planning to redefine CloudOps at my company? >> I'll keep it simple. In my view, I mean, we've touched upon it in multiple facets in this interview about that, the three things. First and foremost, reliability. 
You know, in today's day and age my product has to be reliable, available, and, you know, make sure that the customer's happy with what they're really dealing with, number one. Number two, my product has to be secure. Security is super, super important, okay? And number three, I need to really make sure my customers are getting the value, so I keep my cost low. So these three are what I would focus on and what I expect from my partners. >> Great advice, guys. Thank you so much for talking through this with me and really showing the audience how strong the partnership is between Hitachi Vantara and JCI, and what you're doing together. We'll have to talk to you again to see where things go, but we really appreciate your insights and your perspectives. Thank you. >> Thank you, Lisa. >> Thanks Lisa, thanks for having us. >> My pleasure. For my guests, I'm Lisa Martin. Thank you so much for watching. (soothing music)

Published Date : Feb 24 2023


Why Should Customers Care About SuperCloud


 

Hello and welcome back to Supercloud 2 where we examine the intersection of cloud and data in the 2020s. My name is Dave Vellante. Our Supercloud panel, our power panel is back. Maribel Lopez is the founder and principal analyst at Lopez Research. Sanjeev Mohan is former Gartner analyst and principal at Sanjeev Mohan. And Keith Townsend is the CTO advisor. Folks, welcome back and thanks for your participation today. Good to see you. >> Okay, great. >> Great to see you. >> Thanks. Let me start, Maribel, with you. Bob Muglia, we had a conversation as part of Supercloud the other day. And he said, "Dave, I like the work, you got to simplify this a little bit." So he said, quote, "A Supercloud is a platform." He said, "Think of it as a platform that provides programmatically consistent services hosted on heterogeneous cloud providers." And then Nelu Mihai said, "Well, wait a minute. This is just going to create more stove pipes. We need more standards in an architecture," which is kind of what Berkeley Sky Computing initiative is all about. So there's a sort of a debate going on. Is supercloud an architecture, a platform? Or maybe it's just another buzzword. Maribel, do you have a thought on this? >> Well, the easy answer would be to say it's just a buzzword. And then we could just kill the conversation and be done with it. But I think the term, it's more than that, right? The term actually isn't new. You can go back to at least 2016 and find references to supercloud in Cornell University or assist in other documents. So, having said this, I think we've been talking about Supercloud for a while, so I assume it's more than just a fancy buzzword. But I think it really speaks to that undeniable trend of moving towards an abstraction layer to deal with the chaos of what we consider managing multiple public and private clouds today, right? So one definition of the technology platform speaks to a set of services that allows companies to build and run that technology smoothly without worrying about the underlying infrastructure, which really gets back to something that Bob said. And some of the question is where that lives. And you could call that an abstraction layer. You could call it cross-cloud services, hybrid cloud management. So I see momentum there, like legitimate momentum with enterprise IT buyers that are trying to deal with the fact that they have multiple clouds now. So where I think we're moving is trying to define what are the specific attributes and frameworks of that that would make it so that it could be consistent across clouds. What is that layer? And maybe that's what the supercloud is. But one of the things I struggle with with supercloud is. What are we really trying to do here? Are we trying to create differentiated services in the supercloud layer? Is a supercloud just another variant of what AWS, GCP, or others do? You spoken to Walmart about its cloud native platform, and that's an example of somebody deciding to do it themselves because they need to deal with this today and not wait for some big standards thing to happen. So whatever it is, I do think it's something. I think we're trying to maybe create an architecture out of it would be a better way of saying it so that it does get to those set of principles, but it also needs to be edge aware. I think whenever we talk about supercloud, we're always talking about like the big centralized cloud. And I think we need to think about all the distributed clouds that we're looking at in edge as well. 
So that might be one of the ways that supercloud evolves. >> So thank you, Maribel. Keith, Brian Gracely, Gracely's law, things kind of repeat themselves. We've seen it all before. And so what Muglia brought to the forefront is this idea of a platform where the platform provider is really responsible for the architecture. Of course, the drawback is then you get a a bunch of stove pipes architectures. But practically speaking, that's kind of the way the industry has always evolved, right? >> So if we look at this from the practitioner's perspective and we talk about platforms, traditionally vendors have provided the platforms for us, whether it's distribution of lineage managed by or provided by Red Hat, Windows, servers, .NET, databases, Oracle. We think of those as platforms, things that are fundamental we can build on top. Supercloud isn't today that. It is a framework or idea, kind of a visionary goal to get to a point that we can have a platform or a framework. But what we're seeing repeated throughout the industry in customers, whether it's the Walmarts that's kind of supersized the idea of supercloud, or if it's regular end user organizations that are coming out with platform groups, groups who normalize cloud native infrastructure, AWS multi-cloud, VMware resources to look like one thing internally to their developers. We're seeing this trend that there's a desire for a platform that provides the capabilities of a supercloud. >> Thank you for that. Sanjeev, we often use Snowflake as a supercloud example, and now would presumably would be a platform with an architecture that's determined by the vendor. Maybe Databricks is pushing for a more open architecture, maybe more of that nirvana that we were talking about before to solve for supercloud. But regardless, the practitioner discussions show. At least currently, there's not a lot of cross-cloud data sharing. I think it could be a killer use case, egress charges or a barrier. But how do you see it? Will that change? Will we hide that underlying complexity and start sharing data across cloud? Is that something that you think Snowflake or others will be able to achieve? >> So I think we are already starting to see some of that happen. Snowflake is definitely one example that gets cited a lot. But even we don't talk about MongoDB in this like, but you could have a MongoDB cluster, for instance, with nodes sitting in different cloud providers. So there are companies that are starting to do it. The advantage that these companies have, let's take Snowflake as an example, it's a centralized proprietary platform. And they are building the capabilities that are needed for supercloud. So they're building things like you can push down your data transformations. They have the entire security and privacy suite. Data ops, they're adding those capabilities. And if I'm not mistaken, it'll be very soon, we will see them offer data observability. So it's all works great as long as you are in one platform. And if you want resilience, then Snowflake, Supercloud, great example. But if your primary goal is to choose the most cost-effective service irrespective of which cloud it sits in, then things start falling sideways. For example, I may be a very big Snowflake user. And I like Snowflake's resilience. I can move from one cloud to another cloud. Snowflake does it for me. But what if I want to train a very large model? Maybe Databricks is a better platform for that. So how do I do move my workload from one platform to another platform? 
That tooling does not exist. So we need server hybrid, cross-cloud, data ops platform. Walmart has done a great job, but they built it by themselves. Not every company is Walmart. Like Maribel and Keith said, we need standards, we need reference architectures, we need some sort of a cost control. I was just reading recently, Accenture has been public about their AWS bill. Every time they get the bill is tens of millions of lines, tens of millions 'cause there are over thousand teams using AWS. If we have not been able to corral a usage of a single cloud, now we're talking about supercloud, we've got multiple clouds, and hybrid, on-prem, and edge. So till we've got some cross-platform tooling in place, I think this will still take quite some time for it to take shape. >> It's interesting. Maribel, Walmart would tell you that their on-prem infrastructure is cheaper to run than the stuff in the cloud. but at the same time, they want the flexibility and the resiliency of their three-legged stool model. So the point as Sanjeev was making about hybrid. It's an interesting balance, isn't it, between getting your lowest cost and at the same time having best of breed and scale? >> It's basically what you're trying to optimize for, as you said, right? And by the way, to the earlier point, not everybody is at Walmart's scale, so it's not actually cheaper for everybody to have the purchasing power to make the cloud cheaper to have it on-prem. But I think what you see almost every company, large or small, moving towards is this concept of like, where do I find the agility? And is the agility in building the infrastructure for me? And typically, the thing that gives you outside advantage as an organization is not how you constructed your cloud computing infrastructure. It might be how you structured your data analytics as an example, which cloud is related to that. But how do you marry those two things? And getting back to sort of Sanjeev's point. We're in a real struggle now where one hand we want to have best of breed services and on the other hand we want it to be really easy to manage, secure, do data governance. And those two things are really at odds with each other right now. So if you want all the knobs and switches of a service like geospatial analytics and big query, you're going to have to use Google tools, right? Whereas if you want visibility across all the clouds for your application of state and understand the security and governance of that, you're kind of looking for something that's more cross-cloud tooling at that point. But whenever you talk to somebody about cross-cloud tooling, they look at you like that's not really possible. So it's a very interesting time in the market. Now, we're kind of layering this concept of supercloud on it. And some people think supercloud's about basically multi-cloud tooling, and some people think it's about a whole new architectural stack. So we're just not there yet. But it's not all about cost. I mean, cloud has not been about cost for a very, very long time. Cloud has been about how do you really make the most of your data. And this gets back to cross-cloud services like Snowflake. Why did they even exist? They existed because we had data everywhere, but we need to treat data as a unified object so that we can analyze it and get insight from it. And so that's where some of the benefit of these cross-cloud services are moving today. Still a long way to go, though, Dave. 
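A concrete way to picture the abstraction layer and cross-cloud tooling the panel keeps circling is a thin, programmatically consistent interface over heterogeneous providers. The sketch below is purely illustrative, not a supercloud implementation: it wraps object storage on AWS and Google Cloud behind one interface using the standard boto3 and google-cloud-storage clients, and the bucket names and credentials are assumed placeholders.

```python
from abc import ABC, abstractmethod

import boto3                      # AWS SDK
from google.cloud import storage  # Google Cloud Storage SDK


class ObjectStore(ABC):
    """One consistent contract, regardless of which cloud sits underneath."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class S3Store(ObjectStore):
    def __init__(self, bucket: str):
        self._s3 = boto3.client("s3")
        self._bucket = bucket

    def put(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)

    def get(self, key: str) -> bytes:
        return self._s3.get_object(Bucket=self._bucket, Key=key)["Body"].read()


class GCSStore(ObjectStore):
    def __init__(self, bucket: str):
        self._bucket = storage.Client().bucket(bucket)

    def put(self, key: str, data: bytes) -> None:
        self._bucket.blob(key).upload_from_string(data)

    def get(self, key: str) -> bytes:
        return self._bucket.blob(key).download_as_bytes()


def archive(store: ObjectStore, key: str, payload: bytes) -> None:
    """Application code targets the abstraction, not a specific provider."""
    store.put(key, payload)


# The same call works against either backend (bucket names are placeholders).
archive(S3Store("my-team-archive"), "reports/q4.json", b"{}")
archive(GCSStore("my-team-archive-eu"), "reports/q4.json", b"{}")
```

The hard parts the panelists debate, such as identity, governance, egress cost, and latency, are exactly what a thin veneer like this does not solve, which is why the platform-versus-architecture question matters.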
>> Keith, I reached out to my friends at ETR given the macro headwinds, And you're right, Maribel, cloud hasn't really been about just about cost savings. But I reached out to the ETR, guys, what's your data show in terms of how customers are dealing with the economic headwinds? And they said, by far, their number one strategy to cut cost is consolidating redundant vendors. And a distant second, but still notable was optimizing cloud costs. Maybe using reserve instances, or using more volume buying. Nowhere in there. And I asked them to, "Could you go look and see if you can find it?" Do we see repatriation? And you hear this a lot. You hear people whispering as analysts, "You better look into that repatriation trend." It's pretty big. You can't find it. But some of the Walmarts in the world, maybe even not repatriating, but they maybe have better cost structure on-prem. Keith, what are you seeing from the practitioners that you talk to in terms of how they're dealing with these headwinds? >> Yeah, I just got into a conversation about this just this morning with (indistinct) who is an analyst over at GigaHome. He's reading the same headlines. Repatriation is happening at large scale. I think this is kind of, we have these quiet terms now. We have quiet quitting, we have quiet hiring. I think we have quiet repatriation. Most people haven't done away with their data centers. They're still there. Whether they're completely on-premises data centers, and they own assets, or they're partnerships with QTX, Equinix, et cetera, they have these private cloud resources. What I'm seeing practically is a rebalancing of workloads. Do I really need to pay AWS for this instance of SAP that's on 24 hours a day versus just having it on-prem, moving it back to my data center? I've talked to quite a few customers who were early on to moving their static SAP workloads onto the public cloud, and they simply moved them back. Surprising, I was at VMware Explore. And we can talk about this a little bit later on. But our customers, net new, not a lot that were born in the cloud. And they get to this point where their workloads are static. And they look at something like a Kubernetes, or a OpenShift, or VMware Tanzu. And they ask the question, "Do I need the scalability of cloud?" I might consider being a net new VMware customer to deliver this base capability. So are we seeing repatriation as the number one reason? No, I think internal IT operations are just naturally come to this realization. Hey, I have these resources on premises. The private cloud technologies have moved far along enough that I can just simply move this workload back. I'm not calling it repatriation, I'm calling it rightsizing for the operating model that I have. >> Makes sense. Yeah. >> Go ahead. >> If I missed something, Dave, why we are on this topic of repatriation. I'm actually surprised that we are talking about repatriation as a very big thing. I think repatriation is happening, no doubt, but it's such a small percentage of cloud migration that to me it's a rounding error in my opinion. I think there's a bigger problem. The problem is that people don't know where the cost is. If they knew where the cost was being wasted in the cloud, they could do something about it. But if you don't know, then the easy answer is cloud costs a lot and moving it back to on-premises. I mean, take like Capital One as an example. They got rid of all the data centers. Where are they going to repatriate to? They're all in the cloud at this point. 
So I think my point is that data observability is one of the places that has seen a lot of traction is because of cost. Data observability, when it first came into existence, it was all about data quality. Then it was all about data pipeline reliability. And now, the number one killer use case is FinOps. >> Maribel, you had a comment? >> Yeah, I'm kind of in violent agreement with both Sanjeev and Keith. So what are we seeing here? So the first thing that we see is that many people wildly overspent in the big public cloud. They had stranded cloud credits, so to speak. The second thing is, some of them still had infrastructure that was useful. So why not use it if you find the right workloads to what Keith was talking about, if they were more static workloads, if it was already there? So there is a balancing that's going on. And then I think fundamentally, from a trend standpoint, these things aren't binary. Everybody, for a while, everything was going to go to the public cloud and then people are like, "Oh, it's kind of expensive." Then they're like, "Oh no, they're going to bring it all on-prem 'cause it's really expensive." And it's like, "Well, that doesn't necessarily get me some of the new features and functionalities I might want for some of my new workloads." So I'm going to put the workloads that have a certain set of characteristics that require cloud in the cloud. And if I have enough capability on-prem and enough IT resources to manage certain things on site, then I'm going to do that there 'cause that's a more cost-effective thing for me to do. It's not binary. That's why we went to hybrid. And then we went to multi just to describe the fact that people added multiple public clouds. And now we're talking about super, right? So I don't look at it as a one-size-fits-all for any of this. >> A a number of practitioners leading up to Supercloud2 have told us that they're solving their cloud complexity by going in monocloud. So they're putting on the blinders. Even though across the organization, there's other groups using other clouds. You're like, "In my group, we use AWS, or my group, we use Azure. And those guys over there, they use Google. We just kind of keep it separate." Are you guys hearing this in your view? Is that risky? Are they missing out on some potential to tap best of breed? What do you guys think about that? >> Everybody thinks they're monocloud. Is anybody really monocloud? It's like a group is monocloud, right? >> Right. >> This genie is out of the bottle. We're not putting the genie back in the bottle. You might think your monocloud and you go like three doors down and figure out the guy or gal is on a fundamentally different cloud, running some analytics workload that you didn't know about. So, to Sanjeev's earlier point, they don't even know where their cloud spend is. So I think the concept of monocloud, how that's actually really realized by practitioners is primary and then secondary sources. So they have a primary cloud that they run most of their stuff on, and that they try to optimize. And we still have forked workloads. Somebody decides, "Okay, this SAP runs really well on this, or these analytics workloads run really well on that cloud." And maybe that's how they parse it. But if you really looked at it, there's very few companies, if you really peaked under the hood and did an analysis that you could find an actual monocloud structure. They just want to pull it back in and make it more manageable. And I respect that. 
You want to do what you can to try to streamline the complexity of that. >> Yeah, we're- >> Sorry, go ahead, Keith. >> Yeah, we're doing this thing where we review AWS service every day. Just in your inbox, learn about a new AWS service cursory. There's 238 AWS products just on the AWS cloud itself. Some of them are redundant, but you get the idea. So the concept of monocloud, I'm in filing agreement with Maribel on this that, yes, a group might say I want a primary cloud. And that primary cloud may be the AWS. But have you tried the licensed Oracle database on AWS? It is really tempting to license Oracle on Oracle Cloud, Microsoft on Microsoft. And I can't get RDS anywhere but Amazon. So while I'm driven to desire the simplicity, the reality is whether be it M&A, licensing, data sovereignty. I am forced into a multi-cloud management style. But I do agree most people kind of do this one, this primary cloud, secondary cloud. And I guarantee you're going to have a third cloud or a fourth cloud whether you want to or not via shadow IT, latency, technical reasons, et cetera. >> Thank you. Sanjeev, you had a comment? >> Yeah, so I just wanted to mention, as an organization, I'm complete agreement, no organization is monocloud, at least if it's a large organization. Large organizations use all kinds of combinations of cloud providers. But when you talk about a single workload, that's where the program arises. As Keith said, the 238 services in AWS. How in the world am I going to be an expert in AWS, but then say let me bring GCP or Azure into a single workload? And that's where I think we probably will still see monocloud as being predominant because the team has developed its expertise on a particular cloud provider, and they just don't have the time of the day to go learn yet another stack. However, there are some interesting things that are happening. For example, if you look at a multi-cloud example where Oracle and Microsoft Azure have that interconnect, so that's a beautiful thing that they've done because now in the newest iteration, it's literally a few clicks. And then behind the scene, your .NET application and your Oracle database in OCI will be configured, the identities in active directory are federated. And you can just start using a database in one cloud, which is OCI, and an application, your .NET in Azure. So till we see this kind of a solution coming out of the providers, I think it's is unrealistic to expect the end users to be able to figure out multiple clouds. >> Well, I have to share with you. I can't remember if he said this on camera or if it was off camera so I'll hold off. I won't tell you who it is, but this individual was sort of complaining a little bit saying, "With AWS, I can take their best AI tools like SageMaker and I can run them on my Snowflake." He said, "I can't do that in Google. Google forces me to go to BigQuery if I want their excellent AI tools." So he was sort of pushing, kind of tweaking a little bit. Some of the vendor talked that, "Oh yeah, we're so customer-focused." Not to pick on Google, but I mean everybody will say that. And then you say, "If you're so customer-focused, why wouldn't you do a ABC?" So it's going to be interesting to see who leads that integration and how broadly it's applied. But I digress. Keith, at our first supercloud event, that was on August 9th. And it was only a few months after Broadcom announced the VMware acquisition. A lot of people, myself included said, "All right, cuts are coming." 
Generally, Tanzu is probably going to be under the radar, but at Supercloud 22 and presumably VMware Explore, the company really... well, certainly in the US, touted its Tanzu capabilities. I wasn't at VMware Explore Europe, but I bet you heard similar things. Hock Tan has been blogging and very vocal about cross-cloud services and multi-cloud, which doesn't happen without Tanzu. So what did you hear, Keith, in Europe? What's your latest thinking on VMware's prospects in cross-cloud services/supercloud? >> So I think our friend and Cube co-host would still be even more offended at this statement than he was when I sat in the Cube, maybe five years ago. There's no company better suited to help industries or companies cross the cloud chasm than VMware. That's not a compliment. That's a reality of the industry. This is a very difficult, almost intractable problem. What I heard at VMware Explore Europe was customers serious about this problem, even more so than in the US. Data sovereignty is a real problem in the EU. Try being a company in Switzerland and having the Swiss data sovereignty issues. And there's no local cloud presence there large enough to accommodate your data needs. They had very serious questions about this. I talked to open source project leaders. Open source project leaders were asking me, why should I use the public cloud to host Kubernetes-based workloads, my projects that are building around Kubernetes, and the CNCF infrastructure? Why should I use AWS, Google, or even Azure to host these projects when that's undifferentiated? I know how to run Kubernetes, so why not run it on-premises? I don't want to deal with the hardware problems. So again, really great questions. And then there was always the specter of the problem, I think, we all had with the acquisition of VMware by Broadcom potentially. 4.5 billion in increased profitability in three years is an unbelievable amount of money when you look at the size of the problem. So a lot of the conversation in Europe was about industry at large. How do we do what regulators are asking us to do in a practical way, from a true technology sense? Is VMware cross-cloud great? >> Yeah. So, VMware, obviously, to your point. OpenStack is another example. Actually, OpenStack uptake is still alive and well, especially in those regions where there may not be a public cloud, or there's public policy dictating that. Walmart's using OpenStack. As you know, in IT, some things never die. Question for Sanjeev. And it relates to this new breed of data apps. And Bob Muglia and Tristan Handy from DBT Labs, who are participating in this program, really got us thinking about this. You got data that resides in different clouds, maybe even on-prem. And the machine pulls data from different systems. No humans involved, e-commerce, ERP, et cetera. It creates a plan, outcomes. No human involvement. Today, you're on a CRM system, you're inputting, you're doing forms, you're automating processes. We're talking about a new breed of apps. What are your thoughts on this? Is it real? Is it just way off in the distance? How does machine intelligence fit in? And how does supercloud fit? >> So great point. In fact, the data apps that you're talking about, I call them data products. Data products first came into the limelight in the last couple of years when Zhamak Dehghani started talking about data mesh. I am taking data products out of the data mesh concept, because whether data mesh happens or not is orthogonal to data products.
Data products, basically, are taking a product management view of bringing data from different sources based on what the consumer needs. We were talking earlier today about maybe it's my vacation rentals, or it may be a retail data product, it may be an investment data product. So it's a pre-packaged extraction of data from different sources. But now I have a product that has a whole lifecycle. I can version it. I have new features that get added. And it's very business-data-consumer centric. It uses machine learning. For instance, I may be able to tell whether this data product has stale data. Who is using that data? Based on the usage of the data, I may have new data products that get allocated. I may even have the ability to take existing data products and mash them up into something that I need. So if I'm going to have that kind of power to create a data product, then having a common substrate underneath can be very useful. And that could be supercloud, where I am making API calls. I don't care where the ERP, the CRM, the survey data, the pricing engine sit. For me, there's a logical abstraction. And then I'm building my data product on top of that. So I see a new breed of data products coming out. To answer your question, how early are we, or is this even possible? My prediction is that in 2023, we will start seeing more data products. And then it'll take maybe two to three years for data products to become mainstream. But it's starting this year.
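Sanjeev's description of a data product, a versioned, source-spanning package with its own lifecycle and staleness signal, can be sketched in a few lines. The following is a hypothetical illustration only; the class, the source names, and the 24-hour freshness threshold are invented for the example and do not correspond to any vendor's API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
from typing import Callable, Dict, List


@dataclass
class DataProduct:
    """A minimal 'data product': versioned, built from several sources, freshness-aware."""
    name: str
    version: str
    # Each source is a callable that returns rows; in practice these would be
    # connectors to an ERP, CRM, pricing engine, etc. (hypothetical here).
    sources: Dict[str, Callable[[], List[dict]]]
    max_staleness: timedelta = timedelta(hours=24)
    refreshed_at: datetime = datetime.min.replace(tzinfo=timezone.utc)
    rows: List[dict] = field(default_factory=list)

    def refresh(self) -> None:
        """Pull from every source and rebuild the packaged dataset."""
        self.rows = [row for pull in self.sources.values() for row in pull()]
        self.refreshed_at = datetime.now(timezone.utc)

    def is_stale(self) -> bool:
        """The 'stale data' signal: has the product aged past its freshness SLA?"""
        return datetime.now(timezone.utc) - self.refreshed_at > self.max_staleness


# Illustrative usage with fake sources standing in for real systems.
rentals = DataProduct(
    name="vacation_rentals",
    version="1.2.0",
    sources={
        "bookings_erp": lambda: [{"listing": "A12", "nights": 3, "price": 420.0}],
        "reviews_crm": lambda: [{"listing": "A12", "rating": 4.7}],
    },
)
rentals.refresh()
print(rentals.name, rentals.version, len(rentals.rows), "rows, stale:", rentals.is_stale())
```

In practice the sources would be connectors into the ERP, CRM, and pricing systems he mentions, reached through whatever logical abstraction (the "common substrate") sits underneath.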
>> Subprime mortgages were a data product, but definitely there were humans involved. All right, let's talk about some of the supercloud, multi-cloud players and what their future looks like. You can kind of pick your favorites. VMware, Snowflake, Databricks, Red Hat, Cisco, Dell, HP, Hashi, IBM, CloudFlare. There are many others: Cohesity, Rubrik. Keith, I wanted to start with CloudFlare because they actually use the term supercloud. And just simplifying what they said, they look at it as taking serverless to the max. You write your code and then you can deploy it in seconds worldwide, of course, across the CloudFlare infrastructure. You don't have to spin up containers, you don't go provision instances. CloudFlare worries about all that infrastructure. What are your thoughts on CloudFlare, this approach, and their chances to disrupt the current cloud landscape? >> As Larry Ellison said famously once before, the network is the computer, right? I thought that was Scott McNealy. >> It wasn't Scott McNealy. I knew it was an Oracle line. >> Oracle owns that now, owns that line. >> By purpose or acquisition. >> They should have just called it cloud. >> Yeah, they should have just called it cloud. >> Easier. >> Go ahead. >> But if you think about the CloudFlare capability, CloudFlare in its own right is becoming a decent sized cloud provider. If you have compute out at the edge, when we talk about edge in the sense of CloudFlare and points of presence, literally across the globe, you have all of this excess compute; what do you do with it? First offering, let's disrupt data in the cloud. We can't start the conversation without talking about data. When they say we're going to give you object storage in the cloud without egress charges, that's disruptive. Then we can start to think about the supercloud capability of having compute, EC2, run in AWS, pushing and pulling data from CloudFlare. And now I've disrupted this roach motel data structure, and I'm freely giving away bandwidth, basically. Well, the next layer is not that much more difficult. And I think part of CloudFlare's serverless approach, or supercloud approach, is so that they don't have to commit to a certain type of compute. It is advantageous. It is a feature for me to be able to go to EC2 and pick a memory-heavy model, or a compute-heavy model, or a network-heavy model. CloudFlare has taken away those knobs, and I'm just giving code and allowing that to run. CloudFlare has a massive network. If I can put the code, using the CloudFlare workers, closest to where the data is residing, that's a super compelling observation. The question is, does it scale? I don't get the 238 services. While serverless is great, I have to know what I'm going to build. I don't have a Cognito, or RDS, or all these other services that make AWS, GCP, and Azure appealing from a builder's perspective. So it is a very interesting nascent start. It's great because now they can hide compute. If they don't have the capacity, they can outsource that, maybe at a cost, to one of the other cloud providers, but kind of hiding the compute behind the serverless architecture is a really unique approach. >> Yeah. And they're dipping their toe in the water. And they've announced an object store and a database platform and more to come. We got to wrap. So I wonder, Sanjeev and Maribel, if you could maybe pick some of your favorites from a competitive standpoint. Sanjeev, I felt like just watching Snowflake, I said, okay, in my opinion, they had the right strategy, which was to run on all the clouds, and then try to create that abstraction layer and data sharing across clouds. Even though, let's face it, most of it might be happening across regions if it's happening, but certainly outside of an individual account. But I felt like just observing them that anybody who's a traditional on-prem player moving into the clouds, or anybody who's a cloud native, it just makes total sense to write to the various clouds. And to the extent that you can simplify that for users, it seems to be a logical strategy. Maybe, as I said before, what multi-cloud should have been. But are there companies that you're watching that you think are ahead in the game, or ones that you think are a good model for the future? >> Yes, Snowflake, definitely. In fact, one of the things we have not touched upon very much, and Keith mentioned a little bit, was data sovereignty. Data residency rules can require that certain data should be written into a certain region of a certain cloud. And if my cloud provider can abstract that, or my database provider, then that's perfect for me. So right now, I see Snowflake is way ahead of the pack. I would not put MongoDB too far behind. They don't really talk about this thing. They are in a different space, but now they have a lakehouse, and they've got all of these other SQL access and new capabilities that they're announcing. So I think they would be quite good with that. Oracle is always a dark horse. Oracle seems to have revived its cloud mojo to some extent. And it's doing some interesting stuff. Databricks is the other one. I have not seen Databricks there. They've been very focused on lakehouse, Unity Catalog, and some of those pieces. But they would be the obvious challenger. And if they come into this space of supercloud, then they may bring some open source technologies that others can rely on, like Delta Lake as a table format. >> Yeah. One of these infrastructure players, Dell, HPE, Cisco, even IBM.
I mean, I would be making my infrastructure as programmable and cloud friendly as possible. That seems like table stakes. But Maribel, any companies that stand out to you that we should be paying attention to? >> Well, we already mentioned a bunch of them, so maybe I'll go a slightly different route. I'm watching two companies pretty closely to see what kind of traction they get in their established companies. One we already talked about, which is VMware. And the thing that's interesting about VMware is they're everywhere. And they also have the benefit of having a foot in both camps. If you want to do it the old way, the way you've always done it with VMware, they got all that going on. If you want to try to do a more cross-cloud, multi-cloud native style thing, they're really trying to build tools for that. So I think they have really good access to buyers. And that's one of the reasons why I'm interested in them to see how they progress. The other thing, I think, could be a sleeping horse oddly enough is Google Cloud. They've spent a lot of work and time on Anthos. They really need to create a certain set of differentiators. Well, it's not necessarily in their best interest to be the best multi-cloud player. If they decide that they want to differentiate on a different layer of the stack, let's say they want to be like the person that is really transformative, they talk about transformation cloud with analytics workloads, then maybe they do spend a good deal of time trying to help people abstract all of the other underlying infrastructure and make sure that they get the sexiest, most meaningful workloads into their cloud. So those are two people that you might not have expected me to go with, but I think it's interesting to see not just on the things that might be considered, either startups or more established independent companies, but how some of the traditional providers are trying to reinvent themselves as well. >> I'm glad you brought that up because if you think about what Google's done with Kubernetes. I mean, would Google even be relevant in the cloud without Kubernetes? I could argue both sides of that. But it was quite a gift to the industry. And there's a motivation there to do something unique and different from maybe the other cloud providers. And I'd throw in Red Hat as well. They're obviously a key player and Kubernetes. And Hashi Corp seems to be becoming the standard for application deployment, and terraform, or cross-clouds, and there are many, many others. I know we're leaving lots out, but we're out of time. Folks, I got to thank you so much for your insights and your participation in Supercloud2. Really appreciate it. >> Thank you. >> Thank you. >> Thank you. >> This is Dave Vellante for John Furrier and the entire Cube community. Keep it right there for more content from Supercloud2.

Published Date : Jan 10 2023


ML & AI Keynote Analysis | AWS re:Invent 2022


 

>>Hey, welcome back everyone. Day three of AWS re:Invent 2022. I'm John Furrier with Dave Vellante, my co-host on theCUBE. Ten years for us; "the leader in high tech coverage" is our slogan. And now 10 years of re:Invent, Dave. We've been to every single one except the original, which we would've come to if Amazon had actually marketed the event, but they didn't. It's more of a customer event. This is day three; it's the machine learning and AI keynote, and Swami's up there. A lot of announcements. We're gonna break this down. We've got Andy Thurai here, Vice President and Principal Analyst at Constellation Research. Andy, great to see you. You've been on theCUBE before, one of our analysts bringing the analysis and commentary to the keynote. This is your wheelhouse, AI. What do you think about Swami up there? I mean, he's awesome. We love him. Big fan. >>Oh yeah, we're fans of him too, but he had 13 announcements. >>A lot. A lot. >>A lot. >>So, well, some of them are... First of all, thanks for having me here, and I'm glad to have both of you on the same show attacking me. I'm just kidding. But some of the announcements really are sort of game changer announcements, and some of them are like, meh, you know, just plugging the holes in what they have, and a lot of golf claps. And you could have also noticed that when he was making the announcements, you know, by the clapping volume difference, you could say which is better, right? But some of the announcements are really, really good. You know, particularly one we talked about was Microsoft taking that out of reach, you know, having OpenAI in there doing the large language models, and going after that, you know, having the transformer available to them. And Amazon was a little bit weak in that area; they don't have a large language model. So, you know, they are taking a different route, saying, you know what, I'll help you train the large language model by yourself, customized models. So I can provide the necessary instances, I can provide the instance volume, memory, the whole thing. So you can train the model by yourself without depending on them, kind of thing. >>So Dave and Andy, I wanna get your thoughts, cuz first of all, we've been following Amazon's deep bench on the infrastructure side. They've been doing a lot of machine learning and AI, a lot of data. It just seems that the sentiment is that there are other competitors doing a good job too. Like Google, Dave. And I've heard folks in the hallway, even here, ex-Amazonians, saying, hey, they train their models on Google, then they bring up SageMaker, cuz it's a better interface. So you've got Google making a play for being that data cloud. Microsoft's obviously putting in a great kind of package to kind of make it turnkey. How do they really stand versus the competition, guys? >>Good question. So they, you know, each have their own uniqueness and their own variation that they take to the field, right? So for example, if you were to look at it, Microsoft is known for the industry-oriented things that they've been going after, you know, industry verticals and whatnot. So that's one of the things I looked at here: you know, they had this Omics announcement, particularly towards that healthcare genomics space. That's a huge space for HPC-related AI/ML applications.
And they have put a lot of things together in here, in SageMaker and in their models, saying, you know, how do you use this to do things like that? Like, for example, drug discovery, genomics analysis, cancer treatment, the whole thing, right? Those are huge volumes of data. So they're going into that healthcare area. Google has taken a different route. I mean, they want to make everything simple: all I have to do is call an API, give it what I need, and then get it done. But Amazon wants to go at a much deeper level, saying, you know what, I wanna provide everything you need, and you can customize the whole thing for what you need. >>So to me, the big picture here is, and Swami referenced it: hey, we are a data company. He talked about how they started with books, and how data informed them as to, you know, what books to place front and center. Here's the big picture, in my view: companies need to put data at the core of their business, and they haven't; they've generally put humans at the core of their business, and data and machine learning are at the outside, on the periphery. Amazon, Google, Microsoft, Facebook have put data at their core. So the question is, how do incumbent companies, and you mentioned some, Toyota, Capital One, Bristol Myers Squibb, I don't know, are those data companies? You know, we'll see. But the challenge is most companies don't have the resources, as you well know, Andy, to actually implement what Google and Facebook and others have. >>So how are they gonna do that? Well, they're gonna buy it, right? So are they gonna build it with tools, that's kind of, like you said, the Amazon approach, or are they gonna buy it from Microsoft and Google? I pulled some ETR data to say, okay, who are the top companies that are showing up in terms of spending? Who's spending with whom? AWS number one, Microsoft number two, Google number three, Databricks number four, just in terms of, you know, presence. And then it falls down to DataRobot, Anaconda, Dataiku; Oracle popped up actually, cuz they're embedding a lot of AI into their products, and of course IBM, and then a lot of smaller companies. But do companies, customers generally, have the resources to do what it takes to implement AI into applications and into workflows? >>So a couple of things on that. One is, when it comes to the top three, the hyperscalers, it's no surprise, because they all want to bring that business to them, to run those specific workloads; that's the next biggest workload. As he was saying in his keynote, there are two things: one is the AI/ML workloads, and the other one is the heavy unstructured workloads that he was talking about. 80%, 90% of the data that's coming off is unstructured, so how do you analyze that? Such as the geospatial data he was talking about. The volumes of data you need to analyze, the deep neural nets you ought to use: only the hyperscalers can do it, right? So it's no wonder all of them are on top for the data. One of the things they announced, which not many people paid attention to, was the zero-ETL that they talked about. What that does is a little bit of a game-changing moment, in the sense that you don't have to, for example, if you were to train on the data and the data is distributed everywhere, bring it all together to integrate it; that's a lot of work doing the ETL.
So by taking Amazon Aurora and Redshift and combining them with zero ETL, and then having Apache Spark applications run on top for analytical applications and ML workloads, that's huge. You don't have to move the data around; you use the data where it is.
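To make the zero-ETL point concrete: once Aurora transactions are replicated into Redshift automatically, an analytics engine can query them in place instead of waiting on a hand-built pipeline. The sketch below is a rough illustration using PySpark's generic JDBC reader; the endpoint, credentials, schema, and table names are placeholders, and the managed Redshift integration for Apache Spark that AWS announced would be configured differently.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("zero-etl-analytics").getOrCreate()

# Read a table that the (hypothetical) Aurora -> Redshift zero-ETL integration
# keeps up to date, so there is no extract/load pipeline of our own to run.
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:redshift://example-workgroup.redshift.amazonaws.com:5439/dev")  # placeholder
    .option("dbtable", "aurora_zeroetl.orders")   # hypothetical schema.table
    .option("user", "analyst")
    .option("password", "***")                     # placeholder credentials
    .option("driver", "com.amazon.redshift.jdbc42.Driver")  # Redshift JDBC driver must be on the classpath
    .load()
)

# Analytics-style aggregation directly on the replicated data.
daily_revenue = (
    orders.groupBy(F.to_date("order_ts").alias("day"))
    .agg(F.sum("amount").alias("revenue"), F.count(F.lit(1)).alias("orders"))
    .orderBy("day")
)
daily_revenue.show()
```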
One of the things, what, what they also announced, which is somewhat interesting, is you saw that they have like 700 different instances geared towards every single workload. And there are some of them very specifically run on the Amazon's new chip. The, the inference in two and theran tr one chips that basically not only has a specific instances but also is run on a high powered chip. And then if you have that data to support that, both the training as well as towards the inference, the efficiency, again, those numbers have to be proven. They claim that it could be anywhere between 40 to 60% faster. >>Well, so a couple things. You're definitely right. I mean Snowflake started out as a data warehouse that was simpler and it's not architected, you know, in and it's first wave to do real time inference, which is not now how, how could they, the other second point is snowflake's two or three years ahead when it comes to governance, data sharing. I mean, Amazon's doing what always does. It's copying, you know, it's customer driven. Cuz they probably walk into an account and they say, Hey look, what's Snowflake's doing for us? This stuff's kicking ass. And they go, oh, that's a good idea, let's do that too. You saw that with separating compute from storage, which is their tiering. You saw it today with extending data, sharing Redshift, data sharing. So how does Snowflake and data bricks approach this? They deal with ecosystem. They bring in ecosystem partners, they bring in open source tooling and that's how they compete. I think there's unquestionably an opportunity for a data cloud. >>Yeah, I think, I think the super cloud conversation and then, you know, sky Cloud with Berkeley Paper and other folks talking about this kind of pre, multi-cloud era. I mean that's what I would call us right now. We are, we're kind of in the pre era of multi-cloud, which by the way is not even yet defined. I think people use that term, Dave, to say, you know, some sort of magical thing that's happening. Yeah. People have multiple clouds. They got, they, they end up by default, not by design as Dell likes to say. Right? And they gotta deal with it. So it's more of they're inheriting multiple cloud environments. It's not necessarily what they want in the situation. So to me that is a big, big issue. >>Yeah, I mean, again, going back to your snowflake and data breaks announcements, they're a data company. So they, that's how they made their mark in the market saying that, you know, I do all those things, therefore you have, I had to have your data because it's a seamless data. And, and Amazon is catching up with that with a lot of that announcements they made, how far it's gonna get traction, you know, to change when I to say, >>Yeah, I mean to me, to me there's no doubt about Dave. I think, I think what Swamee is doing, if Amazon can get corner the market on out of the box ML and AI capabilities so that people can make it easier, that's gonna be the end of the day tell sign can they fill in the gaps. Again, boring is good competition. I don't know mean, mean I'm not following the competition. Andy, this is a real question mark for me. I don't know where they stand. Are they more comprehensive? Are they more deeper? Are they have deeper services? I mean, obviously shows to all the, the different, you know, capabilities. Where, where, where does Amazon stand? What's the process? >>So what, particularly when it comes to the models. 
So they're going at, at a different angle that, you know, I will help you create the models we talked about the zero and the whole data. We'll get the data sources in, we'll create the model. We'll move the, the whole model. We are talking about the ML ops teams here, right? And they have the whole functionality that, that they built ind over the year. So essentially they want to become the platform that I, when you come in, I'm the only platform you would use from the model training to deployment to inference, to model versioning to management, the old s and that's angle they're trying to take. So it's, it's a one source platform. >>What about this idea of technical debt? Adrian Carro was on yesterday. John, I know you talked to him as well. He said, look, Amazon's Legos, you wanna buy a toy for Christmas, you can go out and buy a toy or do you wanna build a, to, if you buy a toy in a couple years, you could break and what are you gonna do? You're gonna throw it out. But if you, if you, if part of your Lego needs to be extended, you extend it. So, you know, George Gilbert was saying, well, there's a lot of technical debt. Adrian was countering that. Does Amazon have technical debt or is that Lego blocks analogy the right one? >>Well, I talked to him about the debt and one of the things we talked about was what do you optimize for E two APIs or Kubernetes APIs? It depends on what team you're on. If you're on the runtime gene, you're gonna optimize for Kubernetes, but E two is the resources you want to use. So I think the idea of the 15 years of technical debt, I, I don't believe that. I think the APIs are still hardened. The issue that he brings up that I think is relevant is it's an end situation, not an or. You can have the bag of Legos, which is the primitives and build a durable application platform, monitor it, customize it, work with it, build it. It's harder, but the outcome is durability and sustainability. Building a toy, having a toy with those Legos glued together for you, you can get the play with, but it'll break over time. Then you gotta replace it. So there's gonna be a toy business and there's gonna be a Legos business. Make your own. >>So who, who are the toys in ai? >>Well, out of >>The box and who's outta Legos? >>The, so you asking about what what toys Amazon building >>Or, yeah, I mean Amazon clearly is Lego blocks. >>If people gonna have out the box, >>What about Google? What about Microsoft? Are they basically more, more building toys, more solutions? >>So Google is more of, you know, building solutions angle like, you know, I give you an API kind of thing. But, but if it comes to vertical industry solutions, Microsoft is, is is ahead, right? Because they have, they have had years of indu industry experience. I mean there are other smaller cloud are trying to do that too. IBM being an example, but you know, the, now they are starting to go after the specific industry use cases. They think that through, for example, you know the medical one we talked about, right? So they want to build the, the health lake, security health lake that they're trying to build, which will HIPPA and it'll provide all the, the European regulations, the whole line yard, and it'll help you, you know, personalize things as you need as well. For example, you know, if you go for a certain treatment, it could analyze you based on your genome profile saying that, you know, the treatment for this particular person has to be individualized this way, but doing that requires a anomalous power, right? 
So if you do applications like that, you could bring in a lot of these verticals, whether healthcare, finance or what have you, and then make it easy for them to use. >>What's the biggest mistake customers make when it comes to machine intelligence, AI, machine learning? >>So many things, right? I could start out with even the model. Basically, when you build a model, you should be able to figure out how long that model is effective. Because as good as creating a model and going to the business and doing things the right way is, there are people that leave the model in place much longer than it's needed, and it's hurting your business more than it's helping. It could be things like that. Or you are not building it responsibly, or you have a bias in your model, and there are so many issues. I don't know if I can pinpoint one, but there are many, many issues. Responsible AI, ethical AI. >>All right, well, we'll leave it there. You're watching theCUBE, the leader in high tech coverage, here at day three of re:Invent. I'm Jeff, with Dave Vellante and Andy joining us here for the critical analysis and breaking down the commentary. We'll be right back with more coverage after this short break.
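A quick aside on the point above about models outliving their usefulness: the sketch below shows one minimal way a team might flag a stale model by comparing recent performance against the baseline recorded at deployment. The metric, window, and threshold are illustrative assumptions, not anything the panel described.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class ModelRecord:
    name: str
    baseline_auc: float        # AUC measured when the model was promoted
    recent_auc: list[float]    # AUC per evaluation window since deployment

def is_stale(model: ModelRecord, tolerance: float = 0.05, window: int = 4) -> bool:
    """Flag a model whose recent performance has drifted below its launch baseline.

    `tolerance` and `window` are arbitrary illustrative values; in practice they
    would come from whoever owns the model on the business side.
    """
    if len(model.recent_auc) < window:
        return False  # not enough evidence yet
    recent = mean(model.recent_auc[-window:])
    return recent < model.baseline_auc - tolerance

# Example: a churn model promoted at 0.82 AUC that has slipped into the mid-0.70s.
churn_model = ModelRecord("churn-v3", 0.82, [0.81, 0.78, 0.75, 0.74, 0.73])
if is_stale(churn_model):
    print(f"{churn_model.name} is past its useful life; schedule retraining or retirement.")
```

In practice the same check would also look at bias and fairness metrics, which is the other failure mode called out above.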

Published Date : Nov 30 2022

SUMMARY :

Analyst wrap-up from theCUBE's coverage of AWS re:Invent 2022. The panel reacts to Swami's keynote and AWS's machine learning announcements, the rise of large language models, and Google's and Microsoft's competing plays to be the data cloud. They debate Adrian Cockcroft's "Legos versus toys" framing of Amazon's primitives and whether AWS carries fifteen years of technical debt, compare how Google, Microsoft, and IBM approach vertical industry solutions such as healthcare, and close on the most common customer mistakes in AI: keeping models in production past their useful life, bias in models, and a lack of responsible, ethical AI practices.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Jeff | PERSON | 0.99+
George Gilbert | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
Microsoft | ORGANIZATION | 0.99+
Adrian | PERSON | 0.99+
Dave | PERSON | 0.99+
Andy | PERSON | 0.99+
Google | ORGANIZATION | 0.99+
IBM | ORGANIZATION | 0.99+
Facebook | ORGANIZATION | 0.99+
Adrian Carro | PERSON | 0.99+
Dave Volante | PERSON | 0.99+
Andy Thra | PERSON | 0.99+
90% | QUANTITY | 0.99+
15 years | QUANTITY | 0.99+
John | PERSON | 0.99+
Adam | PERSON | 0.99+
13 announcements | QUANTITY | 0.99+
Lego | ORGANIZATION | 0.99+
John Farmer | PERSON | 0.99+
Dave Ante | PERSON | 0.99+
two | QUANTITY | 0.99+
10 years | QUANTITY | 0.99+
AWS | ORGANIZATION | 0.99+
Dell | ORGANIZATION | 0.99+
Legos | ORGANIZATION | 0.99+
Bristol Myers Squibb | ORGANIZATION | 0.99+
Oracle | ORGANIZATION | 0.99+
Constellation Research | ORGANIZATION | 0.99+
One | QUANTITY | 0.99+
Christmas | EVENT | 0.99+
second point | QUANTITY | 0.99+
yesterday | DATE | 0.99+
Anaconda | ORGANIZATION | 0.99+
today | DATE | 0.99+
Berkeley Paper | ORGANIZATION | 0.99+
one | QUANTITY | 0.99+
eight | QUANTITY | 0.98+
700 different instances | QUANTITY | 0.98+
three years | QUANTITY | 0.98+
Swami | PERSON | 0.98+
Aerospike | ORGANIZATION | 0.98+
both | QUANTITY | 0.98+
Snowflake | ORGANIZATION | 0.98+
two things | QUANTITY | 0.98+
60% | QUANTITY | 0.98+

Rob Enslin, UiPath & Daniel Dines, UiPath | UiPath Forward 5


 

>> Male: TheCUBE presents, UIPATH, Forward 5 brought to you by, UIPATH. >> Okay the party has started here at forward 5 UIPATH big customer event if you're watching the cube. We're wrapping up day one with the co-CE0 segment. Daniel Dines is here. He's the founder and Co-CEO of UIPATH and Rob Enslin, is co-CEO. Gents, great to see you. Thanks for spending some time with us. I know you're super busy. >> Thanks Dave. >> So I've been looking forward to this. Daniel you know I've followed the company for a long time. The really interesting path you took, to get to where you are today. How did you guys meet? And why did you decide to hire Rob? >> Male: (laughs) >> Rob: Well let me start. I uh, I was looking for a partner. Actually, in our work to your stand here, we are talking about how, how you feel in this job. You feel so alone. Because you are the center of all pressure points. And having a partner, having someone that has your back, it's kind of awesome. So I was looking for a partner. And our current friend, Carl Escenbach, he introduced us to each other, and we instantly clicked. And this is the type of job where it's uh either work well or it doesn't. It cannot be anything in the middle. >> Right, okay with Carl, we know Carl well. Awesome operator. Knows the business super well. So Rob, what attracted you to UIPATH? You had a great situation at google. You guys were growing like crazy. Why did you decide to come here? What did you see that attracted you? >> Yeah you know when I, when I went to google, I went to google because I really believed that data and AI was necessary for companies. And business is to be competitive in the future. And we did some great stuff at google cloud in the 3 years. But I knew UIPATH from a couple of years ago when they were mainly a RPA space. And I just felt that there was a place in time when automation was going expand. And as I sat down with Carl a couple of times, spoke to carl. And then I sat down with Daniel, I knew that there was something special with UIPATH, that could be a generational opportunity. Not any for myself but for the company in the future. And then I, you know I got to know Daniel. And at this stage of my career I was like, I'm pretty fussy about what I want to do and what I want and where I want to go. First of all, I want to go to a company that had great product, had a great culture, and I wanted to work with somebody that we could shake the future together and you know, Daniel and I just hit it off from the very first time we met. He got to meet my family, my dogs and we did the whole, we did the whole courting thing before we actually decided this was going to be a good thing for both of us. >> Dave: That's good. >> Rob: Yeah. >> Dave: You got to meet the family. That's very good. >> We just had, John Furrier and I just had, Mohit Aron and Sanjay Poonen into out studio. Cause Mohit, you know, formal google. Long time. And they decided to kind of split duties. Mohit's going into product, he didn't keep his CEO title. He walked. How are you guys splitting you time? What are each of you going to, responsible for? >> Daniel: Well its, its kind of similar. On a day by day operation I, I rely heavily on Rob. We do it together. Strategic decisions about the company's destiny. I'm doing mostly the product these days. Which is a big relief for me. And I think we also split a bit of customers visit. Which is great. I still enjoy meeting customers. I need, customers are food for my cause. 
>> Dave: (laughs) yeah and your awesome product visionary. You've been there since day one. Now Rob, you said in the key note today that you've seen around about a hundred customers. You've transverse the world. What did you learn from them that informed you? That gave you confidence that the the move to the internet platform, even though you had already started that. >> Male: Yeah. >> But you're really doubling down on that >> Rob: You know when I... >> from a stand point. >> Rob: You know Dave, when you think about it, like I was, I was so impressed that Daniel had the vision to create a platform 3 years ago. >> Dave: Yeah. >> All right. And as we went around the world. As I went around the world, and it was one of the very first things I've seen. I've got to understand how customers see UIPATH, from their advantage point. What are they looking for from us? Why is this company, why doe customers like this company so much? And as I went around the world. I went to Asia a couple, I went to Asia, Australia, Singapore, Japan. I was in Europe twice. We did the trip together. We went to visit customers. And it was very much the same thing. Helps us expand automation faster. And we are so surprise, at the break of your platform. We never knew that. And so it kind of just had, for me, it was conviction. It's like, this walls is the right decision you've made. There's so much opportunity there. And that's, you know that's kind of what I've learned through the last four five months. >> Dave: Now as you know Daniel, I've written a lot about your company. One of the things I've said is that, that start ups, if I can call you that back pre-IPO, typically don't have as much international exposure as UIPATH had. I mean you sort of, you sort of started as an international company and became more US centric. You said, in the, in the key note today, you're talking to Ray Wong about people may don't understand that challenges of FX. Point being, when you convert international dollars into US dollars there are less of them cause the dollars stronger. But still, I've always felt like that international footprint is an advantage. Rob you came from SAP, you know, again European based company. I don't, (stutters), do you regret that? Now? I mean I know it's technical, I'm sure you don't, but talk about that sort of international exposure? Why that's a long term benefit. >> Well, you, first of all, you expand faster. I think we expanded faster than our competition because our global footprint was larger. And we had the courage. Go in Japan, for instance. Everybody told me, it's impossible to make for such a small starter. It's impossible to make a business in Japan. But we didn't believe it. We're just crazy and we went there, and be built a very sizable business in Japan. Fifty-five percent of our revenue, even today, it's outside U.S. Now of course that has a down side. When uh, When the local currencies, you know, are losing the value compared to the dollars, we're impacted. As we go to... to investors, until now, so we are seeing like a (indistinct) in terms of ARI. It's huge. Only because (indistinct) and losing the business in Russia. But it still, it's the strength of our company. Things will come back. And then, you know, the growth engine will re-accelerate again. >> Dave: Yeah but when the dollars weakens that'll be in your favor. Rob I want to pick up on something you said today in your keynote. You went back and started, you know the cycles of ERP and you know, internet, et cetera. 
I kind of have a love hate with ERP. I have to be honest. >> Male: (laughing) >> But it, but but (chuckles) but if I go back to that. Late eighties nineties, you wouldn't have be able to pick SAP as the winner. And then SAP emerged. You know, very clearly. But the more interesting thing, is that the customers who are implementing ERP well. The practitioners did better than their peers, and dominated their industries. And their stocks went up. Their evaluations went up. Different worlds obviously but, do you see the same thing happening with RPA and automation? What gives you confidence that that's the case? >> I absolutely do see the same thing happening with automation and RPA being a part of, in being a part of that. The reason, the reason I believe that is speed is so critical. (stutters) And if you think about how hard it is for a CIO or a c level executive to consume the technology coming at them, plus all the changes in the world being thrown at them. It's compiling and compiling and compiling. We have an incredible solution, that can help companies. And there comes certain times, the love outcomes to the business. Like no one else gets. And when I see that, I view that as just like the beginning of what's going to happen in the future so, in many ways, and I've said this to many of my friends, it feels like 1992, 1993 to me. And it's interesting because no one really understood then why SAP would be great in 1992 and 93. And they got a couple of things right. They got the eco system right. Their new partners were important. And the knew they needed to drive business outcome for companies, in which they did. And so I feel like we are in a very similar place. Very different technology obviously. And the speed of change now is so dramatic, compared to what it was. And there's very few technology that can provide that level of speed and accomodation to their customers. >> All right, let's talk about priorities. You guys got a lot of work to do and you've, you've laid it out to the financial community. You've got to have profitable growth, because of FX, it part, you've lowered your forecast. But I think there's some conservative in their as well. Um, but you got to do that balance. You've given some guidance on gross margins. Cloud maybe brings that down a little bit. RnD I saw wide range. Thirteen to seventeen percent. I hope you keep spending on RnD. Big fan of that. You know stock buybacks and, RnD if in your position are going to be better. And the product priorities, continue to build that out. But question, let's start with the product. So you've got an on-prem stack and you've got a cloud stack that's emerging, how do you balance those out? How do you do the integration? You've done a great job with the integration. Does it, are you concerned about your ability to continue to work at that speed with two code bases? I wonder if you could address that? >> Daniel: We've become a cloud first company. We deliver all of our products first in the cloud. We've deliver on the two week (indistinct) in the cloud. So that helps us integrate quite fast. I think we made a very good business decision to build our cloud team in Seattle. In Bellevue to be specific. And we have access to great talent that knows how to build serious cloud service. Which is hard to find dollar. And uh, so, and also we, we have, we benef- one of our only benefits was, we have the really good architecture. We have an architecture that work easily on-prem and on the cloud. 
And even today, our work flow foundation, our local designers, were easy to modernize. So right now we are launching studio weapon. But behind the scene, it's the same workflow engine. Our customers don't have to rewrite anything. It just works. And it does the same to take our own brand product and brand it in the multicloud. So, it's, there is no friction at all. Actually cloud is just helping us accelerate. But we benefit then again of a really solid architectural foundation. >> Daniel: Architecture matters. We've seen that in this industry. We got the B52s rocking out in the background, I love it, but I've got so many questions for you guys. I want to talk about the go to market. Because Rob, it's obviously a strength of yours. You've come in. You've communicated to the street, that you're reshaping the sales floors. Are they lowering the ratios of sales? People, the customers at the high end, mid range as well, using digital. I mean the numbers are one to ten now. At the top. One to maybe fifty at the mid range. Where are you in terms of that journey? You've got to find people, you got to train them, how do you get the productivity out of those guys? Take us through your thinking there? >> Rob: Yeah firstly, I think we have enough resources. Having resources is not an issue. Um, we have an incredible vehicle to acquire customers inside the company. Our digital sales motion, it's probably the best I've seen. And so we have the ability to acquire customers really fast. And we get the first workload in really fast. The challenge is we need to, we need to be able to drive a (indistinct) model and we graduate customs when we acquire them into the direct sales floors. And then direct sales floors, we're not going to go one to thirty, we're talking one to ten for the direct sales floor. And even the high up in the pyramid, we want to have an even denser model than that. And the whole purpose is to drive the time to consumption much quicker, much faster. So we know exactly if we acquire a customer, will they spend? Do they have a (indistinct) spend? On what level do they have a (indistinct) spend? And therefore when we capture them, we can immediately surround them, and put the right resources so we can grow faster. We think this will have a significant impact on the organization. We'll start to implement certain pieces in the next quarter. Um, things like packaging solutions. Putting them in, enabling the sales organization. And buy the beginning of next year, we'll be ready to actually go full board, globally. We already put some pieces in place when I joined. Chris Weber, my chief business officer, did a great job doing some of those pieces. So we're on the journey already. >> Dave: Yeah and even before you guys were public and you weren't publishing your NRR numbers. Our ETR survey partner, we, we always thought you had very low churn. And I think you broke out just yesterday. The, the NRR for overseas vs U.S, U.S I think was 140 plus percent. >> Male: Yeah >> Very very strong. A little, a little less overseas but the churn is still very low. >> Male: Yep. >> Okay so that's super positive. Customer affinity, I was wanted to code these events. I listen to the key notes very carefully, and then interview customers on the cube, and I try to identify, is there alignment there? And I see very strong alignment, I have to say, and strong customer affinity. So that's in your favor. I have, Daniel, I got another question for you on product. What is Symantec automation? What the heck is that? 
Can you explain that? I don't understand. >>Dave, have you seen the demo in my (indistinct)? >>Dave: You know, I had to leave and do interviews, so I, uh, I missed it. >>I think that demo answers your question completely. So, you know, there's the saying that great technology cannot be distinguished from magic. I think technology should be simple. And we showed today one of the simplest demos that you can imagine, but there's such complex technology behind the scenes that you also cannot imagine. So what was the demo? We show how one business user, without any technical skills, can work with any type of document. It can be a passport, it can be an invoice, it can be a legal (indistinct), and just go, "I want to copy data from here, and I want to paste data there." It can be a spreadsheet, it can be another application. And like a human user, without understanding, without having prior knowledge about the data, the document layout, the screens, the screen layouts, nothing, we analyze it in real time. The document: we discover the meaning of the information. We analyze the screen; we understand not just the screen but the meaning of the screen. And we understand how the information on one side relates to the other side. And we just connect the dots, and we copy the information and we paste it. A job that you'd do as a human user in maybe three minutes is done in ten seconds. This is powerful. >>Yeah, that is powerful. Thank you for that. I mean, you take the data, whether it's transaction data or unstructured data, and bring meaning out of it. That's powerful. Last question and I'll let you guys go. Rob, you've got traders, and you've got long term investors. All right, traders are going to be defensive today, I get that. Make the case for UiPath for long term investors. >>Rob: I think we're going to be a multi-billion company and we're going to be a generational company of our time. And we will define enterprise automation. And it's going to be a long term game, and we feel really strongly that we'll be the lead in that game. >>Dave: Guys, thanks so much for coming to theCUBE. Great show. Always fun at UiPath Forward. Really appreciate your time. Thank you. >>Thanks, Dave. >>Appreciate it as well. >>Okay, that's a wrap for day one. We're here tomorrow, first thing, Dave Vellante and Dave Nicholson. Thanks for watching Forward 5, UiPath's big customer event. We'll see you tomorrow. (music)
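To make the semantic automation demo described above a little more concrete, here is a deliberately simplified Python sketch of the copy-from-document, paste-into-application idea. It is not UiPath's API; the field names and regex patterns are hypothetical, and a real system would infer document structure rather than rely on hand-written rules.

```python
import re

# Hypothetical invoice text, as it might come out of an OCR step.
document_text = """
Invoice Number: INV-2047
Invoice Date: 2022-09-12
Total Due: 1,450.00 USD
"""

# Illustrative field patterns; a real system would discover these, not hard-code them.
FIELD_PATTERNS = {
    "invoice_number": r"Invoice Number:\s*(\S+)",
    "invoice_date":   r"Invoice Date:\s*([\d-]+)",
    "total_due":      r"Total Due:\s*([\d,\.]+)",
}

def extract_fields(text: str) -> dict[str, str]:
    """Pull out whichever fields can be found; missing fields are simply skipped."""
    record = {}
    for field, pattern in FIELD_PATTERNS.items():
        match = re.search(pattern, text)
        if match:
            record[field] = match.group(1)
    return record

# "Paste" the extracted values into a target row, e.g. a spreadsheet or another application.
row = extract_fields(document_text)
print(row)  # {'invoice_number': 'INV-2047', 'invoice_date': '2022-09-12', 'total_due': '1,450.00'}
```

The point of the demo is precisely that the rule-writing step above disappears for the business user; this sketch only shows the shape of the data movement, not how the meaning is discovered.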

Published Date : Sep 29 2022

SUMMARY :

From UiPath Forward 5: co-CEOs Daniel Dines and Rob Enslin discuss why they teamed up after an introduction from Carl Eschenbach, how they split product and go-to-market responsibilities, and what Rob learned from visiting roughly a hundred customers in his first months. They cover UiPath's early international expansion and the FX headwinds that come with it, the parallel Rob draws between this moment in automation and the early ERP era, the shift to a cloud-first platform built on a common workflow engine, changes to the sales model and customer graduation ratios, strong net revenue retention, a demo of semantic automation that copies data between documents and applications without prior knowledge of their layouts, and the long-term case for UiPath as a generational enterprise automation company.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Daniel | PERSON | 0.99+
Europe | LOCATION | 0.99+
Dave | PERSON | 0.99+
Chris Weber | PERSON | 0.99+
Japan | LOCATION | 0.99+
Dave Nicholson | PERSON | 0.99+
Asia | LOCATION | 0.99+
Seattle | LOCATION | 0.99+
Dave Vellante | PERSON | 0.99+
Carl Escenbach | PERSON | 0.99+
Carl | PERSON | 0.99+
Rob | PERSON | 0.99+
Singapore | LOCATION | 0.99+
1992 | DATE | 0.99+
UiPath | ORGANIZATION | 0.99+
Rob Enslin | PERSON | 0.99+
Bellevue | LOCATION | 0.99+
Sanjay Poonen | PERSON | 0.99+
Russia | LOCATION | 0.99+
three minutes | QUANTITY | 0.99+
Fifty-five percent | QUANTITY | 0.99+
UIPATH | ORGANIZATION | 0.99+
Australia | LOCATION | 0.99+
Ray Wong | PERSON | 0.99+
Symantec | ORGANIZATION | 0.99+
thirty | QUANTITY | 0.99+
Thirteen | QUANTITY | 0.99+
tomorrow | DATE | 0.99+
Mohit | PERSON | 0.99+
ten seconds | QUANTITY | 0.99+
two week | QUANTITY | 0.99+
93 | DATE | 0.99+
U.S. | LOCATION | 0.99+
both | QUANTITY | 0.99+
1993 | DATE | 0.99+
google | ORGANIZATION | 0.99+
one | QUANTITY | 0.99+
today | DATE | 0.99+
yesterday | DATE | 0.99+
first | QUANTITY | 0.99+
Daniel Dines | PERSON | 0.99+
carl | PERSON | 0.99+
twice | QUANTITY | 0.99+
ten | QUANTITY | 0.99+
SAP | ORGANIZATION | 0.99+
fifty | QUANTITY | 0.99+

Owen Garrett, Deepfence | Kubecon + Cloudnativecon Europe 2022


 

>>The cube presents, Coon and cloud native con Europe, 2022, brought to you by red hat, the cloud native computing foundation and its ecosystem partners. >>Welcome to Valencia Spain in Coon and cloud native con Europe, 2022. I'm Keith Townsend, along with my host, Paul Gillon senior editor, enterprise architecture at Silicon angle. We are continuing the conversation here at KU con cloud native con around security app defense. Paul, were you aware it was this many security challenges and, and that were native to like cloud native >>Well there's security challenges with every new technology. And as we heard, uh, today from our, some of our earlier guests, uh, containers and Kubernetes naturally introduce new variables in the landscape and that creates the potential vulnerabilities. So there's a whole industry that's evolving around that. And what we've been looking at today, yesterday, we talked very much about managing Kubernetes today. We're talking about many of the nuances of building a, a Kubernetes based environment and security is clearly one of them. >>So welcome our guests on Garrett, head of products. >>Thank >>You and community at deep fence. You know what I'm going. I'm going to start out the question with a pretty interesting security at scale is one of your taglines. >>Absolutely. >>What does that mean? Exactly. >>So Kubernetes is all about scale securing applications and Kubernetes is a completely different game to securing your traditional monolithic legacy enterprise applications. Kubernetes grows it scales it's elastic, and the perimeter around a Kubernetes application is very, very porous. There are lots of entry points. So you can't think about securing a cloud native application. The way that you might have secured a monolith securing a monolith is like securing a castle. You build a wall around it. You put guards on the gate. You control, who comes in and out, and job is more or less done securing a cloud native application. It's like securing a city. People are roaming through the city without checks and balances. There are lots of services in the city that you've got to check and monitor. It's extremely porous. So sec, all of the security problems in Kubernetes with cloud native applications, they're amplified by scale, the size of the application, the number of nodes and the complexity of the application and the way that it's built and delivered. >>That's, uh, kind of a chilling phrase. The perimeter is porous. Uh, yeah, companies are adopting Kubernetes right now. Evidently bringing in all of these new, these new, uh, vulnerability points. Do they know what they're getting into >>Many don't, there's, there's a huge amount of work around trying to help organizations make the transition from thinking about applications as single components to thinking about them as microservices with multiple little, little components, it's a really essential step because that's what allows businesses to evolve, to digitize, to deliver services, using APIs, mobile, mobile apps. So it's a necessary technical change, but it brings with it. Lots of challenges and security is one of those biggest challenges. >>So as I'm thinking about that poorest nature, I can't help, but think, you know, if I have my, my traditional IPS does a really great job of blocking that centralized data center and access to that centralized data center. As I think about that city example that you gave me, I'm thinking, you know what? I have intruders or not even intruders. I have bad actors within my city. 
You >>Do you, how >>Do, how does deep defense help protect me from those bad actors that are inside or roaming the city? >>So this is the wonderful, unique technology we have within deep fence. So we install little sensors, little lightweight sensors on each host. That's running your application on Kubernetes nodes as a Damon set against Fargate instances on Docker hosts on bare metal. And those sensors install little taps into the network using E B P F and they monitor the workloads. So it's a little bit like having CCTV cameras throughout your city tracking what's happening. There are a lot of solutions which we'll look at what happens on a workload traditional XDR solutions that look for things like process changes or file system changes. And we gather those signals indicators of compromise, but those alone are too little too late. They tell you that a breach has probably already happened. What deep defense does is we also look at the network. We gather network signals. We can see someone using a, a reconnaissance tool roaming through your application, sending probe traffic to try and find weak points. >>We can see them then elevating the level of attack and trying to weaponize a particular exploit that they might have find, or vulnerability that they find. We can see everything that comes into each of the components, not just at the perimeter, but right inside your application. We see what happens in those components process file, integrity, changes. And we see what comes out, attempt exfiltrate, something that looks like a database file or et cetera password. And we put all of these little subtle signals, the indicators of attack, the network based signals and the indicators of compromise. We put those together and we build a picture of the threats against each of the workloads in your cloud, native application. There's lots and lots of background, recon traffic. We see that you generally don't need to worry about that. It's just noise. But as that elevates and you see evidence of exploits and later spread, we identify that we'll let you know, or we can step in and we can proactively block the behavior that's causing those problems. So we can stop someone from accessing a component, or if a component's compromised, we can, we can freeze it and restart it. And this is a key part of the technology within our threat striker security observability platform, >>Uh, false alerts are the bane of the security ministry's existence. What do you do to protect against those? >>So we use a range of heuristics and a degree, a small degree of machine learning to try and piece together. What's happening. It's a complicated picture. So some of your viewers will have heard of a might attack matrix. So a dictionary of techniques and tactics and, and protocols that attackers might use in order to attack an infrastructure. So we gather the signals, those TTPs, and we then build a model to try and understand how those little signals pieced together. So maybe there's, you know, there's a guy with a striped striped vest that is trying the doors in your city, you know, a low level criminal who isn't getting anywhere. We'll pick that up and that's low risk. But then if we see that person infiltrate a building, because they find an open door, then that raises the level of risk. So we monitor the growing level of risk against each workload. >>And once it hits a level of concern, then we let you know, but you can then forensically go back in time and look at all of the signals that surround that. 
So we don't just tell you, there was an alert and a file was compromised in your workload, do something about it. We tell you the file was compromised. And prior to that, there were these events, process failures. Those could have been caused by network events that are correlated to a vulnerability that we know. And those in, in turn could have been discovered by recon traffic. So we help you build that entire active picture up. Every application's different. You need to have the context to understand and interpret signals that a solution like threat striker gives you, and we give you that context. >>So I would push back. If I'm a platform team, say, you know what? I have a service mesh. I, I have trusted traffic going to trucked traffic going from trusted sources. I'm, I'm cutting off the problem even before it happens. Why should I use, uh, deep fix? >>So a service mesh won't cut off the problem. It'll just hide the problem because a service mesh will just encrypt the traffic between each of the components. It doesn't stop the bad traffic flowing. If a component is compromised, people can still talk to another component and the service mesh happily encrypts it and hides it. What we do. We love service meshes because we can decrypt the traffic or we can inspect the individual application components before they talk to the mesh side car. So we can pull out and see the plane, text traffic. We can identify things that other tools wouldn't have a hope of, of identifying. >>So, you know, you, you just, uh, triggered something. >>Yeah. >>A lot of companies do not like decrypting that traffic after it's been sent, they don't want anyone else, including security tools to see it. Yeah. How do you ensure, how do you serve those clients? >>So we serve those clients by having an architecture that sits entirely on premise in their infrastructure. Their sensitive data never leaves their network, their VPCs, their, their boundary. They install a threat striker console. So this is the tool that does all of the analysis and make the protection decisions. They run that themselves. They deploy the threat, striker sensors in their production environment. They talk over secure links, authenticated to the console. So everything sits within their power view, their level of their degree of control. >>So if, if they're building a, a, a cloud application though, or, or a hybrid cloud application, how do you connect? How do you deal with the cloud side? >>So whether their production environments are next to the threat striker console, whether they're running on remote clouds, our sensors will run in all of those environments and the console will manage a complex hybrid environment. It will show you traffic running in your Kubernetes cluster and AWS traffic Mon running on your VMs on Google traffic, running in your 4g instances on again, on AWS and on your on-prem instances, it gathers that data securely from each of those remote places, sends it to the console that you own and operate securely. So you have full control over what is captured. It's encrypted, it's authenticated, it's streamed back. So it never leaves your level of control. >>Talk to me about the overhead. How is this deployed and managed with MI environment? >>So there are two components, as we've learned, we have the console. All of the work is done on the console, the any necessary decryption, all the calculation that runs on a Kubernetes cluster, that, that you would deploy, that you would scale. So that's fully in your control. 
Then you need to install little sensors on each of your production environments to bring the data back to the console. >>Now those on pots, or are those in running inside of, uh, containers themselves. >>So they are container based. They're typically deployed as a demon set. So one instance per node in your Kubernetes cluster, they are, we have put a lot of engineering work into making those as lightweight as possible. They do very little analysis themselves. They do a little bit of pre-filtering of network traffic to reduce the bandwidth, and then they pass the packets back to the management console. So our goal is to have the minimal impact on customers, production environments, so that they can scale and operate without an impact on the performance or availability of their applications. And we have customers who are monitoring services running on literally thousands of Kubernetes nodes and streaming the data back to their management console and using that to analyze from a single point of control what's going on in their applications. >>So we hear time and again, CIOs complaining that they have too many point security products. Yes, I think average of 87 in, in, in the enterprise, according to, to one survey, aren't you just another, >>And that is the big challenge with security. There is no silver bullet product that will secure everything that you have. You have your, the what, you're the, what you're securing scales over space from your infrastructure to the containers and the workloads and the application code. It scales over time. Are you secure? Are you putting security measures in, at shift left development when you deploy or are you securing production? And it scales over the environments. There is no silver bullet that will provide best to breed security across that entire set of dimensions. There are large organizations that will present you with holistic solutions, which are a bunch of different solutions with the same logo on them, bundle together under the same umbrella. Those don't necessarily solve the problem. You need to understand the risks that your organization is faced. And then what are the best to breed solutions for each of those risks and for the life cycle of your application at deep fence, we are about securing your production environment. >>Your developers have built applications. They've secured those applications using tools like SNCC, and they've ticked and signed off saying with this list of documented vulnerabilities, my application is secure. It's now ready to go into production. But when I talk to, to application security people to ops people, and I say, are the applications in your Kubernetes environment? Are they secure? They say, look, honestly, I don't know, the developers have signed off something, but that's not what I'm running. I've had to inject things into the application. So it's different. There could have been issues that were, that were discovered after the developers signed it off. The developers made exceptions, but also 60, 80% of the code I'm running in production. Didn't come from my development team. It's infrastructure, it's third party modules. So when you look at security as a whole, you realize there are so many ax axis that you have to consider. There are so many points along these, a axis, and you need to figure out in a kind of a van diagram fashion, how are you going to address security issues at each of those points? 
So when it comes to production security, if you want a best breed solution for finding vulnerabilities in your production environment, threat map, open source, we'll do that. And then for monitoring attack behavior threat striker enterprise will do that. Then deep defense is a great set of solutions to look at. >>So on. Thanks for stopping by security at layers is a repetitive thing that we hear security experts talk about. Not one solution will solve every problem when it comes to security from Valencia Spain, I'm Keith Townson, along with Paul Gillon and you're watching the Q the leader in high tech coverage.

Published Date : May 19 2022

SUMMARY :

From KubeCon + CloudNativeCon Europe 2022 in Valencia: Owen Garrett of Deepfence explains why securing cloud native applications is different from securing monoliths, using a castle-versus-city analogy: the perimeter is porous and every problem is amplified by scale. He describes Deepfence's approach, in which lightweight eBPF-based sensors deployed as a DaemonSet (or on Fargate, Docker hosts, and bare metal) stream network and workload signals to a customer-controlled console that correlates indicators of attack with indicators of compromise into an escalating risk picture for each workload, cuts down false positives, and can block or freeze compromised components. The conversation also covers service meshes, keeping sensitive data on premises, sensor overhead, and where Deepfence's open source ThreatMapper and enterprise ThreatStryker fit in a layered security strategy.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Keith Townsend | PERSON | 0.99+
Paul Gillon | PERSON | 0.99+
Keith Townson | PERSON | 0.99+
yesterday | DATE | 0.99+
Paul | PERSON | 0.99+
Owen Garrett | PERSON | 0.99+
two components | QUANTITY | 0.99+
thousands | QUANTITY | 0.99+
AWS | ORGANIZATION | 0.99+
Kubernetes | TITLE | 0.98+
Europe | LOCATION | 0.98+
each | QUANTITY | 0.98+
Valencia Spain | LOCATION | 0.98+
Cloudnativecon | ORGANIZATION | 0.98+
each host | QUANTITY | 0.98+
today | DATE | 0.98+
Valencia Spain | LOCATION | 0.98+
Kubecon | ORGANIZATION | 0.97+
one | QUANTITY | 0.96+
2022 | DATE | 0.96+
one survey | QUANTITY | 0.96+
Deepfence | ORGANIZATION | 0.95+
one instance | QUANTITY | 0.94+
single point | QUANTITY | 0.93+
Garrett | PERSON | 0.93+
each workload | QUANTITY | 0.89+
Google | ORGANIZATION | 0.86+
87 in | QUANTITY | 0.8+
one solution | QUANTITY | 0.8+
80% | QUANTITY | 0.8+
Docker | TITLE | 0.76+
single components | QUANTITY | 0.73+
red hat | ORGANIZATION | 0.72+
Kubernetes | ORGANIZATION | 0.71+
60, | QUANTITY | 0.7+
Silicon | ORGANIZATION | 0.7+
Damon | TITLE | 0.67+
lots of services | QUANTITY | 0.65+
SNCC | ORGANIZATION | 0.64+
KU con | ORGANIZATION | 0.64+
con | ORGANIZATION | 0.64+
so many points | QUANTITY | 0.53+
Coon and cloud native con | ORGANIZATION | 0.51+
Fargate | TITLE | 0.49+
cloud native | EVENT | 0.49+
Coon | ORGANIZATION | 0.46+
cloud native con | EVENT | 0.43+
axis | COMMERCIAL_ITEM | 0.38+
axis | TITLE | 0.28+

Benoit Dageville, Snowflake | AWS re:Invent 2021


 

(upbeat music) >> Hi, everyone, welcome back to theCUBE's coverage of AWS re:Invent 2021. We're wrapping up four days of coverage, two sets. Two remote sets, one in Boston, one in Palo Alto. And really, it's a pleasure to introduce Benoit Dageville. He's the Press Co-founder of Snowflake and President of Products. Benoit, thanks for taking some time out and coming to theCUBE. >> Yeah, thank you for having me, Dave. >> You know, it's really a pleasure. We've been watching Snowflake since, maybe not 2012, but mid last decade you hit our radar. We said, "Wow, this company is going to go places." And yeah, we made that call correctly. But it's been a pleasure to sort of follow you. We've talked a little bit remotely. I kind of want to go back to some of the fundamentals. First of all, I wanted mention your earnings last night. If you guys didn't see it, again, triple digit growth, $1.8 billion RPO, cashflow actually looking pretty good. So, pretty amazing. Oh, and 173% NRR, you know, wow. And Mike Scarpelli is kind of bummed that you did so well. And I know why, right? Because it's going to be at some point, and he dials it down for the expectations and Wall Street says, "Oh, he's sandbagging." And then at some point you're actually going to meet expectations and people are going to go, "Oh, they met expectations." But anyway, he's a smart guy, he know what he's doing. (Benoit laughing) I loved it, it was so funny listening to him last night. But anyway, I want to go back to, when I talked to practitioners about data warehousing pre-cloud, they would say sound bites like, it's like a snake swallowing a basketball, they would tell me. And the other thing they said, "We just chased the chips. Every time a new Intel chip comes out, we have to bring in new servers, and we're struggling." The cloud changed all that. Your vision and Terry's vision changed all that. Maybe go back to the fundamentals of what you saw. >> Yeah, we really wanted to address what we call the data challenges. And if you remember at that time, data challenge was first of the volume of data, machine-generated data. So it was way more than just structured data, right? Machine-generated data is weblogs, and it's at petabyte scale. And there was no good solution for that type of data. Big data was not a great solution, Hadoop was really bad. And there was no good solution for that. So we thought we should do something for big data. The other aspect was concurrency, right? Everyone wants to use these data analytic platform in an enterprise, right? And you have more and more workload running against the same data, and the systems that were built were not scaling for these workloads. So you had to silo data, right? That's the only way big enterprise could deal with that, is to create many different silos, Oracle, Teradata, data mass, you would hear data mass. All of it was to afloat, right, this data? And then there was the, what do we call, data sharing. How to get access to data which is not born inside the enterprise, right? So with Terry, we wanted to solve all these challenges and we thought the only way to solve it was the cloud. And the cloud has really two free aspects. One is the elasticity, for all of a sudden, you can run every workload that you want concurrently, in parallel, on different computer resources, and you can run them against the same data. So this is kind of the data lake model, if you want. At the same time, you can, in the cloud, create a service. 
So you can remove complexity from users and make it really easy for new workloads to be added to the system, because you can manage, you can create a managed service, where all the sudden our customers, they don't need to manage infrastructure, they don't need to patch, they don't need to tune. Everything is done by Snowflake, the service, and they can just load in and run their query. And the third aspect is really collaboration. Is how to connect data sets together. And that's almost a new product for Snowflake, this data sharing. So we really at Snowflake was all about combining big data and data warehouse in one system in the cloud, and have only one single system where you can put all your data and all your workload. >> So you weren't necessarily trying to solve the data warehouse problem, you were trying to solve a data problem. And then it just so happened data warehouse was a logical entry point for you. >> It's really not that. Yes, we wanted to solve the data problem. And for us big data was a really important problem to solve. So from day one, Snowflake was all about machine generated data, petabyte scale, but we wanted to do it right. And for us, right was not compromising on data warehouse principle, which is a CDT of transaction, which is really fast response time, and which is also simplicity. So as I said, we wanted to solve kind of all the problems at the time of volume of data, concurrency, and these sharing aspects. >> This was 2012. You knew at that time that Hadoop wasn't going to be the answer. >> No, I mean, we were really, I mean, everyone knew that. Everyone knew Hadoop was really bad. You know, complex to manage, really slow. It had good aspects, right? This was the only system that could manage petabyte scale data sets. That's the only thing- >> Cheaply. >> Yeah, and cheaply which was good. And we wanted really to do that, plus have all the good attributes of data warehouse system. And at the same time, we wanted to build a system where if you are data warehouse customer, if you are coming from Teradata, you can migrate to Snowflake and you will get to a system which is faster than what you had on-premise, right. That's why it's pretty cool. So we wanted to do big data without compromising on data warehouse. >> So several years ago we looked at the hyperscalers and said, "Wow, last year they spent $100 billion in CapEx." And so, we started to think about this abstraction layer. And then we saw what you guys announced with the data cloud. We call it super clouds. And we see that as exactly what you're building. So that's clearly not just a data warehouse or database, it's technology that really hides the underlying complexity of all those clouds, and it allows you to have federated governance and data sharing, all those things. Can you talk about sort of how you think about that architecture? >> So for me, what I say is that really Snowflake is the worldwide web of data. And we are indeed a super cloud, or we are super-posed to the infrastructure cloud, which is our friends at Amazon, and of course, Azure, I mean, Microsoft and Google. And as any cloud, we have regions, Snowflake regions all over the world, and located on different cloud providers. At the same time, our platform is global in the sense that every region interconnects with all the other regions, this is our snow grid and data mesh, if you want. So that as an organization you can have your presence on several Snowflake region. It doesn't matter which cloud provider, so you can mix AWS with Azure. 
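As an aside on the point above about running many workloads concurrently against the same data, the sketch below shows what that separation of compute from storage can look like from a client's point of view, using the Snowflake Python connector. The account, credentials, warehouse, and table names are placeholders, and the SQL is simplified.

```python
import snowflake.connector

# Placeholder credentials; in practice these would come from a secrets manager.
conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="***",
    database="SALES_DB", schema="PUBLIC",
)
cur = conn.cursor()

# Two independently sized warehouses (compute clusters) over the same database.
cur.execute("CREATE WAREHOUSE IF NOT EXISTS BI_WH WITH WAREHOUSE_SIZE = 'XSMALL'")
cur.execute("CREATE WAREHOUSE IF NOT EXISTS ETL_WH WITH WAREHOUSE_SIZE = 'LARGE'")

# A dashboard-style query runs on its own compute...
cur.execute("USE WAREHOUSE BI_WH")
cur.execute("SELECT region, SUM(amount) FROM orders GROUP BY region")
print(cur.fetchall())

# ...while a heavier transformation runs on separate compute, against the same tables.
cur.execute("USE WAREHOUSE ETL_WH")
cur.execute("CREATE OR REPLACE TABLE daily_totals AS "
            "SELECT order_date, SUM(amount) AS total FROM orders GROUP BY order_date")

cur.close()
conn.close()
```

Neither warehouse competes with the other for resources, which is the concurrency point being made here; resizing or suspending one has no effect on the data both of them read.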
You can use our cloud like that. And indeed you can, this is a cloud where you can store your data, that's the thing that really matters, and data is structured, but it's machine structure, as I say, machine generated, petabyte scale, but there's also unstructured, right? We have added support for images, text, videos, where you can process this data in our system, and that's the workload spout. And workload, what is very important is that you can run this workload, any number of workloads. So the number of workloads is effectively unlimited with Snowflake because each workload can have its dedicated set of compute resources all operating on the same data set. And the type of workloads is also very important. It's not only about dashboards and data warehouse, it's data engineering, it's data science, it's building application. We have many of our customers who are building full-scale cloud applications on top of Snowflake. >> Yeah so the other thing, if you're not familiar with Snowflake, I don't know, maybe your head has been in the sand for a while, but separating compute and storage, I don't know if you were the first, but you were certainly the first to popularize it. And that allowed you to solve that chasing the chips problem and the swallowing the basketball, right? Because you have virtually infinite resources now at your disposal. >> Yeah, this is really the concurrency challenge that I was mentioning. Everyone wants to access the data. And of course, if everyone runs on the same set of compute resources, you have a bottleneck. So Snowflake was really about this multi-workload. We call it Multi-Cluster Shared Data Architecture. But it's not difficult to run multiple cluster if you don't have consistency of data. So how to do that while maintaining transactional property of data as CDT, right? You cannot modify data from different clusters. And when you commit, every other cluster will immediately see the change, right, as if everyone was running on the same cluster. So that was the challenge that we solve when we started Snowflake. >> Used the term data mesh. What is data mesh to Snowflake? Is it a concept, is it fabric? >> No, it's a very interesting point. As much as we like to centralize data, this becomes a bottleneck, right? When you are a large organization with different independent units, everyone wants to manage their own data and they have domain-specific expertise about that data. So having it centralized in IT is not practical. At the same time, you really want to be able to connect these different data sets together and join different data together, right? So that's the data mesh architecture. Each data set is managed independently by business owners, and then there is a contract which is exposed to others, and you can combine. And Snowflake architectures with data sharing, right. Data sharing that can happen within an organization, or across organization, allows you to connect any data with any other data on our platform. >> Yeah, so when I first heard that term, you guys using the term data mesh, I got very excited because it was kind of the data mesh is, my view, anyway, is going to be the fundamental architecture of this decade and beyond. And the principles, if I understand it correctly, you're applying the principles of Jim Octagon's data mesh within Snowflake. So decentralized data doesn't have to be physically in one place. Logically it's in the data cloud. >> It's logically decentralized, right? 
It's independently managed, and the reason, right, is the data that you need to use is not produced by your, even if in your company you want to centralize the data and having only one organization, let's say IT managing that, let's say, pretend. Yet you need to connect with other datasets, which is managed by other organizations. So by nature, the data that you use cannot be centralized, right? So now that you have this principle, if you have a platform where you can store all the data, wherever it is, and you can connect these data very seamlessly, then we can use that platform for your enterprise, right? To have different business units independently manage their data sets, connects these together so that as a company you have a 360 view of your customers, for example. But you can expand that outside of your enterprise and connect with data sets, which are from your vertical, for example, financial data set that you don't have in your company, or any public data set. >> And the other key principles, I think, that you've touched on really is the line of business now. Increasingly they're building data products that are creating value, and then also there's a self-service component. Assuming there's the fourth principle, governance. You got to have federated governance. And it seems like you've kind of ticked the boxes, more than tick the boxes, but engineered a solution to solve for those. >> No, it's very true. So Snowflake was really built to be really simple to use. And you're right. Our vision was, it would be more than IT, right? Who is going to use Snowflake is going now to be business unit, because you do not have to manage infrastructure. You do not have to patch. You do not have to do these things that business cannot do. You just have to load your data and run your queries, and run your applications. So now business can directly use Snowflake and create value from that. And yes, you're right, then connect that data with other data sets and to get maximum insights. >> Can you please talk about some of the things you do with AWS here at the event. I'm interested in what you're doing with your machine learning initiatives that you've recently announced, the AI piece. >> Yes, so one key aspects is data is not only about SQL, right? We started with SQL, but we expanded our platform to what we call data programmability, which is really about running program at scale across a large volume of data. And this was made popular with a programming model which was introduced by Pendal, DataFrames. Later taken by Spark, and now we have DataFrames in Snowflake, Where we are different than other systems, is that these DataFrame programs, which are in Python, or Java, or Scala, you program with data. These DataFrames are compiled to our single execution platforms. So we have one single execution platform, which is a data flow execution platform, which can run both SQL very efficiently, as I said, data warehouse speed, and also these very complex programs running Python and Java against this data. And this is a single platform. You don't need to use two different systems. >> Now so, you kind of really attack the traditional analytics base. People said, "Wow, Snowflake's really easy." Now you're injecting AI and machine intelligence. I see Databricks coming at it from the other angle. They started with machine learning, now they're sort of going after the analytics. Does there need to be a semantic layer to connect, 'cause it's the same raw data. 
Does there need to be a semantic layer to connect those two worlds? >> Yes, and that's what we are doing in our platform. And that's very novel to Snowflake. As I said, you interact with data in different program. You pick your program. You are a SQL programmer, use SQL. You are a Python programmer, use DataFrames with Python. It doesn't really matter. And then the semantic layer is our compiler and our processing engine, is going to translate both your program and my program in Python, your program in SQL, to the same execution platform and to the same programming language that Snowflake internally, we don't expose our programming language, but it's a data flow programming language that our execution platform executes. So at the end, we might execute exactly the same program, potentially. And that's very important because we spent all our IP and all our time, engineering time to optimize this platform, to make it the fastest platform. And we want to use that platform for any type of workloads, whether it's data programs or SQL. >> Now, you and Terry were at Oracle, so you know a lot about bench marketing. As Larry would stand up and say, "We killed the competition." You guys are probably behind it, right. So you know all about that. >> We are very behind it. >> So you know a lot about that. I've had some experience, I'm not a technologist, but I'm an observer and analyst. You have to take benchmarking with a very big grain of salt. So you guys have generally stayed away from that. Databricks came out and they came up with all these benchmarks. So you had to respond, because otherwise it's out there. Now you reran the benchmarks, you took out the materialized views and all the expensive stuff that they included in your cost, your price performance, but then you wrote, I thought, a very cogent blog. Maybe you could talk about sort of why you did that and your general philosophy around bench marketing. >> Yeah, from day one, with Terry we say never again we will participate in this really stupid benchmark war, because it's really not in the interest of customers. And we have been really at the frontline of that war with Terry, both of us, really doing special tricks, right? And optimizing this query to death, this query that no one runs apart from the synthetic benchmark. We optimize them to death to have the best number when we were at Oracle. And we decided that this is really not helping customers in the end. So we said, with Snowflake, we'll not do that. And actually, we are not the only one not to do that. If you look at who has published TPC-DS, you will see no one, none of the big vendors. It's not because they cannot run TPC-DS, Oracle can run it, I know that. And all the other big data warehouse vendor can, but it's something of a little bit of past. And TPC was really important at some point, and is not really relevant now. So we are not going to compete. And that's what we said is basically now our blog. We are not interesting in participating in this war. We want to invest our engineering effort and our IP in solving real world issues and performance issues that we have. And we want to improve our engine for these real world customers. And the nice thing with Snowflake, because it's a service, we see exactly all the queries that our customers are executing. So we know where we are struggling as a system, and that's where we want to invest and we want to improve. 
And if you look at many announcements that we made, it's all about improving Snowflake under the covers and getting the benefit of this improvement to our customers. So that was the message of that blog. And yes, the message was, okay, Mr. Databricks, it's nice, and it's perfect that, I mean, everyone makes a decision, right? We made the decision not to participate. Databricks made another decision, which is very fine, and that's fine that they publish their numbers on their system. Where it is not fine is that they published numbers using Snowflake and misrepresented our performance. And that's what we wanted also to correct. >> Yeah, well, thank you for going into that. I know it's, look, leaders don't necessarily have to get involved in that mudslinging. (crosstalk) Enough said about that, so that's cool. I want to ask you, I interviewed Frank last spring, right after the lockdown, he was kind enough to come on virtually, and I asked him about on-prem. And he was, you know Frank, he doesn't mince words. He said, "We're not getting into a halfway house. That's not going to happen." And of course, you really can't do what you do on-prem. You can't separate compute, some have tried, but it's not the same. But at the same time you see, like, Andreessen comes out with this blog that says a huge portion of your cost of goods sold is going to be the cloud, so you're going to have to repatriate. Help me square that circle. Is it cloud forever? Or will you never say never? What can you share on that? >> I will never say never, it's not my style. I always say you can always change your mind, and maybe different factors can change your mind. What was true at some point might not be true at a later point. But as of now, I don't see any reason for us to go on-premise. As you mentioned at the beginning, right, Snowflake is growing like crazy. The world is moving to the cloud. I think maybe it goes both ways, but I would say 90% or 99% of the world is moving to the cloud. Maybe 1% is coming back for some very specific reasons. I don't think that the world is going to move back on-premise. So in the end we might miss a small percentage of the workload that will stay on-premise and that's okay. >> And as well, if you dig into some of the financial statements you'll see, read the notes where you've renegotiated, right? We're talking big numbers. Hundreds and hundreds of millions of dollars of cost reduction, actually more, over a 10 year period. Billions off your cloud bills. So the cloud suppliers, they don't want to lose you as a customer, right? You're one of their biggest customers. So it's awesome. Last question is kind of, your work now is to really drive the data cloud, get adoption up, build that supercloud, we call it. Maybe you could talk a little bit about how you see the future. >> The future is really broadening the scope of Snowflake, and really, I would say the marketplace, and data sharing, and services which are directly built natively on Snowflake and are shared through our platform, and can operate, can mix data on the provider side with data on the consumer side, and creating this collaboration within the Snowflake data cloud, I think, is really the future. And we are really only scratching the surface of that. And you can see the enthusiasm for the Snowflake data cloud in vertical industries. We have announced the financial services data cloud, complete vertical industries latching on to that concept and collaborating via Snowflake, which was not possible before.
And I think you talked about machine learning, for example. Machine learning, collaboration through machine learning, the ones who are building this advanced model might not be the same as the one who are consuming this model, right? It might be this collaboration between expertise and consumer of that expertise. So we are really at the beginning of this interconnected world. And to me the world wide web of data that we are creating is really going to be amazing. And it's all about connecting. >> And I'm glad you mentioned the ecosystem. I didn't give enough attention to that. Because as a cloud provider, which essentially you are, you've got to have a strong ecosystem. That's a hallmark of cloud. And then the other, vertical, that we didn't touch on, is media and entertainment. A lot of direct-to-consumer. I think healthcare is going to be a huge vertical for you guys. All right we got to go, Terry. Thanks so much for coming on "theCUBE." I really appreciate you. >> Thanks, Dave. >> And thank you for watching. This a wrap from AWS re:Invent 2021. "theCUBE," the leader in global tech coverage. We'll see you next time. (upbeat music)

Published Date : Dec 3 2021

SUMMARY :

and coming to theCUBE. and he dials it down for the expectations At the same time, you can, in So you weren't So as I said, we wanted to You knew at that time that Hadoop That's the only thing- And at the same time, we And then we saw what you guys is that you can run this And that allowed you to solve that And when you commit, every other cluster What is data mesh to Snowflake? At the same time, you really And the principles, if I is the data that you need to And the other key principles, I think, and to get maximum insights. some of the things you do and now we have DataFrames in Snowflake, 'cause it's the same raw data. and to the same programming language So you know all about that. and all the expensive stuff And the nice thing with But at the same time that you see So in the end we might And as well, if you dig into And I think you talked about And I'm glad you And thank you for watching.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
FrankPERSON

0.99+

Mike ScarpelliPERSON

0.99+

Benoit DagevillePERSON

0.99+

LarryPERSON

0.99+

TerryPERSON

0.99+

BostonLOCATION

0.99+

$1.8 billionQUANTITY

0.99+

AWSORGANIZATION

0.99+

BenoitPERSON

0.99+

Palo AltoLOCATION

0.99+

OracleORGANIZATION

0.99+

MicrosoftORGANIZATION

0.99+

90%QUANTITY

0.99+

$100 billionQUANTITY

0.99+

AmazonORGANIZATION

0.99+

DavePERSON

0.99+

last yearDATE

0.99+

GoogleORGANIZATION

0.99+

99%QUANTITY

0.99+

2012DATE

0.99+

TeradataORGANIZATION

0.99+

SQLTITLE

0.99+

two setsQUANTITY

0.99+

SnowflakeTITLE

0.99+

oneQUANTITY

0.99+

AndreessenPERSON

0.99+

Two remote setsQUANTITY

0.99+

one systemQUANTITY

0.99+

OneQUANTITY

0.99+

bothQUANTITY

0.99+

firstQUANTITY

0.99+

HundredsQUANTITY

0.99+

1%QUANTITY

0.99+

third aspectQUANTITY

0.99+

ScalaTITLE

0.99+

SnowflakeORGANIZATION

0.99+

PythonTITLE

0.99+

IntelORGANIZATION

0.99+

DatabricksPERSON

0.99+

two free aspectsQUANTITY

0.99+

mid last decadeDATE

0.99+

JavaTITLE

0.99+

Jim OctagonPERSON

0.99+

both waysQUANTITY

0.99+

fourth principleQUANTITY

0.98+

two worldsQUANTITY

0.98+

last nightDATE

0.98+

173%QUANTITY

0.98+

360 viewQUANTITY

0.98+

several years agoDATE

0.98+

each workloadQUANTITY

0.97+

last springDATE

0.97+

CapExORGANIZATION

0.97+

Wall StreetORGANIZATION

0.97+

one organizationQUANTITY

0.95+

single platformQUANTITY

0.95+

four daysQUANTITY

0.95+

FirstQUANTITY

0.95+

SnowflakeEVENT

0.94+

AzureORGANIZATION

0.94+

Jessica Alexander, CrowdStrike | AWS re:Invent 2021


 

(upbeat music) >> Hey, welcome to theCUBE's coverage of AWS re:Invent 2021. I'm Lisa Martin, and I'm pleased to be joined by Jessica Alexander, who is the VP of Cloud Solutions Sales and Alliances at CrowdStrike. Jessica, welcome to the program. >> Thank you, Lisa. It's great to be here. >> So we're going to unpack a lot today, some news, what's going on with the threat landscape, what you're seeing across industries, but I want to get started talking a little bit about your team. As I mentioned, VP of Cloud Solutions Sales and Alliances. Talk to me about your team because you have a unique GTM here that I'd like to get into. >> Sure. Thank you, Lisa. Well, we recently launched our new cloud security products, Cloud Workload Protection and Horizon earlier this year. So we wanted to make sure that we accelerated our entry into this new product market, this new addressable market, and so we established not only a cloud sales specialist team that helps our core sellers as well as our partners sell our new cloud security products but we also wanted to make sure it was tightly integrated and aligned with our Cloud Alliances so specifically our co-sell relationship and partnership that we have with AWS. >> Got it. Let's talk about some of the things you mentioned, Aksino acceleration entering into the market. We saw a lot of acceleration in the last 20 months and counting, especially with respect to cloud adoption, digital transformation, but also the threat landscape things have accelerated. Wanted to get some information from you on what you've seen. We've seen and talked to a lot of folks on ransomware stats, you know, it's up nearly 11x in the first half of '21, but you guys have some unique stats and insights on that. Talk to me about what CrowdStrike is seeing with respect to that threat landscape and who it's impacting. >> Sure. You know, we have a unique perspective. CrowdStrike has millions of sensors out in our customer environments, they're feeding trillions of events into the cloud and we're able to correlate this data in real time, so this gives us a very unique perspective into what's happening in adversary activity out in the world. We also get feeds from our incident response teams that are actively responding to issues, as well as our Intel operatives out in the world. So, you know, we correlate these three sources of data into our threat graph in the cloud powered by AWS, which gives us very good insights into activity that we're seeing from an adversary perspective. So we also have a group called the OverWatch team, they are 24 by seven, you know, humans monitoring our cloud and monitoring our customer's networks to detect or, you know, get pre-breach activity information. And what they're seeing is that, you know, over this last year, an adversary is able to enter a network and move laterally into that network within one hour and 32 minutes. Now, you know, this is really fast, especially when you consider that in 2020, that average was four hours and 37 minutes for a threat actor to move laterally, you know, infiltrate a network and then move laterally. So, you know, the themes that we're seeing are adversaries are getting a lot faster and a lot more efficient, and, you know, as more companies are moving to remote work environments, you know, setting up virtual infrastructure for employees to use for work and productivity, you know, that threat landscape becomes more critical. >> Right? It becomes more critical. It becomes bigger. 
And of course we are in this work from anywhere environment that's going to last or some amount of it will persist permanently. So what you're saying is you're seeing a 4x increase in the speed with which adversaries can get in and laterally move within a network, so dramatically faster in a year over year period, where, so there's been so much flux in every market and of course in our lives, what are some of the things that you're helping customers do to combat this growing challenge? >> Well, it really goes back to being predictive and having that real time snapshot of what's going on and being able to proactively reach out to customers before anything bad happens and, you know, we're also seeing that ransomware continues to be an issue for customers, so, you know, having the ability to prevent these attacks and ransomware from happening in the first place and really taking the advantage that an adversary may have from a speed or intelligence perspective, taking that advantage away by having the Falcon Platform actively monitoring our customer environments is a big advantage. >> So let's talk about, speaking of advantages, what are you guys announcing at re:Invent this year? >> Sure. Well, we have two new service integrations with Amazon EKS, AWS Outpost and AWS Firelands to talk about this year. The cool thing is that, you know, customers are going to get our wonderful breach protection that we have, you know, the gold standard of breach protection, they'll have that available on various cloud services. And what it does is it provides consistent security and simplified operational management across AWS services, as customers extend those from public cloud to the data center, to the edge. And you know, the other great benefit is that it accelerates threat hunting, so we were talking about, you know, being able to predict and see what adversaries are doing. You know, one of the great customer benefits is that they can do that with their own teams and be able to do that on a cloud infrastructure as well. >> And how much of the events of the last 20 months was a catalyst or were catalysts for these integrations that you just mentioned? I imagine the threat landscape growing ransomware becoming a 'when we get hit not if' would have been some of those catalysts. >> Well, you know, we're seeing that the adoption of cloud services, especially for end user computing is growing much faster than traditional on-prem desktops, laptops, as people continue to work remotely and customers need to be, or corporations need to be efficient at how they manage end user computing environments. So, you know, we are seeing that adversary activity is picking up, they're getting smarter about, you know, leveraging cloud services and potential misconfigurations, there're really four key areas that we see customers struggle with, whether it be, you know, the complexity of cloud services, whether it be shadow IT, and a lot of the security folks don't necessarily know where all the cloud services are being deployed, then you've got, you know, kind of the advanced techniques that adversaries are using to get into networks. And then, you know, last but certainly not least is skills shortage. We're finding that a lot of customers want a turnkey solution, where they don't have to have a team of cloud security specialists to respond or handle any misconfigurations or issues that come up. 
They want to have a turnkey solution, a team that's already watching and reaching out to them to say, "Hey, you may want to look into XYZ and update a policy, or, you know, activate this new, you know, this feature in the platform." >> Yeah. That real time, the ability to have something that's turnkey is critical in this day and age where things are moving so quickly, there's so much being accelerated, good stuff and bad stuff. But also you mentioned that cybersecurity skills gap, which is in its, I think it's in its fifth year now, which is a big challenge for organizations as this scattered, work from anywhere persists as does the growth of the threat landscape. Let's get into now, for, you mentioned the adoption of cloud services has gone up considerably in this interesting time period, how is CrowdStrike helping customers do that securely, migrate from on-prem to the cloud with that security and that confidence that their landscape is protected? >> Yeah, well, we find obviously in the shared responsibility model, the great thing is that, you know, CrowdStrike and AWS team up to help, you know, customers have a better together experience as they migrate to the cloud. AWS is obviously responsible for the security of the cloud and customers are responsible for the security in the cloud. And in speaking with our customers who are moving or have moved to cloud services, and they really want a trusted and simple platform to use when securing their data and applications. So what, you know, they also have hybrid environments that can get complex to support, and, you know, we want to be able to provide them with a unified platform, a unified experience, regardless of where the workload is running or what services that it's using. You know, they have that unified visibility and protection across all of the cloud workloads. We're also, you know, seeing that, especially the reason we're doing this great integration with Outpost and EKS Anywhere is that customers are, you know, taking their cloud services out to their data centers as well as to the edge locations and branch offices, so they want to be able to run EKS on their own infrastructure. So it's important that customers have that portability that regardless of whether it's a laptop or an EC2 instance or an EKS container, you know, they have that portability throughout the continuum of their cloud journey. >> That continuum is absolutely critical as we, you know, talk about cloud and application or continuum from the customer's perspective, the cloud continuum is something that is front and center for customers, I imagine in every industry. >> Oh, for sure, 'cause every industry is adopting cloud maybe at a different speed, maybe for different applications, but, you know, everybody's moving to the cloud. >> So talk to me about what you're announcing with AWS, let's get into a little bit about the partnership that CloudStrike and AWS have, let's unpack that a bit. >> Sure. You know, we've been an AWS advanced technology partner for over five years. We've had our products, we now have six of our CrowdStrike products listed on AWS Marketplace. We're an active co-sell partner and, you know, have our security competency and our well-architected certification. And really it's about building trust with our customers. 
You know, AWS has a lot of wonderful partner products for customers to use and it's really about building trust that, you know, we're validated, we're vetted, we have a lot of customers who are using our products with AWS, and, you know, I think it's that tight collaboration, for example, if you look at what we're doing with Humio, we've implemented a quick start program, which AWS has to get customers quickly deployed with an integration or a new capability with a partner product. And what this does is it spins up a quick cloud formation template, customer can integrate it very quickly with the AWS Firelands and then, you know, all that log information coming from the AWS containers is easily ingested into the Humio platform. And so, you know, it really reduces the time to get the integration up and running as well as pulling all that data into the Humio platform so that customers can, like we said earlier, go back and threat hunt across, you know, different cloud service components in a quick and easy way. >> Quick and easy is good as is faster time to value. You mentioned the word trust, and, you know, we talk about trust, we've been talking about it for years as it relates to technology, but I'm curious, Jessica, in the last year and a half, if your customer conversations have changed, is trust now even more important than ever as there are so many things in flux, have you noticed any sort of change there in your customer conversations? >> Well, you know, I think trust is extensible. And over the last 10 years, CrowdStrike's done a really great job of building customer trust. And, you know, we started out as, you know, kind of primarily EDR and we've moved into prevention and now we're moving into identity protection and XDR so, you know, I see a pattern that, you know, we've built this amazing core of trust across our existing customers, and as we offer more capabilities, whether it be, you know, cloud security or XDR, identity protection, you know, customers trust us and so they're very willing to say, "ah well, I want to try out these new capabilities that CrowdStrike has because we trust you guys, you know, you've done a lot to protect our brand and, you know, really make our internal teams a lot more efficient and a lot smarter." So, you know, I think while trust is important, it's also something that we get to carry forward as we enter new markets and continue to innovate and provide new capabilities for our customers. >> And really extending that trusted, valued partner relationship that you've already established with customers in every industry. So where can customers go? So the joint GTM customers, and you said products available in the AWS marketplace, but where do you recommend customers go to learn more about how they can work with these joint solutions that CrowdStrike and AWS have together? >> Absolutely. We have a landing page on AWS, if you Google AWS and CrowdStrike, whether it be marketplace or EKS Anywhere, Amazon outposts, we're on all the joint product pages with Amazon, as well as always going to crowdstrike.com and looking up our cloud security products. >> Got it. And last question for you, Jessica, summarize the announcement in terms of business outcomes that it's going to enable your joint customers to achieve. >> Absolutely. You know, I think it goes back to probably the primary reason is complexity. 
And, you know, with complexity comes risk and blind spots so being able to have a unified platform that no matter where the workload is, or the employee may be, they are protected and have, you know, a unified platform and experience to manage their security risk. >> Excellent. Jessica, thank you so much for coming on the program today, sharing with me, what's new with CrowdStrike, some of the things that you're seeing, and what you're helping customers to accomplish in a very dynamic environment, we appreciate your time and your insights. >> Thank you for having me, Lisa. >> For Jessica Alexander, I'm Lisa Martin, and you're watching theCUBE's coverage of AWS re:Invent 2021. (gentle music)
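As background for the quick start mentioned above, a marketplace or quick-start style integration generally comes down to launching a parameterized CloudFormation template. The sketch below shows that generic pattern with boto3; the stack name, template URL, and parameter keys are placeholders for illustration, not CrowdStrike's or Humio's actual quick start.

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Launch a hypothetical log-shipping quick start from a parameterized template.
cfn.create_stack(
    StackName="log-shipping-quickstart",                  # hypothetical stack name
    TemplateURL="https://example.com/quickstart.yaml",    # placeholder template location
    Parameters=[
        {"ParameterKey": "IngestEndpoint", "ParameterValue": "https://example-ingest.endpoint"},
        {"ParameterKey": "IngestToken", "ParameterValue": "REPLACE_ME"},
    ],
    Capabilities=["CAPABILITY_NAMED_IAM"],  # required when the template creates IAM roles
)

# Block until CloudFormation reports the stack as fully created.
waiter = cfn.get_waiter("stack_create_complete")
waiter.wait(StackName="log-shipping-quickstart")
```

The waiter simply polls until the stack finishes creating, which is what makes this style of deployment feel close to turnkey from the customer's side.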

Published Date : Dec 1 2021

SUMMARY :

and I'm pleased to be It's great to be here. that I'd like to get into. that we have with AWS. of the things you mentioned, and a lot more efficient, and, you know, in the speed with which for customers, so, you know, that we have, you know, that you just mentioned? And then, you know, last the ability to have something to help, you know, you know, talk about cloud and application but, you know, everybody's So talk to me about what with the AWS Firelands and then, you know, and, you know, we talk about trust, whether it be, you know, and you said products available if you Google AWS and CrowdStrike, that it's going to enable your they are protected and have, you know, Jessica, thank you so much and you're watching theCUBE's coverage

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
JessicaPERSON

0.99+

Lisa MartinPERSON

0.99+

Jessica AlexanderPERSON

0.99+

AWSORGANIZATION

0.99+

LisaPERSON

0.99+

2020DATE

0.99+

CrowdStrikeORGANIZATION

0.99+

fifth yearQUANTITY

0.99+

CrowdStrikeTITLE

0.99+

AmazonORGANIZATION

0.99+

24QUANTITY

0.99+

four hoursQUANTITY

0.99+

sixQUANTITY

0.99+

HumioTITLE

0.99+

one hourQUANTITY

0.99+

4xQUANTITY

0.98+

OverWatchORGANIZATION

0.98+

this yearDATE

0.98+

over five yearsQUANTITY

0.98+

trillions of eventsQUANTITY

0.97+

sevenQUANTITY

0.97+

millions of sensorsQUANTITY

0.96+

oneQUANTITY

0.96+

todayDATE

0.96+

Cloud Solutions Sales and AlliancesORGANIZATION

0.96+

37 minutesQUANTITY

0.96+

last yearDATE

0.95+

theCUBEORGANIZATION

0.94+

first half of '21DATE

0.93+

AWS OutpostORGANIZATION

0.93+

earlier this yearDATE

0.92+

last 20 monthsDATE

0.92+

three sourcesQUANTITY

0.91+

firstQUANTITY

0.91+

last year and a halfDATE

0.89+

two new service integrationsQUANTITY

0.89+

IntelORGANIZATION

0.88+

crowdstrike.comOTHER

0.87+

OutpostORGANIZATION

0.87+

EKSTITLE

0.87+

last 10 yearsDATE

0.86+

Google AWSORGANIZATION

0.86+

EC2TITLE

0.86+

AWS FirelandsORGANIZATION

0.84+

32 minutesQUANTITY

0.81+

CloudStrikeORGANIZATION

0.81+

Amazon EKSORGANIZATION

0.79+

EKSORGANIZATION

0.79+

re:Invent 2021EVENT

0.77+

Cloud Solutions Sales and AlliancesORGANIZATION

0.74+

2021TITLE

0.71+

re:EVENT

0.69+

a yearQUANTITY

0.68+

AksinoORGANIZATION

0.66+

VPPERSON

0.63+

JT Giri, nOps | CUBE Conversation


 

>>mhm >>Hello and welcome to this cube conversation here in Palo Alto, California. I'm John Furrier, your host of theCUBE. We're here with a great guest, JT Giri, CEO and founder of nOps, a hot startup. JT, welcome to the cube conversation. >>Hey, thanks for having me. It sounds like we know each other, we used to run into each other at meetups. So yeah, >>it's fun to talk to you because I know you've, you know, been scratching the devops itch from the beginning, before devops was devops, before infrastructure as code was infrastructure as code. All that's played out. So it's really a great ride. I know you had a good time doing it, a lot of action though. If you look at devops, it's kind of like this new, I won't say devops 2.0 because it's kind of cliche, but you're starting to see the maturation of companies besides the early adopters and the people who are hardcore adopting, and they realize this is amazing, and then they re-platform in the cloud and they go, great, let's do more, and next thing you know, they have an operations issue and they've got to really kind of stabilize and then also not break anything. So this is kind of the wheelhouse of what you guys are doing. nOps reminds me of no ops, no operations, you know, we don't want to have a lot of extra stuff. This is a big thing. Take a minute to explain the company, what you guys stand for and what you're all about. >>Yeah, so you know, our main focus is more on the operation side, so, you know, the reason why you move to cloud or the reason why you have devops practices, you want to go fast. Um, but you know, when you're building cloud infrastructure, you have to make trade-offs, right? Maybe in some environment you have to optimize for SLA, and maybe for another workload you have to optimize for, um, you know, maybe costs, right? So what we're on a mission to do is to make sure that companies are able to make the right trade-offs, right? We help companies to make sure all their workloads, every single resource in the cloud, is aligned with the business needs, you know, so we do a lot of cool things by, like, you know, bringing accountability, mapping workloads to different teams. But yeah, the end goal is, can we make sure that every single resource on AWS is aligned with the business needs >>and they're also adding stuff. Every re:Invent a zillion more services get announced. So a lot, a lot of stuff going on. I gotta ask you while I got you here, what is the definition of cloud ops these days, from your standpoint, and why is it important? A lot of folks are looking at this and they want to have stable operations. They love the cloud, really can't deny the cloud value at all. But cloud ops has become a big topic. What is cloud ops and why is it important? >>Right? I mean, first of all, like you just mentioned, right? Like Amazon keeps on launching more services. It's over 200. So the environment is very complex, right? And then the complexity within the services is, uh, pretty high. Uh, you really need to be a domain expert, for example, know everything about each service. So, you know, the question to us is, let's say if you find a critical issue, uh, let's say you want to, uh, you know, enable multi-AZ on your RDS, for example. Uh, and it's critical because, you know, you're running, uh, high availability workloads on AWS. How do you follow up on that, right? To us, operation is how do you build a cloud backlog? How do you prioritize, how do you come together as a team to actually remediate those issues?
No one is tackling that job, everyone surfaces, like, hey, here's 1,000 things that are wrong with your environment. No one is focused on, like, how do you go from these issues to prioritization to backlog to actually coming together as a team, you know, and fixing some of those issues. That's, that's what operation means to us. >>I know it's totally hard because sometimes I don't even know what's going on. I gotta ask you why, why is it harder now? Why are people, I mean I get the impression that people are, like, looking the other way, hoping the problem kind of goes away. What are the challenges? What's the big blocker from getting at the root cause or trying to solve these problems? What's the big thing that's holding people back? >>Yeah, I mean, when I first got into, you know, IT, you know, I was working in a data center, and every time we needed a server, you know, we had to ask for approvals, right? And you finally got a server, but nowadays anyone can provision resources. And normally you have different people within the teams provisioning resources, and you can have hundreds of different teams who are provisioning resources. So the complexity, uh, and the speed that we are, you know, provisioning resources across multiple people, it just continues to go higher and higher. So that's why, uh, you know, on the surface it might look that, hey, this, you know, maybe the biggest instance, uh, is, you know, aligned with the business needs, but you know, looking at the changes, it's hard to know, are those aligned with the business? Are they not? So that's, that's where the complexity comes into play. >>So the question I get a lot from people, we talk about devops and cloud, cloud ops or cloud management or whatever kind of buzzwords are out there, it kind of comes down to cloud ops, and cloud management seems to be the category people focus on. How is cloud ops different than, say, the traditional cloud management, and what impact does it have for customers, and why should they care, and what do they need in a solution? >>Right. So one of the things we do, uh, and we do think that cloud operation is sort of an evolution from cloud management. We make sure that every single resource, first of all, belongs to a workload. So, and you know, a workload could be a group of microservices, uh, and then, uh, you know, every single workload has owners, like defined owners who are responsible for making sure they manage the budget, that they're responsible for security. That normally doesn't exist, right? Cloud is this black box, you know, where multiple people are provisioning resources, you know, everyone tries to sort of build sort of a structure to kind of see, like, what are these resources for? What are these resources for as part of onboarding to nOps? So what we do, we actually, you know, analyze all your metadata. We create like five, six workloads, and then we say here is a bucket that is totally unassigned, right? And then we actually walk them through assigning different roles, and also we walk them through kind of looking into these unallocated resources and assigning those resources as well. So once you're done, every single resource has a clear definition, right? Is this a compliant, uh, you know, HIPAA workload, what are the runbooks, what is this for? John, I don't know if you've heard that before. Sometimes there are workloads running and people don't know, I don't even know who the owner is, right?
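As a rough illustration of the kind of grouping described here, and not nOps's actual implementation or API, the sketch below buckets EC2 instances by a tag, with anything untagged landing in an unallocated bucket. The tag key workload and the region are assumptions for the example.

```python
from collections import defaultdict
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
workloads = defaultdict(list)

# Walk all instances and bucket them by their "workload" tag (an assumed key);
# instances with no such tag fall into the "unallocated" bucket.
paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            bucket = tags.get("workload", "unallocated")
            workloads[bucket].append(instance["InstanceId"])

for bucket, instance_ids in workloads.items():
    print(f"{bucket}: {len(instance_ids)} instances")
```

A real platform would do this across many services and fold in ownership and budget metadata, but the unallocated-bucket idea is the same.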
So after you're done with an office and after you're managing and uh, you know, uh, managing your workload on and off, you have full visibility and clear understanding of what are the. It's funny, it's >>funny you mentioned the workloads being kind of either not knowing the owners, but also we see people um, with the workloads sometimes it's like throwing a switch and leaving the hose on the water on. And next thing you know, they get the bill. They're like, oh my God, what happened? Why did I leave? What, what is this? So there's a lot of things that you could miss. This brings up the point you just said and what you said earlier aligning resources across the cloud uh and and having accountability. And then you, you mentioned at the top of this interview that aligning with the business needs. I find that fastest. I would like to take him in to explain because it sounds really hard. I get how you can align the resources and do some things, identify what's going on, accountability kind of map that that's, that's good tech. How does that, how do you get that to the alignment on the business side. >>Yeah. I mean we start by, first of all, like I said, you know, we use machine learning to play these workloads? And then we asked basic questions about the workload. You know, what is this workload for? Uh Do you need to meet with any kind of compliance is for this workload? Uh What is your S. O. A. For this workload? You know, depending on that. We we make recommendations. Uh So we kind of ask those questions and we also walk them through where they create roles. Like we asked who was responsible for creating budgets or managing security for this workload and guess what also the you know the bucket where resources are allocated for. We ask for you know, owners for that as well like in this bucket who's the owner for who's going to monitor the budget and things like that. So you know we asked, you know, we start by just asking the question, having teams complete that sort of information and also you know, why do you a little bit more information on how this aligns with the business needs? You know, >>talk about the complexity side of it. I love that conversation around the number of services. You said 200 services depending how you count what you call services in the thousands of so many different things uh knobs to turn on amazon uh web services. So why are people um focused on the complexity and the partnering side? Because you know, it's the clouds at E. P. I. Based system. So you're dealing with a lot of different diverse resources. So you have complexity and diversity. Can you talk me through how that works? Because that's that seems to be a tough beast to tame the difference between the complexity of services and also working with other people. >>Yeah for sure like this this normal to have um you know maybe thousands of lambda functions in their application. We're working with a customer where within last month there were nine million containers that launched and got terminated right there, pretty much leveraging, auto scaling and things like that. So these environments are like very complex. You know, there's a lot of moving pieces even, you know, depending on the type of services they're using. 
So again what we do, you know we when we look at tags and we look at other variables like environments and we look at who's provisioning resources, those resources and we try to group them together and that way there's accountability uh you know if the cost goes up for one workload were able to show that team like your cost is going up uh And also we can show uh unallocated bucket that hey within last week Your cost is you know, $4,000 higher in the unallocated bucket. Where would you like to move this these resources to just like an ongoing game. You >>know, you know jt I was talking with my friend jerry Chen is that Greylock partners is a V. C. Has been on the cube many times a couple of years ago. We're talking about how you can build a business within the cloud, in the shadows of the clouds, what he called it, but I called it more the enabling side and and that's happened now, you're seeing the massive growth. I'm also talking to some C X O C IOS or CSOs and they're like trying to figure out which companies that are evolving and growing to be to buy from, get to get the technology. Uh and they always say to me john I'm looking for game changing kind of impact. I'm looking for the efficiency and you know, enablement, the classic kind of criteria. So how would you guys position yourself to those buyers out there that might want to look at you guys as a solution and ups what game changing aspect of what you do is out there, how would you talk to that that C I O or C. So or buyer um out in the end the enterprise and the thieves ran his piece. What would you say to them? >>Yeah, I think the biggest uh advantage and I think right now it's a necessity, you hear these stories where, you know, people provision resources, they don't even know which project is it for. It's just very hard to govern the cloud environment, but I believe we're the only tool. Mhm where you want to compromise on the speed, right? The whole reason um cloud but they want to innovate faster. No one wants to follow that. Right? But I think what's important. We need to make sure everything is aligned with the business value. Uh, we allow people to do that. You know, we, we, we can both fast at the same time. You can have some sort of guard rails. So there are proper ownership. There's accountability. People are collaborating and people are also rightsizing terminating resources, they're not using. It's like, you know, I think if companies are looking for a tool that's gonna drive better accountability on how people build and collaborate on cloud, I think reply the best solution. >>So people are evolving with the cloud and you mentioned terminating services. That's a huge deal in cloud. Native things are being spun up and turned off all the time. So you need to have good law, You have a good visibility, observe ability is one of the hottest buzzwords out there. We see a zillion companies saying, hey, we're observe ability, which is to me is just monitoring stuff. They can sure you're tracking everything. So when you have all this and you start to operationalize this next gen, next level cloud scale, cost optimization and visibility is huge. Um, what is the, what is the secret sauce uh, for that you guys offer? Because the change management is a big 12 teams are changing too cost team accountability. All this is kind of, it's not just speeds and feeds, there's, it's kind of intersection of both. What's your take on that reaction to that? >>Yeah, I think it's the Delta. Right? 
So change management, What you're really looking for is not a, like a fire hose, you're looking for. What changed what the root cause who did it, what happened? Right. Because it's totally normal for someone to provision maybe thousands or even millions containers. But how many of those got shut down? What is the delta and uh, you know, if there is a, there is an anomaly, what is the root cause? Right? Uh, how we fix it. So you know the way we've changed managers, change management is a lot different. We really get to the root cause analysis and we really help companies to make, really show what changed and how they can take action to a media. But if there were issues, >>I want to put a little plug in for you guys. I noticed you guys have a really strong net promoter score. You have happy customers also get partners. A lot of enablement there. You kind of got a lot of things going on. Um, explain what you guys are all about. How did you get here? What's the day in the life of a customer that you're serving? Why then why are the scores so high? Um, take us through a use case of someone getting that value. >>Yeah. So I, I come from like a consulting background, john so you know, I was migrating companies to read the Bs when the institute was in beta and then I, you know, founded a consulting company over 100 employees. Really successful interview. S premier partner called in clouds. And so Enos was born there because because you know it was, it was born out a consulting company, there are a lot of other partners who are leveraging the tools to help their customers and it goes back to our point earlier, john like amazon has to wonder services, right? We are noticing customers are open to work with partners and uh you know with different partners that really helped them to make sure they're making the right decisions when they are building on cloud. So a lot of the partners, a lot of the consulting companies are leveraging uh and hopes to deliver value to their customers as far as uh you know how we actually operate. You know, we pay attention to uh you know what, what customers are looking for, what, where are the next sort of challenges uh you know, customers are facing in a cloud environment world like super obsessed, you know, like we're trying to figure out how do we make sure every single resource is aligned with the business value without slowing companies down so that really drives us, we're constantly welcome customers to stay true to the admission >>and that's the ethos of devops moving fast. The old quote Mark Zuckerberg used to have move fast, break stuff and then he revised it to move move fast and make it stable, which is essentially operational thing. Right, so you're starting to see that maturity, I noticed that you guys also have a really cool pricing model, very easy to get in and you have a high end too. So talk us through about how to engage with you guys, how do people get involved? Just click and just jump in there, buying software buying services, take a minute to explain how people can, can work with you. >>Yeah, it's just, it's just signing up on our site, you know, our pricing is tier model, uh you know, once you sign up, if you do need help with, you know, remediating high risk issues we can bring in partners, we have a strong partner ecosystem. Uh we could definitely help you do interviews to the right partners but it's as simple as just signing up and just taking me out. First thing I guess. 
>> I want to put a little plug in for you guys. I noticed you have a really strong Net Promoter Score. You have happy customers and also partners, a lot of enablement there; you've got a lot going on. Explain what you guys are all about. How did you get here? What's a day in the life of a customer you're serving, and why are the scores so high? Take us through a use case of someone getting that value.

>> Yeah. So I come from a consulting background, John. I was migrating companies to AWS when EC2 was in beta, and then I founded a consulting company with over 100 employees, a really successful AWS Premier Partner called nClouds. And nOps was born there. Because it was born out of a consulting company, there are a lot of other partners leveraging the tool to help their customers. It goes back to our point earlier, John: Amazon has over two hundred services, and we're noticing customers are open to working with partners, different partners who really help them make sure they're making the right decisions when they're building on cloud. So a lot of the partners, a lot of the consulting companies, are leveraging nOps to deliver value to their customers. As far as how we actually operate, we pay attention to what customers are looking for and where the next challenges are that customers are facing in a cloud environment. We're super obsessed; we're trying to figure out how to make sure every single resource is aligned with business value without slowing companies down. That really drives us, and we constantly work with customers to stay true to that mission.

>> And that's the ethos of DevOps: moving fast. The old quote Mark Zuckerberg used to have was "move fast and break things," and then he revised it to "move fast with stable infrastructure," which is essentially an operational thing. So you're starting to see that maturity. I noticed that you also have a really cool pricing model, very easy to get in, and you have a high end too. So talk us through how to engage with you guys. How do people get involved? Do they just click and jump in, buying software, buying services? Take a minute to explain how people can work with you.

>> Yeah, it's just signing up on our site. Our pricing is a tiered model. Once you sign up, if you do need help with remediating high-risk issues, we can bring in partners; we have a strong partner ecosystem and we can definitely introduce you to the right partners. But it's as simple as just signing up and trying it out, I guess.

>> JT, great chatting with you. You've been there from the early days of DevOps, born in the field, getting close to the customers. You mentioned EC2 in beta; they just celebrated their 15th birthday, and I remember one of my startups that didn't actually get off the blocks. They didn't even have custom domains at that time; you still had to remember the long URLs.

>> Everything was ephemeral. When you restarted a server, everything would go away.

>> A cool time. And I just remember saying to myself, man, every entrepreneur is going to use this service; who would ever go out and buy and host a server again? So you were there from the beginning, and it's been great to see the success. Thanks for coming on theCUBE.

>> Thanks.

>> Okay, JT, thanks so much. This has been a CUBE Conversation here in Palo Alto. I'm John Furrier, your host. Thanks for watching.

Published Date : Sep 7 2021


Breaking Analysis: How Nvidia Wins the Enterprise With AI


 

from the cube studios in palo alto in boston bringing you data-driven insights from the cube and etr this is breaking analysis with dave vellante nvidia wants to completely transform enterprise computing by making data centers run 10x faster at one tenth the cost and video's ceo jensen wang is crafting a strategy to re-architect today's on-prem data centers public clouds and edge computing installations with a vision that leverages the company's strong position in ai architectures the keys to this end-to-end strategy include a clarity of vision massive chip design skills a new arm-based architecture approach that integrates memory processors i o and networking and a compelling software consumption model even if nvidia is unsuccessful at acquiring arm we believe it will still be able to execute on this strategy by actively participating in the arm ecosystem however if its attempts to acquire arm are successful we believe it will transform nvidia from the world's most valuable chip company into the world's most valuable supplier of integrated computing architectures hello everyone and welcome to this week's wikibon cube insights powered by etr in this breaking analysis we'll explain why we believe nvidia is in the right position to power the world's computing centers and how it plans to disrupt the grip that x86 architectures have had on the data center for decades the data center market is in transition like the universe the cloud is expanding at an accelerated pace no longer is the cloud an opaque set of remote services i always say somewhere out there sitting in a mega data center no rather the cloud is extending to on-premises data centers data centers are moving into the cloud and they're connecting through adjacent locations that create hybrid interactions clouds are being meshed together across regions and eventually will stretch to the far edge this new definition or view of cloud will be hyper distributed and run by software kubernetes is changing the world of software development and enabling workloads to run anywhere open apis external applications expanding the digital supply chains and this expanding cloud they all increase the threat surface and vulnerability to the most sensitive information that resides within the data center and around the world zero trust has become a mandate we're also seeing ai being injected into every application and it's the technology area that we see with the most momentum coming out of the pandemic this new world will not be powered by general purpose x86 processors rather it will be supported by an ecosystem of arm-based providers in our opinion that are affecting an unprecedented increase in processor performance as we have been reporting and nvidia in our view is sitting in the poll position and is currently the favorite to dominate the next era of computing architecture for global data centers public clouds as well as the near and far edge let's talk about jensen wang's clarity of vision for this new world here's a chart that underscores some of the fundamental assumptions that he's leveraging to expand his market the first is that there's a lot of waste in the data center he claims that only half of the cpu cores deployed in the data center today actually support applications the other half are processing the infrastructure all around the applications that run the software defined data center and they're terribly under utilized nvidia's blue field three dpu the data processing unit was described in a blog post on siliconangle by analyst zias 
caravala as a complete mini server on a card i like that with software defined networking storage and security acceleration built in this product has the bandwidth and according to nvidia can replace 300 general purpose x86 cores jensen believes that every network chip will be intelligent programmable and capable of this type of acceleration to offload conventional cpus he believes that every server node will have this capability and enable every packed of every packet and every application to be monitored in real time all the time for intrusion and as servers move to the edge bluefield will be included as a core component in his view and this last statement by jensen is critical in our opinion he says ai is the most powerful force of our time whether you agree with that or not it's relevant because ai is everywhere an invidious position in ai and the architectures the company is building are the fundamental linchpin of its data center enterprise strategy so let's take a look at some etr spending data to see where ai fits on the priority list here's a set of data in a view that we often like to share the horizontal axis is market share or pervasiveness in the etr data but we want to call your attention to the vertical axis that's really really what really we want to pay attention today that's net score or spending momentum exiting the pandemic we've seen ai capture the number one position in the last two surveys and we think this dynamic will continue for quite some time as ai becomes the staple of digital transformations and automations an ai will be infused in every single dot you see on this chart nvidia's architectures it just so happens are tailor made for ai workloads and that is how it will enter these markets let's quantify what that means and lay out our view of how nvidia with the help of arm will go after the enterprise market here's some data from wikibon research that depicts the percent of worldwide spending on server infrastructure by workload type here are the key points first the market last year was around 78 billion dollars worldwide and is expected to approach 115 billion by the end of the decade this might even be a conservative figure and we've split the market into three broad workload categories the blue is ai and other related applications what david floyer calls matrix workloads the orange is general purpose think things like erp supply chain hcm collaboration basically oracle saps and microsoft work that's being supported today and of course many other software providers and the gray that's the area that jensen was referring to is about being wasted the offload work for networking and storage and all the software defined management in the data centers around the world okay you can see the squeeze that we think compute infrastructure is gonna gonna occur around that orange area that general-purpose workloads that we think is going to really get squeezed in the next several years on a percentage basis and on an absolute basis it's really not growing nearly as fast as the other two and video with arm in our view is well positioned to attack that blue area and the gray area those those workload offsets and the new emerging ai applications but even the orange as we've reported is under pressure as for example companies like aws and oracle they use arm-based designs to service general purpose workloads why are they doing that cost is the reason because x86 generally and intel specifically are not delivering the price performance and efficiency required to keep up with 
the demands to reduce data center costs and if intel doesn't respond which we believe it will but if it doesn't act arm we think will get 50 percent of the general purpose workloads by the end of the decade and with nvidia it will dominate the blue the ai and the gray the offload work when we say dominate we're talking like capture 90 percent of the available market if intel doesn't respond now intel they're not just going to sit back and let that happen pat gelsinger is well aware of this in moving intel to a new strategy but nvidia and arm are way ahead in the game in our view and as we've reported this is going to be a real challenge for intel to catch up now let's take a quick look at what nvidia is doing with relevant parts of its pretty massive portfolio here's a slide that shows nvidia's three chip strategy the company is shifting to arm-based architectures which we'll describe in more detail in a moment the slide shows at the top line nvidia's ampere architecture not to be confused with the company ampere computing nvidia is taking a gpu centric approach no surprise obvious reasons there that's their sort of stronghold but we think over time it may rethink this a little bit and lean more into npus the neural processing unit we look at what apple's doing what tesla are doing we see opportunities for companies like nvidia to really sort of go after that but we'll save that for another day nvidia has announced its grace cpu a nod to the famous computer scientist grace hopper grace is a new architecture that doesn't rely on x86 and much more efficiently uses memory resources we'll again describe this in more detail later and the bottom line there that roadmap line shows the bluefield dpu which we described is essentially a complete server on a card in this approach using arm will reduce the elapsed time to go from chip design to production by 50 we're talking about shaving years down to 18 months or less we don't have time to do a deep dive into nvidia's portfolio it's large but we want to share some things that we think are important and this next graphic is one of them this shows some of the details of nvidia's jetson architecture which is designed to accelerate those ai plus workloads that we showed earlier and the reason is that this is important in our view is because the same software supports from small to very large including edge systems and we think this type of architecture is very well suited for ai inference at the edge as well as core data center applications that use ai and as we've said before a lot of the action in ai is going to happen at the edge so this is a good example of leveraging an architecture across a wide spectrum of performance and cost now we want to take a moment to explain why the moved arm-based architectures is so critical to nvidia one of the biggest cost challenges for nvidia today is keeping the gpu utilized typical utilization of gpu is well below 20 percent here's why the left hand side of this chart shows essentially racks if you will of traditional compute and the bottlenecks that nvidia faces the processor and dram they're tied together in separate blocks imagine there are thousands thousands of cores in a rack and every time you need data that lives in another processor you have to send a request and go retrieve it it's very overhead intensive now technologies like rocky are designed to help but it doesn't solve the fundamental architectural bottleneck every gpu shown here also has its own dram and it has to communicate with the processors to 
get the data i.e they can't communicate with each other efficiently now the right hand side side shows where nvidia is headed start in the middle with system on chip socs cpus are packaged in with npus ipu's that's the image processing unit you know x dot dot dot x pu's the the alternative processors they're all connected with sram which is think of that as a high speed layer like an layer one cache the os for the system on a chip lives inside of this and that's where nvidia has this killer software model what they're doing is they're licensing the consumption of the operating system that's running this system on chip in this entire system and they're affecting a new and really compelling subscription model you know maybe they should just give away the chips and charge for the software like a razer blade model talk about disruptive now the outer layer is the the dpu and the shared dram and other resources like the ampere computing the company this time cpus ssds and other resources these are the processors that will manage the socs together this design is based on nvidia's three chip approach using bluefield dpu leveraging melanox that's the networking component the network enables shared dram across the cpus which will eventually be all arm based grace lives inside the system on a chip and also on the outside layers and of course the gpu lives inside the soc in a scaled-down version like for instance a rendering gpu and we show some gpus on the outer layer as well for ai workloads at least in the near term you know eventually we think they may reside solely in the system on chip but only time will tell okay so you as you can see nvidia is making some serious moves and by teaming up with arm and leaning into the arm ecosystem it plans to take the company to its next level so let's talk about how we think competition for the next era of compute stacks up here's that same xy graph that we love to show market share or pervasiveness on the horizontal tracking against next net score on the vertical net score again is spending velocity and we've cut the etr data to capture players that are that are big in compute and storage and networking we've plugged in a couple of the cloud players these are the guys that we feel are vying for data center leadership around compute aws is a very strong position we believe that more than half of its revenues comes from compute you know ec2 we're talking about more than 25 billion on a run rate basis that's huge the company designs its own silicon graviton 2 etc and is working with isvs to run general purpose workloads on arm-based graviton chips microsoft and google they're going to follow suit they're big consumers of compute they sell a lot but microsoft in particular you know they're likely to continue to work with oem partners to attack that on-prem data center opportunity but it's really intel that's the provider of compute to the likes of hpe and dell and cisco and the odms which are the odms are not shown here now hpe let's talk about them for a second they have architectures and i hate to bring it up but remember the machine i know it's the butt of many jokes especially from competitors it had been you know frankly hpe and hp they deserve some of that heat for all the fanfare and then that they they put out there and then quietly you know pulled the machine or put it out the pasture but hpe has a strong position in high performance computing and the work that it did on new computing architectures with the machine and shared memories that might be still 
kicking around somewhere inside of hp and could come in handy for some day in the future so hpe has some chops there plus hpe has been known hp historically has been known to design its own custom silicon so i would not count them out as an innovator in this race cisco is interesting because it not only has custom silicon designs but its entry into the compute business with ucs a decade ago was notable and they created a new way to think about integrating resources particularly compute and networking with partnerships to add in the storage piece initially it was within within emc prior to the dell acquisition but you know it continues with netapp and pure and others cisco invests they spend money investing in architectures and we expect the next generation of ucs oh ucs2 ucs 2.0 will mark another notable milestone in the company's data center business dell just had an amazing quarterly earnings report the company grew top line revenue by around 12 percent and it wasn't because of an easy compare to last year dells is simply executing despite continued softness in the legacy emc storage business laptop the laptop demand continued to soar in dell server business it's growing again but we don't see dell as an architectural innovator per se in compute rather we think the company will be content to partner with suppliers whether it's intel nvidia arm-based partners or all of the above dell we think will rely on its massive portfolio its excellent supply chain and execution ethos to compete now ibm is notable for historical reasons with its mainframe ibm created the first great compute monopoly before it unwind and wittingly handed it to intel along with microsoft we don't see ibm necessarily aspiring to retake that compute platform mantle that once once held with mainframes rather red hat in the march to hybrid cloud is the path that we think in our view is ibm's approach now let's get down to the elephants in the room intel nvidia and china inc china is of course relevant because of companies like alibaba and huawei and the chinese chinese government's desire to be self-sufficient in semiconductor technology and technology generally but our premise here is that the trends are favoring nvidia over intel in this picture because nvidia is making moves to further position itself for new workloads in the data center and compete for intel's stronghold intel is going to attempt to remake itself but it should have been doing this seven years ago what pat gelsinger is doing today intel is simply far behind and it's going to take at least a couple years for them to really start to to make inroads in this new model let's stay on the nvidia v intel comparison for a moment and take a snapshot of the two companies here's a quick chart that we put together with some basic kpis some of these figures are approximations or they're rounded so don't stress over it too much but you can see intel is an 80 billion dollar company 4x the size of nvidia but nvidia's market cap far exceeds that of intel why is that of course growth in our view it's justified due to that growth and nvidia's strategic positioning intel used to be the gross margin king but nvidia has much higher gross margins interesting now when it comes down to free cash flow intel is still dominant as it pertains to the balance sheet intel is way more capital intensive than nvidia and as it starts to build out its foundries that's going to eat into intel's cash position now what we did is we put together a little pro forma on the third column of nvidia 
plus arm circa let's say the end of 2022. we think they could get to a run rate that is about half the size of intel and that can propel the company's market cap to well over half a trillion dollars if they get any credit for arm they're paying 40 billion dollars for arm a company that's you know sub 2 billion the risk is that because of the arm because the arm deal is based on cash plus tons of stock it could put pressure on the market capitalization for some time arm has 90 percent gross margins because it pretty much has a pure license model so it helps the gross margin line a little bit for this in this pro forma and the balance sheet is a swag arm has said that it's not going to take on debt to do the transaction but we haven't had time to really dig into that and figure out how they're going to structure it so we took a took a swag in in what we would do with this low interest rate environment but but take that with a grain of salt we'll do more research in there the point is given the momentum and growth of nvidia its strategic position in ai is in its deep engineering they're aimed at all the right places and its potential to unlock huge value with arm on paper it looks like the horse to beat if it can execute all right let's wrap up here's a summary look the architectures on which nvidia is building its dominant ai business are evolving and nvidia is well positioned to drive a truck right to the enterprise in our view the power has shifted from intel to the arm ecosystem and nvidia is leaning in big time whereas intel it has to preserve its current business while recreating itself at the same time this is going to take a couple of years but intel potentially has the powerful backing of the us government too strategic to fail the wild card is will nvidia be successful in acquiring arm certain factions in the uk and eu are fighting the deal because they don't want the u.s dictating to whom arm can sell its technology for example the restrictions placed on huawei for many suppliers of arm-based chips based on u.s sanctions nvidia's competitors like broadcom qualcomm at all are nervous that if nvidia gets armed they will be at a competitive disadvantage they being invidious competitors and for sure china doesn't want nvidia controlling arm for obvious reasons and it will do what it can to block the deal and or put handcuffs on how business can be done in china we can see a scenario where the u.s government pressures the uk and eu regulators to let this deal go through look ai and semiconductors you can't get much more strategic than that for the u.s military and the u.s long-term competitiveness in exchange for maybe facilitating the deal the government pressures nvidia to guarantee some feed to the intel foundry business while at the same time imposing conditions that secure access to arm-based technology for nvidia's competitors and maybe as we've talked about before having them funnel business to intel's foundry actually we've talked about the us government enticing apple to do so but it could also entice nvidia's competitors to do so propping up intel's foundry business which is clearly starting from ground zero and is going to need help outside of intel's own semiconductor manufacturing internally look we don't have any inside information as to what's happening behind the scenes with the us government and so forth but on its earning call on its earnings call nvidia said they're working with regulators that are on track to complete the deal in early 2022. 
we'll see okay that's it for today thank you to david floyer who co-created this episode with me and remember i publish each week on wikibon.com and siliconangle.com these episodes they're all available as podcasts all you're going to do is search breaking analysis podcast and you can always connect with me on twitter at dvalante or email me at david.valante siliconangle.com i always appreciate the comments on linkedin and in the clubhouse please follow me so you can be notified when we start a room and riff on these topics and don't forget to check out etr.plus for all the survey data this is dave vellante for the cube insights powered by etr be well and we'll see you next time [Music] you

Published Date : May 30 2021


Compute Session 05


 

>> Thank you for joining us today for this session entitled, Deploy any Workload as a Service, When General Purpose Technology isn't Enough. This session today will be on our HPE GreenLake platform. And my name is Mark Seamans, and I'm a member of our GreenLake cloud services team. And I'll be kind of leading you through the material today which will include both a slide presentation as well as an interactive demo to get some experience in terms of how the process goes for interacting with your initial experience with our GreenLake system. So, let's go ahead and get started. One of the things that we've noticed over the last decade and I'm sure that you have as well has been the tremendous focus on accelerating business while concurrently trying to increase agility and to reduce costs. And one of the ways a lot of businesses have gone about doing that has been leveraging a cloud based technology set. And in many cases, that's involved moving some of the workloads to the public cloud. And so with that much said, though, while organizations have been able to enjoy that cost control and the agility associated with the public cloud. What we've seen is that the easy to move workloads have been moved but there's a significant amount as much as 70% in many cases of workloads that organizations run which still remain on prem. And there's reasons for that. Some cases it's due to data privacy and security concerns. Other times it's due to latency of really needing high-performance access to data. And the other times, it's really just related to the interconnected nature of systems and that you need to have a whole bunch of systems which form an overall experience and they need to be located close together. So, one of the challenges that we've worked with customers and have actually developed our GreenLake solution to address is this idea of trying to achieve this cloud-like experience for all of your apps and data in a way that leverages the best of the public cloud with also that same type of experience delivered on premise. So as you think about some of the challenges, again, we touched on this that customers are trying to address. One of the ones is this idea of agility, being able to move quickly and to be able to take a set of IT resources that you have and deploy them for different use cases and different models. So, it's one of the things as we built GreenLake, we really had a strong focus on is how do we provide a common foundation, a common framework to deliver that kind of agility. The next one is this term on the top right called scale. And one of the words you may hear is you hear cloud talked about regularly is this notion of what's called elasticity and the ability to have something stretch and get larger kind of on an on demand basis. That's another challenge and premise that we've really tried to work through. And you'll see how we've addressed that. Now, obviously, as you do this, you can achieve scale if you just put a ton of equipment in place much more maybe than you need at any given time but with that comes a lot of costs. And so as you think about wanting to have an agile and flexible system, what you'd also like is something where the costs flexes as your needs grow and it's elastic and that it can get larger and then it can get smaller as needed as well. So, we'll talk about how we do that with our GreenLake solution. 
And then finally it's complexity, it's trying to abstract away the vision for people of having to be aware of all the complexity it takes to build these systems and provide a single interface, a single experience for people to manage all of their IT assets. So we do that through this solution called HPE GreenLake and really we call it the cloud that comes to you. And as you think about what we're really trying to do here is take the notion of a cloud from being a place where people have thought about the public cloud and turning that to an idea of the cloud being an experience. And so it's regardless of whether it's in the public cloud or running on premise or as is the case with GreenLake, whether it's a mixture of those and maybe even a mixture of multiple public clouds with on-prem experience, the cloud now becomes something you experience and that you leverage as opposed to a place where you have an account and that can include edge computing combined with co-location or data center based computing. It could include equipment stored in your own data center and certainly it can include resources in the public cloud. So, let's take a look at how we go about delivering the experience and what some of those benefits are as we put these solutions in place. So, as you think about why you'd want to do this and the benefits you get from GreenLake, what we've seen in terms of both working with customers and actually having studies done with analysts is the benefits are numerous, but they come in areas that are shown here, one time to deployment. And that once you get this flexible and easily to manage environment in place with what we'll show you are these prebuilt, pre-configured and managed as a service solutions, your time to deployment for putting new workloads in place can shrink dramatically. The next in terms of having these pre-configured solutions and combining both the hardware and software technology with a set of managed services through our GreenLake managed services team, what you can do is dramatically reduce the risk of putting a new workload in place. So for example, if you wanted to deploy virtual desktop infrastructure and maybe you haven't done that in the past, you can leverage a GreenLake VDI solution along with GreenLake management services to very predictably and very reliably put that solution in place. So you're up and running focusing on the needs of your users with incredibly lowered risk, because this was built on a pre-validated and a pre-certified foundation. Obviously, I talked earlier about the idea with GreenLake is that you have flexibility in terms of scaling up your use of the resources, even though they're computers that may be in your data center or a colo, and also scaling them back down. So if you have workloads over time, that may be even an end of month cycle or an end to quarter cycle where certain workloads get larger and then would get smaller again, the ability with GreenLake on a consumption billing basis is there where your costs can flow as your use of the systems flow. And again, I'll show you a screen in just a few minutes, that kind of illustrates what that looks like. And then the last piece is the single pane of glass for control and insight into what's going on. And what we mean by that is not just what's going on from a cost perspective, but also what's going on from a system utilization perspective. 
You'll see in one of the screens I'll show that there's a system utilization report of all of your GreenLake resources that you can view at any time. And so what you can get visibility to, for example, with storage capacity as your storage capacity is being consumed over time as you generate more data, the system will tell you, hey, you're getting up to about 60, 70% utilized. And then at that point, we would be able to work with you to automatically deploy even though you won't be paying for it yet, additional storage capacity so it's ready as your needs grow to encompass that. So in terms of what are some of these services that we deliver as part of GreenLake? Well, they range and you see here a portfolio of services that we offer. If you start at the bottom, it's simple things, right? Things like compute as a service, and I'll show you examples of that today, networking as a service, hyper-converged infrastructure as a service. And then if we work our way up the stack, we move from kind of basic services to platform services, things like VMware and containers as a service. And then if we go to the top layer of this, we actually can offer complete solutions for targeted workloads. So if your need was for example, to run machine learning and AI, and you wanted to have a complete environment put in place that you could leverage for machine learning and AI and use it and consume it on a consumption as a service basis, we've got our MLOps solution that delivers that. And similarly, I mentioned earlier, VDI for virtual desktops or a solution for SAP HANA. So, the solutions range from very basic compute at the foundation all the way up to complete workload solutions that you can achieve. And the portfolio of what these are is expanding all the time. And as you'll see, you can go out to our hpe.com site and see a complete catalog of all the GreenLake services that are available. So let's take a minute and let's drill in like on that MLOps solution. And we can take a look at how that fits together and what makes that up. So, if you think about GreenLake for MLOps, it's a fast path for data scientists, and it's really oriented around the needs of data scientists within your organization who have a desire to be able to get in and start to analyze data for advantage in your business. So, what comes with an MLOps solution from GreenLake starts at the left side of the slide here with a fully curated hardware platform, including GPU based nodes, data science, optimized hardware, all the storage that you're going to need to run at scale and that performance to make these workloads work. And so that's one piece of it is a curated hardware stack for machine learning. Next in the software component, we pre-validated a whole bunch of the common stack elements that you would need. So beyond operating systems, but things for doing continuous integration, for things like TensorFlow and Jupyter notebooks are already pre-validated and delivered with this solution. So, the tools that your data scientists will need come with this, ready to go, out of the box. And then finally, as this solution gets delivered, there's a services component to it beyond just us installing this full thing and delivering a complete solution to you. 
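As a side note, the utilization-threshold behavior Mark describes at the start of this passage (flagging capacity at roughly 60 to 70% so buffer hardware can be staged before it is needed) can be sketched in a few lines; the figures and pool names below are made up for illustration and are not GreenLake's actual logic.

```python
# Hypothetical illustration of a threshold-based capacity check: flag any pool
# whose utilization crosses ~70% so extra capacity can be staged ahead of need.
THRESHOLD = 0.70

capacity_pools = {  # installed vs. consumed, in TB (made-up numbers)
    "block-storage": {"installed": 500, "consumed": 362},
    "file-storage":  {"installed": 200, "consumed": 95},
}

for name, pool in capacity_pools.items():
    utilization = pool["consumed"] / pool["installed"]
    if utilization >= THRESHOLD:
        print(f"{name}: {utilization:.0%} used -> stage additional capacity")
    else:
        print(f"{name}: {utilization:.0%} used -> ok")
```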
But the GreenLake management services options where our services teams can work side by side with data scientists to assist them in getting up to speed on the solution, to leveraging the tools, to understanding best practices if you want those, if you want that assistance for deploying MLOps and the whole thing's delivered as a service. As similar, we similar solutions for other workloads like SAP HANA that would leverage again, different compute building blocks, but always in a way that's done for workload optimized solutions, best practice and that build up that stack. And so your experience in consuming this is always consistent, but what's running under the hood isn't just a generic solution that you might see in for example, a public cloud environment, it's a best practice, hardware optimized, software optimized environment built for each one of the workloads that we can deploy. So I like to do at this point is actually show you what's the process like for actually specifying a GreenLake solution. And maybe we'll take a look at compute as our example today. So, what I've got here is a browser experience, I'm just in my web browser, I'm on the hpe.com website and what I'd like to do. I mean the GreenLake section and I've actually clicked on this services menu and I'm going to go ahead and scroll down. And one of the things you can see here is that catalog of GreenLake services that I referenced. So, just like we showed you on the slide, this is that catalog of services that you can consume. I'm going to go to compute and we'll go about quoting a GreenLake compute solution. So we see when I clicked on that, one of the options I have is to get a price in my inbox. And I'll click on that to go in here to our GreenLake quick quote environment where if in my case here for our demonstration, I'll specify that I'd like to purchase to add to my GreenLake environment some additional general compute capability for some workloads that I might like to run. If I click on this, I go in and you notice here that I'm not going to specify server types. I'm really going to tell the system about the types of workloads that I'd like to run and the characteristics of those workloads. So for example, my workload choices would be adaptable performance or maybe densely optimized compute for highly scalable and high performance computing requirements. So, I'll select adaptable performance. I have a choice of processor types, my case, I'll pick Intel. And I then say, how many servers for the workloads that I want to run would be part of the solution. Again, in my case, maybe we'll quote a 20 server configuration. Now, as we think about the plans here, what you can see is we're really looking at the different options in terms of a balanced performance and price option which is the recommended option. But if I knew that the workloads I were going to run were more performance optimized, I could simply click on that option. And in the system under the hood does all the work to reconfigure the system. I'm not having to pick individual server options as you see. So once I picked between cost optimized balance or performance, I can go in here and select the rest of the options. 
Now, we'll start at the top right and you see here from a services perspective, this is where it specifies how much services content and in services assistance I'd like all the way from just doing proactive metering of my solution all the way through being able to do actual workload deployment and assistance with me physically managing the equipment myself. The other piece I'll focus on is this variable usage. And this comes back to how much of the variable time, variable capacity of additional capacity, what I like to have available in my data center for this solution. So if I know that my flex could be larger in the future of the capacity, I want to flex up and down. I might pick a slightly larger amount of flex capacity at my location as part of this solution. With that, I'd select that workload. And the less steps would be, I could click on get price and this whole thing will be packaged up and shipped to you in terms of the price of the solution. And any other details that you might like to see. And I encourage you to go out to hpe.com and to go through this process yourself for one of the workloads that might be of interest for you to get a flavor of that experience. So if we move forward, once you've deployed your GreenLake solution, one of the things you see here is that single pane of glass experience in terms of managing the system, right? We've got a single panel that all in one place provides you access to your cost information for billing, and what's driving that billing, your middle and the middle of the top center, you can see we've got information on the capacity planning but then we can actually drill in and actually look at additional things like services we offer around continuous compliance, capacity planning data for you to build and see how things like storage or filling, cost control information with recommendations around how you could reduce or minimize your costs based on the usage profile that you have. So, all of this is a fully integrated experience that can span components running both on-premise and also incorporating services that could be in the public cloud. Now, when we think about who's using this and why is this becoming attractive? You can imagine just looking at this capability that this ability to blend public cloud capabilities with on-premise or in a co-location, private data center capabilities provides tremendous power and provides tremendous flexibility for users. And so we're seeing this adopted broadly as kind of a new way, people are looking to take the advantages of cloud, but bring them into a much more self-managed or on-premise experience. And so some example, customers here include deployments in the automotive field, both at Porsche or over on the right at Zenseact, which is the autonomous driving division of Volvo where they're doing research with tremendous amounts of data to produce the best possible autonomous driving experience. And then in the center, Danfoss who is one of the world's leading manufacturers of both electric and hydraulic control components. And so as they produce components themselves, that drive an optimized management of physical infrastructure, power, liquids and cooling, they're leveraging GreenLake for the same type of control and best practice deployment of their data centers and of their IT infrastructure. 
So again, somebody who's innovating in their own world taking advantage of compute innovations to get the benefits of the cloud and the flexibility of a cloud-like environment but running within their own premise. And it's not just those three customers clearly. I mean, what we're seeing is, as you see on the slide, it's a unique solution in the market today. It provides the true benefits of the cloud, but with your own on-premise experience, it provides expertise in terms of services to help you take best advantage of it. And if you look at the adoption by customers, over a thousand customers in 50 countries have now deployed GreenLake based solutions as the foundation on which they're building their next generation IT architecture. So, there's a lot of unique capabilities that as we built GreenLake, that we have that really make this a single pane of glass and a very, very unified and elegant experience. So as we kind of wrap up, there's three things I want to call your attention to, one, GreenLake, which we focused a lot on today. I'd also like to call your attention to the point next services, which are an extension of those GreenLake services that I talked about earlier but there's a much broader portfolio of what Pointnext can do in delivering value for your organization. And then again, HPE financial services who much like what we do with GreenLake in this as a service consumption environment can provide a lot of financial flexibility in other models and other use cases. So, I'd encourage you to take time to learn about each of those three areas. And then there's obviously many many resources available online. And again, there's some that are listed here but it kind of as a single point takeaway from this slide, I encourage you to go to hpe.com. If you're interested in GreenLake, click on our GreenLake icon and you can take yourself through that quoting experience for what would be interesting and certainly as well for our compute solutions, there's a tremendous amount of information about the leading solutions that HPE brings to market. So with that, I hope that's been an informative set of experience. I'm thanking you for spending a little bit of time with us today and hopefully you'll take some time to learn more about GreenLake and how it might be a benefit for you within your organization. Thanks again.

Published Date : Apr 9 2021


Benoit & Christian Live


 

>> Okay, we're now going into the technical deep dive. We're going to geek out here a little bit. Benoit Dageville is here; he's co-founder of Snowflake and president of products. And also joining us is Christian Kleinerman, who's the senior vice president of products. Gentlemen, welcome. Good to see you.

>> Good to see you.

>> Great to see you, Dave. Thanks for having us.

>> Very welcome. So Benoit, we've heard a lot this morning about the Data Cloud, and it's becoming, in my view anyway, the linchpin of your strategy. I'm interested in what technical decisions you made early on that led you to this point and even enabled the Data Cloud.

>> Yes. So I would say that the Data Cloud was built in three phases, really. The initial phase, as you call it, was really about building one region of the Data Cloud, and what was important was to make that region infinitely scalable. That's our architecture, which we call the multi-cluster shared data architecture, so that you can plug in as many workloads in that region as you want, without any limits. The limit is really the underlying resources the cloud provider can supply; the region itself has no limits. So that region architecture, I think, was really the building block of the Snowflake Data Cloud. But it didn't stop there. The second phase was really data sharing: how the community interacts within the region, how to share data between tenants of that region, between different customers. That was also enabled by the architecture, because we decouple compute and storage, so compute clusters can access any storage within the region. That's phase two of the Data Cloud. And then phase three, which is critical, is the expansion, the global expansion: how we made our cloud-agnostic layer so that we could run the Snowflake vision on different clouds. Now we are running on top of three cloud providers. We started with AWS in US West, we moved to Azure, and then Google GCP. And we started with one cloud region, as I said, in AWS US West, and then we created many different regions. We have 22 regions today, all over the world and across the different cloud providers. What's more important is that these regions are not isolated. Snowflake is one single system for the world, where we created this global data mesh which connects every region, such that not only can the Snowflake system as a whole be aware of all these regions, but customers can replicate data across regions and share data across the planet if need be. So this is one single system; I call it the World Wide Web of data. That's the vision of the Data Cloud, and it really started with this building block, which is a cloud region.

>> Thank you for that, Benoit. Christian, you and I have talked about this, that notion of stripping away the complexity, and that's kind of what the Data Cloud does. But if you think about data architectures, historically they really had no domain knowledge.
They've really been focused on the technology to ingest, analyze, and prepare, and then push data out to the business. You're really flipping that model, allowing the domain leaders to be first-class citizens, if you will, because they're the ones creating data value, and they're worrying less about infrastructure. But I wonder, do you feel like customers are ready for that change?

>> I love the observation. So much energy goes into enterprises and organizations today just dealing with infrastructure, dealing with pipes and plumbing and things like that, and something that was insightful from Benoit and our founders from day one was: this is a managed service. We want our customers to focus on the data, getting the insights, getting the decisions in time, not just managing pipes and plumbing and patches and upgrades. The other piece, and it's an interesting reality, is that there is this belief that the cloud is simplifying all of this and all of a sudden there's no problem, but actually understanding each of the public cloud providers is a large undertaking, right? Each of them has 100-plus services, sending upgrades and updates on a constant basis, and that just distracts from the time it takes to go and say: here's my data, here's my data model, here's how I make better decisions. So at the heart of everything we do, we want to abstract the infrastructure so customers don't have to deal with the nuance of each of the cloud providers, and, as you said, have companies focus on the domain expertise, the knowledge of their industry. Are all companies ready for it? I think it's a mixed bag. We talk to customers on a regular basis, every week, every day, and some of them are full on; they've sort of burned the bridges: "I'm going to the cloud, I'm going to embrace a new model." With some others, you can see the complete shock-and-awe expressions: "What do you mean I don't have all these knobs to turn?" But I think the future is very clear on how we get companies to be more competitive through data.

>> Well, Benoit, it's interesting that Christian mentioned managed service; that term used to mean hosting, guys running around in lab coats and plugging things in. And of course you're looking at this differently; it's high degrees of automation. But one of those areas is workload management, and I wonder how you think about workload management and how that changes with the Data Cloud.

>> Yeah, this is a great question. Workload management used to be a nightmare on traditional systems. It was a nightmare for the DBAs, and they had to spend a lot of their time just managing workloads. Why is that? Because all these workloads are running on a single system, a single cluster, and they compete for resources. So I always explain managing workloads as playing Tetris, right? You have to know when to run each workload and make sure that two big workloads are not overlapping. Maybe ETL gets pushed to a night window, which of course is not efficient for your ETL, because you have delays because of that. But you have no choice, right? You have a fixed amount of resources, and you have to get the best out of those fixed resources.
And for sure you don't want your ETL to impact your dashboarding workload or your reports, or to impact your data science. This became a true nightmare, because everyone wants to be data-driven, meaning the entire company wants to run new workloads on this system, and these systems are completely overwhelmed. So workload management was a nightmare before Snowflake, and Snowflake made it really easy. The reason is that Snowflake leverages the cloud to dedicate compute resources to each workload. In Snowflake terminology it's called a virtual warehouse: each workload can run in its own virtual warehouse, and each virtual warehouse has its own dedicated compute resources and its own I/O bandwidth. You can really control how much resource each workload gets by sizing these warehouses, adjusting the compute resources they can use. When a workload starts to execute, the warehouse resumes automatically, and when it is idle the compute resources are turned off; Snowflake handles suspending and resuming the warehouse. You can also dynamically resize a warehouse: it can be done by the system automatically if the concurrency of the workload increases, or it can be done manually by the administrator, just adjusting the compute power for each workload. The best part of that model is not only that it gives you very fine-grained control over the resources each workload gets, with workloads not competing with or impacting any other workload, but also that you can add as many workloads as you want. That's really critical because, as I said, everyone in the organization wants to use data to make decisions, so you have more and more workloads running, and that Tetris game would have been impossible on a centralized, single-cluster system. The flip side is that you have to have an administrator of the system, and you have to justify that a workload is worth running for your organization. It's so easy: literally in seconds you can stand up a new warehouse and start to run your queries on that new compute cluster. But of course you have to justify the cost of that, because there is a cost; Snowflake charges by the second of compute. It's so easy now to run new workloads and do new things with Snowflake that you have to look at the trade-off and, of course, manage costs.
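A minimal sketch of what that per-workload isolation looks like from the user's side, assuming the snowflake-connector-python package and placeholder credentials: a dedicated virtual warehouse is created with auto-suspend and auto-resume, and one workload is pointed at it.

```python
import snowflake.connector

# Hypothetical credentials and object names, for illustration only.
conn = snowflake.connector.connect(
    account="my_org-my_account",
    user="etl_user",
    password="***",
)
cur = conn.cursor()

# One dedicated virtual warehouse per workload: it resumes when queries
# arrive and suspends after 60 idle seconds, so it only bills while working.
cur.execute("""
    CREATE WAREHOUSE IF NOT EXISTS etl_wh
      WITH WAREHOUSE_SIZE = 'MEDIUM'
           AUTO_SUSPEND = 60
           AUTO_RESUME = TRUE
           INITIALLY_SUSPENDED = TRUE
""")

# Route this session's workload to its own warehouse.
cur.execute("USE WAREHOUSE etl_wh")
cur.execute("SELECT COUNT(*) FROM my_db.public.orders")  # placeholder query
print(cur.fetchone())

cur.close()
conn.close()
```

Because a dashboarding team would point its sessions at a different warehouse, the two workloads never compete for the same compute, which is the Tetris problem Benoit describes going away.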
where there is muscle memory around processes or ways of doing things that don't actually make as much sense anymore. My favorite example: you ask any organization, do you run pipelines and ingestion and transformation at two and three in the morning? And the answer is, oh yeah, we do that. And if you go in and ask why, the answer is typically, well, that's when the resources are available. Back to Benoit's Tetris, right? That's when it was possible. But then you ask, would you really want to run it at two and three in the morning if you could do it sooner, more in real time, when the event happened? So the first part of it is back to removing the constraints of the infrastructure: how about running transformations and ingestion when the business needs it most, when it gives the lowest time to insight, the lowest latency? Now the technology lets you do it. So that's the easy one out of the door. The second one is, instead of just fully optimizing a process, where can you remove steps of the process? This is where all of our data sharing and the Snowflake Data Marketplace come into play. Say you need to get data from a SaaS application vendor, or maybe from a commercial data provider, and imagine the dream: you wouldn't have to be running constant integrations and FTPs and cracking CSV files and things like that. What if it's always available in your environment, always up to date? And that, in our mind, is a lot more revolutionary, which is not "let's take a process of ingesting and copying data and optimize it," but "how about not copying in the first place?" So that's back to number two. And then back to number three is what we do day in and day out to make sure our platform delivers the best performance: make it faster. The combination of those three things has led many of our customers, and you'll see it through many of the customer testimonials today, to get insights and decisions and actions way faster, in part by removing steps, in part by doing away with old habits, and in part because we deliver exceptional performance. >> Thank you, Christian. Now, Benoit, you know we're big proponents of this idea of domain-driven design and data architecture, for example customers building entire applications, what I like to call data products or data services, on their data platform. I wonder if you could talk about the types of applications and services that you're seeing built on top of Snowflake. >> Yeah, and I have to say that this is a critical aspect of Snowflake, to create this platform and really help applications to be built on top of it. The more applications we have, the better the platform will be. It's like the analogy with your iPhone: if your iPhone had no applications, it would be useless, an empty platform. So we are really encouraging applications to be built on top of Snowflake, and in fact many applications, many of our customers, are building applications on Snowflake; we estimate that about 30 percent are already running applications on top of our platform. And the reason is of course that it's so easy to get compute resources, and there is no limit in scalability and availability.
So all these characteristics are critical for an application, and we deliver that from day one. Now we have increased the scope of the platform by adding Java, and Snowpark, which was announced today; that is also an enabler. In terms of the types of applications, it's really all over the place, and what I like, actually, is to be surprised, right? I don't know what will be built on top of Snowflake and how it will evolve. But with data sharing we are also opening the door to a new type of application which is delivered via the marketplace, where you can get these applications directly inside the platform; the platform is distributing these applications. And today there was a presentation in Christian's keynote about one of these applications, which provides machine learning to any user of Snowflake, to find and apply models on your data and enrich your data. So data enrichment, I think, will be a huge aspect of Snowflake, and data enrichment with machine learning will be a big use case for these applications. Also how to get data inside the platform; a lot of applications help you do that. So machine learning, data engineering, enrichment: these are applications that will run on the platform. >> Great. Hey, we've just got a minute or so left. Earlier today we ran a video; we saw that you guys announced the startup competition, which is awesome. Benoit, you're a judge in this competition. What can you tell us about it? >> Yeah, you know, for me we are still a startup; I haven't yet realized that we're not a startup anymore. I really feel for new startups, and that's very important for Snowflake: we were a startup yesterday, and we want to have new startups. So that's the idea of this program. The other aspect of the program is also to help startups build on top of Snowflake and to enrich this rich ecosystem that Snowflake is, or that the data cloud is. We want to add to and boost that excitement for the platform. So it's a win-win: it's a win for new startups, and it's a win, of course, for us, because it will make the platform even better. >> Yeah, and startups are where innovation happens. So registration is open; I've heard several startups have already signed up. You can go to snowflake.com slash startup challenge to learn more. It's an exciting program and initiative, so thank you for doing that on behalf of the startups out there. And thanks, Benoit and Christian, I really appreciate you guys coming on, great conversation. >> Thanks, Dave. >> You're welcome. And when we talk to go-to-market pros, they always tell us that one of the key tenets is to stay close to the customer. Well, we want to find out how data helps us to do that. Our next segment brings in two chief revenue officers to give us their perspective on how data is helping their customers transform business, digitally. Let's watch.
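An aside on Christian's point above about removing copy steps rather than optimizing them: consuming shared or marketplace data in place can be sketched in a few lines. This is a minimal, hypothetical illustration; the account, warehouse, share, and table names are placeholders, it assumes the snowflake-connector-python package, and it assumes a provider has already granted the share to this account.

```python
# Sketch only: query a provider's share in place instead of copying files around.
# Account, warehouse, share, and table names are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    user="ANALYST",          # placeholder credentials
    password="***",
    account="my_account",    # hypothetical account identifier
)
cur = conn.cursor()

# Mount the share granted by a data provider (a SaaS vendor or a commercial
# provider on the marketplace) as a read-only database in this account.
cur.execute("CREATE DATABASE vendor_data FROM SHARE provider_account.sales_share")

# A running warehouse is needed to query it; this assumes one already exists.
cur.execute("USE WAREHOUSE analytics_wh")

# The shared tables are queryable immediately and stay up to date on the
# provider's side; there is no ingestion pipeline and no local copy to refresh.
cur.execute("SELECT COUNT(*) FROM vendor_data.public.orders")
print(cur.fetchone())

conn.close()
```

The design point is simply that the consumer queries the provider's tables directly, so there is no nightly transfer job to maintain.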
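On the Snowpark point in Benoit's answer: Snowpark was announced for Java and Scala at the time of this conversation, so the sketch below, written in the later Python flavor, is only an illustration of the general pattern of application code running on top of the platform. The connection parameters and the EVENTS table are assumptions, not anything referenced in the interview.

```python
# Sketch of application-style code whose dataframe operations are compiled to
# SQL and executed inside Snowflake's compute rather than in the client process.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col

connection_parameters = {      # placeholder values, not real credentials
    "account": "my_account",
    "user": "APP_USER",
    "password": "***",
    "warehouse": "APP_WH",
    "database": "APP_DB",
    "schema": "PUBLIC",
}

session = Session.builder.configs(connection_parameters).create()

# EVENTS is a hypothetical table; filter/group_by/count run server-side.
events = session.table("EVENTS")
daily_purchases = (
    events.filter(col("EVENT_TYPE") == "purchase")
          .group_by(col("EVENT_DATE"))
          .count()
)
daily_purchases.show()

session.close()
```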

Published Date : Nov 20 2020


Democratizing AI and Advanced Analytics with Dataiku x Snowflake


 

>> My name is Dave Vellante, and with me are two world-class technologists, visionaries and entrepreneurs. Benoit Dageville co-founded Snowflake, and he's now the president of the Product division. And Florian Douetteau is the co-founder and CEO of Dataiku. Gentlemen, welcome to theCUBE, two first-timers, love it. >> Great to be here. >> Now Florian, you and Benoit have a number of customers in common, and I have said many times on theCUBE that the first era of cloud was really about infrastructure, making it more agile, taking out costs. And the next generation of innovation is really coming from the application of machine intelligence to data, with the cloud as the scale platform. So is that premise relevant to you, do you buy that? And why do you think Snowflake and Dataiku make a good match for customers? >> I think it's because our values are aligned. It's all about dealing with complexity for customers, closing the gap, democratizing the access to data and the access to technology. It's not only about data; data is important, but it's also about the impact of data: how you can make the best out of data as fast as possible, as easily as possible, within an organization. And another value is about the openness of the platform, building the future together, a platform that is not just about the platform itself but also the full ecosystem of partners around it, bringing the level of accessibility and flexibility you need for the ten years ahead. >> Yeah, so that's key, but it's not just data, it's turning data into insights. Now Benoit, you came out of the world of very powerful but highly complex databases, and we all know that you and the Snowflake team get very high marks for really radically simplifying customers' lives. But can you talk specifically about the types of challenges that your customers are using Snowflake to solve? >> Yeah, so the challenge before Snowflake, I would say, was really to put all the data in one place and run all the computes, all the workloads that you wanted to run against that data. And of course, existing legacy platforms were not able to support that level of concurrency, many workloads; we talk about machine learning, data science, data engineering, data warehouse, big data workloads, all running in one place. It didn't make sense at all. And therefore, what customers did is to create silos, silos of data everywhere, with different systems having a subset of the data. And of course, then you cannot analyze this data in one place. So Snowflake really solved that problem by creating a single architecture where you can put all the data in the cloud. It's really cloud native; we really thought about how to solve that problem, how to leverage cloud and the elasticity of cloud to really put all the data in one place, but at the same time not run all the workloads in the same place. Each workload that runs in Snowflake has its own dedicated compute resources to run, and that makes it very agile, right? Florian talked about data scientists having to run analysis, so they need a lot of compute resources, but only for a few hours. And with Snowflake, they can add this workload to the system and get the compute resources that they need to run this workload.
And when it's over, they can shut down their system; it will be automatically shut down, and therefore they will not pay for the resources that they don't use. So it's a very agile system, where you can do this analysis when you need to, and you have all the power to run all these workloads at the same time. >> Well, it's profound what you guys built. I mean, of course, everybody's trying to copy it now. It's like, I remember the notion of bringing compute to the data in the Hadoop days, and I think that, as I say, everybody is sort of following your suit now, or trying to. Florian, I've got to say, the first data scientist I ever interviewed on theCUBE was the amazing Hilary Mason, right after she started at Bitly, and she made data science sound so compelling. But data science is hard. So same question for you: what do you see as the biggest challenges for customers that they're facing with data science? >> The biggest challenge, from my perspective, is that once you solve the issue of the data silo with Snowflake, you don't want to bring another silo, which would be a silo of skills. Essentially, there is the talent gap between the talent available on the market, or how hard it is to actually find, recruit and train data scientists, and what needs to be done. And so you need to simplify the access to technologies such that every organization can make it, whatever the talent, by bridging that gap. And to get there, there is a need to actually break up the silos, with a collaborative approach where technologists and business work together and actually put their hands into those data projects together. >> It makes sense. Florian, let's stay with you for a minute, if I can. Your observation space is pretty global, and so you have a unique perspective on how companies around the world might be using data and data science. Are you seeing any trends, maybe differences between regions, or maybe within different industries? What are you seeing? >> Yes, definitely, I do see trends that are not geographic that much, but much more in terms of maturity of certain industries and certain sectors. Certain industries invested a lot in terms of data, data access and the ability to store data in the last few years, and are now at a level of maturity where they can invest more and get to the next steps. And it really relies on the ability of certain leaders and certain organizations to have built this long-term data strategy a few years ago; they are now starting to reap the benefits. >> You know, a decade ago, Florian, Hal Varian famously said that the sexy job in the next ten years would be statisticians. And then everybody sort of changed that to data scientists, and then all the statisticians became data scientists, and they got a raise. But data science requires more than just statistics acumen. What skills do you see as critical for the next generation of data science? >> Yeah, it's a good question, because I think the first generation of data scientists became data scientists because they could do some Python quickly and be flexible. And I think that the skills of the next generation of data scientists will definitely be different. It will be, first, about being able to speak the language of the business, meaning how you translate data insights and predictive modeling, all of this, into actionable insights or business impact. And it will be about how you collaborate with the rest of the business.
It's not just how fast you can build something, how fast you can do a notebook in Python or build predictive models of some sort. It's about how you actually build this bridge with the business. And obviously those things are important, but we also must be cognizant of the fact that technology will evolve in the future. There will be new tools and technologies, and they will still need to keep this level of flexibility and understand quickly what are the next tools they need to use, the new languages or whatever, to get there. >> As you look back on 2020, what are you thinking? What are you telling people as we head into next year? >> Yeah, I think it's very interesting, right? This crisis has told us that the world really can change from one day to the next, and this has dramatic and profound aspects. For example, companies all of a sudden saw their revenue line dropping, and they had to do less with data. And for some of the companies it was the reverse, right? All of a sudden they were online, like Instacart, for example, and their business completely changed from one day to the other. So this agility of adjusting the resources that you have to the task at hand, a need that can change, using a solution like Snowflake really helps with that. And we saw both in our customers: some customers from one day to the next were growing big time, because they benefited from COVID and their business benefited, but others had to drop. And what is nice with cloud is that it allows you to adjust compute resources to your business needs, and really adjust them in hours. The other aspect is understanding what is happening, right? You need to analyze. We saw that all our customers basically wanted to understand: what is going to be the impact on my business? How can I adapt? How can I adjust? And for that, they needed to analyze data, and of course a lot of data which is not necessarily data about their business, but also data from the outside. For example, COVID data: where it is, what is the impact, the geographic impact of COVID over time. And access to this data is critical. So this is the promise of the data cloud, right? Having one single place where you can put all the data of the world. So our customers all of a sudden started to consume the COVID data from our data marketplace, and we had literally thousands of customers looking at this data, analyzing this data, to make good decisions. So this agility, this adapting from one hour to the next, is really critical, and that goes with data, with cloud, with adjusting resources, and that doesn't exist on premise. So indeed, I think the lesson learned is that we are living in a world which is changing all the time, and we have to understand it, we have to adjust, and that's why cloud, in some ways, is great. >> Excellent, thank you. You know, on theCUBE we like to talk about disruption, of course, who doesn't? And also, I mean, you look at AI and the impact that it's beginning to have, even pre-COVID.
You look at some of the industries that were getting disrupted; we talked about digital transformation, and you had on one end of the spectrum industries like publishing, which are highly disrupted, or taxis, and you could say, okay, well, that's bits versus atoms, the old Negroponte thing. But then on the flip side of that, look at financial services, which hadn't been dramatically disrupted, certainly healthcare, which is ripe for disruption, defense. So there are a number of industries that really hadn't leaned into digital transformation: if it ain't broke, don't fix it, not on my watch. There was this complacency, and then, of course, COVID broke everything. So, Florian, I wonder if you could comment: what industry or industries do you think are going to be most impacted by data science and what I call machine intelligence, or AI, in the coming years and decades? >> Honestly, I think it's all of them, or at least most of them, because for some industries the impact is very visible, because we are talking about brand new products, drones, flying cars, or whatever, that are very visible to us. But for others, we are talking about profound changes in the way you operate as an organization. Even if the financial industry itself doesn't seem to be so impacted when you look at it from the consumer side or the outside, internally it's probably impacted, just because of the way you use data, the flexibility you need, and the kind of cost gains you can get by leveraging the latest technologies, which are just enormous. And so it will actually transform that industry also. And overall, I think that 2020 is a year where, from the perspective of AI and analytics, we understood this idea of maturity and resilience. Maturity, meaning that when you've got a crisis, you actually need data and AI more than before; you need to call the people from data into the room to take better decisions and look forward, not backward. And I think that's a very important learning from 2020 that will tell things about 2021. And the resilience: data analytics today is a function consumed across every industry and is so important that it's something that needs to work. So the infrastructure needs to work, the infrastructure needs to be super resilient, so probably not on-prem, or not fully on-prem, at some point. And it's the kind of resilience where you need to be able to plan for literally anything; no hypothesis in terms of behavior can be taken for granted. And that's something that is new, and it's just signaling that we're getting to the next step for data analytics. >> I wonder, Benoit, if you have anything to add to that. I mean, I often wonder, when are machines going to be able to make better diagnoses than doctors? Some people say already. Will the financial services, the traditional banks, lose control of payment systems? What's going to happen to big retail stores? I mean, maybe bring us home with some of your final thoughts. >> Yeah, I would say, I don't see that as a negative, right? The human being will always be involved very closely, but the machine and the data can really help see correlations in the data that would be impossible for a human being alone to discover. So I think it's going to be a complement, not a replacement. And everything that has made us faster doesn't mean that we have less work to do.
It means that we can do more, and we have so much to do that I would not be worried about the effect of being more efficient and better at our work. And indeed, I fundamentally think that data, the processing of images and doing AI on these images, discovering patterns and potentially flagging disease way earlier than was possible, is going to have a huge impact in healthcare. And as Florian was saying, every industry is going to be impacted by that technology. So, yeah, I'm very optimistic. >> Great, guys, I wish we had more time. I've got to leave it there, but thanks so much for coming on theCUBE. It was really a pleasure having you.
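Benoit's description in this segment of giving each workload its own compute, which resumes on demand, can be resized, and suspends when idle so you only pay for what you use, maps onto a handful of warehouse statements. The sketch below is illustrative only and relies on the snowflake-connector-python package; the warehouse, database, and table names and the credentials are assumptions.

```python
# Sketch: give a data science job its own warehouse that resumes on demand,
# can be resized for a heavy run, and suspends itself when idle.
# Warehouse, database, and table names and the credentials are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    user="DS_USER", password="***", account="my_account"  # placeholders
)
cur = conn.cursor()

# Dedicated compute for this workload: created suspended, resumed automatically
# on first use, suspended again after 60 idle seconds.
cur.execute(
    "CREATE WAREHOUSE IF NOT EXISTS ds_wh "
    "WITH WAREHOUSE_SIZE = 'SMALL' "
    "AUTO_SUSPEND = 60 AUTO_RESUME = TRUE INITIALLY_SUSPENDED = TRUE"
)
cur.execute("USE WAREHOUSE ds_wh")

# Scale up for a temporarily heavy analysis, run it, then scale back down.
cur.execute("ALTER WAREHOUSE ds_wh SET WAREHOUSE_SIZE = 'XLARGE'")
cur.execute("SELECT COUNT(*) FROM analytics_db.public.covid_cases")  # hypothetical table
print(cur.fetchone())
cur.execute("ALTER WAREHOUSE ds_wh SET WAREHOUSE_SIZE = 'SMALL'")

# Suspend explicitly instead of waiting for AUTO_SUSPEND to kick in.
cur.execute("ALTER WAREHOUSE ds_wh SUSPEND")
conn.close()
```

Because the warehouse serves only this job, the heavy query cannot slow down dashboards or ETL running on their own warehouses, which is the isolation point made above.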

Published Date : Nov 20 2020


Democratizing AI & Advanced Analytics with Dataiku x Snowflake | Snowflake Data Cloud Summit


 

>> My name is Dave Vellante. And with me are two world-class technologists, visionaries and entrepreneurs. Benoit Dageville, he co-founded Snowflake and he's now the President of the Product Division, and Florian Douetteau is the Co-founder and CEO of Dataiku. Gentlemen, welcome to the cube to first timers, love it. >> Yup, great to be here. >> Now Florian you and Benoit, you have a number of customers in common, and I've said many times on theCUBE, that the first era of cloud was really about infrastructure, making it more agile, taking out costs. And the next generation of innovation, is really coming from the application of machine intelligence to data with the cloud, is really the scale platform. So is that premise relevant to you, do you buy that? And why do you think Snowflake, and Dataiku make a good match for customers? >> I think that because it's our values that aligned, when it gets all about actually today, and knowing complexity of our customers, so you close the gap. Where we need to commoditize the access to data, the access to technology, it's not only about data. Data is important, but it's also about the impacts of data. How can you make the best out of data as fast as possible, as easily as possible, within an organization. And another value is about just the openness of the platform, building a future together. Having a platform that is not just about the platform, but also for the ecosystem of partners around it, bringing the level of accessibility, and flexibility you need for the 10 years of that. >> Yeah, so that's key, that it's not just data. It's turning data into insights. Now Benoit, you came out of the world of very powerful, but highly complex databases. And we know we all know that you and the Snowflake team, you get very high marks for really radically simplifying customers' lives. But can you talk specifically about the types of challenges that your customers are using Snowflake to solve? >> Yeah, so the challenge before snowflake, I would say, was really to put all the data in one place, and run all the computes, all the workloads that you wanted to run against that data. And of course existing legacy platforms were not able to support that level of concurrency, many workload, we talk about machine learning, data science, data engineering, data warehouse, big data workloads, all running in one place didn't make sense at all. And therefore be what customers did this to create silos, silos of data everywhere, with different system, having a subset of the data. And of course now, you cannot analyze this data in one place. So Snowflake, we really solved that problem by creating a single architecture where you can put all the data into cloud. So it's a really cloud native. We really thought about how solve that problem, how to create, leverage cloud, and the elasticity of cloud to really put all the data in one place. But at the same time, not run all workload at the same place. So each workload that runs in Snowflake, at its dedicated compute resources to run. And that makes it agile, right? Florian talked about data scientist having to run analysis, so they need a lot of compute resources, but only for a few hours. And with Snowflake, they can run these new workload, add this workload to the system, get the compute resources that they need to run this workload. And then when it's over, they can shut down their system, it will automatically shut down. Therefore they would not pay for the resources that they don't use. 
So it's a very agile system, where you can do this analysis when you need, and you have all the power to run all these workload at the same time. >> Well, it's profound what you guys built. I mean to me, I mean of course everybody's trying to copy it now, it was like, I remember that bringing the notion of bringing compute to the data, in the Hadoop days. And I think that, as I say, everybody is sort of following your suit now or trying to. Florian, I got to say the first data scientist I ever interviewed on theCUBE, it was the amazing Hillary Mason, right after she started at Bitly, and she made data sciences sounds so compelling, but data science is a hard. So same question for you, what do you see as the biggest challenges for customers that they're facing with data science? >> The biggest challenge from my perspective, is that once you solve the issue of the data silo, with Snowflake, you don't want to bring another silo, which will be a silo of skills. And essentially, thanks to the talent gap, between the talent available to the markets, or are released to actually find recruits, train data scientists, and what needs to be done. And so you need actually to simplify the access to technologies such as, every organization can make it, whatever the talent, by bridging that gap. And to get there, there's a need of actually backing up the silos. Having a collaborative approach, where technologies and business work together, and actually all puts up their ends into those data projects together. >> It makes sense, Florain let's stay with you for a minute, if I can. Your observation space, it's pretty, pretty global. And so you have a unique perspective on how can companies around the world might be using data, and data science. Are you seeing any trends, maybe differences between regions, or maybe within different industries? What are you seeing? >> Yeah, definitely I do see trends that are not geographic, that much, but much more in terms of maturity of certain industries and certain sectors. Which are, that certain industries invested a lot, in terms of data, data access, ability to store data. As well as experience, and know region level of maturity, where they can invest more, and get to the next steps. And it's really relying on the ability of certain leaders, certain organizations, actually, to have built these long-term data strategy, a few years ago when no stats reaping of the benefits. >> A decade ago, Florian, Hal Varian famously said that the sexy job in the next 10 years will be statisticians. And then everybody sort of changed that to data scientist. And then everybody, all the statisticians became data scientists, and they got a raise. But data science requires more than just statistics acumen. What skills do you see as critical for the next generation of data science? >> Yeah, it's a great question because I think the first generation of data scientists, became data scientists because they could have done some Python quickly, and be flexible. And I think that the skills of the next generation of data scientists will definitely be different. It will be, first of all, being able to speak the language of the business, meaning how you translates data insight, predictive modeling, all of this into actionable insights of business impact. And it would be about how you collaborate with the rest of the business. It's not just how fast you can build something, how fast you can do a notebook in Python, or do predictive models of some sorts. 
It's about how you actually build this bridge with the business, and obviously those things are important, but we also must be cognizant of the fact that technology will evolve in the future. There will be new tools, new technologies, and they will still need to keep this level of flexibility to understand quickly what are the next tools they need to use a new languages, or whatever to get there. >> As you look back on 2020, what are you thinking? What are you telling people as we head into next year? >> Yeah, I think it's very interesting, right? This crises has told us that the world really can change from one day to the next. And this has dramatic and perform the aspects. For example companies all of a sudden, show their revenue line dropping, and they had to do less with data. And some other companies was the reverse, right? All of a sudden, they were online like Instacart, for example, and their business completely changed from one day to the other. So this agility of adjusting the resources that you have to do the task, and need that can change, using solution like Snowflake really helps that. Then we saw both in our customers. Some customers from one day to the next, were growing like big time, because they benefited from COVID, and their business benefited. But others had to drop. And what is nice with cloud, it allows you to adjust compute resources to your business needs, and really address it in house. The other aspect is understanding what happening, right? You need to analyze. We saw all our customers basically, wanted to understand what is the going to be the impact on my business? How can I adapt? How can I adjust? And for that, they needed to analyze data. And of course, a lot of data which are not necessarily data about their business, but also they are from the outside. For example, COVID data, where is the States, what is the impact, geographic impact on COVID, the time. And access to this data is critical. So this is the premise of the data cloud, right? Having one single place, where you can put all the data of the world. So our customer obviously then, started to consume the COVID data from that our data marketplace. And we had delete already thousand customers looking at this data, analyzing these data, and to make good decisions. So this agility and this, adapting from one hour to the next is really critical. And that goes with data, with cloud, with interesting resources, and that doesn't exist on premise. So indeed I think the lesson learned is we are living in a world, which is changing all the time, and we have to understand it. We have to adjust, and that's why cloud some ways is great. >> Excellent thank you. In theCUBE we like to talk about disruption, of course, who doesn't? And also, I mean, you look at AI, and the impact that it's beginning to have, and kind of pre-COVID. You look at some of the industries that were getting disrupted by, everyone talks about digital transformation. And you had on the one end of the spectrum, industries like publishing, which are highly disrupted, or taxis. And you can say, okay, well that's Bits versus Adam, the old Negroponte thing. But then the flip side of, you say look at financial services that hadn't been dramatically disrupted, certainly healthcare, which is ripe for disruption, defense. So there a number of industries that really hadn't leaned into digital transformation, if it ain't broke, don't fix it. Not on my watch. There was this complacency. And then of course COVID broke everything. 
So Florian I wonder if you could comment, what industry or industries do you think are going to be most impacted by data science, and what I call machine intelligence, or AI, in the coming years and decade? >> Honestly, I think it's all of them, or at least most of them, because for some industries, the impact is very visible, because we have talking about brand new products, drones, flying cars, or whatever that are very visible for us. But for others, we are talking about a part from changes in the way you operate as an organization. Even if financial industry itself doesn't seem to be so impacted, when you look at it from the consumer side, or the outside insights in Germany, it's probably impacted just because the way you use data (mumbles) for flexibility you need. Is there kind of the cost gain you can get by leveraging the latest technologies, is just the numbers. And so it's will actually comes from the industry that also. And overall, I think that 2020, is a year where, from the perspective of AI and analytics, we understood this idea of maturity and resilience, maturity meaning that when you've got to crisis you actually need data and AI more than before, you need to actually call the people from data in the room to take better decisions, and look for one and a backlog. And I think that's a very important learning from 2020, that will tell things about 2021. And the resilience, it's like, data analytics today is a function transforming every industries, and is so important that it's something that needs to work. So the infrastructure needs to work, the infrastructure needs to be super resilient, so probably not on prem or not fully on prem, at some point. And the kind of resilience where you need to be able to blend for literally anything, like no hypothesis in terms of BLOs, can be taken for granted. And that's something that is new, and which is just signaling that we are just getting to a next step for data analytics. >> I wonder Benoir if you have anything to add to that. I mean, I often wonder, when are machines going to be able to make better diagnoses than doctors, some people say already. Will the financial services, traditional banks lose control of payment systems? What's going to happen to big retail stores? I mean, maybe bring us home with maybe some of your finals thoughts. >> Yeah, I would say I don't see that as a negative, right? The human being will always be involved very closely, but then the machine, and the data can really help, see correlation in the data that would be impossible for human being alone to discover. So I think it's going to be a compliment not a replacement. And everything that has made us faster, doesn't mean that we have less work to do. It means that we can do more. And we have so much to do, that I will not be worried about the effect of being more efficient, and bare at our work. And indeed, I fundamentally think that data, processing of images, and doing AI on these images, and discovering patterns, and potentially flagging disease way earlier than it was possible. It is going to have a huge impact in health care. And as Florian was saying, every industry is going to be impacted by that technology. So, yeah, I'm very optimistic. >> Great, guys, I wish we had more time. I've got to leave it there, but so thanks so much for coming on theCUBE. It was really a pleasure having you.
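One way to picture Florian's point about data scientists working directly against the shared platform rather than around it is a short workflow that pulls a feature set out of Snowflake into pandas and fits a quick baseline model. This is a hedged sketch: it assumes snowflake-connector-python installed with its pandas extra plus scikit-learn, and the table and column names are invented for illustration.

```python
# Sketch: pull a (hypothetical) feature table from the shared platform into
# pandas and fit a quick baseline model; table and column names are invented.
import snowflake.connector
from sklearn.linear_model import LogisticRegression

conn = snowflake.connector.connect(
    user="DS_USER", password="***", account="my_account",   # placeholders
    warehouse="DS_WH", database="ANALYTICS_DB", schema="PUBLIC",
)
cur = conn.cursor()
cur.execute(
    "SELECT tenure_months, monthly_spend, churned FROM customer_features"
)
df = cur.fetch_pandas_all()   # needs the connector's pandas extra installed
conn.close()

# Unquoted Snowflake identifiers come back upper-cased.
X = df[["TENURE_MONTHS", "MONTHLY_SPEND"]]
y = df["CHURNED"]
model = LogisticRegression().fit(X, y)
print("training accuracy:", model.score(X, y))
```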

Published Date : Nov 9 2020


Benoit Dageville and Florian Douetteau V1


 

>> Hello everyone, welcome back to theCUBE'S wall to wall coverage of the Snowflake Data Cloud Summit. My name is Dave Vellante and with me are two world-class technologists, visionaries, and entrepreneurs. Benoit Dageville is the, he co-founded Snowflake. And he's now the president of the Product division and Florian Douetteau is the co-founder and CEO of Dataiku. Gentlemen, welcome to theCUBE, two first timers, love it. >> Great time to be here. >> Now Florian, you and Benoit, you have a number of customers in common. And I've said many times on theCUBE that, the first era of cloud was really about infrastructure, making it more agile taking out costs. And the next generation of innovation is really coming from the application of machine intelligence to data with the cloud, is really the scale platform. So is that premise relevant to you, do you buy that? And why do you think Snowflake and Dataiku make a good match for customers? >> I think that because it's our values that align. When it gets all about actually today, and knowing complexity per customer, so you close the gap or we need to commoditize the access to data, the access to technology, it's not only about data, data is important, but it's also about the impacts of data. How can you make the best out of data as fast as possible, as easily as possible within an organization? And another value is about just the openness of the platform, building a future together. I think a platform that is not just about the platform but also for the ecosystem of partners around it, bringing the little bit of accessibility and flexibility, you need for the 10 years of that. >> Yes, so that's key, but it's not just data. It's turning data into insights. Now Benoit, you came out of the world of very powerful, but highly complex databases. And we all know that, you and the Snowflake team, you get very high marks for really radically simplifying customers' lives. But can you talk specifically about the types of challenges that your customers are using Snowflake to solve? >> Yeah, so really the challenge before Snowflake, I would say, was really to put all the data, in one place and run all the computes, all the workloads that you wanted to run, against that data. And of course, existing legacy platforms were not able to support that level of concurrency, many workload. We talk about machine learning, data science, data engineering, data warehouse, big data workloads, all running in one place, didn't make sense at all. And therefore, what customers did, is to create silos, silos of data everywhere, with different systems having a subset of the data. And of course now you cannot analyze this data in one place. So Snowflake, we really solved that problem by creating a single architecture where you can put all the data in the cloud. So it's a really cloud native. We really thought about how to solve that problem, how to create leverage cloud and the elasticity of cloud to really put all the data in one place. But at the same time, not run all workload at the same place. So each workload that runs in Snowflake at least dedicate compute resources to run. And that makes it very agile, right. Florian talked about data scientist having to run analysis. So they need a lot of compute resources, but only for few hours and with Snowflake, they can run these new workload, add this workload to the system, get the compute resources that they need to run this workload. And then when it's over, they can shut down their system. It will automatically shut down. 
Therefore they would not pay for the resources that they don't choose. So it's a very agile system, where you can do these analysis when you need, and you have all the power to run all these workload at the same time. >> Well, it's profound what you guys built. To me, I mean, because everybody's trying to copy it now. It's like, I remember the notion of bringing compute to the data in the Hadoop days. And I think that, as I say, everybody is sort of following your suit now or trying to. Florian, I got to say, the first data scientist I ever interviewed on theCUBE was the amazing Hilary Mason, right after she started at Bitly. And she made data science sounds so compelling, but data science is hard. So same question for you. What do you see is the biggest challenges for customers that they're facing with data science? >> The biggest challenge from my perspective is that once you solve the issue of the data silo with Snowflake, you don't want to bring another silo, which would be a silo of skills. And essentially, thanks to that talent gap between the talent and labor of the markets, or how it is to actually find, recruit and train data scientists and what needs to be done. And so you need actually to simplify the access to technology such as every organization can make it, whatever the talents by bridging that gap. And to get there, there is a need of actually breaking up the silos. I think a collaborative approach, where technologies and business work together and actually all put some of their ends into those data projects together. >> Yeah, it makes sense. So Florian, Let's stay with you for a minute, if I can. Your observation spaces, is pretty, pretty global. And so, you have a unique perspective on how companies around the world might be using data and data science. Are you seeing any trends, maybe differences between regions or maybe within different industries? What are you seeing? >> Yep. Yeah, definitely, I do see trends that are not geographic that much, but much more in terms of maturity of certain industries and certain sectors, which are that certain industries invested a lot in terms of data, data access, ability to store data as well as few years and know each level of maturity where they can invest more and get to the next steps. And it's really reliant to reach out to certain details, certain organization, actually to have built this longterm data strategy a few years ago, and no stocks ripping off the benefits. >> You know, a decade ago, Florian, Hal Varian famously said that the sexy job in the next 10 years will be statisticians. And then everybody sort of changed that to data scientists. And then everybody, all the statisticians became data scientists and they got a raise. But data science requires more than just statistics acumen. What skills do you see is critical for the next generation of data science? >> Yeah, it's a good question because I think the first generation of data scientists became better scientists because they could learn some Python quickly and be flexible. And I think that skills of the next generation of data scientists will definitely be different. It will be first about being able to speak the language of the business, meaning all you translate data insight, predictive modeling, all of this into actionable insights or business impact. And it will be about who you collaborate with the rest of the business. It's not just how fast you can build something, how fast you can do a notebook in Python or do quantity models of some sorts. 
It's about how you actually build this bridge with the business. And obviously those things are important, but we also must be cognizant of the fact that technology will evolve in the future. There will be new tools in technologies, and they will still need to get this level of flexibility and get to understand quickly what are the next tools, they need to use or new languages or whatever to get there. >> Thank you for that. Benoit, let's come back to you. This year has been tumultuous to say the least for everyone, but it's a good time to be in tech, ironically. And if you're in cloud, it's even better. But you look at Snowflake and Dataiku, you guys had done well, despite the economic uncertainty and the challenges of the pandemic. As you look back on 2020, what are you thinking? What are you telling people as we head into next year? >> Yeah, I think it's very interesting, right. We, this crisis has told us that the world really can change from one day to the next. And this has dramatic and profound aspects. For example, companies all of a sudden, saw their revenue line dropping and they had to do less with data. And some of the companies was the reverse, right? All of a sudden, they were online like Instacart, for example, and their business completely change from one day to the other. So this agility of adjusting the resources that you have to do the task, a need that can change, using solution like Snowflake, really helps that. And we saw both in our customers. Some customers from one day to the next, were growing like big time, because they benefited from COVID and their business benefited, but also, as you know, had to drop and what is nice with cloud, it allows to adjust compute resources to your business needs and really address it in-house. The other aspect is understanding what is happening, right? You need to analyze. So we saw all our customers basically wanted to understand, what is it going to be the impact on my business? How can I adapt? How can I adjust? And for that, they needed to analyze data. And of course, a lot of data, which are not necessarily data about their business, but also data from the outside. For example, COVID data. Where is the state, what is the impact, geographic impact on COVID all the time. And access to this data is critical. So this is the promise of the data cloud, right? Having one single place where you can put all the data of the world. So, our customers all of a sudden, started to consume the COVID data from our data marketplace. And we have the unit already thousands of customers looking at this data, analyzing this data to make good decisions. So this agility and this adapting from one hour to the next is really critical and that goes with data, with cloud, more interesting resources and that's doesn't exist on premise. So, indeed I think the lesson learned is, we are living in a world which is changing all the time, and we have to understand it. We have to adjust and that's why cloud, some way is great. >> Excellent, thank you. You know, in theCUBE, we like to talk about disruption, of course, who doesn't. And also, I mean, you look at AI and the impact that it's beginning to have and kind of pre-COVID, you look at some of the industries that were getting disrupted by, everybody talks about digital transformation and you had on the one end of the spectrum, industries like publishing, which are highly disrupted or taxis, and you can say, "Okay well, that's Bits versus Adam, the old Negroponte thing." 
But then the flip side of this, it says, "Look at financial services that hadn't been dramatically disrupted, certainly healthcare, which is right for disruption, defense." So the more the number of industries that really hadn't leaned into digital transformation, if it ain't broke, don't fix it. Not on my watch. There was this complacency. And then of course COVID broke everything. So Florian, I wonder if you could comment, what industry or industries do you think are going to be most impacted by data science and what I call machine intelligence or AI in the coming years and decades? >> Honestly, I think it's all of them, or at least most of them. Because for some industries, the impact is very visible because we are talking about brand new products, drones, flying cars, or whatever is that are very visible for us. But for others, we are talking about spectrum changes in the way you operate as an organization. Even if financial industry itself doesn't seem to be so impacted when you look at it from the consumer side or the outside. In fact internally, it's probably impacted just because of the way you use data to develop for flexibility you need, is there kind of a cost gain you can get by leveraging the latest technologies, is just enormous. And so it will, actually comes from the industry, that also. And overall, I think that 2020 is a year where, from the perspective of AI and analytics, we understood this idea of maturity and resilience. Maturity, meaning that when you've got a crisis, you actually need data and AI more than before, you need to actually call the people from data in the room to take better decisions and look forward and not backward. And I think that's a very important learning from 2020 that will tell things about 2021. And resilience, it's like, yeah, data analytics today is a function consuming every industries, and is so important that it's something that needs to work. So the infrastructure needs to work, the infrastructure needs to be super resilient. So probably not on trend and not fully on trend, at some point and the kind of residence where you need to be able to plan for literally anything. like no hypothesis in terms of behaviors can be taken for granted. And that's something that is new and which is just signaling that we are just getting into a next step for all data analytics. >> I wonder Benoit, if you have anything to add to that, I mean, I often wonder, you know, when are machines going to be able to make better diagnoses than doctors, some people say already. Will the financial services, traditional banks lose control of payment systems? You know, what's going to happen to big retail stores? I mean, may be bring us home with maybe some of your final thoughts. >> Yeah, I would say, I don't see that as a negative, right? The human being will always be involved very closely, but then the machine and the data can really help, see correlation in the data that would be impossible for human being alone to discover. So, I think it's going to be a compliment, not a replacement and everything that has made us faster, doesn't mean that we have less work to do. It means that we can do more. And we have so much to do. That I would not be worried about the effect of being more efficient and better at our work. And indeed, I fundamentally think that, data, processing of images and doing AI on these images and discovering patterns and potentially flagging disease, way earlier than it was possible, it is going to have a huge impact in health care. 
And as Florian was saying, every industry is going to be impacted by that technology. So, yeah, I'm very optimistic. >> Great. Guys, I wish we had more time. We've got to leave it there, but thanks so much for coming on theCUBE. It was really a pleasure having you. >> [Benoit & Florian] Thank you. >> You're welcome, but keep it right there, everybody. We'll be back with our next guest right after this short break. You're watching theCUBE.

Published Date : Oct 21 2020

SUMMARY :

And he's now the president And the next generation of the access to data, the And we all know that, you all the workloads that you the notion of bringing the access to technology such as And so, you have a unique And it's really reliant to reach out Hal Varian famously said that the sexy job And it will be about who you collaborate and the challenges of the pandemic. adjusting the resources that you have end of the spectrum, of the way you use data to I mean, I often wonder, you know, So, I think it's going to be a compliment, We got to leave it there right after this short break.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave Vellante | PERSON | 0.99+
Florian | PERSON | 0.99+
Benoit | PERSON | 0.99+
Florian Douetteau | PERSON | 0.99+
Benoit Dageville | PERSON | 0.99+
2020 | DATE | 0.99+
10 years | QUANTITY | 0.99+
Dataiku | ORGANIZATION | 0.99+
Hilary Mason | PERSON | 0.99+
Python | TITLE | 0.99+
Hal Varian | PERSON | 0.99+
next year | DATE | 0.99+
Snowflake | ORGANIZATION | 0.99+
one place | QUANTITY | 0.99+
both | QUANTITY | 0.99+
one hour | QUANTITY | 0.99+
Bitly | ORGANIZATION | 0.99+
Snowflake Data Cloud Summit | EVENT | 0.99+
a decade ago | DATE | 0.98+
one day | QUANTITY | 0.98+
theCUBE | ORGANIZATION | 0.98+
first | QUANTITY | 0.98+
each level | QUANTITY | 0.98+
Snowflake | TITLE | 0.98+
2021 | DATE | 0.97+
today | DATE | 0.97+
first generation | QUANTITY | 0.97+
pandemic | EVENT | 0.97+
few years ago | DATE | 0.93+
thousands of customers | QUANTITY | 0.93+
single architecture | QUANTITY | 0.92+
first era | QUANTITY | 0.88+
Negroponte | PERSON | 0.87+
first data scientist | QUANTITY | 0.87+
Instacart | ORGANIZATION | 0.87+
This year | DATE | 0.86+
one single place | QUANTITY | 0.86+
two | QUANTITY | 0.83+
two world- | QUANTITY | 0.78+
each workload | QUANTITY | 0.78+
one | QUANTITY | 0.76+
Adam | PERSON | 0.74+
next 10 years | DATE | 0.69+
first timers | QUANTITY | 0.52+
COVID | OTHER | 0.51+
COVID | ORGANIZATION | 0.43+
COVID | EVENT | 0.37+
decades | DATE | 0.29+

Maurizio Davini, University of Pisa and Thierry Pellegrino, Dell Technologies | VMworld 2020


 

>> From around the globe, it's theCUBE, with digital coverage of VMworld 2020, brought to you by the VMworld and its ecosystem partners. >> I'm Stu Miniman, and welcome back to theCUBES coverage of VMworld 2020, our 11th year doing this show, of course, the global virtual event. And what do we love talking about on theCUBE? We love talking to customers. It is a user conference, of course, so really happy to welcome to the program. From the University of Pisa, the Chief Technology Officer Maurizio Davini and joining him is Thierry Pellegrini, one of our theCUBE alumni. He's the vice president of worldwide, I'm sorry, Workload Solutions and HPC with Dell Technologies. Thierry, thank you so much for joining us. >> Thanks too. >> Thanks to you. >> Alright, so let, let's start. The University of Pisa, obviously, you know, everyone knows Pisa, one of the, you know, famous city iconic out there. I know, you know, we all know things in Europe are a little bit longer when you talk about, you know, some of the venerable institutions here in the United States, yeah. It's a, you know, it's a couple of hundred years, you know, how they're using technology and everything. I have to imagine the University of Pisa has a long storied history. So just, if you could start before we dig into all the tech, give us our audience a little bit, you know, if they were looking up on Wikipedia, what's the history of the university? >> So University of Pisa is one of the oldest in the world because there has been founded in 1343 by a pope. We were authorized to do a university teaching by a pope during the latest Middle Ages. So it's really one of the, is not the oldest of course, but the one of the oldest in the world. It has a long history, but as never stopped innovating. So anything in Pisa has always been good for innovating. So either for the teaching or now for the technology applied to a remote teaching or a calculation or scientific computing, So never stop innovating, never try to leverage new technologies and new kind of approach to science and teaching. >> You know, one of your historical teachers Galileo, you know, taught at the university. So, you know, phenomenal history help us understand, you know, you're the CTO there. What does that encompass? How, you know, how many students, you know, are there certain areas of research that are done today before we kind of get into the, you know, the specific use case today? >> So consider that the University of Pisa is a campus in the sense that the university faculties are spread all over the town. Medieval like Pisa poses a lot of problems from the infrastructural point of view. So, we have bought a lot in the past to try to adapt the Medieval town to the latest technologies advancement. Now, we have 50,000 students and consider that Pisa is a general partners university. So, we cover science, like we cover letters in engineering, medicine, and so on. So, during the, the latest 20 years, the university has done a lot of effort to build an infrastructure that was able to develop and deploy the latest technologies for the students. So for example, we have a private fiber network covering all the town, 65 kilometers of a dark fiber that belongs to the university, four data centers, one big and three little center connected today at 200 gigabit ethernet. We have a big data center, big for an Italian University, of course, and not Poland and U.S. university, where is, but also hold infrastructure for the enterprise services and the scientific computing. 
>> Yep, Maurizio, it's great that you've had that technology foundation. I have to imagine the global pandemic COVID-19 had an impact. What's it been? You know, how's the university dealing with things like work from home and then, you know, Thierry would love your commentary too. >> You know, we, of course we were not ready. So we were eaten by the pandemic and we have to adapt our service software to transform from imperson to remote services. So we did a lot of work, but we are able, thanks to the technology that we have chosen to serve almost a 100% of our curriculum studies program. We did a lot of work in the past to move to virtualization, to enable our users to work for remote, either for a workstation or DC or remote laboratories or remote calculation. So virtualization has designed in the past our services. And of course when we were eaten by the pandemic, we were almost ready to transform our service from in person to remote. >> Yeah, I think it's, it's true, like Maurizio said, nobody really was preparing for this pandemic. And even for, for Dell Technologies, it was an interesting transition. And as you can probably realize a lot of the way that we connect with customers is in person. And we've had to transition over to modes or digitally connecting with customers. We've also spent a lot of our energy trying to help the community HPC and AI community fight the COVID pandemic. We've made some of our own clusters that we use in our HPC and AI innovation center here in Austin available to genomic research or other companies that are fighting the the virus. And it's been an interesting transition. I can't believe that it's already been over six months now, but we've found a new normal. >> Detailed, let's get in specifically to how you're partnering with Dell. You've got a strong background in the HPC space, working with supercomputers. What is it that you're turning to Dell in their ecosystem to help the university with? >> So we are, we have a long history in HPC. Of course, like you can imagine not to the biggest HPC like is done in the U.S. so in the biggest supercomputer center in Europe. We have several system for doing HPC. Traditionally, HPC that are based on a Dell Technologies offer. We typically host all kind of technology's best, but now it's available, of course not in a big scale but in a small, medium scale that we are offering to our researcher, student. We have a strong relationship with Dell Technologies developing together solution to leverage the latest technologies, to the scientific computing, and this has a lot during the research that has been done during this pandemic. >> Yeah, and it's true. I mean, Maurizio is humble, but every time we have new technologies that are to be evaluated, of course we spend time evaluating in our labs, but we make it a point to share that technology with Maurizio and the team at the University of Pisa, That's how we find some of the better usage models for customers, help tuning some configurations, whether it's on the processor side, the GPU side, the storage and the interconnect. And then the topic of today, of course, with our partners at VMware, we've had some really great advancements Maurizio and the team are what we call a center of excellence. We have a few of them across the world where we have a unique relationship sharing technology and collaborating on advancement. And recently Maurizio and the team have even become one of the VMware certified centers. 
So it's a great marriage for this new world where virtual is becoming the norm. >> But well, Thierry, you and I had a conversation to talk earlier in the year when VMware was really geering their full kind of GPU suite and, you know, big topic in the keynote, you know, Jensen, the CEO of Nvidia was up on stage. VMware was talking a lot about AI solutions and how this is going to help. So help us bring us in you work with a lot of the customers theory. What is it that this enables for them and how to, you know, Dell and VMware bring, bring those solutions to bear? >> Yes, absolutely. It's one statistic I'll start with. Can you believe that only on average, 15 to 20% of GPU are fully utilized? So, when you think about the amount of technology that's are at our fingertips and especially in a world today where we need that technology to advance research and scientistic discoveries. Wouldn't it be fantastic to utilize those GPU's to the best of our ability? And it's not just GPU's , I think the industry has in the IT world, leverage virtualization to get to the maximum recycles for CPU's and storage and networking. Now you're bringing the GPU in the fold and you have a perfect utilization and also flexibility across all those resources. So what we've seen is that convergence between the IT world that was highly virtualized, and then this highly optimized world of HPC and AI because of the resources out there and researchers, but also data scientists and company want to be able to run their day to day activities on that infrastructure. But then when they have a big surge need for research or a data science use that same environment and then seamlessly move things around workload wise. >> Yeah, okay I do believe your stat. You know, the joke we always have is, you know, anybody from a networking background, there's no such thing as eliminating a bottleneck, you just move it. And if you talk about utilization, we've been playing the shell game for my entire career of, let's try to optimize one thing and then, oh, there's something else that we're not doing. So,you know, so important. Retail, I want to hear from your standpoint, you know, virtualization and HPC, you know, AI type of uses there. What value does this bring to you and, you know, and key learnings you've had in your organization? >> So, we as a university are a big users of the VMware technologies starting from the traditional enterprise workload and VPI. We started from there in the sense that we have an installation quite significant. But also almost all the services that the university gives to our internal users, either personnel or staff or students. At a certain point that we decided to try to understand the, if a VMware virtualization would be good also for scientific computing. Why? Because at the end of the day, their request that we have from our internal users is flexibility. Flexibility in the sense of be fast in deploying, be fast to reconfiguring, try to have the latest beats on the software side, especially on the AI research. At the end of the day we designed a VMware solution like you, I can say like a whiteboard. We have a whiteboard, and we are able to design a new solution of this whiteboard and to deploy as fast as possible. Okay, what we face as IT is not a request of the maximum performance. Our researchers ask us for flexibility then, and want to be able to have the maximum possible flexibility in configuring the systems. 
How can I say I, we can deploy as more test cluster on the visual infrastructure in minutes or we can use GPU inside the infrastructure tests, of test of new algorithm for deep learning. And we can use faster storage inside the virtualization to see how certain algorithm would vary with our internal developer can leverage the latest, the beat in storage like NVME, MVMS or so. And this is why at the certain point, we decided to try visualization as a base for HPC and scientific computing, and we are happy. >> Yeah, I think Maurizio described it it's flexibility. And of course, if you think optimal performance, you're looking at the bare medal, but in this day and age, as I stated at the beginning, there's so much technology, so much infrastructure available that flexibility at times trump the raw performance. So, when you have two different research departments, two different portions, two different parts of the company looking for an environment. No two environments are going to be exactly the same. So you have to be flexible in how you aggregate the different components of the infrastructure. And then think about today it's actually fantastic. Maurizio was sharing with me earlier this year, that at some point, as we all know, there was a lot down. You could really get into a data center and move different cables around or reconfigure servers to have the right ratio of memory, to CPU, to storage, to accelerators, and having been at the forefront of this enablement has really benefited University of Pisa and given them that flexibility that they really need. >> Wonderful, well, Maurizio my understanding, I believe you're giving a presentation as part of the activities this week. Give us a final glimpses to, you know, what you want your peers to be taking away from what you've done? >> What we have done that is something that is very simple in the sense that we adapt some open source software to our infrastructure in order to enable our system managers and users to deploy HPC and AI solution fastly and in an easy way to our VMware infrastructure. We started doing a sort of POC. We designed the test infrastructure early this year and then we go fastly to production because we had about the results. And so this is what we present in the sense that you can have a lot of way to deploy Vitola HPC, Barto. We went for a simple and open source solution. Also, thanks to our friends of Dell Technologies in some parts that enabled us to do the works and now to go in production. And that's theory told before you talked to has a lot during the pandemic due to the effect that stay at home >> Wonderful, Thierry, I'll let you have the final word. What things are you drawing customers to, to really dig in? Obviously there's a cost savings, or are there any other things that this unlocks for them? >> Yeah, I mean, cost savings. We talked about flexibility. We talked about utilization. You don't want to have a lot of infrastructure sitting there and just waiting for a job to come in once every two months. And then there's also the world we live in, and we all live our life here through a video conference, or at times through the interface of our phone and being able to have this web based interaction with a lot of infrastructure. And at times the best infrastructure in the world, makes things simpler, easier, and hopefully bring science at the finger tip of data scientists without having to worry about knowing every single detail on how to build up that infrastructure. 
And with the help of the University of Pisa, one of our centers of excellence in Europe, we've been innovating and everything that's been accomplished for, you know at Pisa can be accomplished by our customers and our partners around the world. >> Thierry, Maurizio, thank you much for so much for sharing and congratulations on all I know you've done building up that COE. >> Thanks to you. >> Thank you. >> Stay with us, lots more covered from VMworld 2020. I'm Stu Miniman as always. Thank you for watching the theCUBE. (soft music)

Published Date : Sep 30 2020

SUMMARY :

brought to you by the VMworld of course, the global virtual event. here in the United States, yeah. So either for the teaching or you know, you're the CTO there. So consider that the University of Pisa and then, you know, Thierry in the past our services. that are fighting the the virus. background in the HPC space, so in the biggest Maurizio and the team are the keynote, you know, Jensen, because of the resources You know, the joke we in the sense that we have an and having been at the as part of the activities this week. and now to go in production. What things are you drawing and our partners around the world. Thierry, Maurizio, thank you much Thank you for watching the theCUBE.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Maurizio | PERSON | 0.99+
Thierry | PERSON | 0.99+
Thierry Pellegrini | PERSON | 0.99+
Europe | LOCATION | 0.99+
15 | QUANTITY | 0.99+
VMware | ORGANIZATION | 0.99+
Dell | ORGANIZATION | 0.99+
Austin | LOCATION | 0.99+
Stu Miniman | PERSON | 0.99+
University of Pisa | ORGANIZATION | 0.99+
Nvidia | ORGANIZATION | 0.99+
Jensen | PERSON | 0.99+
Maurizio Davini | PERSON | 0.99+
1343 | DATE | 0.99+
Dell Technologies | ORGANIZATION | 0.99+
United States | LOCATION | 0.99+
65 kilometers | QUANTITY | 0.99+
50,000 students | QUANTITY | 0.99+
U.S. | LOCATION | 0.99+
200 gigabit | QUANTITY | 0.99+
Pisa | LOCATION | 0.99+
three little center | QUANTITY | 0.99+
Galileo | PERSON | 0.99+
today | DATE | 0.99+
11th year | QUANTITY | 0.99+
VMworld 2020 | EVENT | 0.99+
over six months | QUANTITY | 0.99+
20% | QUANTITY | 0.98+
one | QUANTITY | 0.98+
two different parts | QUANTITY | 0.97+
Thierry Pellegrino | PERSON | 0.97+
pandemic | EVENT | 0.97+
four data centers | QUANTITY | 0.96+
one big | QUANTITY | 0.96+
earlier this year | DATE | 0.96+
this week | DATE | 0.96+
Middle Ages | DATE | 0.96+
COVID pandemic | EVENT | 0.96+
theCUBE | ORGANIZATION | 0.95+
VMworld | ORGANIZATION | 0.95+
100% | QUANTITY | 0.95+
early this year | DATE | 0.95+
20 years | QUANTITY | 0.91+
HPC | ORGANIZATION | 0.9+
two different research departments | QUANTITY | 0.9+
two different portions | QUANTITY | 0.89+
Poland | LOCATION | 0.88+
one thing | QUANTITY | 0.87+
Wikipedia | ORGANIZATION | 0.86+

Kubernetes on Any Infrastructure Top to Bottom Tutorials for Docker Enterprise Container Cloud


 

>>All right, we're five minutes after the hour. That's all aboard who's coming aboard. Welcome, everyone, to the tutorial track for our Launchpad event. So for the next couple of hours we've got a series of videos and experts on hand to answer questions about our new product, Docker Enterprise Container Cloud. Before we jump into the videos and the technology, I just want to introduce myself and my other emcee for the session. I'm Bill Milks. I run curriculum development for Mirantis. And >>I'm Bruce Basil Matthews. I'm the Western regional solutions architect for Mirantis, and welcome, everyone, to this lovely Launchpad event. >>We're lucky to have you with us, Bruce. At least somebody on the call knows something about Docker Enterprise Container Cloud. Um, speaking of people that know about Docker Enterprise Container Cloud, make sure that you've got a window open to the chat for this session. We've got a number of our engineers available and on hand to answer your questions live as we go through these videos and discuss the product. So that's us, I guess. As for Docker Enterprise Container Cloud, this is Mirantis's brand-new product for bootstrapping Docker Enterprise Kubernetes clusters at scale. Anything to add, Bruce? >>No, just that I think that we're trying to, uh, let's see, hold on. I think that we're trying to give you a foundation against which to give this stuff a go yourself. And that's really the key to this thing: to provide some, you know, mini training and education in a very condensed period. So, >>yeah, that's exactly what you're going to see in the series of videos we have today. We're going to focus on your first steps with Docker Enterprise Container Cloud, from installing it to bootstrapping your regional and child clusters, so that by the end of the tutorial content today you're gonna be prepared to spin up your first Docker Enterprise clusters using Docker Enterprise Container Cloud. So just a little bit of logistics for the session. We're going to run through these tutorials twice. We're gonna do one run-through starting seven minutes ago, up until, I guess it will be, ten fifteen Pacific time. Then we're gonna run through the whole thing again. So if you've got other colleagues that weren't able to join right at the top of the hour and would like to jump in from the beginning, ten fifteen Pacific time we're gonna do the whole thing over again. So if you want to see the videos twice, or you've got friends and colleagues that, you know, you wanna pull in for a second chance to see this stuff, we're gonna do it all twice. Yeah, this session. Any logistics I should add, Bruce? >>No, I think that's pretty much what we had to nail down here. But let's zoom on into those, uh, feature films. >>Let's do it. And like I said, don't be shy. Feel free to ask questions in the chat; our engineers and Bruce and myself are standing by to answer your questions. So let me just tee up the first video here and we'll walk through it. Yeah. Mhm. Yes. Sorry. And here we go. So our first video here is gonna be about installing the Docker Enterprise Container Cloud management cluster. So I like to think of the management cluster as, like, your mothership, right? This is what you're gonna use to deploy all those little child clusters that you're gonna use as, like, commodity clusters downstream. So the management cluster is always our first step. Let's jump in there >>now. We have to give this brief little pause >>and go to the video.
Focus for this demo will be the initial bootstrap of the management cluster in the first regional clusters to support AWS deployments. The management cluster provides the core functionality, including identity management, authentication, infantry release version. The regional cluster provides the specific architecture provided in this case, eight of us and the Elsie um, components on the UCP Cluster Child cluster is the cluster or clusters being deployed and managed. The deployment is broken up into five phases. The first phase is preparing a big strap note on this dependencies on handling with download of the bridge struck tools. The second phase is obtaining America's license file. Third phase. Prepare the AWS credentials instead of the adduce environment. The fourth configuring the deployment, defining things like the machine types on the fifth phase. Run the bootstrap script and wait for the deployment to complete. Okay, so here we're sitting up the strap node, just checking that it's clean and clear and ready to go there. No credentials already set up on that particular note. Now we're just checking through AWS to make sure that the account we want to use we have the correct credentials on the correct roles set up and validating that there are no instances currently set up in easy to instance, not completely necessary, but just helps keep things clean and tidy when I am perspective. Right. So next step, we're just going to check that we can, from the bootstrap note, reach more antis, get to the repositories where the various components of the system are available. They're good. No areas here. Yeah, right now we're going to start sitting at the bootstrap note itself. So we're downloading the cars release, get get cars, script, and then next, we're going to run it. I'm in. Deploy it. Changing into that big struck folder. Just making see what's there. Right now we have no license file, so we're gonna get the license filed. Oh, okay. Get the license file through the more antis downloads site, signing up here, downloading that license file and putting it into the Carisbrook struck folder. Okay, Once we've done that, we can now go ahead with the rest of the deployment. See that the follow is there. Uh, huh? That's again checking that we can now reach E C two, which is extremely important for the deployment. Just validation steps as we move through the process. All right, The next big step is valid in all of our AWS credentials. So the first thing is, we need those route credentials which we're going to export on the command line. This is to create the necessary bootstrap user on AWS credentials for the completion off the deployment we're now running an AWS policy create. So it is part of that is creating our Food trucks script, creating the mystery policy files on top of AWS, Just generally preparing the environment using a cloud formation script you'll see in a second will give a new policy confirmations just waiting for it to complete. Yeah, and there is done. It's gonna have a look at the AWS console. You can see that we're creative completed. Now we can go and get the credentials that we created Today I am console. Go to that new user that's being created. We'll go to the section on security credentials and creating new keys. Download that information media Access key I D and the secret access key. We went, Yeah, usually then exported on the command line. Okay. Couple of things to Notre. 
Ensure that you're using the correct AWS region on ensure that in the conflict file you put the correct Am I in for that region? I'm sure you have it together in a second. Yes. Okay, that's the key. Secret X key. Right on. Let's kick it off. Yeah, So this process takes between thirty and forty five minutes. Handles all the AWS dependencies for you, and as we go through, the process will show you how you can track it. Andi will start to see things like the running instances being created on the west side. The first phase off this whole process happening in the background is the creation of a local kind based bootstrapped cluster on the bootstrap node that clusters then used to deploy and manage all the various instances and configurations within AWS. At the end of the process, that cluster is copied into the new cluster on AWS and then shut down that local cluster essentially moving itself over. Okay. Local clusters boat just waiting for the various objects to get ready. Standard communities objects here Okay, so we speed up this process a little bit just for demonstration purposes. Yeah. There we go. So first note is being built the best in host. Just jump box that will allow us access to the entire environment. Yeah, In a few seconds, we'll see those instances here in the US console on the right. Um, the failures that you're seeing around failed to get the I. P for Bastian is just the weight state while we wait for a W s to create the instance. Okay. Yes. Here, beauty there. Okay. Mhm. Okay. Yeah, yeah. Okay. On there. We got question. Host has been built on three instances for the management clusters have now been created. We're going through the process of preparing. Those nodes were now copying everything over. See that? The scaling up of controllers in the big Strap cluster? It's indicating that we're starting all of the controllers in the new question. Almost there. Yeah. Yeah, just waiting for key. Clark. Uh huh. Start to finish up. Yeah. No. What? Now we're shutting down control this on the local bootstrap node on preparing our I. D. C. Configuration. Fourth indication, soon as this is completed. Last phase will be to deploy stack light into the new cluster the last time Monitoring tool set way Go stack like to plan It has started. Mhm coming to the end of the deployment Mountain. Yeah, America. Final phase of the deployment. Onda, We are done. Okay, You'll see. At the end they're providing us the details of you. I log in so there's a keeper clogging. You can modify that initial default password is part of the configuration set up with one documentation way. Go Councils up way can log in. Yeah, yeah, thank you very much for watching. >>Excellent. So in that video are wonderful field CTO Shauna Vera bootstrapped up management costume for Dr Enterprise Container Cloud Bruce, where exactly does that leave us? So now we've got this management costume installed like what's next? >>So primarily the foundation for being able to deploy either regional clusters that will then allow you to support child clusters. Uh, comes into play the next piece of what we're going to show, I think with Sean O'Mara doing this is the child cluster capability, which allows you to then deploy your application services on the local cluster. That's being managed by the ah ah management cluster that we just created with the bootstrap. >>Right? So this cluster isn't yet for workloads. This is just for bootstrapping up the downstream clusters. Those or what we're gonna use for workings. >>Exactly. Yeah. 
And I just wanted to point out, since Sean O'Mara isn't around, toe, actually answer questions. I could listen to that guy. Read the phone book, and it would be interesting, but anyway, you can tell him I said that >>he's watching right now, Crusoe. Good. Um, cool. So and just to make sure I understood what Sean was describing their that bootstrap er knows that you, like, ran document fresh pretender Cloud from to begin with. That's actually creating a kind kubernetes deployment kubernetes and Docker deployment locally. That then hits the AWS a p i in this example that make those e c two instances, and it makes like a three manager kubernetes cluster there, and then it, like, copies itself over toe those communities managers. >>Yeah, and and that's sort of where the transition happens. You can actually see it. The output that when it says I'm pivoting, I'm pivoting from my local kind deployment of cluster AP, I toothy, uh, cluster, that's that's being created inside of AWS or, quite frankly, inside of open stack or inside of bare metal or inside of it. The targeting is, uh, abstracted. Yeah, but >>those air three environments that we're looking at right now, right? Us bare metal in open staff environments. So does that kind cluster on the bootstrap er go away afterwards. You don't need that afterwards. Yeah, that is just temporary. To get things bootstrapped, then you manage things from management cluster on aws in this example? >>Yeah. Yeah. The seed, uh, cloud that post the bootstrap is not required anymore. And there's no, uh, interplay between them after that. So that there's no dependencies on any of the clouds that get created thereafter. >>Yeah, that actually reminds me of how we bootstrapped doctor enterprise back in the day, be a temporary container that would bootstrap all the other containers. Go away. It's, uh, so sort of a similar, similar temporary transient bootstrapping model. Cool. Excellent. What will convict there? It looked like there wasn't a ton, right? It looked like you had to, like, set up some AWS parameters like credentials and region and stuff like that. But other than that, that looked like heavily script herbal like there wasn't a ton of point and click there. >>Yeah, very much so. It's pretty straightforward from a bootstrapping standpoint, The config file that that's generated the template is fairly straightforward and targeted towards of a small medium or large, um, deployment. And by editing that single file and then gathering license file and all of the things that Sean went through, um, that that it makes it fairly easy to script >>this. And if I understood correctly as well that three manager footprint for your management cluster, that's the minimum, right. We always insist on high availability for this management cluster because boy do not wanna see oh, >>right, right. And you know, there's all kinds of persistent data that needs to be available, regardless of whether one of the notes goes down or not. So we're taking care of all of that for you behind the scenes without you having toe worry about it as a developer. >>No, I think there's that's a theme that I think will come back to throughout the rest of this tutorial session today is there's a lot of there's a lot of expertise baked him to Dr Enterprise Container Cloud in terms of implementing best practices for you like the defaulter, just the best practices of how you should be managing these clusters, Miss Seymour. Examples of that is the day goes on. 
Any interesting questions you want to call out from the chap who's >>well, there was. Yeah, yeah, there was one that we had responded to earlier about the fact that it's a management cluster that then conduce oh, either the the regional cluster or a local child molester. The child clusters, in each case host the application services, >>right? So at this point, we've got, in some sense, like the simplest architectures for our documentary prize Container Cloud. We've got the management cluster, and we're gonna go straight with child cluster. In the next video, there's a more sophisticated architecture, which will also proper today that inserts another layer between those two regional clusters. If you need to manage regions like across a BS, reads across with these documents anything, >>yeah, that that local support for the child cluster makes it a lot easier for you to manage the individual clusters themselves and to take advantage of our observation. I'll support systems a stack light and things like that for each one of clusters locally, as opposed to having to centralize thumb >>eso. It's a couple of good questions. In the chat here, someone was asking for the instructions to do this themselves. I strongly encourage you to do so. That should be in the docks, which I think Dale helpfully thank you. Dale provided links for that's all publicly available right now. So just head on in, head on into the docks like the Dale provided here. You can follow this example yourself. All you need is a Mirante license for this and your AWS credentials. There was a question from many a hear about deploying this toe azure. Not at G. Not at this time. >>Yeah, although that is coming. That's going to be in a very near term release. >>I didn't wanna make promises for product, but I'm not too surprised that she's gonna be targeted. Very bracing. Cool. Okay. Any other thoughts on this one does. >>No, just that the fact that we're running through these individual pieces of the steps Well, I'm sure help you folks. If you go to the link that, uh, the gentleman had put into the chat, um, giving you the step by staff. Um, it makes it fairly straightforward to try this yourselves. >>E strongly encourage that, right? That's when you really start to internalize this stuff. OK, but before we move on to the next video, let's just make sure everyone has a clear picture in your mind of, like, where we are in the life cycle here creating this management cluster. Just stop me if I'm wrong. Who's creating this management cluster is like, you do that once, right? That's when your first setting up your doctor enterprise container cloud environment of system. What we're going to start seeing next is creating child clusters and this is what you're gonna be doing over and over and over again. When you need to create a cluster for this Deb team or, you know, this other team river it is that needs commodity. Doctor Enterprise clusters create these easy on half will. So this was once to set up Dr Enterprise Container Cloud Child clusters, which we're going to see next. We're gonna do over and over and over again. So let's go to that video and see just how straightforward it is to spin up a doctor enterprise cluster for work clothes as a child cluster. Undocumented brands contain >>Hello. In this demo, we will cover the deployment experience of creating a new child cluster, the scaling of the cluster and how to update the cluster. 
When a new version is available, we begin the process by logging onto the you I as a normal user called Mary. Let's go through the navigation of the U I so you can switch. Project Mary only has access to development. Get a list of the available projects that you have access to. What clusters have been deployed at the moment there. Nan Yes, this H Keys Associate ID for Mary into her team on the cloud credentials that allow you to create access the various clouds that you can deploy clusters to finally different releases that are available to us. We can switch from dark mode to light mode, depending on your preferences, Right? Let's now set up semester search keys for Mary so she can access the notes and machines again. Very simply, had Mississippi key give it a name, we copy and paste our public key into the upload key block. Or we can upload the key if we have the file available on our local machine. A simple process. So to create a new cluster, we define the cluster ad management nodes and add worker nodes to the cluster. Yeah, again, very simply, you go to the clusters tab. We hit the create cluster button. Give the cluster name. Yeah, Andi, select the provider. We only have access to AWS in this particular deployment, so we'll stick to AWS. What's like the region in this case? US West one release version five point seven is the current release Onda Attach. Mary's Key is necessary Key. We can then check the rest of the settings, confirming the provider Any kubernetes c r D r I p address information. We can change this. Should we wish to? We'll leave it default for now on. Then what components? A stack light I would like to deploy into my Custer. For this. I'm enabling stack light on logging on Aiken. Sit up the retention sizes Attention times on. Even at this stage, at any customer alerts for the watchdogs. E consider email alerting which I will need my smart host details and authentication details. Andi Slack Alerts. Now I'm defining the cluster. All that's happened is the cluster's been defined. I now need to add machines to that cluster. I'll begin by clicking the create machine button within the cluster definition. Oh, select manager, Select the number of machines. Three is the minimum. Select the instant size that I'd like to use from AWS and very importantly, ensure correct. Use the correct Am I for the region. I commend side on the route device size. There we go, my three machines obviously creating. I now need to add some workers to this custom. So I go through the same process this time once again, just selecting worker. I'll just add to once again, the AM is extremely important. Will fail if we don't pick the right, Am I for a boon to machine in this case and the deployment has started. We can go and check on the bold status are going back to the clusters screen on clicking on the little three dots on the right. We get the cluster info and the events, so the basic cluster info you'll see pending their listen cluster is still in the process of being built. We kick on, the events will get a list of actions that have been completed This part of the set up of the cluster. So you can see here we've created the VPC. We've created the sub nets on We've created the Internet gateway. It's unnecessary made of us and we have no warnings of the stage. Yeah, this will then run for a while. We have one minute past waken click through. We can check the status of the machine bulls as individuals so we can check the machine info, details of the machines that we've assigned, right? Mhm Onda. 
See any events pertaining to the machine areas like this one on normal? Yeah. Just watch asked. The community's components are waiting for the machines to start. Go back to Custer's. Okay, right. Because we're moving ahead now. We can see we have it in progress. Five minutes in new Matt Gateway on the stage. The machines have been built on assigned. I pick up the U. S. Thank you. Yeah. There we go. Machine has been created. See the event detail and the AWS. I'd for that machine. Mhm. No speeding things up a little bit. This whole process and to end takes about fifteen minutes. Run the clock forward, you'll notice is the machines continue to bold the in progress. We'll go from in progress to ready. A soon as we got ready on all three machines, the managers on both workers way could go on and we could see that now we reached the point where the cluster itself is being configured. Mhm, mhm. And then we go. Cluster has been deployed. So once the classes deployed, we can now never get around our environment. Okay, Are cooking into configure cluster We could modify their cluster. We could get the end points for alert alert manager on See here The griffon occupying and Prometheus are still building in the background but the cluster is available on you would be able to put workloads on it the stretch to download the cube conflict so that I can put workloads on it. It's again three little dots in the right for that particular cluster. If the download cube conflict give it my password, I now have the Q conflict file necessary so that I can access that cluster Mhm all right Now that the build is fully completed, we can check out cluster info on. We can see that Allow the satellite components have been built. All the storage is there, and we have access to the CPU. I So if we click into the cluster, we can access the UCP dashboard, right? Shit. Click the signing with Detroit button to use the SSO on. We give Mary's possible to use the name once again. Thing is, an unlicensed cluster way could license at this point. Or just skip it on. There. We have the UCP dashboard. You can see that has been up for a little while. We have some data on the dashboard going back to the console. We can now go to the griffon, a data just being automatically pre configured for us. We can switch and utilized a number of different dashboards that have already been instrumented within the cluster. So, for example, communities cluster information, the name spaces, deployments, nodes. Mhm. So we look at nodes. If we could get a view of the resource is utilization of Mrs Custer is very little running in it. Yeah. General dashboard of Cuba navies cluster one of this is configurable. You can modify these for your own needs, or add your own dashboards on de scoped to the cluster. So it is available to all users who have access to this specific cluster, all right to scale the cluster on to add a notice. A simple is the process of adding a mode to the cluster, assuming we've done that in the first place. So we go to the cluster, go into the details for the cluster we select, create machine. Once again, we need to be ensure that we put the correct am I in and any other functions we like. You can create different sized machines so it could be a larger node. Could be bigger disks and you'll see that worker has been added from the provisioning state on shortly. We will see the detail off that worker as a complete to remove a note from a cluster. Once again, we're going to the cluster. We select the node would like to remove. 
Okay, I just hit delete On that note. Worker nodes will be removed from the cluster using according and drawing method to ensure that your workouts are not affected. Updating a cluster. When an update is available in the menu for that particular cluster, the update button will become available. And it's a simple as clicking the button, validating which release you would like to update to. In this case, the next available releases five point seven point one. Here I'm kicking the update by in the background We will coordinate. Drain each node slowly go through the process of updating it. Andi update will complete depending on what the update is as quickly as possible. Girl, we go. The notes being rebuilt in this case impacted the manager node. So one of the manager nodes is in the process of being rebuilt. In fact, to in this case, one has completed already on In a few minutes we'll see that there are great has been completed. There we go. Great. Done. Yeah. If you work loads of both using proper cloud native community standards, there will be no impact. >>Excellent. So at this point, we've now got a cluster ready to start taking our communities of workloads. He started playing or APs to that costume. So watching that video, the thing that jumped out to me at first Waas like the inputs that go into defining this workload cost of it. All right, so we have to make sure we were using on appropriate am I for that kind of defines the substrate about what we're gonna be deploying our cluster on top of. But there's very little requirements. A so far as I could tell on top of that, am I? Because Docker enterprise Container Cloud is gonna bootstrap all the components that you need. That s all we have is kind of kind of really simple bunch box that we were deploying these things on top of so one thing that didn't get dug into too much in the video. But it's just sort of implied. Bruce, maybe you can comment on this is that release that Shawn had to choose for his, uh, for his cluster in creating it. And that release was also the thing we had to touch. Wanted to upgrade part cluster. So you have really sharp eyes. You could see at the end there that when you're doing the release upgrade enlisted out a stack of components docker, engine, kubernetes, calico, aled, different bits and pieces that go into, uh, go into one of these commodity clusters that deploy. And so, as far as I can tell in that case, that's what we mean by a release. In this sense, right? It's the validated stack off container ization and orchestration components that you know we've tested out and make sure it works well, introduction environments. >>Yeah, and and And that's really the focus of our effort is to ensure that any CVS in any of the stack are taken care of that there is a fixes air documented and up streamed to the open stack community source community, um, and and that, you know, then we test for the scaling ability and the reliability in high availability configuration for the clusters themselves. The hosts of your containers. Right. And I think one of the key, uh, you know, benefits that we provide is that ability to let you know, online, high. We've got an update for you, and it's fixes something that maybe you had asked us to fix. Uh, that all comes to you online as your managing your clusters, so you don't have to think about it. It just comes as part of the product. >>You just have to click on Yes. Please give me that update. Uh, not just the individual components, but again. 
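As a quick aside, once a child cluster reports Ready and you have downloaded its kubeconfig from the Container Cloud UI the way Mary did in the demo, verifying it from a workstation is ordinary kubectl. A minimal check might look like this; the kubeconfig file name is illustrative.

    # Point kubectl at the kubeconfig downloaded from the Container Cloud UI.
    export KUBECONFIG=~/Downloads/kubeconfig-demo-child-cluster.yaml

    kubectl get nodes -o wide      # the three managers plus the workers you added
    kubectl get pods -A            # StackLight, UCP and system components converging
    kubectl create namespace demo  # the cluster is now ready for ordinary workloads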
It's that it's that validated stack, right? Not just, you know, component X, y and Z work. But they all work together effectively Scalable security, reliably cool. Um, yeah. So at that point, once we started creating that workload child cluster, of course, we bootstrapped good old universal control plane. Doctor Enterprise. On top of that, Sean had the classic comment there, you know? Yeah. Yeah. You'll see a little warnings and errors or whatever. When you're setting up, UCP don't handle, right, Just let it do its job, and it will converge all its components, you know, after just just a minute or two. But we saw in that video, we sped things up a little bit there just we didn't wait for, you know, progress fighters to complete. But really, in real life, that whole process is that anything so spend up one of those one of those fosters so quite quite quick. >>Yeah, and and I think the the thoroughness with which it goes through its process and re tries and re tries, uh, as you know, and it was evident when we went through the initial ah video of the bootstrapping as well that the processes themselves are self healing, as they are going through. So they will try and retry and wait for the event to complete properly on. And once it's completed properly, then it will go to the next step. >>Absolutely. And the worst thing you could do is panic at the first warning and start tearing things that don't don't do that. Just don't let it let it heal. Let take care of itself. And that's the beauty of these manage solutions is that they bake in a lot of subject matter expertise, right? The decisions that are getting made by those containers is they're bootstrapping themselves, reflect the expertise of the Mirant ISS crew that has been developing this content in these two is free for years and years now, over recognizing humanities. One cool thing there that I really appreciate it actually that it adds on top of Dr Enterprise is that automatic griffon a deployment as well. So, Dr Enterprises, I think everyone knows has had, like, some very high level of statistics baked into its dashboard for years and years now. But you know our customers always wanted a double click on that right to be able to go a little bit deeper. And Griffon are really addresses that it's built in dashboards. That's what's really nice to see. >>Yeah, uh, and all of the alerts and, uh, data are actually captured in a Prometheus database underlying that you have access to so that you are allowed to add new alerts that then go out to touch slack and say hi, You need to watch your disk space on this machine or those kinds of things. Um, and and this is especially helpful for folks who you know, want to manage the application service layer but don't necessarily want to manage the operations side of the house. So it gives them a tool set that they can easily say here, Can you watch these for us? And Miran tas can actually help do that with you, So >>yeah, yeah, I mean, that's just another example of baking in that expert knowledge, right? So you can leverage that without tons and tons of a long ah, long runway of learning about how to do that sort of thing. Just get out of the box right away. There was the other thing, actually, that you could sleep by really quickly if you weren't paying close attention. But Sean mentioned it on the video. And that was how When you use dark enterprise container cloud to scale your cluster, particularly pulling a worker out, it doesn't just like Territo worker down and forget about it. Right? 
Is using good communities best practices to cordon and drain the No. So you aren't gonna disrupt your workloads? You're going to just have a bunch of containers instantly. Excellent crash. You could really carefully manage the migration of workloads off that cluster has baked right in tow. How? How? Document? The brass container cloud is his handling cluster scale. >>Right? And And the kubernetes, uh, scaling methodology is is he adhered to with all of the proper techniques that ensure that it will tell you. Wait, you've got a container that actually needs three, uh, three, uh, instances of itself. And you don't want to take that out, because that node, it means you'll only be able to have to. And we can't do that. We can't allow that. >>Okay, Very cool. Further thoughts on this video. So should we go to the questions. >>Let's let's go to the questions >>that people have. Uh, there's one good one here, down near the bottom regarding whether an a p I is available to do this. So in all these demos were clicking through this web. You I Yes, this is all a p. I driven. You could do all of this. You know, automate all this away is part of the CSC change. Absolutely. Um, that's kind of the point, right? We want you to be ableto spin up. Come on. I keep calling them commodity clusters. What I mean by that is clusters that you can create and throw away. You know, easily and automatically. So everything you see in these demos eyes exposed to FBI? >>Yeah. In addition, through the standard Cube cuddle, Uh, cli as well. So if you're not a programmer, but you still want to do some scripting Thio, you know, set up things and deploy your applications and things. You can use this standard tool sets that are available to accomplish that. >>There is a good question on scale here. So, like, just how many clusters and what sort of scale of deployments come this kind of support our engineers report back here that we've done in practice up to a Zeman ia's like two hundred clusters. We've deployed on this with two hundred fifty nodes in a cluster. So were, you know, like like I said, hundreds, hundreds of notes, hundreds of clusters managed by documented press container fall and then those downstream clusters, of course, subject to the usual constraints for kubernetes, right? Like default constraints with something like one hundred pods for no or something like that. There's a few different limitations of how many pods you can run on a given cluster that comes to us not from Dr Enterprise Container Cloud, but just from the underlying kubernetes distribution. >>Yeah, E. I mean, I don't think that we constrain any of the capabilities that are available in the, uh, infrastructure deliveries, uh, service within the goober Netease framework. So were, you know, But we are, uh, adhering to the standards that we would want to set to make sure that we're not overloading a node or those kinds of things, >>right. Absolutely cool. Alright. So at this point, we've got kind of a two layered our protection when we are management cluster, but we deployed in the first video. Then we use that to deploy one child clustering work, classroom, uh, for more sophisticated deployments where we might want to manage child clusters across multiple regions. We're gonna add another layer into our architectural we're gonna add in regional cluster management. So this idea you're gonna have the single management cluster that we started within the first video. 
On the next video, we're gonna learn how to spin up a regional clusters, each one of which would manage, for example, a different AWS uh, US region. So let me just pull out the video for that bill. We'll check it out for me. Mhm. >>Hello. In this demo, we will cover the deployment of additional regional management. Cluster will include a brief architectures of you how to set up the management environment, prepare for the deployment deployment overview and then just to prove it, to play a regional child cluster. So, looking at the overall architecture, the management cluster provides all the core functionality, including identity management, authentication, inventory and release version. ING Regional Cluster provides the specific architecture provider in this case AWS on the LCN components on the D you speak Cluster for child cluster is the cluster or clusters being deployed and managed? Okay, so why do you need a regional cluster? Different platform architectures, for example aws who have been stack even bare metal to simplify connectivity across multiple regions handle complexities like VPNs or one way connectivity through firewalls, but also help clarify availability zones. Yeah. Here we have a view of the regional cluster and how it connects to the management cluster on their components, including items like the LCN cluster Manager we also Machine Manager were held. Mandel are managed as well as the actual provider logic. Mhm. Okay, we'll begin by logging on Is the default administrative user writer. Okay, once we're in there, we'll have a look at the available clusters making sure we switch to the default project which contains the administration clusters. Here we can see the cars management cluster, which is the master controller. And you see, it only has three nodes, three managers, no workers. Okay, if we look at another regional cluster similar to what we're going to deploy now, also only has three managers once again, no workers. But as a comparison, here's a child cluster This one has three managers, but also has additional workers associate it to the cluster. All right, we need to connect. Tell bootstrap note. Preferably the same note that used to create the original management plaster. It's just on AWS, but I still want to machine. All right. A few things we have to do to make sure the environment is ready. First thing we're going to see go into route. We'll go into our releases folder where we have the kozberg struck on. This was the original bootstrap used to build the original management cluster. Yeah, we're going to double check to make sure our cube con figures there once again, the one created after the original customers created just double check. That cute conflict is the correct one. Does point to the management cluster. We're just checking to make sure that we can reach the images that everything is working. A condom. No damages waken access to a swell. Yeah. Next we're gonna edit the machine definitions. What we're doing here is ensuring that for this cluster we have the right machine definitions, including items like the am I. So that's found under the templates AWS directory. We don't need to edit anything else here. But we could change items like the size of the machines attempts. We want to use that The key items to ensure where you changed the am I reference for the junta image is the one for the region in this case AWS region for utilizing this was no construct deployment. We have to make sure we're pointing in the correct open stack images. Yeah, okay. 
Set the correct and my save file. Now we need to get up credentials again. When we originally created the bootstrap cluster, we got credentials from eight of the U. S. If we hadn't done this, we would need to go through the u A. W s set up. So we're just exporting the AWS access key and I d. What's important is CAAs aws enabled equals. True. Now we're sitting the region for the new regional cluster. In this case, it's Frankfurt on exporting our cube conflict that we want to use for the management cluster. When we looked at earlier Yeah, now we're exporting that. Want to call the cluster region Is Frank Foods Socrates Frankfurt yet trying to use something descriptive It's easy to identify. Yeah, and then after this, we'll just run the bootstrap script, which will complete the deployment for us. Bootstrap of the regional cluster is quite a bit quicker than the initial management clusters. There are fewer components to be deployed. Um, but to make it watchable, we've spent it up. So we're preparing our bootstrap cluster on the local bootstrap node. Almost ready on. We started preparing the instances at W s and waiting for that bastard and no to get started. Please. The best you nerd Onda. We're also starting to build the actual management machines they're now provisioning on. We've reached the point where they're actually starting to deploy. Dr. Enterprise, this is probably the longest face. Yeah, seeing the second that all the nerds will go from the player deployed. Prepare, prepare. Yeah, You'll see their status changes updates. He was the first night ready. Second, just applying second already. Both my time. No waiting from home control. Let's become ready. Removing cluster the management cluster from the bootstrap instance into the new cluster running the date of the U. S. All my stay. Ah, now we're playing Stockland. Switch over is done on. Done. Now I will build a child cluster in the new region very, very quickly to find the cluster will pick. Our new credential has shown up. We'll just call it Frankfurt for simplicity a key and customs to find. That's the machine. That cluster stop with three managers. Set the correct Am I for the region? Yeah, Do the same to add workers. There we go test the building. Yeah. Total bill of time Should be about fifteen minutes. Concedes in progress. It's going to expect this up a little bit. Check the events. We've created all the dependencies, machine instances, machines, a boat shortly. We should have a working cluster in Frankfurt region. Now almost a one note is ready from management. Two in progress. Yeah, on we're done. Clusters up and running. Yeah. >>Excellent. So at this point, we've now got that three tier structure that we talked about before the video. We got that management cluster that we do strapped in the first video. Now we have in this example to different regional clustering one in Frankfurt, one of one management was two different aws regions. And sitting on that you can do Strap up all those Doctor enterprise costumes that we want for our work clothes. >>Yeah, that's the key to this is to be able to have co resident with your actual application service enabled clusters the management co resident with it so that you can, you know, quickly access that he observation Elson Surfboard services like the graph, Ana and that sort of thing for your particular region. A supposed to having to lug back into the home. What did you call it when we started >>the mothership? >>The mothership. Right. So we don't have to go back to the mother ship. 
We could get >>it locally. Yeah, when, like to that point of aggregating things under a single pane of glass? That's one thing that again kind of sailed by in the demo really quickly. But you'll notice all your different clusters were on that same cluster. Your pain on your doctor Enterprise Container Cloud management. Uh, court. Right. So both your child clusters for running workload and your regional clusters for bootstrapping. Those child clusters were all listed in the same place there. So it's just one pane of glass to go look for, for all of your clusters, >>right? And, uh, this is kind of an important point. I was, I was realizing, as we were going through this. All of the mechanics are actually identical between the bootstrapped cluster of the original services and the bootstrapped cluster of the regional services. It's the management layer of everything so that you only have managers, you don't have workers and that at the child cluster layer below the regional or the management cluster itself, that's where you have the worker nodes. And those are the ones that host the application services in that three tiered architecture that we've now defined >>and another, you know, detail for those that have sharp eyes. In that video, you'll notice when deploying a child clusters. There's not on Lee. A minimum of three managers for high availability management cluster. You must have at least two workers that's just required for workload failure. It's one of those down get out of work. They could potentially step in there, so your minimum foot point one of these child clusters is fine. Violence and scalable, obviously, from a >>That's right. >>Let's take a quick peek of the questions here, see if there's anything we want to call out, then we move on to our last want to my last video. There's another question here about, like where these clusters can live. So again, I know these examples are very aws heavy. Honestly, it's just easy to set up down on the other us. We could do things on bare metal and, uh, open stack departments on Prem. That's what all of this still works in exactly the same way. >>Yeah, the, uh, key to this, especially for the the, uh, child clusters, is the provision hers? Right? See you establish on AWS provision or you establish a bare metal provision or you establish a open stack provision. Or and eventually that list will include all of the other major players in the cloud arena. But you, by selecting the provision or within your management interface, that's where you decide where it's going to be hosted, where the child cluster is to be hosted. >>Speaking off all through a child clusters. Let's jump into our last video in the Siri's, where we'll see how to spin up a child cluster on bare metal. >>Hello. This demo will cover the process of defining bare metal hosts and then review the steps of defining and deploying a bare metal based doctor enterprise cluster. So why bare metal? Firstly, it eliminates hyper visor overhead with performance boost of up to thirty percent. Provides direct access to GP use, prioritize for high performance wear clothes like machine learning and AI, and supports high performance workloads like network functions, virtualization. It also provides a focus on on Prem workloads, simplifying and ensuring we don't need to create the complexity of adding another opera visor. Lay it between so continue on the theme Why Communities and bare metal again Hyper visor overhead. Well, no virtualization overhead. 
Direct access to hardware items like F p G A s G p us. We can be much more specific about resource is required on the nodes. No need to cater for additional overhead. Uh, we can handle utilization in the scheduling. Better Onda we increase the performances and simplicity of the entire environment as we don't need another virtualization layer. Yeah, In this section will define the BM hosts will create a new project will add the bare metal hosts, including the host name. I put my credentials I pay my address the Mac address on then provide a machine type label to determine what type of machine it is for later use. Okay, let's get started. So well again. Was the operator thing. We'll go and we'll create a project for our machines to be a member off helps with scoping for later on for security. I begin the process of adding machines to that project. Yeah. So the first thing we had to be in post, Yeah, many of the machine A name. Anything you want, que experimental zero one. Provide the IAP my user name type my password. Okay. On the Mac address for the common interface with the boot interface and then the i p m I i p address These machines will be at the time storage worker manager. He's a manager. Yeah, we're gonna add a number of other machines on will. Speed this up just so you could see what the process looks like in the future. Better discovery will be added to the product. Okay. Okay. Getting back there we have it are Six machines have been added, are busy being inspected, being added to the system. Let's have a look at the details of a single note. Yeah, you can see information on the set up of the node. Its capabilities? Yeah. As well as the inventory information about that particular machine. I see. Okay, let's go and create the cluster. Yeah, So we're going to deploy a bare metal child cluster. The process we're going to go through is pretty much the same as any other child cluster. So we'll credit custom. We'll give it a name, but if it were selecting bare metal on the region, we're going to select the version we want to apply. No way. We're going to add this search keys. If we hope we're going to give the load. Balancer host I p that we'd like to use out of dress range on update the address range that we want to use for the cluster. Check that the sea ideal blocks for the Cuban ladies and tunnels are what we want them to be. Enable disabled stack light. Yeah, and soothe stack light settings to find the cluster. And then, as for any other machine, we need to add machines to the cluster. Here. We're focused on building communities clusters, so we're gonna put the count of machines. You want managers? We're gonna pick the label type manager and create three machines is the manager for the Cuban eighties. Casting Okay thing. We're having workers to the same. It's a process. Just making sure that the worker label host level are I'm sorry. On when Wait for the machines to deploy. Let's go through the process of putting the operating system on the notes validating and operating system deploying doctor identifies Make sure that the cluster is up and running and ready to go. Okay, let's review the bold events waken See the machine info now populated with more information about the specifics of things like storage and of course, details of a cluster etcetera. Yeah, yeah, well, now watch the machines go through the various stages from prepared to deploy on what's the cluster build? And that brings us to the end of this particular demo. 
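The host registration Shawn stepped through there (IPMI credentials, IPMI address, MAC address, and a machine-type label, scoped to a project) can also be expressed declaratively. A Metal3-style BareMetalHost record, which is the upstream shape of this kind of host definition, looks roughly like the sketch below; all names, addresses, and the label key are placeholders, and the exact schema used by Docker Enterprise Container Cloud may differ.

```
# Hedged sketch of a declarative bare metal host record in the upstream
# Metal3 style. Credentials live in a Secret; the host references it plus
# its IPMI address and boot MAC. Values below are placeholders.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: bm-worker-01-bmc
  namespace: demo-project
type: Opaque
stringData:
  username: ipmi-admin
  password: ipmi-password
---
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: bm-worker-01
  namespace: demo-project
  labels:
    machine-type: worker          # machine-type label; exact key is illustrative
spec:
  online: true
  bootMACAddress: "0c:c4:7a:aa:bb:cc"
  bmc:
    address: ipmi://192.168.1.21
    credentialsName: bm-worker-01-bmc
EOF
```

It is the same information entered on the add-host form in the UI; the declarative form just makes it easier to script registration of a whole rack of servers.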
You can see the process is identical to that of building a normal child cluster, and our deployment is complete. >>All right, so there we have it, deploying a cluster to bare metal. Much the same as how we did it for AWS. I guess maybe the biggest difference stepwise is that registration phase first, right? So rather than just using AWS credentials to magically create VMs in the cloud, you've got to point all your bare metal servers at Docker Enterprise Container Cloud, and they really come in, I guess, three profiles, right? You've got your manager profile, a worker profile, and a storage profile, each labeled so it can be allocated across the cluster as appropriate, >>right? And I think that, you know, the key differentiator here is that you have more physical control over the attributes (I love your cat, by the way), where you have the different attributes of a physical server. So you can ensure that the SSD configuration on the storage nodes is going to be taken advantage of in the best way, and the GPUs on the worker nodes, and that the management layer is going to have sufficient horsepower to spin up, to scale up the environments as required. One of the things I wanted to mention, though, if I can get this out without choking, is that he mentioned the load balancer, and I wanted to make sure, in defining the load balancer and the load balancer ranges, that that is for the top of the cluster itself. That's the operations of the management layer integrating with your systems internally, so you can access the kubeconfigs and the IP addresses in a centralized way. It's not the load balancer that's working within the Kubernetes cluster that you are deploying. That's still kube-proxy or a service mesh, or however you're intending to do it. So it's kind of an interesting initial step in building this, and we typically use things like MetalLB or NGINX or that kind of thing to establish that before we deploy this bare metal cluster, so that it can ride on top of that for the VIPs and things. >>Very cool. So any other thoughts on what we've seen so far today, Bruce? We've gone through all the different layers of Docker Enterprise Container Cloud in these videos, from our management and regional layers to our clusters on AWS and bare metal, and of course OpenStack is still available. Closing thoughts before we take just a very short break and run through these demos again? >>You know, it's been very exciting doing the presentation with you. I'm really looking forward to doing it the second time, because we've got a good rhythm going about this kind of thing, so I'm looking forward to that. But I think the key element of what we're trying to convey to the folks out there in the audience, and that I hope you've gotten out of it, is that this is an easy enough process that if you follow the step-by-steps, going through the documentation that's been put out in the chat, you'll be able to give this a go yourself. And you don't have to limit yourself to having physical hardware on-prem to try it. You can do it in AWS, as we've shown you today. And if you've got some fancy use cases, like you need Hadoop and, you know, cloud-oriented AI stuff, then providing a bare metal service helps you get there very fast. So, right, thank you. It's been a pleasure. >>Yeah, thanks everyone for coming out.
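For the load balancer point Bruce raises above, the externally reachable endpoint for the management layer as distinct from in-cluster kube-proxy or a service mesh, a minimal MetalLB layer-2 setup on bare metal might look like the following; the address range is made up, and recent MetalLB releases use these custom resources while older ones use a ConfigMap instead.

```
# Hedged sketch: give MetalLB a small pool of addresses on the host network
# to hand out to LoadBalancer-type services (range is an example only).
cat <<'EOF' | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: mgmt-endpoints
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: mgmt-endpoints-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - mgmt-endpoints
EOF
```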
So, like I said we're going to take a very short, like, three minute break here. Uh, take the opportunity to let your colleagues know if they were in another session or they didn't quite make it to the beginning of this session. Or if you just want to see these demos again, we're going to kick off this demo. Siri's again in just three minutes at ten. Twenty five a. M. Pacific time where we will see all this great stuff again. Let's take a three minute break. I'll see you all back here in just two minutes now, you know. Okay, folks, that's the end of our extremely short break. We'll give people just maybe, like one more minute to trickle in if folks are interested in coming on in and jumping into our demo. Siri's again. Eso For those of you that are just joining us now I'm Bill Mills. I head up curriculum development for the training team here. Moran Tous on Joining me for this session of demos is Bruce. Don't you go ahead and introduce yourself doors, who is still on break? That's cool. We'll give Bruce a minute or two to get back while everyone else trickles back in. There he is. Hello, Bruce. >>How'd that go for you? Okay, >>Very well. So let's kick off our second session here. I e just interest will feel for you. Thio. Let it run over here. >>Alright. Hi. Bruce Matthews here. I'm the Western Regional Solutions architect for Marantz. Use A I'm the one with the gray hair and the glasses. Uh, the handsome one is Bill. So, uh, Bill, take it away. >>Excellent. So over the next hour or so, we've got a Siris of demos that's gonna walk you through your first steps with Dr Enterprise Container Cloud Doctor Enterprise Container Cloud is, of course, Miranda's brand new offering from bootstrapping kubernetes clusters in AWS bare metal open stack. And for the providers in the very near future. So we we've got, you know, just just over an hour left together on this session, uh, if you joined us at the top of the hour back at nine. A. M. Pacific, we went through these demos once already. Let's do them again for everyone else that was only able to jump in right now. Let's go. Our first video where we're gonna install Dr Enterprise container cloud for the very first time and use it to bootstrap management. Cluster Management Cluster, as I like to describe it, is our mother ship that's going to spin up all the other kubernetes clusters, Doctor Enterprise clusters that we're gonna run our workloads on. So I'm gonna do >>I'm so excited. I can hardly wait. >>Let's do it all right to share my video out here. Yeah, let's do it. >>Good day. The focus for this demo will be the initial bootstrap of the management cluster on the first regional clusters. To support AWS deployments, the management cluster provides the core functionality, including identity management, authentication, infantry release version. The regional cluster provides the specific architecture provided in this case AWS and the Elsom components on the UCP cluster Child cluster is the cluster or clusters being deployed and managed. The deployment is broken up into five phases. The first phase is preparing a bootstrap note on its dependencies on handling the download of the bridge struck tools. The second phase is obtaining America's license file. Third phase. Prepare the AWS credentials instead of the ideas environment, the fourth configuring the deployment, defining things like the machine types on the fifth phase, Run the bootstrap script and wait for the deployment to complete. Okay, so here we're sitting up the strap node. 
Just checking that it's clean and clear and ready to go there. No credentials already set up on that particular note. Now, we're just checking through aws to make sure that the account we want to use we have the correct credentials on the correct roles set up on validating that there are no instances currently set up in easy to instance, not completely necessary, but just helps keep things clean and tidy when I am perspective. Right. So next step, we're just gonna check that we can from the bootstrap note, reach more antis, get to the repositories where the various components of the system are available. They're good. No areas here. Yeah, right now we're going to start sitting at the bootstrap note itself. So we're downloading the cars release, get get cars, script, and then next we're going to run it. Yeah, I've been deployed changing into that big struck folder, just making see what's there right now we have no license file, so we're gonna get the license filed. Okay? Get the license file through more antis downloads site signing up here, downloading that license file and putting it into the Carisbrook struck folder. Okay, since we've done that, we can now go ahead with the rest of the deployment. Yeah, see what the follow is there? Uh huh. Once again, checking that we can now reach E C two, which is extremely important for the deployment. Just validation steps as we move through the process. Alright. Next big step is violating all of our AWS credentials. So the first thing is, we need those route credentials which we're going to export on the command line. This is to create the necessary bootstrap user on AWS credentials for the completion off the deployment we're now running in AWS policy create. So it is part of that is creating our food trucks script. Creating this through policy files onto the AWS, just generally preparing the environment using a cloud formation script, you'll see in a second, I'll give a new policy confirmations just waiting for it to complete. And there is done. It's gonna have a look at the AWS console. You can see that we're creative completed. Now we can go and get the credentials that we created. Good day. I am console. Go to the new user that's being created. We'll go to the section on security credentials and creating new keys. Download that information media access Key I. D and the secret access key, but usually then exported on the command line. Okay, Couple of things to Notre. Ensure that you're using the correct AWS region on ensure that in the conflict file you put the correct Am I in for that region? I'm sure you have it together in a second. Okay, thanks. Is key. So you could X key Right on. Let's kick it off. So this process takes between thirty and forty five minutes. Handles all the AWS dependencies for you. Um, as we go through, the process will show you how you can track it. Andi will start to see things like the running instances being created on the AWS side. The first phase off this whole process happening in the background is the creation of a local kind based bootstrapped cluster on the bootstrap node that clusters then used to deploy and manage all the various instances and configurations within AWS at the end of the process. That cluster is copied into the new cluster on AWS and then shut down that local cluster essentially moving itself over. Yeah, okay. Local clusters boat. Just waiting for the various objects to get ready. Standard communities objects here. Yeah, you mentioned Yeah. 
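The AWS-side validation mentioned at the start of that walkthrough (right credentials, right account, no stray EC2 instances, then exporting the bootstrap user's keys) maps onto a few standard CLI calls; the region and key values below are placeholders.

```
# Confirm which AWS identity the bootstrap node is currently using.
aws sts get-caller-identity

# Optional tidiness check from the demo: list any EC2 instances already
# running in the target region (expect none on a clean account).
aws ec2 describe-instances --region us-west-2 \
  --query 'Reservations[].Instances[].InstanceId' --output text

# Once the bootstrap user's access keys have been downloaded, export them so
# the bootstrap tooling can pick them up (values are placeholders).
export AWS_ACCESS_KEY_ID=AKIAxxxxxxxxxxxxxxxx
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```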
So we've speed up this process a little bit just for demonstration purposes. Okay, there we go. So first note is being built the bastion host just jump box that will allow us access to the entire environment. Yeah, In a few seconds, we'll see those instances here in the US console on the right. Um, the failures that you're seeing around failed to get the I. P for Bastian is just the weight state while we wait for AWS to create the instance. Okay. Yeah. Beauty there. Movies. Okay, sketch. Hello? Yeah, Okay. Okay. On. There we go. Question host has been built on three instances for the management clusters have now been created. Okay, We're going through the process of preparing. Those nodes were now copying everything over. See that scaling up of controllers in the big strapped cluster? It's indicating that we're starting all of the controllers in the new question. Almost there. Right? Okay. Just waiting for key. Clark. Uh huh. So finish up. Yeah. No. Now we're shutting down. Control this on the local bootstrap node on preparing our I. D. C configuration, fourth indication. So once this is completed, the last phase will be to deploy stack light into the new cluster, that glass on monitoring tool set, Then we go stack like deployment has started. Mhm. Coming to the end of the deployment mountain. Yeah, they were cut final phase of the deployment. And we are done. Yeah, you'll see. At the end, they're providing us the details of you. I log in. So there's a key Clark log in. Uh, you can modify that initial default possible is part of the configuration set up where they were in the documentation way. Go Councils up way can log in. Yeah. Yeah. Thank you very much for watching. >>All right, so at this point, what we have we got our management cluster spun up, ready to start creating work clusters. So just a couple of points to clarify there to make sure everyone caught that, uh, as advertised. That's darker. Enterprise container cloud management cluster. That's not rework loans. are gonna go right? That is the tool and you're gonna use to start spinning up downstream commodity documentary prize clusters for bootstrapping record too. >>And the seed host that were, uh, talking about the kind cluster dingy actually doesn't have to exist after the bootstrap succeeds eso It's sort of like, uh, copies head from the seed host Toothy targets in AWS spins it up it then boots the the actual clusters and then it goes away too, because it's no longer necessary >>so that bootstrapping know that there's not really any requirements, Hardly on that, right. It just has to be able to reach aws hit that Hit that a p I to spin up those easy to instances because, as you just said, it's just a kubernetes in docker cluster on that piece. Drop note is just gonna get torn down after the set up finishes on. You no longer need that. Everything you're gonna do, you're gonna drive from the single pane of glass provided to you by your management cluster Doctor enterprise Continue cloud. Another thing that I think is sort of interesting their eyes that the convict is fairly minimal. Really? You just need to provide it like aws regions. Um, am I? And that's what is going to spin up that spending that matter faster. >>Right? There is a mammal file in the bootstrap directory itself, and all of the necessary parameters that you would fill in have default set. But you have the option then of going in and defining a different Am I different for a different region, for example? Oh, are different. Size of instance from AWS. 
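To make that last point concrete, the kinds of values that templated YAML file carries, with defaults you can override per deployment, would look something like the fragment below. The field names here are illustrative only and are not the exact Container Cloud schema.

```
# Illustrative only: the sorts of per-provider defaults the bootstrap
# template exposes (region, AMI, instance size, disk). Field names are
# approximations, not the literal configuration schema.
cat > machines-aws.yaml.example <<'EOF'
providerSpec:
  region: us-west-2            # override to deploy somewhere else
  ami: ami-0123456789abcdef0   # must be an Ubuntu AMI published in that region
  instanceType: m5.xlarge      # size of the manager instances
  rootDeviceSize: 80           # GiB
EOF
```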
>>One thing that people often ask about is the cluster footprint. And in that example you saw, they were spinning up a three-manager management cluster as mandatory, right? No single-manager setup at all. We want high availability for Docker Enterprise Container Cloud management. So again, just to make sure everyone is on board with the lifecycle stage that we're at right now: that's the very first thing you're going to do to set up Docker Enterprise Container Cloud, and you're going to do it, hopefully, exactly once. Now you've got your management cluster running, and you're gonna use that to spin up all your other work clusters day to day, as needed. How about we have a quick look at the questions, and then let's take a look at spinning up some of those child clusters. >>Okay, I think they've actually been answered. >>Yeah, for the most part. One thing I'll point out that came up again, which Dale helpfully pointed out earlier and has pointed out again, is that if you want to try any of this stuff yourself, it's all in the docs. So have a look at the chat. There are links to step-by-step instructions for doing each and every thing we're doing here today yourself. I really encourage you to do that. Taking this out for a drive on your own really helps internalize and communicate these ideas. After launchpad today, please give this stuff a try on your own machines. Okay, so at this point, like I said, we've got our management cluster. We're not gonna run workloads there; we're going to start creating child clusters. That's where all of our workloads are gonna go. That's what we're gonna learn how to do in our next video. Cue that up for us. >>I so love Shawn's voice. >>Wasn't it, though? >>Yeah, I could watch him read the phone book. >>All right, here we go. Now that we have our management cluster set up, let's create a first child work cluster. >>Hello. In this demo, we will cover the deployment experience of creating a new child cluster, the scaling of the cluster, and how to update the cluster when a new version is available. We begin the process by logging onto the UI as a normal user called Mary. Let's go through the navigation of the UI. You can switch projects; Mary only has access to Development. You get a list of the available projects that you have access to, what clusters have been deployed at the moment, the SSH keys associated with Mary and her team, the cloud credentials that allow you to create or access the various clouds that you can deploy clusters to, and finally the different releases that are available to us. We can switch from dark mode to light mode, depending on your preferences. Right. Let's now set up some SSH keys for Mary so she can access the nodes and machines. Again, very simply, we add an SSH key, give it a name, and we copy and paste our public key into the upload key block, or we can upload the key if we have the file available on our machine. A very simple process. So to create a new cluster, we define the cluster, add manager nodes, and add worker nodes to the cluster. Again, very simply, we go to the Clusters tab, hit the Create Cluster button, give the cluster a name, and select the provider. We only have access to AWS in this particular deployment, so we'll stick to AWS. We select the region, in this case US West 1. Release version five point seven is the current release, and we attach Mary's key as the necessary SSH key.
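For the SSH key step Shawn just showed, generating a key pair to paste into that upload box is a one-liner; the file path and comment below are only examples.

```
# Generate a key pair for Mary; the public half is what gets pasted into
# the "upload key" field in the Container Cloud UI.
ssh-keygen -t ed25519 -C "mary@example.com" -f ~/.ssh/mary-container-cloud
cat ~/.ssh/mary-container-cloud.pub
```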
We can then check the rest of the settings, confirming the provider any kubernetes c r D a r i p address information. We can change this. Should we wish to? We'll leave it default for now and then what components of stack light? I would like to deploy into my custom for this. I'm enabling stack light on logging, and I consider the retention sizes attention times on. Even at this stage, add any custom alerts for the watchdogs. Consider email alerting which I will need my smart host. Details and authentication details. Andi Slack Alerts. Now I'm defining the cluster. All that's happened is the cluster's been defined. I now need to add machines to that cluster. I'll begin by clicking the create machine button within the cluster definition. Oh, select manager, Select the number of machines. Three is the minimum. Select the instant size that I'd like to use from AWS and very importantly, ensure correct. Use the correct Am I for the region. I convinced side on the route. Device size. There we go. My three machines are busy creating. I now need to add some workers to this cluster. So I go through the same process this time once again, just selecting worker. I'll just add to once again the am I is extremely important. Will fail if we don't pick the right. Am I for a Clinton machine? In this case and the deployment has started, we can go and check on the bold status are going back to the clusters screen on clicking on the little three dots on the right. We get the cluster info and the events, so the basic cluster info you'll see pending their listen. Cluster is still in the process of being built. We kick on, the events will get a list of actions that have been completed This part of the set up of the cluster. So you can see here. We've created the VPC. We've created the sub nets on. We've created the Internet Gateway. It's unnecessary made of us. And we have no warnings of the stage. Okay, this will then run for a while. We have one minute past. We can click through. We can check the status of the machine balls as individuals so we can check the machine info, details of the machines that we've assigned mhm and see any events pertaining to the machine areas like this one on normal. Yeah. Just last. The community's components are waiting for the machines to start. Go back to customers. Okay, right. Because we're moving ahead now. We can see we have it in progress. Five minutes in new Matt Gateway. And at this stage, the machines have been built on assigned. I pick up the U S. Yeah, yeah, yeah. There we go. Machine has been created. See the event detail and the AWS. I'd for that machine. No speeding things up a little bit this whole process and to end takes about fifteen minutes. Run the clock forward, you'll notice is the machines continue to bold the in progress. We'll go from in progress to ready. A soon as we got ready on all three machines, the managers on both workers way could go on and we could see that now we reached the point where the cluster itself is being configured mhm and then we go. Cluster has been deployed. So once the classes deployed, we can now never get around. Our environment are looking into configure cluster. We could modify their cluster. We could get the end points for alert Alert Manager See here the griffon occupying and Prometheus are still building in the background but the cluster is available on You would be able to put workloads on it at this stage to download the cube conflict so that I can put workloads on it. 
It's again the three little dots in the right for that particular cluster. If the download cube conflict give it my password, I now have the Q conflict file necessary so that I can access that cluster. All right, Now that the build is fully completed, we can check out cluster info on. We can see that all the satellite components have been built. All the storage is there, and we have access to the CPU. I. So if we click into the cluster, we can access the UCP dashboard, click the signing with the clock button to use the SSO. We give Mary's possible to use the name once again. Thing is an unlicensed cluster way could license at this point. Or just skip it on. Do we have the UCP dashboard? You could see that has been up for a little while. We have some data on the dashboard going back to the console. We can now go to the griffon. A data just been automatically pre configured for us. We can switch and utilized a number of different dashboards that have already been instrumented within the cluster. So, for example, communities cluster information, the name spaces, deployments, nodes. Um, so we look at nodes. If we could get a view of the resource is utilization of Mrs Custer is very little running in it. Yeah, a general dashboard of Cuba Navies cluster. What If this is configurable, you can modify these for your own needs, or add your own dashboards on de scoped to the cluster. So it is available to all users who have access to this specific cluster. All right to scale the cluster on to add a No. This is simple. Is the process of adding a mode to the cluster, assuming we've done that in the first place. So we go to the cluster, go into the details for the cluster we select, create machine. Once again, we need to be ensure that we put the correct am I in and any other functions we like. You can create different sized machines so it could be a larger node. Could be bigger group disks and you'll see that worker has been added in the provisioning state. On shortly, we will see the detail off that worker as a complete to remove a note from a cluster. Once again, we're going to the cluster. We select the node we would like to remove. Okay, I just hit delete On that note. Worker nodes will be removed from the cluster using according and drawing method to ensure that your workloads are not affected. Updating a cluster. When an update is available in the menu for that particular cluster, the update button will become available. And it's a simple as clicking the button validating which release you would like to update to this case. This available releases five point seven point one give you I'm kicking the update back in the background. We will coordinate. Drain each node slowly, go through the process of updating it. Andi update will complete depending on what the update is as quickly as possible. Who we go. The notes being rebuilt in this case impacted the manager node. So one of the manager nodes is in the process of being rebuilt. In fact, to in this case, one has completed already. Yeah, and in a few minutes, we'll see that the upgrade has been completed. There we go. Great. Done. If you work loads of both using proper cloud native community standards, there will be no impact. >>All right, there. We haven't. We got our first workload cluster spun up and managed by Dr Enterprise Container Cloud. So I I loved Shawn's classic warning there. When you're spinning up an actual doctor enterprise deployment, you see little errors and warnings popping up. Just don't touch it. 
Just leave it alone and let Docker Enterprise's self-healing properties take care of it; all those very transient, temporary glitches resolve themselves and leave you with a functioning workload cluster within minutes. >>And now, if you think about it, that video was not very long at all. And that's how long it would take you if someone came to you and said, hey, can you spin up a Kubernetes cluster for development, for dev team A over here? It literally would take you a few minutes to accomplish that. And that was with AWS, obviously, which is sort of a transient resource in the cloud. But you could do exactly the same thing with resources on-prem, or physical resources, and we will be going through that later in the process. >>Yeah, absolutely. One thing that is present in that demo, but that I'd like to highlight a little bit more because it just kind of glides by, is this notion of a cluster release. So when Sean was creating that cluster, and also when he was upgrading that cluster, he had to choose a release. What the video didn't really explain is: what does that mean? Well, in Docker Enterprise Container Cloud, we have release numbers that capture the entire stack of containerization tools that we'll be deploying to that workload cluster. So that's your version of Kubernetes, etcd, CoreDNS, Calico, Docker Engine, all the different bits and pieces that not only work independently but are validated to work together as a stack appropriate for production, Kubernetes-adopting enterprise environments.
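A quick way to see what a given cluster release actually put on a workload cluster is to ask the cluster itself; these are standard kubectl commands run against the kubeconfig downloaded from the UI.

```
# Control plane and client versions.
kubectl version

# Per-node versions: kubelet, container runtime, OS image.
kubectl get nodes -o wide

# Release components such as CoreDNS and Calico show up in kube-system.
kubectl -n kube-system get pods -o wide
```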
People always wanted to be ableto zoom in a little bit on that, uh, on those cluster metrics, you're gonna provides them out of the box for us. Yeah, >>that was Ah, really, uh, you know, the joining of the Miranda's and Dr teams together actually spawned us to be able to take the best of what Morantes had in the open stack environment for monitoring and logging and alerting and to do that integration in in a very short period of time so that now we've got it straight across the board for both the kubernetes world and the open stack world. Using the same tool sets >>warm. One other thing I wanna point out about that demo that I think there was some questions about our last go around was that demo was all about creating a managed workplace cluster. So the doctor enterprise Container Cloud managers were using those aws credentials provisioned it toe actually create new e c two instances installed Docker engine stalled. Doctor Enterprise. Remember all that stuff on top of those fresh new VM created and managed by Dr Enterprise contain the cloud. Nothing unique about that. AWS deployments do that on open staff doing on Parramatta stuff as well. Um, there's another flavor here, though in a way to do this for all of our long time doctor Enterprise customers that have been running Doctor Enterprise for years and years. Now, if you got existing UCP points existing doctor enterprise deployments, you plug those in to Dr Enterprise Container Cloud, uh, and use darker enterprise between the cloud to manage those pre existing Oh, working clusters. You don't always have to be strapping straight from Dr Enterprises. Plug in external clusters is bad. >>Yep, the the Cube config elements of the UCP environment. The bundling capability actually gives us a very straightforward methodology. And there's instructions on our website for exactly how thio, uh, bring in import and you see p cluster. Um so it it makes very convenient for our existing customers to take advantage of this new release. >>Absolutely cool. More thoughts on this wonders if we jump onto the next video. >>I think we should move press on >>time marches on here. So let's Let's carry on. So just to recap where we are right now, first video, we create a management cluster. That's what we're gonna use to create All our downstream were closed clusters, which is what we did in this video. Let's maybe the simplest architectures, because that's doing everything in one region on AWS pretty common use case because we want to be able to spin up workload clusters across many regions. And so to do that, we're gonna add a third layer in between the management and work cluster layers. That's gonna be our regional cluster managers. So this is gonna be, uh, our regional management cluster that exists per region that we're going to manage those regional managers will be than the ones responsible for spending part clusters across all these different regions. Let's see it in action in our next video. >>Hello. In this demo, we will cover the deployment of additional regional management. Cluster will include a brief architectural overview, how to set up the management environment, prepare for the deployment deployment overview, and then just to prove it, to play a regional child cluster. So looking at the overall architecture, the management cluster provides all the core functionality, including identity management, authentication, inventory and release version. 
ING Regional Cluster provides the specific architecture provider in this case, AWS on the L C M components on the d you speak cluster for child cluster is the cluster or clusters being deployed and managed? Okay, so why do you need original cluster? Different platform architectures, for example AWS open stack, even bare metal to simplify connectivity across multiple regions handle complexities like VPNs or one way connectivity through firewalls, but also help clarify availability zones. Yeah. Here we have a view of the regional cluster and how it connects to the management cluster on their components, including items like the LCN cluster Manager. We also machine manager. We're hell Mandel are managed as well as the actual provider logic. Okay, we'll begin by logging on Is the default administrative user writer. Okay, once we're in there, we'll have a look at the available clusters making sure we switch to the default project which contains the administration clusters. Here we can see the cars management cluster, which is the master controller. When you see it only has three nodes, three managers, no workers. Okay, if we look at another regional cluster, similar to what we're going to deploy now. Also only has three managers once again, no workers. But as a comparison is a child cluster. This one has three managers, but also has additional workers associate it to the cluster. Yeah, all right, we need to connect. Tell bootstrap note, preferably the same note that used to create the original management plaster. It's just on AWS, but I still want to machine Mhm. All right, A few things we have to do to make sure the environment is ready. First thing we're gonna pseudo into route. I mean, we'll go into our releases folder where we have the car's boot strap on. This was the original bootstrap used to build the original management cluster. We're going to double check to make sure our cube con figures there It's again. The one created after the original customers created just double check. That cute conflict is the correct one. Does point to the management cluster. We're just checking to make sure that we can reach the images that everything's working, condone, load our images waken access to a swell. Yeah, Next, we're gonna edit the machine definitions what we're doing here is ensuring that for this cluster we have the right machine definitions, including items like the am I So that's found under the templates AWS directory. We don't need to edit anything else here, but we could change items like the size of the machines attempts we want to use but the key items to ensure where changed the am I reference for the junta image is the one for the region in this case aws region of re utilizing. This was an open stack deployment. We have to make sure we're pointing in the correct open stack images. Yeah, yeah. Okay. Sit the correct Am I save the file? Yeah. We need to get up credentials again. When we originally created the bootstrap cluster, we got credentials made of the U. S. If we hadn't done this, we would need to go through the u A. W s set up. So we just exporting AWS access key and I d. What's important is Kaz aws enabled equals. True. Now we're sitting the region for the new regional cluster. In this case, it's Frankfurt on exporting our Q conflict that we want to use for the management cluster when we looked at earlier. Yeah, now we're exporting that. Want to call? The cluster region is Frankfurt's Socrates Frankfurt yet trying to use something descriptive? It's easy to identify. 
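Pulled together, the environment being set up in this part of the demo looks roughly like the following. The exact variable names are not spelled out on screen, so treat these as approximations of what the bootstrap script expects, with placeholder values throughout.

```
# Credentials for the bootstrap user created earlier (values are placeholders).
export AWS_ACCESS_KEY_ID=AKIAxxxxxxxxxxxxxxxx
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# Enable the AWS provider and pick the region for the new regional cluster.
# Variable names approximate what the demo shows on screen.
export KAAS_AWS_ENABLED=true
export AWS_REGION=eu-central-1          # Frankfurt

# Point at the existing management cluster and give the regional cluster a
# descriptive name.
export KUBECONFIG=./kubeconfig
export REGIONAL_CLUSTER_NAME=kaas-frankfurt
```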
Yeah, and then after this, we'll just run the bootstrap script, which will complete the deployment for us. Bootstrap of the regional cluster is quite a bit quicker than the initial management clusters. There are fewer components to be deployed, but to make it watchable, we've spent it up. So we're preparing our bootstrap cluster on the local bootstrap node. Almost ready on. We started preparing the instances at us and waiting for the past, you know, to get started. Please the best your node, onda. We're also starting to build the actual management machines they're now provisioning on. We've reached the point where they're actually starting to deploy Dr Enterprise, he says. Probably the longest face we'll see in a second that all the nodes will go from the player deployed. Prepare, prepare Mhm. We'll see. Their status changes updates. It was the first word ready. Second, just applying second. Grady, both my time away from home control that's become ready. Removing cluster the management cluster from the bootstrap instance into the new cluster running a data for us? Yeah, almost a on. Now we're playing Stockland. Thanks. Whichever is done on Done. Now we'll build a child cluster in the new region very, very quickly. Find the cluster will pick our new credential have shown up. We'll just call it Frankfurt for simplicity. A key on customers to find. That's the machine. That cluster stop with three manages set the correct Am I for the region? Yeah, Same to add workers. There we go. That's the building. Yeah. Total bill of time. Should be about fifteen minutes. Concedes in progress. Can we expect this up a little bit? Check the events. We've created all the dependencies, machine instances, machines. A boat? Yeah. Shortly. We should have a working caster in the Frankfurt region. Now almost a one note is ready from management. Two in progress. On we're done. Trust us up and running. >>Excellent. There we have it. We've got our three layered doctor enterprise container cloud structure in place now with our management cluster in which we scrap everything else. Our regional clusters which manage individual aws regions and child clusters sitting over depends. >>Yeah, you can. You know you can actually see in the hierarchy the advantages that that presents for folks who have multiple locations where they'd like a geographic locations where they'd like to distribute their clusters so that you can access them or readily co resident with your development teams. Um and, uh, one of the other things I think that's really unique about it is that we provide that same operational support system capability throughout. So you've got stack light monitoring the stack light that's monitoring the stack light down to the actual child clusters that they have >>all through that single pane of glass that shows you all your different clusters, whether their workload cluster like what the child clusters or usual clusters from managing different regions. Cool. Alright, well, time marches on your folks. We've only got a few minutes left and I got one more video in our last video for the session. We're gonna walk through standing up a child cluster on bare metal. So so far, everything we've seen so far has been aws focus. Just because it's kind of easy to make that was on AWS. We don't want to leave you with the impression that that's all we do, we're covering AWS bare metal and open step deployments as well documented Craftsman Cloud. Let's see it in action with a bare metal child cluster. 
>>We are on the home stretch, >>right. >>Hello. This demo will cover the process of defining bare metal hosts and then review the steps of defining and deploying a bare metal based doctor enterprise cluster. Yeah, so why bare metal? Firstly, it eliminates hyper visor overhead with performance boost of up to thirty percent provides direct access to GP use, prioritize for high performance wear clothes like machine learning and AI, and support high performance workouts like network functions, virtualization. It also provides a focus on on Prem workloads, simplifying and ensuring we don't need to create the complexity of adding another hyper visor layer in between. So continuing on the theme Why communities and bare metal again Hyper visor overhead. Well, no virtualization overhead. Direct access to hardware items like F p g A s G p, us. We can be much more specific about resource is required on the nodes. No need to cater for additional overhead. We can handle utilization in the scheduling better Onda. We increase the performance and simplicity of the entire environment as we don't need another virtualization layer. Yeah, In this section will define the BM hosts will create a new project. Will add the bare metal hosts, including the host name. I put my credentials. I pay my address, Mac address on, then provide a machine type label to determine what type of machine it is. Related use. Okay, let's get started Certain Blufgan was the operator thing. We'll go and we'll create a project for our machines to be a member off. Helps with scoping for later on for security. I begin the process of adding machines to that project. Yeah. Yeah. So the first thing we had to be in post many of the machine a name. Anything you want? Yeah, in this case by mental zero one. Provide the IAP My user name. Type my password? Yeah. On the Mac address for the active, my interface with boot interface and then the i p m i P address. Yeah, these machines. We have the time storage worker manager. He's a manager. We're gonna add a number of other machines on will speed this up just so you could see what the process. Looks like in the future, better discovery will be added to the product. Okay, Okay. Getting back there. We haven't Are Six machines have been added. Are busy being inspected, being added to the system. Let's have a look at the details of a single note. Mhm. We can see information on the set up of the node. Its capabilities? Yeah. As well as the inventory information about that particular machine. Okay, it's going to create the cluster. Mhm. Okay, so we're going to deploy a bare metal child cluster. The process we're going to go through is pretty much the same as any other child cluster. So credit custom. We'll give it a name. Thank you. But he thought were selecting bare metal on the region. We're going to select the version we want to apply on. We're going to add this search keys. If we hope we're going to give the load. Balancer host I p that we'd like to use out of the dress range update the address range that we want to use for the cluster. Check that the sea idea blocks for the communities and tunnels are what we want them to be. Enable disabled stack light and said the stack light settings to find the cluster. And then, as for any other machine, we need to add machines to the cluster. Here we're focused on building communities clusters. So we're gonna put the count of machines. You want managers? We're gonna pick the label type manager on create three machines. Is a manager for the Cuban a disgusting? 
Then we add workers the same way. It's the same process, just making sure that the worker label and host type are correct, and then we wait for the machines to deploy. We go through the process of putting the operating system on the nodes, validating that operating system, deploying Docker Enterprise, and making sure that the cluster is up and running and ready to go. Okay, let's review the build events. We can see the machine info now populated with more information about the specifics of things like storage and, of course, details of the cluster, etcetera. Now watch the machines go through the various stages from prepared to deployed, and then the cluster builds, and that brings us to the end of this particular demo. As you can see, the process is identical to that of building a normal child cluster, and our deployment is complete. >>Here we have a child cluster on bare metal, for folks that wanted to play with this stuff on-prem. >>It's been an interesting journey, taken from the mothership, as we started out building a management cluster, then populating it with a child cluster, then creating a regional cluster to spread the management of our clusters geographically, and finally providing a platform for supporting, you know, AI needs and big data needs. You know, thank goodness we're now able to put things like Hadoop on bare metal, in containers. It's pretty exciting. >>Yeah, absolutely. So with this Docker Enterprise Container Cloud platform, hopefully this commoditizes Kubernetes clusters, Docker Enterprise clusters that can be spun up and used quickly, taking provisioning times, you know, from however many months to get new clusters spun up for our teams, down to minutes, right? We saw those clusters get spun up in just a couple of minutes. Excellent. All right, well, thank you, everyone, for joining us for our demo session for Docker Enterprise Container Cloud. Of course, there are many, many more things to discuss about this and all of Mirantis's products. If you'd like to learn more, if you'd like to get your hands dirty with all of this content, please see us at training.mirantis.com, where we can offer you workshops in a number of different formats on our entire line of products, in a hands-on, interactive fashion. Thanks, everyone. Enjoy the rest of the launchpad event. >>Thank you all, enjoy.

Published Date : Sep 17 2020

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Mary | PERSON | 0.99+
Sean | PERSON | 0.99+
Sean O'Mara | PERSON | 0.99+
Bruce | PERSON | 0.99+
Frankfurt | LOCATION | 0.99+
three machines | QUANTITY | 0.99+
Bill Milks | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
first video | QUANTITY | 0.99+
second phase | QUANTITY | 0.99+
Shawn | PERSON | 0.99+
first phase | QUANTITY | 0.99+
Three | QUANTITY | 0.99+
Two minutes | QUANTITY | 0.99+
three managers | QUANTITY | 0.99+
fifth phase | QUANTITY | 0.99+
Clark | PERSON | 0.99+
Bill Mills | PERSON | 0.99+
Dale | PERSON | 0.99+
Five minutes | QUANTITY | 0.99+
Nan | PERSON | 0.99+
second session | QUANTITY | 0.99+
Third phase | QUANTITY | 0.99+
Seymour | PERSON | 0.99+
Bruce Basil Matthews | PERSON | 0.99+
Moran Tous | PERSON | 0.99+
five minutes | QUANTITY | 0.99+
hundreds | QUANTITY | 0.99+

Corey Quinn, The Duckbill Group | Cloud Native Insights


 

>>from the Cube Studios in Palo Alto and Boston, connecting with thought leaders around the globe. These are Cloud Native Insights. Hi, I'm Stu Miniman, the host of Cloud Native Insights. And the thread that we've been pulling on with Cloud Native is that we need to be able to take advantage of the innovation and agility that cloud and the ecosystem around it can bring, not just the location. It's not just the journey, but how do I take advantage of something today and keep being able to move forward? Happy to welcome back to the program one of our regulars and someone that I've had lots of discussions with about cloud, cloud native, serverless: Corey Quinn, the Chief Cloud Economist at the Duckbill Group. Corey, always good to see you. Thanks for joining us. >>It is great to see me. And I always love having the opportunity to share my terrible opinions with people who then find themselves tarred by the mere association. And you're certainly no exception, Stu. Thanks for having me back. Although I question your judgment. >>Yeah, you know, what was that Pandora's box I opened when I said, "Hey, Corey, let's try you on video"? And if people go out, they can look at your feed and see you've spent lots of money on equipment. You have a nice looking setup. I guess you missed that one window of opportunity to get your hair cut in San Francisco during the pandemic. But be that as it may, Corey, why don't you give our audience the update? You went from a solo act: first you added a partner and a few other people, and now you've got cloud economists. >>Yes, it comes down to separating out what I'm doing with my nonsense from other people, whose careers might very well be impacted by an ill-considered tweet of mine. When you start having other cloud economists, you realize, okay, this is no longer just me we're talking about here. It forces a few changes. I was told one day that I would now be the chief cloud economist. I smiled, shrugged, and put a backlog item in to order new business cards, because it's not like we're going to a lot of events these days, and from my perspective things continue mostly apace. Having more people now means that there are things my company does that I'm no longer directly involved with, which is a relief, but an adjustment. It's been an interesting ride. It's always strange. The number one thing that people who start businesses say is that if they knew what they were getting into, they'd never do it again. I'm starting to understand that. >>Yeah, well, Corey, as I mentioned, you and I have had lots of discussions about cloud, about multi-cloud, serverless, like when you wrote an article talking about multi-cloud being a worst practice. One of the things underneath is, when I'm using cloud, I should really be able to leverage that cloud. One of the concerns, when you and I did a KubeCon and CloudNativeCon, is does multi-cloud become a least common denominator? And a comment that I heard you say was, if I'm just using cloud and the very basic services of it, you know, why don't I go to an AWS or an Azure, which have hundreds of services? Maybe I could just find something that is, you know, less expensive, because I'm basically thinking of it as my server somewhere else. Which, of course, cloud is much more than. So you do work with a lot of very large companies to help them with their bills.
What differentiates the companies that get advantage from the cloud versus those that just kind of use it as another location? >>Largely the stories that they tell themselves internally and how they wind up adapting to cloud. The reason I got into my whole feel about why multi-cloud is a worst practice is that if you view best practices as sensible defaults, I view multi-cloud as a ridiculous default. Sure, there are cases where it's important, so I'm not suggesting for a second that people who decide to go down that path are necessarily making wrong decisions. But when you're building something from scratch with this idea toward taking a single workload and deploying it anywhere, in almost every case it's the wrong decision. Yes, there are going to be some workloads that are better suited to other places. If we're including SaaS in the giant wrapper of the cloud definition, then sure, you would be nuts to wind up running on AWS and then decide you're also going to go with CodeCommit instead of GitHub. That's not something sensible people do; they use GitHub or GitLab. What I am suggesting is that the idea of building absolutely every piece of infrastructure in a way that avoids any of the differentiated offerings that your primary cloud provider has is just generally not great. Occasionally you need to, but that's not the common case, and people are believing that it is. >>Well, and I'd like to dig a little deeper. Some of those differentiated services out there, there are concerns, but some have said, you know, I think back to the past model: I want to build something and have it live anywhere. But those differentiated services are something that I should be able to get value out of. So do you have any examples, or are there certain services that you have as favorites, that you've seen customers use? And they say, wow, it's something that is effective, it's something that is affordable, and I can get great value out of this because I didn't have to build it. And all of these hyperscalers have lots of engineers building lots of cool things, and I want to take advantage of that innovation. >>Sure, that's most of them, if we're being perfectly honest. There are remarkably few services that have no valid use cases for no customer anywhere. A lot of these solve an awful lot of pain that customers have. DynamoDB is a good example, and it's one a lot of folks can relate to. It's super fast, it charges you for what you use, whether that's on-demand or provisioned capacity. Great. You don't have to worry about instances. You don't have to worry about scaling up or scaling down in the traditional sense. And that's great. The problem is: great, how do I migrate off of this onto something else? Well, that's a good question. And if that is something you need to at least have a theoretical exodus for, maybe DynamoDB is the wrong service for you to pick as your data store. Personally, if I have to build with a migration in mind on a NoSQL basis, I'll pick MongoDB every time, not because it's any easier to move, but because it's so good at losing data that there'll be remarkably little left to migrate. >>Yeah, Corey, of course, one of the things that you help customers with quite a bit is on the financial side of it. And one of the challenges, if I move from my environment to the public cloud, is how do I take advantage not only of the capability of the cloud but the finances of the cloud?
I've talked to many customers that when you modernize your pull things apart, maybe you start leveraging serverless capabilities. And if I tune things properly, I can have a much more affordable solution versus that. I just took my stuff and just shoved it all in the cloud kind of a traditional lift and shift. I might not have good economics. When I get to the cloud. What do you see along those lines? >>I'd say you're absolutely right with that assessment. If you are looking at hitting break even on your cloud migration in anything less than five years, it's probably wrong. The reason to go to Cloud is not to save money. There are edge cases where it makes sense, Sure, but by and large you're going to wind up spending longer in the in between state that you would believe eventually you're going to give up and call it hybrid game over. And at some point, if you stall long enough, you'll find that the cloud talent starts reaching out of your company. At which point that Okay, great. Now we're stuck in this scenario because no one wants to come in and finish the job is harder than we thought we landed. But it becomes this story of not being able to forecast what the economics are going to look like in advanced, largely because people don't understand where their workloads start and stop what the failure modes look like and how that's going to manifest itself in a cloud provider environment. That's why lift and shift is popular. People hate, lift and ship. It's a terrible direction to go in. Yeah, so are all the directions you can go in as far as migrating, short of burning it to the ground for insurance money and starting over, you've gotta have a way to get from where you are, where you're going. Otherwise, migration to be super simple. People with five weeks of experience and a certification consult that problem. It's but how do you take what's existing migrated end without causing massive outages or cost of fronts? It's harder than it looks. >>Well, okay, I remember Corey a few years ago when I talk to customers that were using AWS. Ah, common complaint was we had to dedicate an engineer just to look at the finances of what's happening. One of the early episodes I did of Cloud Native Insights talked to a company that was embracing this term called Been Ops. We have the finance team and the engineering team, not just looking back at the last quarter, but planning understanding what the engineering impacts were going forward so that the developers, while they don't need tohave all the spreadsheets and everything else, they understand what they architect and what the impact will be on the finance side. What are you hearing from your customers out there? What guidance do you give from an organizational standpoint as to how they make sure that their bill doesn't get ridiculous? >>Well, the term fin ops is a bit of a red herring in there because people immediately equate it back to cloud ability before their app. Geo acquisitions where the fin ops foundation vendors are not allowed to join except us, and it became effectively a marketing exercise that was incredibly poorly executed in sort of poisoned the well. Now the finance foundations been handed off to the Cloud Native Beauty Foundation slash Lennox Foundation. Maybe that's going to be rehabilitated, but we'll have to find out. One argument I made for a while was that developers do not need to know what the economic model in the cloud is going to be. As a general rule, I would stand by that. 
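To put rough numbers on the five-year break-even rule of thumb Corey mentions in this exchange, here is a minimal Python sketch. Every figure in it is an invented placeholder for illustration, not a number from this conversation; the point is only that modest run-rate savings against a large one-time migration cost push break-even out by years.

```python
# Illustrative only: these figures are made up for the sketch, not taken from the discussion.
migration_cost = 2_000_000      # one-time cost of the migration: people, tooling, parallel running
onprem_monthly = 400_000        # current monthly run rate on premises
cloud_monthly = 360_000         # projected steady-state monthly cloud bill
monthly_savings = onprem_monthly - cloud_monthly

if monthly_savings <= 0:
    print("No run-rate savings; the business case has to rest on agility, not cost.")
else:
    years_to_break_even = migration_cost / monthly_savings / 12
    print(f"Break-even in roughly {years_to_break_even:.1f} years")
    # Per the rule of thumb above: a projection that lands comfortably under five years
    # probably means the messy in-between state has been underestimated.
```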
Now someone at your company needs to be able to have those conversations of understanding the ins and outs of various costs models. At some point you hit a point of complexity we're bringing in. Experts solve specific problems because it makes sense. But every developer you have does not need to sit with 3 to 5 days course understanding the economics of the cloud. Most of what they need to know if it's on a business card, it's on an index card or something small that is carplay and consult business and other index ramos. But the point is, is great. Big things cost more than small things. You're not charged for what you use your charger for. What you forget to turn off and being able to predict your usage model in advance is important and save money. Data transfers Weird. There are a bunch of edge cases, little slice it and ribbons, but inbound data transfer is generally free. Outbound, generally Austin arm and a leg and architect accordingly. But by and large for most development product teams, it's built something and see if it works first. We can always come back later and optimize costs as you wind up maturing the product offering. >>Yeah, Cory, it's some of those sharp edges I've love learning about in your newsletter or some of your online activities there, such as you talked about those egress fees. I know you've got a nice diagram that helps explain if you do this, it costs a lot of money. If you do this, it's gonna cost you. It cost you a lot less money. Um, you know, even something like serverless is something that in general looks like. It should be relatively expensive, but if you do something wrong, it could all of a sudden cost you a lot of money. You feel that companies are having a better understanding so that they don't just one month say, Oh my God, the CFO called us up because it was a big mistake or, you know, where are we along that maturation of cloud being a little bit more predictable? >>Unfortunately, no. Where near I'd like us to be it. The story that I think gets missed is that when you're month over, month span is 20% higher. Finance has a bunch of questions, but if they were somehow 20% lower, they have those same questions. They're trying to build out predictive models that align. They're not saying you're spending too much money, although by the time the issues of the game, yeah, it's instead help us understand and predict what's happening now. Server less is a great story around that, because you can tie charges back to individual transactions and that's great. Except find me a company that's doing that where the resulting bill isn't hilariously inconsequential. A cloud guru Before they bought Lennox, I can't get on stage and talk about this. It serverless kind of every year, but how? They're spending $600 a month in Lambda, and they have now well, over 100 employees. Yeah, no one cares about that money. You can trace the flow of capital all you want, but it grounds up to No one cares at some point that changes. But there's usually going to be far bigger fish to front with their case, I would imagine, given, you know, stream video, they're probably gonna have some data transfer questions that come into play long before we talk about their compute. >>Yeah, um, what else? Cory, when you look at the innovation in the cloud, are there things that common patterns that you see that customers are missing? Some of the opportunities there? 
How does the customers that you talk to, you know, other than reading your newsletter, talking Teoh their systems integrator or partner? How are they doing it? Keeping up with just the massive amount of change that happens out >>there. Get customers. AWS employees follow the newsletter specifically to figure out what's going on. We've long since passed a Rubicon where I can talk incredibly convincingly about services that don't really exist. And Amazon employees won't call me out on the joke that I've worked in there because what the world could ever say that and then single. It's well beyond any one person's ability to keep it all in their head. So what? We're increasingly seeing even one provider, let alone the rest. Their events are outpacing them and no one is keeping up. And now there's the persistent, never growing worry that there's something that just came out that could absolutely change your business for the better. And you'll never know about it because you're too busy trying to keep up with all the other number. Every release the cloud provider does is important to someone but none of its important everyone. >>Yeah, Corey, that's such a good point. When you've been using tools where you understand a certain way of doing things, how do you know that there's not a much better way of doing it? So, yeah, I guess the question is, you know, there's so much out there. How do people make sure that they're not getting left behind or, you know, keep their their their understanding of what might be able to be used >>the right answer. There, frankly, is to pick a direction and go in it. You can wind up in analysis paralysis issues very easily. And if you talk about what you've done on the Internet, the number one responsible to get immediately is someone suggesting an alternate approach you could have taken on day one. There is no one path forward for any six, and you can second guess yourself that the problem is that you have to pick a direction and go in it. Make sure it makes sense. Make sure the lines talk to people who know what's going on in the space and validate it out. But you're going to come up with a plan right head in that direction, I assure you, you are probably not the only person doing it unless you're using. Route 53 is a database. >>You know, it's an interesting thing. Corey used to be said that the best time to start a project was a year ago. But you can't turn back time, so you should start it now. I've been saying for the last few years the best time to start something would be a year from now, so you can take advantage of the latest things, but you can't wait a year, so you need to start now. So how how do you make sure you maintain flexibility but can keep moving projects moving forward? E think you touched on that with some of the analysis paralysis, Anything else as to just how do you make sure you're actually making the right bets and not going down? Some, you know, odd tangent that ends up being a debt. >>In my experience, the biggest problem people have with getting there is that they don't stop first to figure out alright a year from now. If this project has succeeded or failed, how will we know they wind up building these things and keeping them in place forever, despite the fact that cost more money to run than they bring in? In many cases, it's figure out what success looks like. Figure out what failure looks like. And if it isn't working, cut it. Otherwise, you're gonna wind up, went into this thing that you've got to support in perpetuity. 
One example of that one extreme is AWS. They famously never turn anything off. Google on the other spectrum turns things off as a core competence. Most folks wind up somewhere in the middle, but understand that right now between what? The day I start building this today and the time that this one's of working down the road. Well, great. There's a lot that needs to happen to make sure this is a viable business, and none of that is going to come down to, you know, build it on top of kubernetes. It's going to come down. Is its solving a problem for your customers? Are people they're people in to pay for the enhancement. Anytime you say yes to that project, you're saying no to a bunch of others. Opportunity Cost is a huge thing. >>Yeah, so it's such an important point, Cory. It's so fundamental when you look at what what cloud should enable is, I should be able to try more things. I should be able to fail fast on, and I shouldn't have to think about, you know, some cost nearly as much as I would in the past. We want to give you the final word as you look out in the cloud. Any you know, practices, guidelines, you can give practitioners out there as to make sure that they are taking advantage of the innovation that's available out there on being able to move their company just a little bit faster. >>Sure, by and large, for the practitioners out there, if you're rolling something out that you do not understand, that's usually a red flag. That's been my problem, to be blunt with kubernetes or an awful lot of the use cases that people effectively shove it into. What are you doing? What if the business problem you're trying to solve and you understand all of its different ways that it can fail in the ways that will help you succeed? In many cases, it is stupendous overkill for the scale of problem most people are throwing. It is not a multi cloud answer. It is not the way that everyone is going to be doing it or they'll make fun of you under resume. Remember, you just assume your own ego. In this sense, you need to deliver an outcome. You don't need to improve your own resume at the expense of your employer's business. One would hope, >>Well, Cory, always a pleasure catching up with you. Thanks so much for joining me on the cloud. Native insights. Thank you. Alright. Be sure to check out silicon angle dot com if you click on the cloud. There's a whole second for cloud Native insights on your host to minimum. And I look forward to hearing more from you and your cloud Native insights Yeah, yeah, yeah, yeah, yeah.

Published Date : Aug 14 2020

SUMMARY :

Stu Miniman talks with Corey Quinn, Chief Cloud Economist at The Duckbill Group, about getting real value out of cloud rather than treating it as a server somewhere else: why multi-cloud as a default is a worst practice, how differentiated services like DynamoDB pay off despite lock-in worries, why cloud migrations rarely break even quickly and why lift and shift persists anyway, what FinOps does and does not mean, the index-card level of cost knowledge developers actually need, the weirdness of data transfer pricing, the impossibility of keeping up with every cloud release, and the importance of picking a direction, defining success and failure up front, and delivering outcomes instead of resume-driven architecture.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
3 | QUANTITY | 0.99+
20% | QUANTITY | 0.99+
Corey Quinn | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
Palo Alto | LOCATION | 0.99+
San Francisco | LOCATION | 0.99+
five weeks | QUANTITY | 0.99+
Cory | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
Corey | PERSON | 0.99+
Pandora | ORGANIZATION | 0.99+
Duck Bill Group | ORGANIZATION | 0.99+
last quarter | DATE | 0.99+
one month | QUANTITY | 0.99+
six | QUANTITY | 0.99+
less than five years | QUANTITY | 0.99+
Cube Studios | ORGANIZATION | 0.99+
over 100 employees | QUANTITY | 0.99+
Boston | LOCATION | 0.98+
Google | ORGANIZATION | 0.98+
5 days | QUANTITY | 0.98+
One | QUANTITY | 0.98+
single | QUANTITY | 0.98+
First | QUANTITY | 0.98+
one | QUANTITY | 0.98+
today | DATE | 0.98+
hundreds of services | QUANTITY | 0.98+
Lennox | ORGANIZATION | 0.98+
one provider | QUANTITY | 0.97+
Cloud Cloud | ORGANIZATION | 0.97+
Lennox Foundation | ORGANIZATION | 0.96+
The Duckbill Group | ORGANIZATION | 0.96+
Cloud Native Beauty Foundation | ORGANIZATION | 0.96+
Dynamodb | ORGANIZATION | 0.96+
a year | QUANTITY | 0.95+
SAS | ORGANIZATION | 0.95+
Cory Quinn | PERSON | 0.95+
$600 a month | QUANTITY | 0.95+
a year ago | DATE | 0.95+
One example | QUANTITY | 0.94+
pandemic | EVENT | 0.94+
one extreme | QUANTITY | 0.93+
Cloud Native Insights | ORGANIZATION | 0.93+
day one | QUANTITY | 0.93+
Cloud Native | ORGANIZATION | 0.92+
first | QUANTITY | 0.89+
one window | QUANTITY | 0.88+
One argument | QUANTITY | 0.88+
one person | QUANTITY | 0.87+
Been Ops | ORGANIZATION | 0.85+
second | QUANTITY | 0.81+
few years ago | DATE | 0.8+
much | QUANTITY | 0.79+
one day | QUANTITY | 0.78+
single workload | QUANTITY | 0.75+
k | QUANTITY | 0.72+
Lambda | TITLE | 0.72+
last few years | DATE | 0.69+
egress | ORGANIZATION | 0.68+
Keith Cloud | ORGANIZATION | 0.67+
Native | ORGANIZATION | 0.62+
year | QUANTITY | 0.6+
stew Minimum | PERSON | 0.59+
a year | DATE | 0.57+
Route | TITLE | 0.56+
Dynamo DV | ORGANIZATION | 0.54+
Rubicon | COMMERCIAL_ITEM | 0.51+
Austin | LOCATION | 0.45+
53 | ORGANIZATION | 0.28+

Breaking Analysis: Google Rides the Cloud Wave but Remains a Distant Third


 

>> From The Cube Studios in Palo Alto and Boston, bringing you data driven insights from The Cube and ETR, this is Breaking Analysis with Dave Vellante. >> Despite it's faster growth and infrastructure as a service, relative to AWS and Azure, Google Cloud platform remains a third wheel in the race for cloud dominance. Google begins its Cloud Next online event starting July fourteenth in a series of nine rolling sessions that go through early September. Ahead of that, we want to update you on our most current data on Google's cloud business. Hello everyone, this is Dave Vellante, and welcome to this week's Wikibon Cube insights, powered by ETR. In this session, we'll review the current state of cloud, and Google's position in the market. We'll drill into the ETR data and share fresh insights from our partner and the Cube community. So let's get right into it. You know, Google, if you think about it, was actually very early into the cloud game. Google's 2004 IPO was a milestone event for the tech industry, and in you know many ways, it really marked the end of the post-dotcom malaise. It signaled the beginning of a new era of innovation. During this time, Google was busy building out its massive, global cloud infrastructure, probably the largest in the world, with undersea cables, global data centers, and tools like the Google file system, and of course Bigtable. But it took many years for Google to pull its head out of its ad serving butt and realize the opportunity to sell its cloud services to global enterprises. Bigtable, Google's no-sequel database, for example, was released in 2005, but it wasn't until 2015 that Google made this service available to its customers. That was the same year Google brought in VMware founder, Diane Greene to begin its enterprise journey in earnest. Now Google, they have a dizzying array of services in compute, storage, database, networking, IT ops, dev tools, machine learning, AI, analytics, big data, security, on and on and on. Name a category and it's likely that Google has something in it as a cloud service. But Google, to this day, still hasn't figured out how to sell to the enterprise. It really struggles to find the right formula. So, as you know, Google brought in Thomas Kurian from Oracle, to figure this out. Of course Kurian is, he's going to go with Google's strengths like analytics and database, but it has to have differentiation, so it comes up with unique pricing models like sustained discounts, which automatically apply discount for heavy usage, as opposed to forcing users to buy reserved instances such as what AWS does. You know Google is more aggressive partnering around multi-cloud, for instance, with Anthos, and it's smartly open-sourced Kubernetes really to minimize the importance of, physically, where workloads run. The bottom-line, however, is that these moves are necessary for Google to compete because it lags behind the leaders. And it has a long way to go before it's going to be satisfied with its cloud business. Let's look at the IaaS market in context. Now, I don't want to say it's all gloom and doom for Google. Far from it. Earnings for Q2, they're going to start rolling out later this month, but this chart shows our latest estimates of IaaS and PaaS for the big three cloud players. Now, I got to caution you, as I did before, other than AWS, which reports very clean numbers each quarter on IaaS and PaaS, we have to estimate Azure and GCP revenue because they bundle in other things. I'll give an example. 
Google reports its overall cloud numbers which include G Suite. Microsoft reports a category they call intelligent cloud. Now that includes public, private clouds, hybrid, sequel server, Windows server, system center, GitHub, enterprise support and consulting services. And Azure, the IaaS and PaaS numbers are also in there too. So what we have to do is to squint through the earnings reports and the 10 Ks and try to get a clean IaaS and PaaS figure for these players, and that's what we show here. Now there's really two points that we want to stress with this data. First, on a trailing 12 month basis, the big three cloud players now account for nearly 60 billion dollars in IaaS and PaaS revenue. And this 60 billion dollars, on a weighted average basis, is growing in the mid 40% range. So well on its way to being a 100 billion dollar business. Just for these three firms. And as we've reported, that's eating directly into the on-premises infrastructure install base, which is a flat to declining market. And that trend is going to play out in a big way this decade. We've predicted that public cloud is going to out pace on-prem infrastructure by more that 1800 basis points over the next 10 years, from a spending standpoint. Now the second point that I want to make relates to Google IaaS and PaaS growth. We peg it at greater than 70%, based on public statements, reading the 10 Ks and ETR data, which we'll discuss in a moment. So, very healthy growth, but from a much smaller install base than, or base than AWS and Azure. But in our view it's not enough, because AWS and Azure are so large and strong still, growth wise, that we feel Google is going to remain a distant third, really indefinitely. Nonetheless, a lot of companies would be thrilled to have a four billion dollar cloud business and there's certainly good news in the data for Google. So let's look at some of that survey data. Now, as we've reported in the past, Google pushes G Suite very hard, as part of its cloud story, and it leads often times with G Suite in its messaging. You know, but to us that's never really been that compelling. So let me start with some anecdotal data from ETR. ETR runs a regular program, they call it VENN, and in the VENN they invite clients into a private session to listen to named CIOs talk about their experience with vendors and overall spending intentions. It's a facilitated session. And we've had ETR's Eric Bradley on as a guest who directs the VENN program, and does much of the facilitation, and here's a statement from a recent VENN session quoting a CIO at a midsize Telco, that I think sums it up nicely. He says Google's G Suite is fine and dandy, but I don't see that truly as an enterprise solution. And frankly, it's still not of the quality of an Office application, talking about Microsoft. All in all I really like the infrastructure-as-a-service and the platform-as-a-service components that GCP had. And I thought they were coming along very very well in that space. Now, the reason that I share this is because the IT buyers that we speak with, you know they're very serious about exploring Google. They want options other than Azure and AWS and they see Google as having great tech and as a viable alternative. So let's talk about GCP and the enterprise. We looking, when we look into the ETR data for the most recent survey, which ran in June and early July, GCP is showing strength in one really important bellwether category, the giant public and private companies. 
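As a rough illustration of how a weighted average growth rate like the one described above is computed, here is a small Python sketch. The revenue and growth figures in it are invented placeholders chosen only so the totals land near the ranges discussed; they are not the actual estimates behind the chart.

```python
# Illustrative placeholders only; not the actual estimates discussed above.
clouds = {
    # name: (trailing-12-month IaaS+PaaS revenue in $B, year-over-year growth rate)
    "AWS":   (36.0, 0.34),
    "Azure": (19.0, 0.58),
    "GCP":   (4.5, 0.72),
}

total_revenue = sum(revenue for revenue, _ in clouds.values())
weighted_growth = sum(revenue * growth for revenue, growth in clouds.values()) / total_revenue

print(f"Combined trailing-12-month revenue: ${total_revenue:.0f}B")  # lands near $60B
print(f"Weighted average growth: {weighted_growth:.0%}")             # lands in the mid-40% range
```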
These are the largest firms in the ETR dataset and often point to secular trends. Now, before we get into that, let's look at the picture for GCP using ETR's net score up methodology. This is fundamental to the ETR approach, and remember, each quarter ETR goes out and asks its respondents, are you planning to spend more or less? In its July survey, ETR focuses on second half spending. The next chart captures results across Google's entire portfolio. So here's the breakdown for, for Google across all sectors. 14% of the respondents are adopting new, that's the lime green. 39% plan to increase spending in the second half versus the first half, that's the forest green. Then there's a big fat middle, that's flat, and you see that in the gray area. And the 7% are spending less, with 2% replacing, that's the pinkish and dark red, respectively. So, I would say this result is mixed, in my opinion. Yeah, it's not bad, don't get me wrong, and we've, we'll see once ETR comes out of its quite period, how this compares to Azure and AWR, so remember, I can only share limited data until ETR clients get the data and have time to act on it. But this calculates out to a net score of 44%, which is respectable, but frankly not overly inspiring. So let's look across the GCP portfolio using the ETR taxonomy and see what it looks like. This chart shows the net score comparisons across three different surveys, October 19, April 20, and July 20. So reading the bars left to right, you can see Google's strong suit really is machine learning and AI. Container platforms are also very strong, as are functions, or server-less, and databases, very solid, we'll talk more about that in a minute. You know, video conferencing was just added by ETR and sure it pops up with the work from home. Cloud is actually holding firm when compared to October of last year. But surprisingly, analytics is looking a bit softer. And ETR for the first time added G Suite with, it shows a 26% net score, first time out, which is pretty tepid. I mean not very impressive at all. But overall, the picture looks pretty good for Google. So let's dig further into the giant public and private sector, that bellwether I talked about. And let's peal the onion a bit and look closer at the results from the largest companies in the dataset. So this chart shows the giant public, plus private organizations. So it would include like monster public companies but also large companies like a Cargill or a Coke Industries, if in fact they responded in this survey. And you can see, in that all important sector, it's a story of a lot of green with hardly any red, so quite a positive sign for Google within those bellwethers. Here's what I think is happening here. Is these large, and often far flung organizations, have realized that they have multiple cloud vendors, and they're asking their senior IT leadership to bring some consistency and sanity to their cloud strategies. So they look at the big three and say, okay, what's the best strategic fit for each workload? So they might say for instance let's use AWS for core IaaS, let's use Azure for productivity workloads, and we'll sprinkle some Google in for machine learning and related projects. So we do see some real strength in some of the larger strongholds for Google, although interestingly ETR sort of tells me that there's softness in the midsize and smaller companies that have powered AWS for so many years. 
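To make the net score arithmetic described earlier in this segment concrete, here is a minimal Python sketch. The adopting, increasing, decreasing, and replacing shares are the ones cited above; the flat share is inferred from the remainder.

```python
def net_score(adopting, increasing, flat, decreasing, replacing):
    """ETR-style net score: spending momentum minus attrition; the flat middle drops out."""
    assert abs(adopting + increasing + flat + decreasing + replacing - 100) < 1e-6
    return (adopting + increasing) - (decreasing + replacing)

# Google's overall result from the July survey cited above (flat share of 38 inferred).
print(net_score(adopting=14, increasing=39, flat=38, decreasing=7, replacing=2))  # -> 44
```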
And of course this, with Google's base, but compare that to AWS and AWS is much stronger in those smaller companies, start-ups and the like, and of course COVID's the wild car in all this. You know, we have to take that into account, and we will with Sagar Kadakia, who's ETR's director of research in the coming weeks. But I want to look at Google in the all important database category. So before we wrap, let's look at database. You remember, Google's playing catch up in the cloud and its marketing takes a more open posture around partners and things like multi-cloud and you know you can contrast that with AWS for example, but look, make no mistake, Google wants you data in their cloud, and that's why database is so strategic and so important. Look, it's the mother of all lock specs. All you got to do is look at Oracle and their success. Now, as we've reported many times, there's a new workload emerging in the cloud around this idea of the modern data warehouse. I mean I don't even like that term anymore, data warehouse, because it sounds just so static. But anyway, any rate, I'm talking about workloads that bring database, machine learning, AI, data science, compute and storage along with visualization tools to deliver real-time insights and operational analytics. Database is at the heart of everything here. Win the database and everything else falls into place. Now, Google has six or seven database products and one of the most impressive, in my opinion, is BigQuery. I mean, for those who have followed me over the years you know I love the technology behind Google's banner, but BigQuery is where much of the action is around this new workload that I'm talking about. So, let's look at, deeper at Google's position in database. This chart shows one of my favorite views. On the Y axis is the net score, or spending momentum, and on the X axis is market share or pervasiveness in the ETR dataset. The chart plots various database companies and their position within the all important giant public plus private sector. So these are the companies in the ETR survey that are the largest, and oftentimes, again, are a bellwether. And you can see Microsoft and Oracle and AWS have very strong presence on the horizontal axis. Mongo, MongoDB looms large, MemSQL, they just raised 50 million dollars this past May, MariaDB just raised another 25 million this month. You can see Couchbase and Redis, they show up, and they're on my radar. I'm learning more about those companies. Folks, database is hot. VC's are pouring money in and it's something that's very important to the Cube community to look at. And of course you see Google in the chart, with a strong net score, you know, but not the type of market presence that you see from the other big cloud players. In fact, they've pulled back a little somewhat in this last ETR survey. So despite some bright spots in the enterprise in terms of spending momentum, just not quite enough presence yet. Oh, by the way, look who's right there with Google. I know I sound like a broken record, but Snowflake is everywhere. You'll find them in AWS, you'll find them in Azure and on GCP. Now remember, Snowflake is only about one tenth the size of Google's IaaS and PaaS business. But it has stronger spending momentum than all the big guys, and it continues to creep its way to the right in terms of market share or presence. You know, but Google has great database tech and BigQuery is at the heart of its strategy to support analytics at scale, and automate the data pipeline. 
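Before digging further into BigQuery's design, here is a minimal sketch of what its serverless model looks like from the client side, assuming the google-cloud-bigquery Python client library and default application credentials; the project, dataset, and table names are placeholders.

```python
from google.cloud import bigquery

client = bigquery.Client()  # serverless: nothing to size or provision up front

query = """
    SELECT device, COUNT(*) AS events
    FROM `my_project.telemetry.events`
    WHERE event_date = CURRENT_DATE()
    GROUP BY device
    ORDER BY events DESC
    LIMIT 10
"""

# Billed by data scanned, not by instances running.
for row in client.query(query).result():
    print(row.device, row.events)
```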
BigQuery's very well designed, it started as a cloud native database, it's based on server-less, it's highly scalable, and it's very cost-effective. In fact, ESG, enterprise strategy group, wrote a report comparing the TCO of the cloud databases. Let me pull that up and show you. Now the report was commissioned by Google, so I got to caution you there. But it was very well done in my opinion by a guy named Aviv Kaufmann, and you can see here it compares BigQuery with the other cloud databases, and of course, you know, BigQuery wins, got the lowest TCO, but again I thought the report was really detailed and well researched. I have no doubt that Snowflake has an answer for the big brown bar, which is on-demand cloud cost. I think ESG was making certain assumptions, maybe worst case assumptions, about the need to over-provision resources for Snowflake, which I'm sure ESG can defend, but I'll bet dollars to donuts that Snowflake, you know, has an answer to that or a comeback. I'm going to ask them. But the point I want to make here is that BigQuery was designed from day one, again, as a cloud-native database. We've been talking about that a lot. It's very efficient and is going to be competitive. So you can see, there are some bright spots in the enterprise, for Google. Okay, let's wrap up. Now, having called out some of the positives, and there are many, Google is still not getting it done in the enterprise, in my opinion. I certainly would not say too little too late, but I would say they spotted the competition a huge lead, and the only reason is Google just didn't act on the opportunity staring them in the face, within the enterprise, fast enough, and they finally woke up. But enterprise sales are, they're really hard. Thomas Kurian, for all his experience, is coming from way, way behind with regard to the enterprise go to market, systems and processes, pricing, partnerships, special deals for the enterprise. Google's still learning how to sell the business outcomes and is relying far too much on its technology chops, which, while impressive, are not going to win the day without better enterprise sales, marketing, and ecosystem integration. Now I feel like for years, Google has said to the enterprise market, give me heat and I'll add the wood. Meaning we have the best tech, go ahead and use it. That strategy just doesn't work in the enterprise. Kurian knows it and I suspect that's why Google's showing some strength within these large, giant public and private companies. They're probably applying focused sales resources to nail customer success with some of its top accounts where they have a presence, and then once they nail that they'll broaden to the market. But they got to move fast. We'll learn more about Google's intentions and its progress over the next few, next few months as they try their online event experiment, and of course we'll be there providing our wall to wall coverage. Remember, these Breaking Analysis episodes, they're all available as podcasts. ETR is shortly exiting its quiet period, this week, and will be rolling out the data, so check out etr.plus. I publish weekly on wikibon.com and siloconeangle.com and as always please comment on my LinkedIn posts, I really appreciate the feedback. This is Dave Vellante for the Cube Insights, powered by ETR. Thanks for watching everyone. We'll see you next time.

Published Date : Jul 13 2020

SUMMARY :

Dave Vellante sizes up Google Cloud ahead of its Cloud Next online event: GCP's IaaS and PaaS business is growing faster than AWS and Azure but from a much smaller base, leaving Google a distant third. ETR survey data shows real strength in machine learning, containers, and functions, encouraging momentum among the giant public and private companies, and a compelling database story in BigQuery, but Google still has to crack enterprise sales, marketing, and ecosystem to close the gap.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave Vellante | PERSON | 0.99+
Kurian | PERSON | 0.99+
Microsoft | ORGANIZATION | 0.99+
Google | ORGANIZATION | 0.99+
Thomas Kurian | PERSON | 0.99+
June | DATE | 0.99+
2005 | DATE | 0.99+
AWS | ORGANIZATION | 0.99+
Eric Bradley | PERSON | 0.99+
six | QUANTITY | 0.99+
October | DATE | 0.99+
Diane Greene | PERSON | 0.99+
Oracle | ORGANIZATION | 0.99+
12 month | QUANTITY | 0.99+
October 19 | DATE | 0.99+
2015 | DATE | 0.99+
July 20 | DATE | 0.99+
39% | QUANTITY | 0.99+
July | DATE | 0.99+
April 20 | DATE | 0.99+
2% | QUANTITY | 0.99+
second point | QUANTITY | 0.99+
7% | QUANTITY | 0.99+
Boston | LOCATION | 0.99+
Telco | ORGANIZATION | 0.99+
second half | QUANTITY | 0.99+
60 billion dollars | QUANTITY | 0.99+
14% | QUANTITY | 0.99+
Cargill | ORGANIZATION | 0.99+
First | QUANTITY | 0.99+
siloconeangle.com | OTHER | 0.99+
first half | QUANTITY | 0.99+
44% | QUANTITY | 0.99+
Palo Alto | LOCATION | 0.99+
G Suite | TITLE | 0.99+
ESG | ORGANIZATION | 0.99+
Coke Industries | ORGANIZATION | 0.99+
26% | QUANTITY | 0.99+
two points | QUANTITY | 0.99+
100 billion dollar | QUANTITY | 0.99+

Paresh Kharya & Kevin Deierling, NVIDIA | HPE Discover 2020


 

>> Narrator: From around the global its theCUBE, covering HPE Discover Virtual Experience, brought to you by HPE. >> Hi, I'm Stu Miniman and this is theCUBE's coverage of HPE, discover the virtual experience for 2020, getting to talk to Hp executives, their partners, the ecosystem, where they are around the globe, this session we're going to be digging in about artificial intelligence, obviously a super important topic these days. And to help me do that, I've got two guests from Nvidia, sitting in the window next to me, we have Paresh Kharya, he's director of product marketing and sitting next to him in the virtual environment is Kevin Deierling, who is this senior vice president of marketing as I mentioned both with Nvidia. Thank you both so much for joining us. >> Thank you, so great to be here. >> Great to be here. >> All right, so Paresh when you set the stage for us? AI, obviously, one of those mega trends to talk about but just, give us the stages, where Nvidia sits, where the market is, and your customers today, that they think about AI. >> Yeah, so we are basically witnessing a massive changes that are happening across every industry. And it's basically the confluence of three things. One is of course, AI, the second is 5G and IOT, and the third is the ability to process all of the data that we have, that's now possible. For AI we are now seeing really advanced models, from computer vision, to understanding natural language, to the ability to speak in conversational terms. In terms of IOT and 5G, there are billions of devices that are sensing and inferring information. And now we have the ability to act, make decisions in various industries, and finally all of the processing capabilities that we have today, at the data center, and in the cloud, as well as at the edge with the GPUs as well as advanced networking that's available, we can now make sense all of this data to help industrial transformation. >> Yeah, Kevin, you know it's interesting when you look at some of these waves of technology and we say, "Okay, there's a lot of new pieces here." You talk about 5G, it's the next generation but architecturally some of these things remind us of the past. So when I look at some of these architectures, I think about, what we've done for high performance computing for a long time, obviously, you know, Mellanox, where you came from through NVIDIA's acquisition, strong play in that environment. So, maybe give us a little bit compare, contrast, what's the same, and what's different about this highly distributed, edge compute AI, IOT environment and what's the same with what we were doing with HPC in the past. >> Yeah, so we've--Mellanox has now been a part of Nvidia for a little over a month and it's great to be part of that. We were both focused on accelerated computing and high performance computing. And to do that, what it means is the scale and the type of problems that we're trying to solve are just simply too large to fit into a single computer. So if that's the case, then you connect a lot of computers. And Jensen talked about this recently at the GTC keynote where he said that the new unit computing, it's really the data center. So it's no longer the box that sits on your desk or even in Iraq, it's the entire data center because that's the scale of the types of problems that we're solving. And so the notion of scale up and scale out, the network becomes really, really critical. And we're doing high-performance networking for a long time. 
When you move to the edge, instead of having, a single data center with 10,000 computers, you have 10,000 data centers, each of which as a small number of servers that is processing all of that information that's coming in. But in a sense, the problems are very, very similar, whether you're at the edge or you're doing massive HPC, scientific computing or cloud computing. And so we're excited to be part of bringing together the AI and the networking because they are really optimizing at the data center scale across the entire stack. >> All right, so it's interesting. You mentioned, Nvidia CEO, Jensen. I believe if I saw right in there, he actually could, wrote a term which I had not run across, it was the data processing unit or DPU in that, data center, as you talked about. Help us wrap our heads around this a little bit. I know my CPU, when I think about GPUs, I obviously think of Nvidia. TPUs, in the cloud and everything we're doing. So, what is DPUs? Is this just some new AI thing or, is this kind of a new architectural model? >> Yeah. I think what Jensen highlighted is that there's three key elements of this accelerated disaggregated infrastructure that the data center has becoming. And so that's the CPU, which is doing traditional single threaded workloads but for all of the accelerated workloads, you need the GPU. And that does massive parallelism deals with massive amounts of data, but to get that data into the GPU and also into the CPU, you need really an intelligent data processing because the scale and scope of GPUs and CPUs today, these are not single core entities. These are hundreds or even thousands of cores in a big system. And you need to steer the traffic exactly to the right place. You need to do it securely. You need to do it virtualized. You need to do it with containers and to do all of that, you need a programmable data processing unit. So we have something called our BlueField, which combines our latest, greatest, 100 gig and 200 gig network connectivity with Arm processors and a whole bunch of accelerators for security, for virtualization, for storage. And all of those things then feed these giant parallel engines which are the GPU. And of course the CPU, which is really the workload at the application layer for non-accelerated outs. >> Great, so Paresh, Kevin talked about, needing similar types of services, wherever the data is. I was wondering if you could really help expand for us a little bit, the implications of it AI at the edge. >> Sure, yeah, so AI is basically not just one workload. 
AI is many different types of models and AI also means training as well as inferences, which are very different workloads or AI printing, for example, we are seeing the models growing exponentially, think of any AI model, like a brain of a computer or like a brain, solving a particular use case a for simple models like computer vision, we have models that are smaller, bugs have computer vision but advanced models like natural language processing, they require larger brains or larger models, so on one hand we are seeing the size of the AI models increasing tremendously and in order to train these models, you need to look at computing at the scale of data center, many processors, many different servers working together to train a single model, on the other hand because of these AI models, they are so accurate today from understanding languages to speaking languages, to providing the right recommendations whether it's for products or for content that you may want to consume or advertisements and so on. These models are so effective and efficient that they are being powered by AI today. These applications are being powered by AI and each application requires a small amount of acceleration, so you need the ability to scale out or, and support many different applications. So with our newly launched MPR architecture, just couple of weeks to go that Jensen announced, in the virtual keynote for the first time, we are now able to provide both, scale up and scale out both training data analytics as well as imprints on the single architecture and that's very exciting. >> Yeah, so look at that. The other thing that's interesting is you're talking about at the edge and scale out versus scale up, the networking is critical for both of those. And there's a lot of different workloads. And as Paresh was describing, you've got different workloads that require different amounts of GPU or storage or networking. And so part of that vision of this data center as the computer is that, the DPU lets you scale independently, everything. So you can compose, you desegregate into DPUs and storage and CPUs, and then you compose exactly the computer that you need on the fly container, right, to solve the problem that you're solving right now. So these new way of programming is programming the entire data center at once and you'll go grab all of it and it'll run for a few hundred milliseconds even and then it'll come back down and recompose itself onsite. And to do that, you need this very highly efficient networking infrastructure. And the good news is we're here at HPE Discover. We've got a great partner with HPE. You know, they have our M series switches that uses the Mellanox hundred gig and now even 200 and 400 gig ethernet switches, we have all of our adapters and they have great platforms. The Apollo platform for example, is break for HPC and they have other great platforms that we're looking at with the new telco that we're doing or 5G and accelerating that. >> Yeah, and on the edge computing side, there's the edge line set of products which are very interesting, the other sort of aspect that I wanted to touch upon, is the whole software stack that's needed for the edge. So edge is different in the sense that it's not centrally managed, the edge computing devices are distributed remote locations. And so managing the workflow of running and updating software on it is important and needs to be done in a very secure manner. 
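As a toy illustration of the training-versus-inference distinction Paresh draws above, here is a short PyTorch sketch; the model and data are placeholders and this is not meant to represent NVIDIA's software stack, only the difference in what the two workloads do.

```python
import torch

# A toy model; real training and inference jobs differ mainly in scale, not in shape.
model = torch.nn.Sequential(
    torch.nn.Linear(128, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 10),
)
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

# Training step: forward pass, loss, backward pass, optimizer update (scale-up territory).
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
inputs = torch.randn(32, 128, device=device)
labels = torch.randint(0, 10, (32,), device=device)
loss = torch.nn.functional.cross_entropy(model(inputs), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()

# Inference step: a single forward pass with no gradient bookkeeping (scale-out territory).
model.eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 128, device=device)).argmax(dim=1)
    print(prediction.item())
```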
The second thing that's, that's very different again, for the edges, these devices are going to require connectivity. As Kevin was pointing out, the importance of networking so we also announced, a couple of weeks ago at our GTC, our EGX product that combines the Mellanox NIC and our GPUs into a single a processor, Mellanox NIC provides a fast connectivity, security, as well as the encryption and decryption capabilities, GPUs provide acceleration to run the advanced DI models, that are required for applications at the edge. >> Okay, and if I understood that, right. So, you've got these throughout the HPE the product line, HPE's got long history of making, flexible configurations, I remember when they first came out with a Blade server it was, different form factors, different connectivity options, they pushed heavily into composable infrastructure. So it sounds like this is just a kind of extending, you know, what HP has been doing for a couple of decades. >> Yeah, I think HP is a great partner there and these new platforms, the EGX, for example that was just announced, a great workload there is a 5G telco. So we'll be working with our friends at HPE to take that to market as well. And, you know, really, there's a lot of different workloads and they've got a great portfolio of products across the spectrum from regular servers. And 1U, 2U, and then all the way up to their big Apollo platform. >> Well I'm glad you brought up telco, I'm curious, are there any specific, applications or workloads that, where the low hanging fruit or the kind of the first targets that you use for AI acceleration? >> Yeah, so you know, the 5G workload is just awesome. We're introduced with the EGX, a new platform called Ariel which is a programming framework and there were lots of partners there that were part of that, including, folks like Ericsson. And the idea there is that you have a software defined hardware accelerated radio area network, so a cloud RAM and it really has all of the right attributes of the cloud and what's nice there is now you can change on the fly, the algorithms that you're using for the baseband codex without having to go climb a radio tower and change the actual physical infrastructure. So that's a critical part. Our role in that, on the networking side, we introduced the technology that's part of EGX then are connected, It's like the DX adapter, it's called 5T for 5G. And one of the things that happens is you need this time triggered transport or a telco technology. That's the 5T's for 5G. And the reason is because you're doing distributed baseband unit, distributed radio processing and the timing between each of those server nodes needs to be super precise, 20 nanosecond. It's something that simply can't be done in software. And so we did that in hardware. So instead of having an expensive FPGA, I try to synchronize all of these boxes together. We put it into our NIC and now we put that into industry standard servers HP has some fantastic servers. And then with the EGX platform, with that we can build, really scale out software to client cloud RAM. >> Awesome, Paresh, anything else on the application side you'd like to add in just about what Kevin spoke about. >> Oh yeah, so from application perspective, every industry has applications that touch on edge. If you take a look at the retail, for example, there is, you know, all the way from supply chain to inventory management, to keeping the right stock units in the shelves, making sure there is a there is no slippage or shrinkage. 
So to telecom; to healthcare, where we are looking at constantly monitoring patients and taking actions for the best outcomes; to manufacturing, where we are looking to automate production and detect failures much earlier in the production cycle; and so on. Every industry has different applications, but they all use AI, and they can all leverage the computing capabilities and high-speed networking at the edge to transform their business processes. >> All right, well, it's interesting that almost every time we've talked about AI, networking has come up. So, Kevin, I think that probably sums up a little bit of why Nvidia spent around $7 billion for the acquisition of Mellanox, and not only the Mellanox acquisition but also Cumulus Networks, very well known in the networking space for its software-defined operating system for networking. Strategically, does this change the direction of Nvidia? How should we be thinking about Nvidia in the overall networking picture? >> Yeah, I think the way to think about it is going back to that data center as the computer. If you're thinking about the data center as the computer, then networking becomes the backplane, if you will, of that data center computer, and having a high-performance network is really critical. Mellanox has been a leader in that for 20 years now with our InfiniBand and our Ethernet products. But beyond that, you need a programmatic interface, because one of the things that's really important in the cloud is that everything is software defined, and it's containerized now, and there is no better company in the world than Cumulus, really the pioneer in building Cumulus Linux, taking the Linux operating system and running it on multiple hardware platforms, so not just hardware from Mellanox but hardware from other people as well. So that whole notion of an open networking platform is something we're committed to and need to support, and now you have a programmatic interface that you can drop containers on top of. Cumulus has been the leader in Linux FRR, that's Free Range Routing, which is the core routing stack, and that really is at the heart of other open-source network operating systems like SONiC and DENT. So we see a lot of synergy here, along with all the analytics that Cumulus brings to bear with NetQ. It's really great that they're going to be part of the Nvidia team. >> Excellent. Well, thank you both so much. I want to give you the final word: what should HPE customers and their ecosystem know about the Nvidia and HPE partnership? >> Yeah, so I'll start. I think HPE has been a longtime partner and a customer of ours. If you have accelerated workloads, you need to connect those together, and the HPE server portfolio is an ideal place to do it. We can combine some of the work we're doing with our new Ampere GPUs and existing GPUs, and then connect those together with the M series, which is their line of Ethernet switches based on our Spectrum switch platforms, and then there are all of the HPC-related activities on InfiniBand, where they're a great partner as well. And pulling all of that together, now, as the edge becomes more and more important, security becomes more and more important, and you have to go to this zero-trust model: if you plug in a camera that somebody has at the edge, even if it's on a car, you can't trust it. Everything has to be validated and authenticated, and all the data needs to be encrypted. So they're going to be a great partner, because they've been a leader in building the most secure platforms in the world.
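Kevin's earlier point that 20-nanosecond synchronization between distributed baseband servers "simply can't be done in software" is easy to sanity-check. The short Python sketch below is only a back-of-the-envelope illustration of operating-system timing noise on whatever machine it runs on; it has nothing to do with NVIDIA's actual 5T for 5G implementation. It measures the jitter of back-to-back clock reads and of a nominal 1 ms sleep, both of which typically land thousands of times above a 20 ns budget, which is why the synchronization gets pushed into NIC hardware.

```python
import time
import statistics

def clock_read_deltas(n=100_000):
    """Delta between consecutive perf_counter_ns() reads (pure clock/call overhead)."""
    samples = []
    prev = time.perf_counter_ns()
    for _ in range(n):
        now = time.perf_counter_ns()
        samples.append(now - prev)
        prev = now
    return samples

def sleep_jitter(n=200, target_ms=1.0):
    """How far a nominal 1 ms sleep lands from its target, in nanoseconds."""
    target_ns = int(target_ms * 1e6)
    errors = []
    for _ in range(n):
        start = time.perf_counter_ns()
        time.sleep(target_ms / 1000.0)
        errors.append(time.perf_counter_ns() - start - target_ns)
    return errors

def report(name, samples_ns):
    qs = statistics.quantiles(samples_ns, n=100)
    print(f"{name}: median={statistics.median(samples_ns):,} ns, "
          f"p99={qs[98]:,.0f} ns, max={max(samples_ns):,} ns")

if __name__ == "__main__":
    report("clock read delta", clock_read_deltas())
    report("1 ms sleep error ", sleep_jitter())
    print("5G fronthaul timing budget discussed above: roughly 20 ns")
```

Even on an idle machine, the sleep error alone is usually tens of microseconds or more, three to four orders of magnitude above the target Kevin describes.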
>> Yeah, and on the data center server portfolio side, we work very closely with HPE on various different lines of products, really fantastic servers, from the Apollo line of scale-up servers, to the Synergy and ProLiant lines, as well as Edgeline for the edge, and on the supercomputing side with the Cray side of things. So we really work across the full spectrum of solutions with HPE. We also work on the software side, where a lot of these servers are also certified to run our full stack under a program that we call NGC-Ready, so customers get phenomenal value right off the bat: they're guaranteed that accelerated workloads will work well when they choose these servers. >> Awesome. Well, thank you both for giving us the updates; lots happening, obviously, in the AI space. Appreciate all the updates. >> Thanks Stu, great to talk to you, stay well. >> Thanks Stu, take care. >> All right, stay with us for lots more from HPE Discover Virtual Experience 2020. I'm Stu Miniman, and thank you for watching theCUBE. (bright upbeat music)
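As a side note on the "data center as the computer" idea discussed above, where DPUs, CPUs, GPUs and storage are disaggregated and then composed on the fly for a job that may only live for a few hundred milliseconds, here is a deliberately simplified toy in Python. It is not how NVIDIA, HPE, or any real orchestrator implements composable infrastructure; the names are invented for illustration, and the sketch only makes the reserve, compose, release life cycle concrete.

```python
from contextlib import contextmanager
from dataclasses import dataclass

@dataclass
class Pool:
    """Free disaggregated resources available across the 'data center computer'."""
    gpus: int
    dpus: int
    cpu_cores: int
    storage_tb: int

    def take(self, gpus, dpus, cpu_cores, storage_tb):
        need = dict(gpus=gpus, dpus=dpus, cpu_cores=cpu_cores, storage_tb=storage_tb)
        for name, amount in need.items():
            if getattr(self, name) < amount:
                raise RuntimeError(f"pool exhausted: not enough {name}")
        for name, amount in need.items():
            setattr(self, name, getattr(self, name) - amount)
        return need

    def give_back(self, allocation):
        for name, amount in allocation.items():
            setattr(self, name, getattr(self, name) + amount)

@contextmanager
def composed_machine(pool, **requirements):
    """Compose a short-lived 'computer' from the pool, then release it on exit."""
    allocation = pool.take(**requirements)
    try:
        yield allocation
    finally:
        pool.give_back(allocation)

if __name__ == "__main__":
    pool = Pool(gpus=64, dpus=128, cpu_cores=4096, storage_tb=500)
    # A job that exists only for the few hundred milliseconds it actually needs.
    with composed_machine(pool, gpus=8, dpus=16, cpu_cores=256, storage_tb=20) as machine:
        print("running on", machine)
    print("pool after release:", pool)
```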

Published Date : Jun 24 2020



Dave Russell & Jason Buffington, Veeam | VeeamON 2020


 

>> From around the globe, it's theCUBE, with digital coverage of VeeamON 2020, brought to you by Veeam. >> Welcome back. I'm Stu Miniman, and this is theCUBE's coverage of VeeamON 2020 Online this year. We've done the event for many years, and being able to reach the Veeam executives, some of their partners, and the like, where they are around the globe, I'm really excited to be able to dig in. We're gonna talk some numbers and the analysis, and to help me do that I've got two Veeam Cube alumni. We've had them on theCUBE before; we're always excited to talk with them and dig into the numbers with them, now that they are at Veeam. Dave Russell is the vice president of enterprise strategy, and Jason Buffington is the vice president of solution strategy, both with Veeam. Gentlemen, thanks so much for joining us. >> Thank you. Thank you. >> All right. First, I guess, let me ask how you guys are doing. We were having a little bit of a discussion before we came on here; as you know, everyone is now inundated with data and numbers and the like with this global pandemic. Dave, how are things doing in your neck of the woods, and then we'll go to Jason. >> Yeah, well, literally cannot complain. Personally, and Veeam itself is doing incredibly well as an organization; we'll double-click on that here. But in terms of data, particularly as it relates to this space that we're in, backup and recovery, availability, cloud data management, the recent data for first half 2020 is actually fascinating. We're gonna double-click on that a little bit more, right, Jason? >> We are. Now, as far as how we're doing, you know, I've been at every VeeamON: the first three as an analyst, the last two as a VP. I've never gotten to do one in my pajama bottoms, though, so that's kind of a nice change, to mix it up a little bit. But the other thing which has been kind of fun is that because we haven't been traveling, it really gave Dave and me a chance to get back to our roots a little bit and really dig into research, and how you apply research to product direction and go to market. So it's been a fun project that we're culminating with here at the event. >> Yeah, Jason, please don't be giving out secrets. I'm not saying that if you look up Dave Vellante's Twitter handle you'll find the suit on the top and shorts on the bottom. Look, what I refer to as Cube casual for some of these remote events. But you do have a breakout that you're doing, really looking at digital transformation and IT modernization. Digital transformation, I'm sure both of you from the analyst standpoint would agree, was a bit of a buzzword for a while. Today, with the backdrop of the global pandemic, it's like, well, if you have had the chance to go through the digital transformation, hopefully you get things put to the test: you're relying on data, you should be more agile, and those are all things that support the remote workforce and what they're doing. But if you hadn't finished that, or you'd either just started or are in the middle of that journey, the big question is, what are you doing, and will this accelerate it or slow it down? So, excited to dig into your research. Why don't you give us a little bit of the background: how long has this been going on, and who are you talking to as part of this research?
So the team went to an outside panel and said, Hey, don't tell anybody who is from when you interview these kinds of personas in these kinds of folks. We did 1550 enterprises and by that definition, meaning 1000 users or not across 18 different countries around the world. And then we even ask some questions around. Not only what country are you in, but in what countries do you influence? Data protection, strategy and architecture? Everyone from I T architects all the way through csos were part of that survey. And we've got some great data back not only from an executive perspective of what are the expectations of i t, but also from the i t implementer anti architect's perspective on what are their real world challenges today and That's some of things that we were at being really keen to understand more, to make sure that we're building the right things and saying the right things for our customers and our prospects. >>Excellent. And maybe give us a little bit of a backdrop. You know, when I think about enterprise is, you know, we always talk about these mega waves. You know, The things that I talked about is you know, when I talk to the CSO suite, it's not that they have Well, you know, I've got a multi cloud strategy, you know, I'm figuring out how cloud changes what I'm doing. Digital transformation is one of those things that brings together, you know, the business and the I t. And hopefully you know something I know we've all been talking about for quite a long time. I t just can't be a separate thing. Or so you know, a cost center but needs to really respond to the business. What's that Backdrop of digital transformation and, you know, bring us inside a little bit what your learnings >>were. Yeah, to me. I think I like the notion of digital transformation because it's very specific to every business, maybe even every business unit, meaning it's not a case of a vendor saying, Here's what your project should be. Rather, it's more of a notion of whatever initiative you have to try to increase customer intimacy, to be able to contain costs, expand your reach. That's really what digital transformations here to support. >>Excellent. And Jason give us a little bit of color as you know, some of the finding. >>Yeah, so I mean, I think the big ones that we looked at were, you know, what were the major I t challenges you had overall, and maybe not so much of a surprise, but staffing and legacy infrastructure. We're still some of the biggest things that we're holding back i t organizations, which I think is especially interesting in the landscape, the world right now, right, Because your staff can't be in the places where they used to be and from a legacy perspectives to I know you love data as much as we do. Um, the you know, if if organizations are spending between 68 82% of their money and their dollars on the status quo, that doesn't leave a whole lot left for the things that you'd like to do, like improving customer experience like accelerating the employees of your business. So things like digital transformation tend to get hindered by the same stuff that tenders I t. Modernization and just hear the buzz words just trying to do better in I t. For the sake of the business. But really, those have been kind of big gaps. >>Yeah, I think Jason hit a key point. There's two of you know the issue right now is a lot of us are just trying to run the business like, literally keep the lights on. You and Jason mentioned the stats of high sixties low seventies just trying to keep status quo. 
The digital transformation, in my mind is about obviously trying to run the business while you're seeking to grow the business and aspirational, hoping to transform your business to really improve customer intimacy and success of end customers as well as partners. So if done right, pursuing digital transformation can help you with tactical needs as well. A strategic outcome? >>Yeah, you know, it's it's it's a little sad, I think, from an industry standpoint, you talk about how much money in time is spent on keeping the lights on. I feel like 10. 15 years ago, it was, you know, the 80 85%. If you're saying, you know, we've whittled away a little bit now in the low seventies, some really good companies, it's getting, but we haven't things yet. Um, I'm curious. You know, you have this position, they don't know that it was sponsored by VM. So how do cloud as a general technology and then, you know, data protection and availability specifically, you know, fit into the overall priorities for that that I t modernization. >>So there were There were two questions that we really focused on that they're my two favorite slides in the in the whole deck. The 1st 1 that I thought was really interesting is when we asked organizations, What does modern data protection look like? Or innovative? And I think we use a few different buzzwords along the way, and we asked them, check all of these capabilities that might apply, and then which one is the most definitive? And we actually got two different sets of answers depending on how you pivot that data. If you ask, uh, most common responses, Modern data protection looks cloudy, and what I mean by that is the top choices scored were the ability to do D. R as a service. The ability to integrate on premise and cloud based is part of your data protection architecture. And then the ability to move data from one cloud to another would certainly reinforces the fact that we are not only in a hybrid world but in a multi hybrid world as well. So if you're looking for most common answers, modern data protection looks cloudy. But if you flip it over and you say what is the most definitive feature, you actually get something very different. You find out that the ability to leverage orchestration and workflow, the ability to manage via AP eyes and systems management the ability to be part of a cyber security strategy. So what you see is that modern data protection in general has to be cloudy. But more importantly, backup should not sit on an island of its own. It should be a cohesive part of a broader I T experience that's managed by something broader that's part of provisioning a systems framework. So those two answers kind of Tell us what should we not only making sure that we continue to build on, but also making sure that we're communicating as far as you know, does being meet the bar for what organizations are looking for in a modern or innovative data protection strategy? >>Yeah, that's really interesting. You know, I guess one of the big things I've seen over the last 12 to 18 months is maturation of things like, you know, a really hybrid strategy. So if I look at the team, you know the most critical partnerships, of course, our VM ware from a historical standpoint and things like Microsoft going Ford in both of them have made big strides over the last couple of years as you not just, you know, on premises versus Public Cloud. But how do all these things work together? 
The discussions that we've been having about cloud is not necessarily a destination, but it's more of an operating model. And as people build out their architectures, the all the things you mentioned there, it's not a place or a destination, But it's more of that architect view and can live across lots of different environment. Does that make sense. Yeah, >>yeah, it's across. It's a horizontal play, really, It's not moving from Point A to point B. It's really embracing expanded choices. So you know what we found when we did? This survey is directionally where organizations are at the day with on Prem physical virtual going towards cloud and then how they responded their intention two years later. There weren't major surprises there, meaning the shift was increasingly more towards cloud. But it also wasn't a case that on Prem physical goes to zero. So any more than it's a case of an organization goes 100% all in on one hyper scaler, all the cloud provider. So it's really about supporting a mixed, and it's about offering choice because every business or maybe more specifically, every workload within a business might have their own natural migration associated with what they need to do what's appropriate, given their business realities and their desires. So if we double click on what's really important from backup, the number one thing that came back from our global survey which a little incriminating on the state of the industry was the number one thing that would make us want to change our backup provider so that application would back up. That is an amazing, the shocking statement. That's like saying so. If you change cars, automobiles, what would you look for? First and foremost, and your response is an automobile that started. >>It was really scary right in 2020. So Dave and I have each been in backup almost exclusively for 30 years each, right and still you using label spell backup for almost the same length of time. And we've been doing this for a really long time. And in 2020 when I T pros were asked what would get them to change, it's they'd like it to work the way they thought it would when they bought it. I mean, that's just a really damning statement. And then beyond that, when the next drivers certainly economics came into play. So the number two answers were reducing hardware and software costs and improving. TCO nor I were two and three and then capabilities around, improving our P o rto SL A's and then ease of use. That kind of rounds out the top five with cloud coming in right behind that. So not a whole lot of surprise there, but what a terrible statement for the industry that we just like it to work. >>All right, how about some good news? What? What recommendations or guidance? Is there anything that you got out of it that you know, best practices or leaders in the space or what peers would recommend team to each other. >>So I think the two things that I took away that I thought was really interesting from a best practices and moving forward data reuse scored really, really high. So the interest in leveraging and the survey actually asked several different scenarios for what folks were either doing or aspiring to do around data use. And you can call it copy data management. You can call it secondary storage use cases. You whatever marketing buzz where you want. But the bottom line is, don't just put your data in the backup repository and wait for bad things happen. Do something with that data. 
DevOps acceleration, patch testing, risk mitigation, quarantine for forensics, for cyber. There was a lot of "yes, we're starting to do that," and also "yes, we're aspiring to over the next 12 months." I think data reuse was a really big thing that I was glad folks were getting along the way, and then also the recognition that, with the intolerance of downtime and the intolerance of data loss that was measured in the survey, it was really obvious that a lot more organizations understand they have to be combining not only backups but also snapshots and replication in a consistent way, because you can't meet the SLAs that most organizations have today if the only thing you're doing is a nightly backup. Now, at Veeam we would say: great, you ought to do snapshots, you ought to do replication, you ought to do backup; please don't use three different tools, times each one of those, times each workload, because it's not economically or operationally viable. So certainly that's good news for us, because we manage all three. But those were the two big drivers I was most excited about. >> And if I take what we got from the data protection report and couple that with recent industry analyst reports from the likes of IDC and Gartner, and merge that together, I think it shows one of the reasons why Veeam has been very successful. Literally, knock on wood, Veeam is up as a company 10% year over year, April over April, and that's been true for all 12 years that Veeam has been shipping backup product, so in a tough time we're actually doing extremely well: still hiring, still expanding. Gartner has Veeam, for calendar year 2019, moving from number four to number three in market share globally. IDC maintains Veeam is number one in market share in Europe and one of the top five vendors overall; three of the five were negative year over year, while Veeam was the highest sequentially and positive year over year. And I think the reason why, going back to the survey, in my mind was due to the software-defined nature of the solution. What I mean by that in particular, and why that has customer value, especially now in the current pandemic situation, is that you can leverage the existing infrastructure that you've got. Those of us who have been around remember the macroeconomic issues of 2008: organizations held on to their assets much, much longer and refresh cycles slowed down. So the ability to leverage the infrastructure that you have, to scale out horizontally, to ingest more data, to have a horizontal management plane, and to have a service repository that can include cloud and object storage just allows you to better leverage the investments you've made, while flexing appropriately for workloads and expanding into things like public cloud and object storage as you see fit. >> Well, Dave and Jason, thank you so much for the update. Real pleasure to catch up with you, as always; always great to dig into the data with you both. >> Thank you. >> All right, stay tuned for more coverage from VeeamON 2020 Online. I'm Stu Miniman, and thank you for watching theCUBE.
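Dave and Jason's point that nightly backup alone cannot meet today's SLAs, and that backups, snapshots and replicas need to be managed together, can be made concrete with a small sketch. The Python below is a generic illustration and does not use Veeam's actual products or APIs; the workload names, timestamps and RPO values are made up. Given the last successful run of each protection method per workload, it reports which workloads are currently outside their recovery point objective.

```python
from datetime import datetime, timedelta

# Last successful protection point per workload, per method (illustrative data only).
LAST_SUCCESS = {
    "erp-db":   {"backup": "2020-06-18 01:10", "snapshot": "2020-06-18 11:45", "replica": "2020-06-18 11:50"},
    "file-srv": {"backup": "2020-06-17 23:40"},
    "web-tier": {"backup": "2020-06-18 02:05", "snapshot": "2020-06-18 06:00"},
}

# Agreed recovery point objective per workload.
RPO = {
    "erp-db":   timedelta(minutes=15),
    "file-srv": timedelta(hours=24),
    "web-tier": timedelta(hours=4),
}

def rpo_report(now: datetime):
    for workload, runs in LAST_SUCCESS.items():
        # Newest protection point across backup, snapshot and replica.
        newest = max(datetime.strptime(ts, "%Y-%m-%d %H:%M") for ts in runs.values())
        exposure = now - newest  # worst-case data loss if a failure happened right now
        status = "OK" if exposure <= RPO[workload] else "RPO VIOLATION"
        print(f"{workload:10s} exposure={exposure}  rpo={RPO[workload]}  {status}")

if __name__ == "__main__":
    rpo_report(datetime(2020, 6, 18, 12, 0))
```

In this made-up data, the database that combines backup, snapshot and replica easily meets a 15-minute RPO, while the workload relying on a stale snapshot and a nightly backup does not, which is the behavior the survey respondents described.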

Published Date : Jun 18 2020



VxRail Taking HCI to Extremes, Dell Technologies


 

from the cube Studios in Palo Alto in Boston connecting with thought leaders all around the world this is a cute conversation hi I'm Stu minimun and welcome to this special presentation we have a launch from Dell technologies updates to the BX rail family we're gonna do things a little bit different here we actually have a launch video from Janet champion of Dell technologies and the way we do things a lot of times is analysts get a little preview or when you're watching things you might have questions on it though rather than me just walking it are you watching herself I actually brought in a couple of Dell technologies expert two of our cube alumni happy to welcome back to the program Jonathan Segal he is the vice president of product marketing and Chad Dunn who's the vice president at price today of product management both of them with Dell technologies gentlemen thanks so much for joining us it was too great to be here all right and so what we're gonna do is we're gonna be rolling the video here I've got a button I'm gonna press Andrew will stop it here and then we'll kind of dig in a little bit go into some questions when we're all done we're actually holding a crowd chat where you will be able to ask your questions talk to the expert and everything and so a little bit different way to do a product announcement hope you enjoy it and with that it's VX rail taking API to the extremes is is the theme we'll see you know how what that means and everything but without any further ado it but let's look fanon take the video away hello and welcome my name is Shannon champion and I'm looking forward to taking you through what's new with the ex rail let's get started we have a lot to talk about our launch covers new announcements addressing use cases across the core edge and cloud and spans both new hardware platforms and options as well as the latest in software innovations so let's jump right in before we talk about our announcements let's talk about where customers are adopting the ex rail today first of all on behalf of the entire Dell technologies and BX Rail teams I want to thank each of our over 8,000 customers big and small in virtually every industry who have chosen the x rail to address a broad range of workloads deploying nearly a hundred thousand nodes to date thank you our promise to you is that we will add new functionality improve serviceability and support new use cases so that we deliver the most value to you whether in the core at the edge or for the cloud in the core the X rail from day one has been a catalyst to accelerate IT transformation many of our customers started here and many will continue to leverage VX rail to simply extend and enhance your VMware environment now we can support even more demanding applications such as in-memory databases like s AP HANA and more AI and ML applications with support for more and more powerful GPUs at the edge video surveillance which also uses GPUs by the way is an example of a popular use case leveraging the X rail alongside external storage and right now we all know the enhanced role that IT is playing and as it relates to VDI the X Rail has always been a great option for that in the cloud it's all about kubernetes and how dell technologies cloud platform which is VCF on the x rail can deliver consistent infrastructure for both traditional and cloud native applications and we're doing that together with VMware the X ray o is the only jointly engineered HCI system built with VMware for VMware environments designed to enhance the 
native VMware experience this joint engineering with VMware and investments in software innovation together deliver an optimized operational experience at reduced risk for our customers all right so Shannon talked a bit about you know the important role of IP of course right now with the global pandemic going on it's really you know calling in you know essential things you know putting you know platforms to the test so I'd really love to hear what both of you are hearing from customers also you know VDI of course you know in the early days it was HDI only does VDI now we know there are many solutions but remote work is you know putting that back front and center so John why don't we start with you is you know what you're absolutely so first of all us - thank you I want to do a shout out to our BX real customers around the world it's really been humbling inspiring and just amazing to see the impact of our bx real customers around the world and what they're having on on human progress here you know just for a few examples there are genomics companies that we have running the X rail that have a row about testing at scale we also have research universities out in the Netherlands on doing the antibody detection the US Navy has stood up a hosta floating Hospital >> of course care for those in need so look we are here to help that's been our message to our customers but it's amazing to see how much they're helping society during this so just just a pleasure there but as you mentioned just to hit on the the VDI comments so it's your points do you know HCI and vxr8 EDI that was initially use case years ago and it's been great to see how many of our existing VX real customers have been able to inhibit very quickly leveraging via trail to add and to help bring their remote workforce you know online and support them with your existing VX rail because V it really is flexible it is agile to be able to support those multiple workloads and in addition to that we've also rolled out some new VDI bundles to make it simpler for customers more cost-effective catered to everything from knowledge workers to multimedia workers you name it you know from 250 desktops up to a thousand but again back to your point BX rail ci is well beyond video it had crossed the chasm a couple years ago actually and you know where VDI now is less than a third of the typical workloads any of our customers out there it supports now a range of workloads as you heard from Shannon whether it's video surveillance whether it's general purpose only to mission-critical applications now with SAV ha so you know this is this has changed the game for sure but the range of workloads and the flexibility of yet rail is what's really helping our existing customers from this pandemic we've seen customers really embrace HCI for a number of workloads in their environments from the ones that we serve all knew and loved back in the the initial days of of HCI now the mission-critical things now to cloud native workloads as well and you know sort of the efficiencies that customers are able to get from HCI and specifically VX rail gives them that ability to pivot when these you know shall we say unexpected circumstances arise and I think if that's informing their their decisions and their opinions on what their IT strategies look like as they move forward they want that same level of agility and the ability to react quickly with our overall infrastructure excellent want to get into the announcements what I want my team actually your team gave me access to 
the CIO from the city of Amarillo so maybe they can dig up that footage talk about how fast they pivoted you know using VX rail to really spin up things fast so let's hear from the announcements first and then definitely want to share that that customer story a little bit later so let's get to the actual news that and it's gonna share okay now what's new I am pleased to announce a number of exciting updates and new platforms to further enable IT modernization across core edge and cloud I will cover each of these announcements in more detail demonstrating how only the X rail can offer the breadth of platform configurations automation orchestration and lifecycle management across a fully integrated hardware and software full stack with consistent simple side operations to address the broadest range of traditional and modern applications I'll start with hybrid cloud and recap what you may have seen in the Dell technologies cloud announcements just a few weeks ago related to VMware cloud foundation on the X rail then I'll cover two brand new VX rail hardware platforms and additional options and finally circle back to talk about the latest enhancements to our VX rail HCI system software capabilities for lifecycle management let's get started with our new cloud offerings based on the ex rail you xrail is the HCI foundation for dell technologies cloud platform bringing automation and financial models similar to public cloud to on-premises environments VMware recently introduced cloud foundation for dotto which is based on vSphere 7 as you likely know by now vSphere 7 was definitely an exciting and highly anticipated release in keeping with our synchronous release commitment we introduced the XR l 7 based on vSphere 7 in late April which was within 30 days of VMware's release two key areas that VMware focused on were embedding containers and kubernetes into vSphere unifying them with virtual machines and the second is improving the work experience for vSphere administrators with vSphere lifecycle manager or VL CM I'll address the second point a bit in terms of how the X rail fits in in a moment for V cf4 with tansu based on vSphere 7 customers now have access to a hybrid cloud platform that supports native kubernetes workloads and management as well as your traditional vm based workloads and this is now available with VCF 4 on the ex rel 7 the X rails tight integration with VMware cloud foundation delivers a simple and direct path not only to the hybrid cloud but also to deliver kubernetes a cloud scale with one complete automated platform the second cloud announcement is also exciting recent VCF for networking advancements have made it easier than ever to get started with hybrid cloud because we're now able to offer a more accessible consolidated architecture and with that Dell technologies cloud platform can now be deployed with a four node configuration lowering the cost of an entry-level hybrid cloud this enables customers to start smaller and grow their cloud deployment over time VCF on the x rail can now be deployed in two different ways for small environments customers can utilize a consolidated architecture which starts with just four nodes since the management and workload domains share resources in this architecture it's ideal for getting started with an entry-level cloud to run general-purpose virtualized workloads with a smaller entry point both in terms of required infrastructure footprint as well as cost but still with a consistent cloud operating model for larger environments we're 
dedicated resources and role based access control to separate different sets of workloads is usually preferred you can choose to deploy a standard architecture which starts at 8 nodes for independent management and workload domains a standard implementation is ideal for customers running applications that require dedicated workload domains that includes horizon VDI and vSphere with kubernetes all right John there's definitely been a lot of interest in our community around everything that VMware's doing with vSphere 7 understand if you wanted to use the kubernetes piece you know it's it's VCF as that so we you know we've seen the announcements delt partnering there helped us connect that story between you know really the the VMware strategy and how they've talked about cloud and how you know where does the X rail fit in that overall Delta cloud story absolutely so so first of all is through the x-ray of course is integral to the Delta cloud strategy you know it's been VCF on bx r l equals the delta cloud platform and this is our flagship on-prem cloud offering that we've been able to enable operational consistency across any cloud right whether it's on prem in the edge or in a public cloud and we've seen the delta cloud platform embraced by customers for a couple key reasons one is it offers the fastest hybrid cloud deployment in the market and this is really you know thanks to a new subscription on offer that we're now offering out there we're at less than 14 days it can be set up and running and really the deltek cloud does bring a lot of flexibility in terms of consumption models overall comes to the extra secondly I would say is fast and easy upgrades I mean this is this is really this is what VX real brings to the table for all our clothes if you will and it's especially critical in the cloud so the full automation of lifecycle management across the hardware and software stack boss the VMware software stack and in the Dell software however we're supporting that together this enables essentially the third thing which is customers can just relax right they can be rest assured that their infrastructure will be continuously validated and always be in a continuously validated state and this this is the kind of thing that you know those three value propositions together really fit well with with any on print cloud now you take what Shannon just mentioned and the fact that now you can build and run modern applications on the same the x-ray link structure alongside traditional applications this is a game changer yeah it I love you know I remember in the early days that about CI how does that fit in with cloud discussion and align I've used the last couple years this you know modernize the platform then you can modernize the application though as companies are doing their full modernization this plays into what you're talking about all right let's get you know can't let ran and continue get some more before we dig into some more analysis that's good let's talk about new hardware platforms and updates that result in literally thousands of potential new configuration options covering a wide breadth of modern and traditional application needs across a range of the actual use cases first up I am incredibly excited to announce a brand new delhi MCB x rail series the DS series this is a ruggedized durable platform that delivers the full power of the x rail for workloads at the edge in challenging environments or for space constrained areas the X ray LD series offers the same compelling benefits as 
the rest of the BX rail portfolio with simplicity agility and lifecycle management but in a lightweight short depth at only 20 inches it's a durable form factor that's extremely temperature resilient shock resistant and easily portable it even meets mil spec standards that means you have the full power of lifecycle automation with VX rail HCI system software and 24 by 7 single point of support enabling you to rapidly react to business needs no matter the location or how harsh the conditions so whether you're deploying a data center at a mobile command base running real-time GPS mapping on-the-go or implementing video surveillance in remote areas you can ensure availability integrity and confidence for every workload with the new VX Rail ruggedized D series had would love for you to bring us in a little bit you know that what customer requirement bringing bringing this to market I I remember seeing you know Dell servers ruggedized of course edge you know really important growth to build on what John was talking about clouds so yeah Chad bring us inside what was driving this piece of the offering sure Stu yeah you know having the the hardware platforms that can go out into some of these remote locations is really important and that's being driven by the fact that customers are looking for compute performance and storage out at some of these edges or some of the more exotic locations you know whether that's manufacturing plants oil rigs submarine ships military applications in places that we've never heard of but it's also been extending that operational simplicity of the the sort of way that you're managing your data center that has VX rails you're managing your edges the same way using the same set of tools so you don't need to learn anything else so operational simplicity is is absolutely key here but in those locations you can take a product that's designed for a data center where you're definitely controlling power cooling space and take it to some of these places where you get sand blowing or sub-zero temperatures so we built this D series that was able to go to those extreme locations with extreme heat extreme cold extreme altitude but still offer that operational simplicity if you look at the the resistance that it has to heat it can go from around operates at a 45 degrees Celsius or 113 degrees Fahrenheit range but it can do an excursion up to 55 °c or 131 degrees Fahrenheit for up to eight hours it's also resisted the heats and dust vibration it's very lightweight short depth in fact it's only 20 inches deep this is a smallest form factor obviously that we have in the BX rail family and it's also built to to be able to withstand sudden shocks it's certified it was stand 40 G's of shock and operation of the 15,000 feet of elevation it's pretty high and you know this is this is sort of like where were skydivers go to when they weren't the real real thrill of skydiving where you actually the oxygen to to be a put that out to their milspec certified so mil-std 810g which i keep right beside my bed and read every night and it comes with a VX rail stick hardening package is packaging scripts so that you can auto lock down the rail environment and we've got a few other certifications that are on the roadmap now for for naval chakra quirements EMI and radiation immunity of all that yeah you know it's funny I remember when weights the I first launched it was like oh well everything's going to white boxes and it's going to be you know massive you know no differentiation between everything out 
there if you look at what you're offering if you look at how public clouds build their things what I call it a few years poor is there's a pure optimization so you need scale you need similarities but you know you need to fit some you know very specific requirements lots of places so interesting stuff yeah certifications you know always keep your teams busy alright let's get back to Shannon we are also introducing three other hardware based editions first a new VX rail eseries model based on were the first time AMD epic processors these single socket 1u nodes offered dual socket performance with CPU options that scale from 8 to 64 cores up to a terabyte of memory and multiple storage options making it an ideal platform for desktop VDI analytics and computer-aided design next the addition of the latest NVIDIA Quadro RT X GPUs brings the most significant advancement in computer graphics in over a decade to professional workflows designers and artists across industries can now expand the boundary of what's possible working with the largest and most complex graphics rendering deep learning and visual computing workloads and Intel obtain DC persistent memory is here and it offers high performance and significantly increase memory capacity with data persistence at an affordable price persistence is a critical feature that maintains data integrity even when power is lost enabling quicker recovery and less downtime with support for Intel obtain DC persistent memory customers can expand in memory intensive workloads and use cases like sa P Hana alright let's finally dig into our HCI system software which is the core differentiation for the xrail regardless of your workload or platform choice our joint engineering with VMware and investments in the x-ray HCI system software innovation together deliver an optimized operational experience at reduced risk for our customers under the covers the xrail offers best-in-class Hardware married with VMware HCI software either vcn or VCF but what makes us different stems from our investments to integrate the two Dell technologies has a dedicated VX rail team of about 400 people to build market sell and support a fully integrated hyper-converged system that team has also developed our unique the X rail HDI system software which is a suite of integrated software elements that extend VMware native capabilities to deliver a seamless automated operational experience that customers cannot find elsewhere the key components of the x rail HDI system software are shown around the arc here that include the X rail manager full stack lifecycle management ecosystem connectors and support I don't have time to get into all the details of these elements today but if you're interested in learning more I encourage you to meet our experts and I will tell you how to do that in a moment I touched on VLC M being a key feature to vSphere seven earlier and I'd like to take the opportunity to expand on that a bit in the context of the xrail lifecycle management the LCM adds valuable automation to the execution of updates for customers but it doesn't eliminate the manual work still needed to define and package the updates and validate all of the components prior to applying them with the X ray all customers have all of these areas addressed automatically on their behalf freeing them to put their time into other important functions for their business customers tell us that lifecycle management continues to be a major source of the maintenance effort they put into their infrastructure and 
then it tends to lead to overburden IT staff that it can cause disruptions to the business if not managed effectively and that it isn't the most efficient economically Automation of lifecycle management in VX Rail results in the utmost simplicity from a customer experience perspective and offers operational freedom from maintaining infrastructure but as shown here our customers not only realize greater IT team efficiencies they have also reduced downtime with fewer unplanned outages and reduced overall cost of operations with the xrail HCI system software intelligent lifecycle management upgrades of the fully integrated hardware and software stack are automated keeping clusters in continuously validated States while minimizing risks and operational costs how do we ensure continuously validated States Furby xrail the x-ray labs execute an extensive automated repeatable process on every firmware and software upgrade and patch to ensure clusters are in continuously validated states of the customer's choosing across their VX rail environment the VX rail labs are constantly testing analyzing optimising and sequencing all of the components in the upgrade to execute in a single package for the full stack all the while the x rail is backed by Delhi MCS world-class services and support with a single point of contact for both hardware and software IT productivity skyrockets with single-click non-disruptive upgrades of the fully integrated hardware and software stack without the need to do extensive research and testing taking you to the next VX rail version of your choice while always in a continuously validated state you can also confidently execute automated VX rail upgrades no matter what hardware generation or node types are in the cluster they don't have to all be the same and upgrades with VX rail are faster and more efficient with leap frogging simply choose any VX rail version you desire and be assured you will get there in a validated state while seamlessly bypassing any other release in between only the ex rail can do that all right so Chad you know the the lifecycle management piece that Jana was just talking about is you know not the sexiest it's often underappreciated you know there's not only the years of experience but the continuous work you're doing you know reminds me back you know the early V sand deployments versus VX rail jointly develop you know jointly tested between Dell and VMware so you know bring us inside why you know 2020 lifecycle management still you know a very important piece especially in the VL family yeah let's do I think it's sexy but I'm pretty big nerd yes even more the larger the deployments come when you start to look at data centers full of VX rails and all the different hardware software firmware combinations that could exist out there it's really the value that you get out of that VX r l HTI system software that Shannon was talking about and how its optimized around the VMware use case very tightly integrated with each VMware component of course and the intelligence of being able to do all the firmware all of the drivers all of the software altogether tremendous value to our customers but to deliver that we really need to make a fairly large investment so she Anna mentioned we've run about twenty five thousand hours of testing across each major release four patches Express patches that's about seven thousand hours for each of those so obviously there's a lot of parallelism and and we're always developing new test scenarios for each release that we need to 
build in as we as we introduce new functionality one of the key things that were able to do as Shannon mentioned is to be able to leapfrog releases and get you to that next validated state we've got about 100 engineers just working on creating and executing those test cases on a continuous basis and obviously a huge amount of automation and then when we talk about that investment to execute those tests that's well north of sixty million dollars of investment in our lab in fact we've got just over two thousand VH rail units in our testbed across the u.s. Shanghai China and corn island so a massive amount of testing of each of those those components to make sure that they operate together in a validated state yeah well you know absolutely it's super important not only for the day one but the day two deployments but I think this actually be a great place for us to bring in that customer that Dell gave me access to so we've got the CIO of Amarillo Texas he was an existing VX rail customer and he's going to explain what happened as to how he needed to react really fast to support the work from home initiative as well as you know we get to hear in his words the value of what lifecycle management means though Andrew if we could queue up that that customer segment please it was it's been massive and it's been interesting to see the IT team absorb it you know as we mature and they I think they embrace the ability to be innovative and to work with our departments but this instance really justified why I was driving progress so so fervently why it was so urgent today three years ago we the answer would have been no there would have been we wouldn't have been in a place where we could adapt with it with the x-ray all in place you know in a week we spun up hundreds of instant phones we spawned us a seventy five person call center in a day and a half for our public health we will allow multiple applications for Public Health so they could do remote clinics it's given us the flexibility to be able to to roll out new solutions very quickly and be very adaptive and it's not only been apparent to my team but it's really made an impact on the business and now what I'm seeing is those those are my customers that were a little lagging or a little conservative or understanding the impact of modernizing the way they do business because it makes them adaptable as well all right so rich you talked to a bunch about the the efficiencies that they tie put place how about that that overall just managed you know you talked about how fast you spun up these new VDI instances you need to be able to do things much simpler so you know how does the overall lifecycle management fit into this discussion it makes it so much easier and you know in the in the old environment one it took a lot of man-hours to make change it was it was very disruptive when we did make change this it overburdened I guess that's the word I'm looking for it really over overburdened our staff it cost disruption to business it was it cost-efficient and then you simple things like you know I've worked for multi billion-dollar companies where we had massive QA environments that replicated production simply can't afford that at local government you know having the sort of environment lets me do a scaled-down QA environment and still get the benefit of rolling out non disruptive change as I said earlier it's allow us to take all of those cycles that we were spending on lifecycle management because it's greatly simplified and move those resources and rescale 
them in in other areas where we can actually have more impact on the business it's hard to be innovated when a hundred percent of your cycles are just keeping the ship afloat all right well you know nothing better than hearing straight from the end-user you know public sector reacting very fast to the Cova 19 and you know you heard him he said if this had hit his before he had run this project he would not have been able to respond so I think everybody out there understands if I didn't actually have access to the latest technology you know it would be much harder all right I'm looking forward to doing the crowd chat and everybody else digging with questions and get follow-up but a little bit more I believe one more announcement he came and got for us though let's roll the final video clip in our latest software release the x-ray of 4.7 dot 510 we continue to add new automation and self-service features new functionality enables you to schedule and run upgrade health checks in advance of upgrades to ensure clusters are in a ready state for the next upgrade or patch this is extremely valuable for customers that have stringent upgrade windows as they can be assured the clusters will seamlessly upgrade within that window of course running health checks on a regular basis also helps ensure that your clusters are always ready for unscheduled patches and security updates we are also offering more flexibility and getting all nodes or clusters to a common release level with the ability to reimage nodes or clusters to a specific the xrail version or down Rev one or more more nodes that may be shipped at a higher Rev than the existing cluster this enables you to easily choose your validated state when adding new nodes or repurposing nodes in cluster to sum up all of our announcements whether you are accelerating data center modernization extending HCI to harsh edge environments deploying an on-premises Dell technologies cloud platform to create a developer ready kubernetes infrastructure BX Rail is there delivering a turnkey experience that enables you to continuously innovate realize operational freedom and predictably evolve the x rail provides an extensive breadth of platform configurations automation and lifecycle management across the integrated hardware and software full stack and consistent hybrid cloud operations to address the broadest range of traditional and modern applications across core edge and cloud I now invite you to engage with us first the virtual passport program is an opportunity to have some fun while learning about the ex rails new features and functionality and score some sweet digital swag while you're at it it delivered via an automated via an augmented reality app all you need is your device so go to the x-ray is slash passport to get started and secondly if you have any questions about anything I talked about or want a deeper conversation we encourage you to join one of our exclusive VX rail meet the experts sessions available for a limited time first-come first-served just go to the x-ray dot is slash expert session to learn more you all right well obviously with everyone being remote there's different ways we're looking to engage so we've got the crowd chat right after this but John gives a little bit more is that how Del's making sure to stay in close contact with customers and what you've got firfer options for them yeah absolutely so as Shannon said so in lieu of not having Dell tech world this year in person where we could have those great in-person interactions and 
answer questions, whether it's in the booth or, you know, in meeting rooms, we are going to have these Meet the Experts sessions over the next couple of weeks. And look, we're going to put our best and brightest from our technical community out there and make them accessible to everyone. So again, I definitely encourage you; we're trying new things here in this virtual environment to ensure that we can still stay in touch, answer questions, be responsive, and we're really looking forward to, you know, having these conversations over the next couple of weeks. >> All right, well, John and Chad, thank you so much. We definitely look forward to the conversation here in the CrowdChat. If you're here live, definitely go down below and do it; if you're watching this on demand, you can see the full transcript of it at crowdchat.net/vxrailrocks. For myself, Shannon on the video, John and Chad, and Andrew, the man in the booth there, thank you so much for watching, and go ahead and join the CrowdChat.
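For readers who want to see what the scheduled pre-upgrade health check described in the clip might look like in practice, here is a minimal sketch of the schedule-then-poll pattern. The REST paths, field names, host, and credentials below are hypothetical placeholders for illustration, not the documented VxRail Manager API.

```python
# Hypothetical sketch of the "schedule a health check before an upgrade" flow.
# Endpoint paths, field names, VXM_HOST, and AUTH are illustrative placeholders,
# not the documented VxRail Manager REST API.
import time
import requests

VXM_HOST = "https://vxm.example.local"   # placeholder VxRail Manager address
AUTH = ("admin@local", "change-me")      # placeholder credentials

def schedule_precheck(target_version: str) -> str:
    """Kick off a pre-upgrade health check and return a request ID."""
    resp = requests.post(
        f"{VXM_HOST}/rest/vxm/v1/cluster/precheck",   # hypothetical path
        json={"target_version": target_version},
        auth=AUTH,
        verify=False,   # lab-only shortcut; use real certificates in production
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["request_id"]

def wait_until_ready(request_id: str, poll_seconds: int = 60) -> bool:
    """Poll the health-check job so the cluster is known-ready before the upgrade window."""
    while True:
        resp = requests.get(
            f"{VXM_HOST}/rest/vxm/v1/requests/{request_id}",  # hypothetical path
            auth=AUTH, verify=False, timeout=30,
        )
        resp.raise_for_status()
        state = resp.json().get("state")
        if state == "COMPLETED":
            return resp.json().get("result") == "PASS"
        if state == "FAILED":
            return False
        time.sleep(poll_seconds)

if __name__ == "__main__":
    req = schedule_precheck("4.7.510")
    print("Cluster ready for upgrade:", wait_until_ready(req))
```

The value of the pattern is simply that the check runs ahead of the stringent upgrade window, so the window itself is spent upgrading rather than discovering problems.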

Published Date : Jun 5 2020

SUMMARY :

fast to COVID-19 and you know you

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Jonathan Segal | PERSON | 0.99+
John | PERSON | 0.99+
Shannon | PERSON | 0.99+
15,000 feet | QUANTITY | 0.99+
Chad Dunn | PERSON | 0.99+
Chad | PERSON | 0.99+
Andrew | PERSON | 0.99+
131 degrees Fahrenheit | QUANTITY | 0.99+
Janet | PERSON | 0.99+
Palo Alto | LOCATION | 0.99+
US Navy | ORGANIZATION | 0.99+
40 G | QUANTITY | 0.99+
VMware | ORGANIZATION | 0.99+
2020 | DATE | 0.99+
Dell | ORGANIZATION | 0.99+
113 degrees Fahrenheit | QUANTITY | 0.99+
45 degrees Celsius | QUANTITY | 0.99+
8 | QUANTITY | 0.99+
Netherlands | LOCATION | 0.99+
NVIDIA | ORGANIZATION | 0.99+
today | DATE | 0.99+
Jana | PERSON | 0.99+
Anna | PERSON | 0.99+
late April | DATE | 0.99+
a day and a half | QUANTITY | 0.99+
vSphere 7 | TITLE | 0.99+
vSphere | TITLE | 0.99+
Amarillo | LOCATION | 0.99+
two | QUANTITY | 0.99+
thousands | QUANTITY | 0.99+
each release | QUANTITY | 0.98+
both | QUANTITY | 0.98+
30 days | QUANTITY | 0.98+
250 desktops | QUANTITY | 0.98+
less than 14 days | QUANTITY | 0.98+
about 400 people | QUANTITY | 0.98+
Boston | LOCATION | 0.98+
two key areas | QUANTITY | 0.98+
less than a third | QUANTITY | 0.98+
about seven thousand hours | QUANTITY | 0.98+
24 | QUANTITY | 0.97+
7 | QUANTITY | 0.97+
VX Rail | COMMERCIAL_ITEM | 0.97+
20 inches | QUANTITY | 0.97+
Dell Technologies | ORGANIZATION | 0.97+
about twenty five thousand hours | QUANTITY | 0.97+
over two thousand VH | QUANTITY | 0.97+
Stu minimun | PERSON | 0.97+
over 8,000 customers | QUANTITY | 0.97+
u.s. | LOCATION | 0.97+
HCI | ORGANIZATION | 0.97+
corn island | LOCATION | 0.97+
each | QUANTITY | 0.97+
64 cores | QUANTITY | 0.97+
up to a thousand | QUANTITY | 0.96+
one | QUANTITY | 0.96+
first time | QUANTITY | 0.96+
x rail | TITLE | 0.95+

Lisa Spelman, Intel | Red Hat Summit 2020


 

>> From around the globe, it's theCUBE, with digital coverage of Red Hat Summit 2020, brought to you by Red Hat. >> Welcome back to theCUBE's coverage of Red Hat Summit 2020. Of course, this year, rather than all coming to San Francisco, we are talking to Red Hat executives, their partners, and their customers where they are around the globe. Happy to welcome back one of our CUBE alumni, Lisa Spelman, who's a corporate vice president and general manager of the Intel Xeon and Memory Group. Lisa, thanks so much for joining us, and where are you joining us from? >> Well, thank you for having me. I'm a little further north than where the conference was going to be held, so I'm in Portland, Oregon right now. >> Excellent. Yeah, we've had, you know, customers from around the globe as part of theCUBE coverage here, and of course you're near the mothership of Intel. So Lisa, let's start, of course, with the Red Hat partnership. You know, I've seen the Intel executives on the keynote stage for many years, so to start us off, talk about the Intel and Red Hat partnership as it stands today in 2020. >> Yeah, you know, on the keynote stage for many years, and then actually again this year. So despite the virtual nature of the event that we're having, we're trying to still show up together and demonstrate together to our customers and our developer community, really give them a sense for all the work that we're doing across the important transformations that are happening in the industry. So we view this partnership and this event as important ways for us to connect and make sure that we have a chance to really share where we're going next, and gather feedback on where our customers and that developer community need us to go together. Because it is, you know, a rich, long history of partnership, of the combination of our hardware work and the open source software work that we do with Red Hat, and we see that every year increasing in value as we expand to more workloads and more market segments that we can help with our technology. >> Yeah, well, Lisa, you know, we've seen on theCUBE for many years Intel's strong partnerships across the industry, from the data center, from the cloud. I think we're going to talk a little bit about edge for this discussion, though; edge and 5G. I think about all the hard work that Intel does, especially with its partnerships, you know, the one you talked about, and I think back to the early days of Red Hat, you know, the operating system, the things that were done as virtualization rolled out, the accelerations that have gone through. So when it comes to edge and 5G, obviously big mega waves that we spend a lot of time talking about, what's Intel's piece? Obviously, we know Intel chips go everywhere, but when it comes to kind of the engineering work that gets done, what are some of the pieces of Intel's work? >> Yeah, and that's a great example, actually, of what we are seeing, this expansion of areas of workloads and investment and opportunity that we face. So as we move forward into 5G becoming not the theoretical next thing, but actually the thing that is starting to be deployed and transformed, you can see a bunch of underlying work that Intel and Red Hat have done together in order to make that a reality. So you look at the move from a very proprietary, ASIC-based type of workload, with a single function running on it, and what we've done is drive to have the virtualization capabilities that took over and provided so much value in the cloud data center also apply to the 5G network. So the move to network function virtualization and software-defined
networking, and a lot of value being derived from the opportunity to run that on open source standards, and have that open source community really come together to make it easier and faster to deploy those technologies, and also to get good SLAs and quality of service while you're driving down your overall total cost of ownership. So we've spent years working on that together in the 5G space and the network space in general, and now it's really starting to take off. And that is very well connected to the edge. So if you think about the edge as this point of content creation, of where the action is happening, and you start to think through how much of the compute, or the value, can I get out at the edge without everything having to go all the way back to the data center, you start to again see how those open standards, in very complex environments, help people manage their total cost of ownership and the complexity. >> All right, Lisa, so when you're talking about edge solutions, when I've been talking to Red Hat, their first deployments have really been with the service providers. Really, I've seen it as an extension of what you were talking about, network functions virtualization. You know, everybody talks about edge; there's a lot of different edges out there, the service providers being the first place we see things, but, you know, all the way out even to the consumer edge and the device edge, where Intel may or may not have, you know, some devices there. So help us understand, you know, where you're sitting, and where should we be looking as these technologies work? >> You know, it's a great point. We see the edge being developed by multiple types of organizations. So yes, the service providers are obviously there, insomuch as they already even own the location points out there, if you think of the myriad of poles with the base stations and everything that's out there; that's a tremendous asset to capitalize on. You also see our cloud service provider customers moving towards the edge as well, as they think of new developer services and capabilities, and of course you see the enterprise edge coming in, if you think of factory types of utilization methodologies, or in manufacturing. All of those are very enterprise-based and are really focused not on that consumer edge, but on the B2B edge, or the infrastructure edge, as you might think of it, but they're working through how they add efficiency, capability, automation, all into their existing work, but making it better. So at Intel, the way that we look at that is, it's all opportunities to provide the right foundation for that. So when we look at the silicon products that we develop, we gather requirements from that entire landscape, and then we work through our silicon portfolio. You know, we have our portfolio really focused on the movement, the storage, and the processing of data, and we try to look at that in a very holistic way and decide where the capability will best serve that workload. So you do have a choice at times whether some new feature or capability goes into the CPU, or the Xeon engine, or you could think about whether that would be better served by being added into a SmartNIC type of capability. And so those are just small examples of how we look at the entirety of the data flow in the edge, and at what the use case is, and then we utilize that to inform how we improve the silicon and where we add features. >> Well, Lisa, as you were going through this, it makes me also think about one of the other big mega waves out there: artificial
intelligence. So lots of discussion, as you were saying, about what goes where, how we think about it: cloud, edge, devices. So how does AI intersect with this whole discussion of edge that we were just having? >> Yeah, and you're probably going to have to cut me off, because I could go on for a long time on this one. But AI is such an exciting capability that is coming through everywhere, literally from the edge, through the core network, into the cloud, and you see it infiltrating every single workload across the enterprise, across cloud service providers, across the network service providers. So it is truly on its way to being completely pervasive, and so, again, that presents the same opportunity for us. So if you look at your silicon portfolio, you need to be able to address artificial intelligence all the way from the edge to the cloud, and that can mean adding silicon capabilities that can handle milliwatts, like ruggedized, super low power, super long life, you know, literally out at the edge, and then all the way back to the data center, where you're going for much higher power at a higher capability for training of the models. So we have built out a portfolio that addresses all of that. And one of the interesting things about the edge is people always think of it as a low-compute area, so they think of it as data collection, but more and more of that data collection is also having a great benefit from being able to do an amount of compute and inference out at the edge. So we see a tremendous amount of actual Xeon product being deployed out at the edge, because of the need to actually deliver quite high-powered compute right there, and that's improving customer experiences and it's changing use cases through, again, healthcare, manufacturing, automotive; you see it in all the major fast-mover edge industries. >> Yeah, those are really good points you make there, Lisa. We all got used to, you know, limitless compute in the cloud, and therefore, you know, let's put everything there, but of course we understand there's this little thing called the speed of light that means that much of the information that is collected at the edge can't go beyond it. You know, I saw a great presentation actually last year talking about geosynchronous satellites; they collect so much information, and, you know, you can't just beam it back and forth, so I'd better have some compute there. So, you know, we've known for a long time that the challenge of our day has been distributed architectures, and edge just, you know, changes the landscape and the surface area that we need to touch so much more. When I think about all those areas, obviously security is an area that comes up. So how do Intel and its partners make sure that no matter where my data is, and you talk about the various memory, that, you know, security is still considered at each aspect of the environment? >> Oh, it's a huge focus, because if you think of the phrases people used to say, like, oh, we've got to have the fat pipe or the dumb pipe to get, you know, data back and forth, there is no such thing as a dumb pipe anymore. Everything is smart the entire way through the lifecycle, and so with that smartness you need to have security embedded from the get-go into that workflow. And what people need to understand, as they undergo their edge deployments and start that work, is that your obligation for the security of that data begins the moment you collect that data. It doesn't start when it's back in the cloud or back in the data center, so you own it and need to be on it from the beginning. So we work across
our silicon portfolio and then our software ecosystem to think through it in terms of that entire pipeline of the data movement, and making sure that there are no breakdowns at each of the handoffs in the chain. It's a really complex problem, and it is not one that Intel is able to solve alone, nor any individual silicon or software vendor along the way. And I will say that some of the security work over the past couple of years has led to a bringing together of the industry to address problems together, whether they be, on any other given day, a friend or a foe when it comes to security. I feel like I've seen just an amazing increase over the past two, two and a half years in the collaboration to solve these problems together, and ultimately I think that leads to a better experience for our users and for our customers. So we are investing in it, not just in the new features from the silicon perspective, but also in understanding newer and more advanced threat or attack surfaces that can happen inside of the silicon or the software components. >> All right, so Lisa, the final question I have for you: I want to circle back to where we started. It's Red Hat Summit this week, and on partnerships, as I mentioned, we see Intel at all the cloud shows, you partner with all the hardware and software providers and the like. A big message from Red Hat is the open hybrid cloud, so talk about how that fits in with everything that Intel is doing. >> It's an area of really strong interconnection between us and Red Hat, because we have a vision of that open hybrid cloud that is very well aligned, and the best part about it is that it is rooted not just in "here's my feature, here's my feature" from either one of us; it's rooted in what our customers need and what we see our enterprise customers driving towards: that desire to utilize the cloud to improve their capabilities and services, but also maintain that capability inside their own house as well, so that they have really viable workload transformation, they have opportunities for their total cost of ownership, and can fundamentally use technology to drive their business forward. >> All right, well, Lisa Spelman, thank you so much for all the updates from Intel, and we definitely look forward to seeing the breakouts, the keynotes, and the like. >> Yes, me too. >> All right, lots more coverage here from theCUBE at Red Hat Summit 2020. I'm Stu Miniman, and thanks, as always, for watching. (music)
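The point made above about doing inference out at the edge, so that not everything has to travel back to the data center, can be illustrated with a small filter-and-forward loop. This is a sketch only; the z-score "model", the threshold, and the send_to_datacenter stub are assumptions for illustration, not anything Intel or Red Hat ship.

```python
# Illustrative filter-and-forward loop for an edge node: score each reading
# locally and only forward the interesting ones upstream. The scoring function,
# threshold, and transport stub are assumptions for this sketch.
import numpy as np

THRESHOLD = 3.0  # forward readings more than 3 standard deviations from the mean

def anomaly_score(window: np.ndarray, reading: float) -> float:
    """Simple z-score against a sliding window of recent sensor readings."""
    mean, std = window.mean(), window.std() or 1.0
    return abs(reading - mean) / std

def send_to_datacenter(reading: float, score: float) -> None:
    """Stub for the upstream call; a real deployment would batch and secure this."""
    print(f"forwarding reading={reading:.2f} score={score:.2f}")

def edge_loop(readings: np.ndarray, window_size: int = 50) -> int:
    """Return how many readings actually left the edge."""
    forwarded = 0
    for i in range(window_size, len(readings)):
        window = readings[i - window_size:i]
        score = anomaly_score(window, readings[i])
        if score > THRESHOLD:          # the inference happens here, at the edge
            send_to_datacenter(readings[i], score)
            forwarded += 1
    return forwarded

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.normal(20.0, 1.0, 5_000)   # simulated sensor stream
    data[::500] += 10.0                   # inject a few anomalies
    sent = edge_loop(data)
    print(f"sent {sent} of {len(data)} readings upstream")
```

Only the handful of flagged readings leave the device, which is the total-cost-of-ownership and speed-of-light argument made in the conversation.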
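Likewise, the idea that responsibility for data security begins the moment the data is collected, not once it lands back in the data center, can be shown in a few lines. This is a minimal sketch assuming the third-party cryptography package; key provisioning and the actual transport are deliberately out of scope.

```python
# Minimal sketch of encrypting a reading at the point of collection, before it
# ever leaves the edge device. Assumes the third-party `cryptography` package;
# key management and transport are out of scope for this illustration.
import json
import time
from cryptography.fernet import Fernet

# In practice the key would come from a provisioning service or secure element,
# never be generated ad hoc on the device like this.
KEY = Fernet.generate_key()
cipher = Fernet(KEY)

def collect_reading(sensor_id: str, value: float) -> bytes:
    """Serialize and encrypt a reading immediately; plaintext never leaves this function."""
    record = json.dumps({"sensor": sensor_id, "value": value, "ts": time.time()})
    return cipher.encrypt(record.encode("utf-8"))

def decrypt_reading(token: bytes) -> dict:
    """What the data center side would do with the same key."""
    return json.loads(cipher.decrypt(token).decode("utf-8"))

if __name__ == "__main__":
    token = collect_reading("line-3-temp", 73.4)
    print("ciphertext on the wire:", token[:32], "...")
    print("recovered at the core:", decrypt_reading(token))
```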

Published Date : Apr 28 2020

**Summary and Sentiment Analysis are not shown because of an improper transcript**

ENTITIES

Entity | Category | Confidence
Lisa Spellman | PERSON | 0.99+
Lisa Spellman | PERSON | 0.99+
Lisa Spelman | PERSON | 0.99+
San Francisco | LOCATION | 0.99+
2020 | DATE | 0.99+
Lisa | PERSON | 0.99+
Red Hat | ORGANIZATION | 0.99+
last year | DATE | 0.99+
Portland Oregon | LOCATION | 0.98+
this year | DATE | 0.98+
Intel | ORGANIZATION | 0.98+
red hat | ORGANIZATION | 0.98+
Red Hat | EVENT | 0.97+
Red Hat summit 2020 | EVENT | 0.96+
Red Hat | TITLE | 0.96+
first deployments | QUANTITY | 0.95+
Intel Xeon | ORGANIZATION | 0.95+
Red Hat summit 2020 | EVENT | 0.95+
first place | QUANTITY | 0.94+
Zeon | ORGANIZATION | 0.94+
Stu minimun | PERSON | 0.93+
each aspect | QUANTITY | 0.93+
one | QUANTITY | 0.91+
one of the interesting things | QUANTITY | 0.88+
Red Hat Summit 2020 | EVENT | 0.86+
single function | QUANTITY | 0.86+
two and a half years | QUANTITY | 0.84+
today | DATE | 0.83+
big | EVENT | 0.81+
years | QUANTITY | 0.81+
every single workload | QUANTITY | 0.79+
mega waves | EVENT | 0.78+
past couple years | DATE | 0.77+
many years | QUANTITY | 0.77+
every year | QUANTITY | 0.74+
redhead summit | EVENT | 0.71+
each | QUANTITY | 0.71+
Zeon | COMMERCIAL_ITEM | 0.61+
Red Hat | TITLE | 0.58+
of poles | QUANTITY | 0.58+
edge | ORGANIZATION | 0.58+
Red Hat | EVENT | 0.54+
5g | ORGANIZATION | 0.46+
two | DATE | 0.46+
5g | QUANTITY | 0.38+