Why Use IaaS When You Can Make Bare Metal Cloud-Native?
>>Hi, Oleg. So great of you to join us today. I'm really looking forward to our session. So let's get started. If I can get you to give a quick intro to yourself, and then share with us what you're going to be discussing today. >>Hi, Jake. My name is Oleg Elbow. I'm a product architect on the Docker Enterprise Container Cloud team. Today I'm going to talk about running Kubernetes on bare metal with Container Cloud. My goal is to tell you about this exciting feature, why we think it's important, and what we actually did to make it possible. >>Brilliant. Thank you very much. So let's get started. From my understanding, Kubernetes clusters are typically run in virtual machines in clouds. So, for example, the public cloud, AWS, or a private cloud, maybe OpenStack based or VMware vSphere. So why would you go off and run it on bare metal? >>Well, Docker Enterprise Container Cloud can already run Kubernetes in the cloud, as you know, and the idea behind Container Cloud is to enable us to manage multiple Docker Enterprise clusters. But we want to bring innovation to Kubernetes, and instead of spending a lot of resources on the hypervisor and virtual machines, we just go all in for Kubernetes directly on bare metal. >>Fantastic. So it sounds like you're suggesting, then, to run Kubernetes directly on bare metal. >>That's correct. >>Fantastic, and without a hypervisor layer. >>Yes. We all know the reasons to run Kubernetes in virtual machines; in the first place it's mutual isolation of workloads. But virtualization comes with a performance hit and additional complexity. On the other hand, when you run Kubernetes directly on the hardware, it's a perfect opportunity for developers. They can see a performance boost of up to 30% for certain container workloads. This is because the virtualization layer adds a lot of overhead, and even with enhanced placement awareness technologies like NUMA or processor pinning, it's still overhead. By skipping the virtualization, we just remove this overhead and gain this boost. >>Excellent. A 30% performance boost sounds very appealing. Are there any other value points or positive points that you can pull out? >>Yes. Besides the hypervisor overhead, virtual machines also have a static resource footprint. They take up memory and CPU cycles and overall reduce the density of containers per host. Without virtual machines, you can run up to 60% more containers on the same host. >>Excellent. Really great numbers there. >>One more thing to point out: using bare metal directly makes it easier to use special purpose hardware like graphics processors, or virtual network functions for network interfaces, or field-programmable gate arrays for custom circuits, and you can share them between containers more efficiently. >>Excellent. I mean, there are some really great value points you pulled out there. So a 30% performance boost, a 60% density boost, and it can support specialized hardware a lot easier. But let's talk now about the applications. What sort of applications do you think would benefit from this the most? >>Well, I'm thinking primarily high performance computing and deep learning will benefit, which is more common than you might think, now that artificial intelligence is creeping into a lot of different applications.
These workloads really depend on memory capacity and performance, and they also use special devices like FPGAs for custom circuits quite widely, so all of that is applicable to machine learning. >>And I mean, that whole AI piece is really exciting, and we're seeing it become more commonplace across a whole host of sectors. So telcos, pharma, banking, etcetera, and not just IT today. >>Yeah, that's indeed very exciting. But creating Kubernetes clusters on bare metal, unfortunately, is not very easy. >>OK, so it sounds like there may be some challenges or complexities around it, and is this, I guess, the reason why there are not many products out there today for Kubernetes on bare metal? Could you talk us through some of the challenges that this entails? >>Well, there are quite a few challenges. First and foremost, there is no one way to manage bare metal infrastructures nowadays. Many vendors have their own solutions that are not always compatible with each other and don't necessarily cover all aspects of it. So we've worked on an open source project called Metal Kubed and integrated it into Docker Enterprise Container Cloud to do this unified bare metal management for us. >>And you mentioned, did I hear you say, that it's open source? >>The project is open source. We added a lot of our own special sauce to it. What it does, basically, is enable us to manage hardware servers just like cloud server instances. >>That's very interesting, but could you go into a bit more detail? Specifically, what do you mean by managing them as cloud instances? >>Of course. Generally, it means managing them through some sort of API, a programming interface. This interface has to cover all aspects of the server life cycle: hardware configuration, operating system management, network configuration, storage configuration. With the help of Metal Kubed, we extend the Kubernetes API to enable it to manage bare metal hosts and all these aspects of their life cycle. The Metal Kubed project uses OpenStack Ironic and wraps it in the Kubernetes API, and Ironic does all the heavy lifting of provisioning. It does it in a very cloud native way. It configures servers using cloud-init, which is very familiar to anyone who deals with the cloud, and the power is managed transparently through the IPMI protocol. It does a lot to hide the differences between different hardware hosts from the user, and in Docker Enterprise Container Cloud we made everything so the user doesn't really feel the difference between a bare metal server and a cloud VM. >>So, Oleg, are you saying that you can actually take a machine that's turned off and turn it on using commands? >>That's correct. That's IPMI, the Intelligent Platform Management Interface. It gives you the ability to interact directly with the hardware. You can manage and monitor things like power consumption, temperature, voltage and so on. But what we use it for is to manage the boot source and the actual power state of the server. So we have a group of servers that are available, and we can turn them on when we need them, just as if we were spinning up a VM. >>Excellent. So that's how you get around the fact that, while all cloud VMs are the same, the hardware is all different.
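To make the IPMI mechanism Oleg describes a little more concrete, here is a minimal sketch of checking and setting a server's power state from a script. It assumes the ipmitool CLI is installed and the server's BMC is reachable over the network; the address and credentials are placeholders, and Container Cloud itself drives IPMI through Ironic rather than through a script like this.

```python
# Hedged sketch: querying and setting server power state over IPMI with ipmitool.
# The BMC address and credentials below are placeholders, not real values.
import subprocess

BMC_HOST = "10.0.0.15"   # IPMI endpoint of the bare metal server (placeholder)
BMC_USER = "admin"       # IPMI user name (placeholder)
BMC_PASS = "changeme"    # IPMI password (placeholder)

def ipmi(*args: str) -> str:
    """Run an ipmitool command against the server's BMC and return its output."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", BMC_HOST,
           "-U", BMC_USER, "-P", BMC_PASS, *args]
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout.strip()

if __name__ == "__main__":
    print(ipmi("chassis", "power", "status"))   # e.g. "Chassis Power is off"
    ipmi("chassis", "bootdev", "pxe")           # boot from the network next time, as a provisioner would
    ipmi("chassis", "power", "on")              # power the machine on, "as if spinning up a VM"
```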
But I would assume you would have different server configurations in one environment, so how do you get around that? >>Yeah, that's an excellent question. Some elements of the bare metal management API that we developed are there specifically to enable operators to handle a wider range of hardware configurations. For example, we make it possible to configure multiple network interfaces on the host. We support flexible partitioning of hard disks and other storage devices. We also make it possible to boot remotely using the Unified Extensible Firmware Interface for modern systems, or just good old BIOS for the legacy ones. >>Excellent, thanks for sharing that. Now let's take a look at the rest of the infrastructure, Oleg. What about things like networking and storage, how is that managed? >>Oh, Jake, those are some important details. From the networking standpoint, the most important thing for Kubernetes is load balancing. We use some proven open source technologies, such as NGINX and MetalLB, to handle that for us. Storage is the more tricky part. There are a lot of different storage solutions out there, so we decided to go with Ceph and the Rook operator for Ceph. Ceph is a very mature and stable distributed storage system, and it has incredible scalability. We actually run pretty big clusters in production with Ceph, and Rook makes the life cycle management for Ceph very robust and cloud native, with health checks and self-correction, that kind of stuff. So any Kubernetes cluster that Docker Enterprise Container Cloud provisions on bare metal can potentially have Ceph installed in the cluster, providing storage that is accessible from any node in the cluster to any pod in the cluster. So that's the cloud native storage component. >>Wonderful. But would that then mean that you'd have to have additional hardware, so more hardware for the storage cluster? >>Not at all. We actually use a converged storage architecture in Docker Enterprise Container Cloud, so the workloads and Ceph share the same machines and are managed by the same Kubernetes cluster. At some point in the future, we plan to add even more flexibility to this Ceph configuration and enable a shared Ceph, where all Kubernetes clusters will use a single Ceph back end. That's another way for us to optimize the hardware usage, basically.
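As a small, generic illustration of the load balancing point above: on a bare metal cluster with MetalLB installed, exposing a workload is just an ordinary Service of type LoadBalancer, and MetalLB assigns the external IP from a pre-configured address pool. This is a hedged sketch, not anything Container Cloud specific; the names and ports are made up.

```python
# Hedged sketch: a Service of type LoadBalancer that MetalLB would satisfy on bare metal.
# The service and app names are illustrative only.
import yaml

service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "web-lb", "namespace": "demo"},
    "spec": {
        "type": "LoadBalancer",          # on bare metal, MetalLB hands out the external IP
        "selector": {"app": "web"},      # pods labeled app=web receive the traffic
        "ports": [{"port": 80, "targetPort": 8080}],
    },
}

print(yaml.safe_dump(service, sort_keys=False))  # pipe into `kubectl apply -f -`
```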
>>Excellent. So thanks for covering the infrastructure part. What would be good is if we can get an understanding of the look and feel for the operators and the users of the system. So what can they see? >>Yeah, OK. Docker Enterprise Container Cloud provides a web-based user interface that enables you to manage clusters, and the bare metal management is integrated into this interface and provides a very smooth user experience. As an operator, you add or enroll bare metal hosts pretty much the same way you add cloud credentials for any other providers, for any other platforms. >>Excellent. I mean, Oleg, it sounds really interesting. Would you be able to share some kind of demo with us? It would be great to see this in action. >>Of course. Let's see what we have here. >>Thank you. >>So, first of all, you take a bunch of bare metal servers, you prepare them and connect them to the network as described in the docs, and you bootstrap Container Cloud on top of three of these bare metal servers. Once you bootstrap it, you have Container Cloud up and running and you log into the UI. Let's start here. I'm using the generic operator user for now; it's possible to integrate it with your identity system, with the customer's identity system, and get real users there. So first of all, let's create a project. It will hold all of our clusters, and once we've created it, we just switch to it. The first step for an operator is to add some bare metal hosts to the project. As you see, it's empty. To add a bare metal host, you just need a few parameters. First, a name that will allow you to identify the server later. Then it's a user name and password to access the IPMI controls of the server. Next, and it's very important, it's the hardware address of the first Ethernet port. It will be used to remotely boot the server over the network. Then there's the IP address of the IPMI endpoint, and last but not least, it's the bucket to assign the bare metal host to. It's a label that is assigned to it, and right now we offer just three default labels, or buckets: manager hosts, worker hosts, and storage hosts. Depending on the hardware configuration of the server, you assign it to one of these three groups. You will see how it's used later in the form. Note that at least six servers are required to deploy a managed Kubernetes cluster, just as for the cloud providers. There is some information available now about the servers, the result of inspection, by the way; you can look it up. Now we move on to create a cluster. You need to provide the name for the cluster and select the release of Docker Enterprise Engine. The next step is the provider-specific information. You need to specify the address of the cluster API endpoint here, and the range of addresses for the services that will be installed in the cluster for the user workloads. The Kubernetes network parameters can be changed as well, but the defaults are usually okay. Now you can enable or disable StackLight, the monitoring system for the Kubernetes cluster, and provide some custom parameters to it. Finally, you click create to create the cluster. It's an empty cluster that we need to add some machines to, so we need at least three manager nodes. The form is very simple. You just select the role of the Kubernetes node, it's either manager or worker, and you select the label bucket from which the bare metal host will be picked. We go with the manager label for manager nodes and the worker label for the workers. While the cluster is deploying, let's check out some machine information. The storage data is here; the names of the disks are taken from the bare metal host hardware inspection data that we checked before. Now we wait for the servers to be deployed. That includes the operating system and Kubernetes itself. So, yeah, that's our user interface. If operators need to, they can actually use the Docker Enterprise Container Cloud API for some more sophisticated configurations, or to integrate with an external system, for example a configuration database. All the bare metal tasks can be executed through the Kubernetes API by changing the custom resources that describe the bare metal hosts and related objects.
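A hedged sketch of what enrolling a host through such a custom resource could look like is below. The field names follow the upstream Metal Kubed (Metal3) BareMetalHost resource rather than Container Cloud's own CRDs, and every name, address, credential reference, and label is a placeholder.

```python
# Hedged sketch: enrolling a bare metal host as a Kubernetes custom resource.
# Field names follow the upstream Metal3 BareMetalHost CRD; all values are placeholders.
from kubernetes import client, config

config.load_kube_config()  # kubeconfig of the management cluster

host = {
    "apiVersion": "metal3.io/v1alpha1",
    "kind": "BareMetalHost",
    "metadata": {"name": "worker-01", "labels": {"bucket": "worker"}},  # "bucket" label is illustrative
    "spec": {
        "online": True,                                  # power the host on once it is enrolled
        "bootMACAddress": "52:54:00:12:34:56",           # MAC of the first Ethernet port, used for network boot
        "bmc": {
            "address": "ipmi://10.0.0.15",               # IPMI endpoint of the server
            "credentialsName": "worker-01-bmc-secret",   # Secret holding the IPMI user and password
        },
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="metal3.io", version="v1alpha1",
    namespace="default", plural="baremetalhosts", body=host,
)
```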
>>Mhm, brilliant. Well, thank you for bringing that to life. It's always good to see it in action. I guess from my understanding, it looks like the operators can use the same tools as DevOps or developers, but for managing their infrastructure then? >>Yes, exactly. For example, if you're in DevOps and you use Lens to monitor and manage your cluster, the bare metal resources are just another set of custom resources for you. It is possible to visualize and configure them through Lens or any other developer tool for Kubernetes. >>Excellent. So from what I can see, that really could bridge the gap, then, between infrastructure operators and DevOps and developer teams, which is a big thing. >>Yes, that's one of our aspirations, to unify the user experience, because we've seen a lot of these situations where the infrastructure is operated by one set of tools while the container platform is agnostic of it and offers the end users a completely different set of tools. So as a DevOps engineer, you have to be proficient in both, and that's not very sustainable for some DevOps teams, Jake. >>Sure. Okay, well, thanks for covering that. That's great. I mean, there are obviously other container platforms out there in the market today. It would be great if you could explain one or some of the differences, and how Docker Enterprise Container Cloud approaches bare metal. >>Yeah, that's an excellent question, Jake. Thank you. So in Container Cloud, unlike in other container platforms, bare metal management is tightly integrated into the product. It's integrated on the UI and the API, and on the back-end implementation level. Other platforms typically rely on the user to provision the bare metal hosts before they can deploy Kubernetes on them. This leaves the operating system management and the hardware configuration and management mostly with a dedicated infrastructure operator team. Docker Enterprise Container Cloud can help reduce this burden and these infrastructure management costs by automating it and effectively removing that part of the responsibility from the infrastructure operators. And that's because Container Cloud on bare metal is essentially a full stack solution. It includes the hardware configuration and covers operating system lifecycle management, especially the security and CVE updates. Right now, the only out-of-the-box operating system that we support is Ubuntu. We're looking to expand this, and, as you know, Docker Enterprise Engine makes it possible to run Kubernetes on many different platforms, including even Windows. We plan to leverage this flexibility in Docker Enterprise Container Cloud to the full extent, to expand the range of operating systems that we support. >>Excellent. Well, Oleg, we're running out of time, unfortunately. I mean, I've thoroughly enjoyed our conversation today. You've pulled out some excellent points. You talked about potentially up to a 30% performance boost and up to a 60% density boost. You've also talked about how it can help with specialized hardware and make this a lot easier. We also talked about some of the challenges that you can solve, obviously, by using Docker Enterprise Container Cloud, such as persistent storage and load balancing. There's obviously a lot here, but thank you so much for joining us today. It's been fantastic, and I hope that we've given some food for thought to go out and try and deploy Kubernetes on bare metal. So thanks, Oleg. >>Thank you for having me, Jake.
Chris Jones, Platform9 | Finding your "Just Right” path to Cloud Native
(upbeat music) >> Hi everyone. Welcome back to this Cube conversation here in Palo Alto, California. I'm John Furrier, host of "theCUBE." Got a great conversation around Cloud Native, Cloud Native Journey, how enterprises are looking at Cloud Native and putting it all together. And it comes down to operations, developer productivity, and security. It's the hottest topic in technology. We got Chris Jones here in the studio, director of Product Management for Platform9. Chris, thanks for coming in. >> Hey, thanks. >> So when we always chat about, when we're at KubeCon. KubeConEU is coming up and in a few, in a few months, the number one conversation is developer productivity. And the developers are driving all the standards. It's interesting to see how they just throw everything out there and whatever gets adopted ends up becoming the standard, not the old school way of kind of getting stuff done. So that's cool. Security Kubernetes and Containers are all kind of now that next level. So you're starting to see the early adopters moving to the mainstream. Enterprises, a variety of different approaches. You guys are at the center of this. We've had a couple conversations with your CEO and your tech team over there. What are you seeing? You're building the products. What's the core product focus right now for Platform9? What are you guys aiming for? >> The core is that blend of enabling your infrastructure and PlatformOps or DevOps teams to be able to go fast and run in a stable environment, but at the same time enable developers. We don't want people going back to what I've been calling Shadow IT 2.0. It's, hey, I've been told to do something. I kicked off this Container initiative. I need to run my software somewhere. I'm just going to go figure it out. We want to keep those people productive. At the same time we want to enable velocity for our operations teams, be it PlatformOps or DevOps. >> Take us through in your mind and how you see the industry rolling out this Cloud Native journey. Where do you see customers out there? Because DevOps have been around, DevSecOps is rocking, you're seeing AI, hot trend now. Developers are still in charge. Is there a change to the infrastructure of how developers get their coding done and the infrastructure, setting up the DevOps is key, but when you add the Cloud Native journey for an enterprise, what changes? What is the, what is the, I guess what is the Cloud Native journey for an enterprise these days? >> The Cloud Native journey or the change? When- >> Let's start with the, let's start with what they want to do. What's the goal and then how does that happen? >> I think the goal is that promise land. Increased resiliency, better scalability, and overall reduced costs. I've gone from physical to virtual that gave me a higher level of density, packing of resources. I'm moving to Containers. I'm removing that OS layer again. I'm getting a better density again, but all of a sudden I'm running Kubernetes. What does that, what does that fundamentally do to my operations? Does it magically give me scalability and resiliency? Or do I need to change what I'm running and how it's running so it fits that infrastructure? And that's the reality, is you can't just take a Container and drop it into Kubernetes and say, hey, I'm now Cloud Native. I've got reduced cost, or I've got better resiliency. There's things that your engineering teams need to do to make sure that application is a Cloud Native. 
And then there's what I think is one of the largest shifts of virtual machines to containers. When I was in the world of application performance monitoring, we would see customers saying, well, my engineering team have this Java app, and they said it needs a VM with 12 gig of RAM and eight cores, and that's what we gave it. But it's running slow. I'm working with the application team and you can see it's running slow. And they're like, well, it's got all of its resources. One of those nice features of virtualization is over provisioning. So the infrastructure team would say, well, we gave it, we gave it all a RAM it needed. And what's wrong with that being over provisioned? It's like, well, Java expects that RAM to be there. Now all of a sudden, when you move to the world of containers, what we've got is that's not a set resource limit, really is like it used to be in a VM, right? When you set it for a container, your application teams really need to be paying attention to your resource limits and constraints within the world of Kubernetes. So instead of just being able to say, hey, I'm throwing over the fence and now it's just going to run on a VM, and that VMs got everything it needs. It's now really running on more, much more of a shared infrastructure where limits and constraints are going to impact the neighbors. They are going to impact who's making that decision around resourcing. Because that Kubernetes concept of over provisioning and the virtualization concept of over provisioning are not the same. So when I look at this problem, it's like, well, what changed? Well, I'll do my scale tests as an application developer and tester, and I'd see what resources it needs. I asked for that in the VM, that sets the high watermark, job's done. Well, Kubernetes, it's no longer a VM, it's a Kubernetes manifest. And well, who owns that? Who's writing it? Who's setting those limits? To me, that should be the application team. But then when it goes into operations world, they're like, well, that's now us. Can we change those? So it's that amalgamation of the two that is saying, I'm a developer. I used to pay attention, but now I need to pay attention. And an infrastructure person saying, I used to just give 'em what they wanted, but now I really need to know what they've wanted, because it's going to potentially have a catastrophic impact on what I'm running. >> So what's the impact for the developer? Because, infrastructure's code is what everybody wants. The developer just wants to get the code going and they got to pay attention to all these things, or don't they? Is that where you guys come in? How do you guys see the problem? Actually scope the problem that you guys solve? 'Cause I think you're getting at I think the core issue here, which is, I've got Kubernetes, I've got containers, I've got developer productivity that I want to focus on. What's the problem that you guys solve? >> Platform operation teams that are adopting Cloud Native in their environment, they've got that steep learning curve of Kubernetes plus this fundamental change of how an app runs. What we're doing is taking away the burden of needing to operate and run Kubernetes and giving them the choice of the flexibility of infrastructure and location. Be that an air gap environment like a, let's say a telco provider that needs to run a containerized network function and containerized workloads for 5G. 
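To ground the requests-and-limits point Chris makes above before the conversation moves on, here is a minimal sketch of a pod spec where the application team states its own resource expectations. The names and numbers are illustrative only, using the standard Kubernetes Python client.

```python
# Hedged sketch: declaring resource requests and limits explicitly in a pod spec,
# the piece of the manifest Chris says application teams now have to own.
from kubernetes import client

java_app = client.V1Container(
    name="orders-service",                      # illustrative name
    image="registry.example.com/orders:1.4.2",  # placeholder image
    resources=client.V1ResourceRequirements(
        requests={"cpu": "2", "memory": "4Gi"},   # what the scheduler reserves for this container
        limits={"cpu": "4", "memory": "6Gi"},     # hard ceiling enforced by the kubelet
    ),
)

pod = client.V1Pod(
    api_version="v1",
    kind="Pod",
    metadata=client.V1ObjectMeta(name="orders-service"),
    spec=client.V1PodSpec(containers=[java_app]),
)
# client.CoreV1Api().create_namespaced_pod(namespace="demo", body=pod)
```

Unlike an over-provisioned VM, the memory limit here is enforced: a JVM sized past it simply gets OOM-killed, which is the kind of catastrophic impact on neighbors and owners that Chris is alluding to.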
That's one thing that we can deploy and achieve in a completely inaccessible environment all the way through to Platform9 running traditionally as SaaS, as we were born, that's remotely managing and controlling your Kubernetes environments on-premise AWS. That hybrid cloud experience that could be also Bare Metal, but it's our platform running your environments with our support there, 24 by seven, that's proactively reaching out. So it's removing a lot of that burden and the complications that come along with operating the environment and standing it up, which means all of a sudden your DevOps and platform operations teams can go and work with your engineers and application developers and say, hey, let's get, let's focus on the stuff that, that we need to be focused on, which is running our business and providing a service to our customers. Not figuring out how to upgrade a Kubernetes cluster, add new nodes, and configure all of the low level. >> I mean there are, that's operations that just needs to work. And sounds like as they get into the Cloud Native kind of ops, there's a lot of stuff that kind of goes wrong. Or you go, oops, what do we buy into? Because the CIOs, let's go, let's go Cloud Native. We want to, we got to get set up for the future. We're going to be Cloud Native, not just lift and shift and we're going to actually build it out right. Okay, that sounds good. And when we have to actually get done. >> Chris: Yeah. >> You got to spin things up and stand up the infrastructure. What specifically use case do you guys see that emerges for Platform9 when people call you up and you go talk to customers and prospects? What's the one thing or use case or cases that you guys see that you guys solve the best? >> So I think one of the, one of the, I guess new use cases that are coming up now, everyone's talking about economic pressures. I think the, the tap blows open, just get it done. CIO is saying let's modernize, let's use the cloud. Now all of a sudden they're recognizing, well wait, we're spending a lot of money now. We've opened that tap all the way, what do we do? So now they're looking at ways to control that spend. So we're seeing that as a big emerging trend. What we're also sort of seeing is people looking at their data centers and saying, well, I've got this huge legacy environment that's running a hypervisor. It's running VMs. Can we still actually do what we need to do? Can we modernize? Can we start this Cloud Native journey without leaving our data centers, our co-locations? Or if I do want to reduce costs, is that that thing that says maybe I'm repatriating or doing a reverse migration? Do I have to go back to my data center or are there other alternatives? And we're seeing that trend a lot. And our roadmap and what we have in the product today was specifically built to handle those, those occurrences. So we brought in KubeVirt in terms of virtualization. We have a long legacy doing OpenStack and private clouds. And we've worked with a lot of those users and customers that we have and asked the questions, what's important? And today, when we look at the world of Cloud Native, you can run virtualization within Kubernetes. So you can, instead of running two separate platforms, you can have one. So all of a sudden, if you're looking to modernize, you can start on that new infrastructure stack that can run anywhere, Kubernetes, and you can start bringing VMs over there as you are containerizing at the same time. 
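A rough sketch of the "VMs inside Kubernetes" idea follows. The object structure mirrors KubeVirt's public examples rather than anything Platform9 specific, and the VM name, image, and sizes are placeholders.

```python
# Hedged sketch: a KubeVirt VirtualMachine defined alongside ordinary pods,
# so VMs and containers share one Kubernetes control plane.
from kubernetes import client, config

config.load_kube_config()

vm = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "legacy-app-vm"},      # illustrative name
    "spec": {
        "running": True,
        "template": {
            "spec": {
                "domain": {
                    "devices": {"disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}]},
                    "resources": {"requests": {"memory": "2Gi", "cpu": "1"}},
                },
                "volumes": [{
                    "name": "rootdisk",
                    # containerDisk ships the VM image as a container image; demo image as placeholder
                    "containerDisk": {"image": "quay.io/kubevirt/cirros-container-disk-demo"},
                }],
            }
        },
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="kubevirt.io", version="v1",
    namespace="default", plural="virtualmachines", body=vm,
)
```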
So now you can keep your application operations in one environment. And this also helps if you're trying to reduce costs. If you really are saying, we put that Dev environment in AWS, we've got a huge amount of velocity out of it now, can we do that elsewhere? Is there a co-location we can go to? Is there a provider that we can go to where we can run that infrastructure or run the Kubernetes, but not have to run the infrastructure? >> It's going to be interesting too, when you see the Edge come online, you start, we've got Mobile World Congress coming up, KubeCon events we're going to be at, the conversation is not just about public cloud. And you guys obviously solve a lot of do-it-yourself implementation hassles that emerge when people try to kind of stand up their own environment. And we hear from developers consistency between code, managing new updates, making sure everything is all solid so they can go fast. That's the goal. And that, and then people can get standardized on that. But as you get public cloud and do it yourself, kind of brings up like, okay, there's some gaps there as the architecture changes to be more distributed computing, Edge, on-premises cloud, it's cloud operations. So that's cool for DevOps and Cloud Native. How do you guys differentiate from say, some the public cloud opportunities and the folks who are doing it themselves? How do you guys fit in that world and what's the pitch or what's the story? >> The fit that we look at is that third alternative. Let's get your team focused on what's high value to your business and let us deliver that public cloud experience on your infrastructure or in the public cloud, which gives you that ability to still be flexible if you want to make choices to run consistently for your developers in two different locations. So as I touched on earlier, instead of saying go figure out Kubernetes, how do you upgrade a hundred worker nodes in place upgrade. We've solved that problem. That's what we do every single day of the week. Don't go and try to figure out how to upgrade a cluster and then upgrade all of the, what I call Kubernetes friends, your core DNSs, your metrics server, your Kubernetes dashboard. These are all things that we package, we test, we version. So when you click upgrade, we've already handled that entire process. So it's saying don't have your team focused on that lower level piece of work. Get them focused on what is important, which is your business services. >> Yeah, the infrastructure and getting that stood up. I mean, I think the thing that's interesting, if you look at the market right now, you mentioned cost savings and recovery, obviously kind of a recession. I mean, people are tightening their belts for sure. I don't think the digital transformation and Cloud Native spend is going to plummet. It's going to probably be on hold and be squeezed a little bit. But to your point, people are refactoring looking at how to get the best out of what they got. It's not just open the tap of spend the cash like it used to be. Yeah, a couple months, even a couple years ago. So okay, I get that. But then you look at the what's coming, AI. You're seeing all the new data infrastructure that's coming. The containers, Kubernetes stuff, got to get stood up pretty quickly and it's got to be reliable. So to your point, the teams need to get done with this and move on to the next thing. >> Chris: Yeah, yeah, yeah. >> 'Cause there's more coming. 
I mean, there's a lot coming for the apps that are building in Data Native, AI-Native, Cloud Native. So it seems that this Kubernetes thing needs to get solved. Is that kind of what you guys are focused on right now? >> So, I mean to use a customer, we have a customer that's in AI/ML and they run their platform at customer sites and that's hardware bound. You can't run AI machine learning on anything anywhere. Well, with Platform9 they can. So we're enabling them to deliver services into their customers that's running their AI/ML platform in their customer's data centers anywhere in the world on hardware that is purpose-built for running that workload. They're not Kubernetes experts. That's what we are. We're bringing them that ability to focus on what's important and just delivering their business services whilst they're enabling our team. And our 24 by seven proactive management are always on assurance to keep that up and running for them. So when something goes bump at the night at 2:00am, our guys get woken up. They're the ones that are reaching out to the customer saying, your environments have a problem, we're taking these actions to fix it. Obviously sometimes, especially if it is running on Bare Metal, there's things you can't do remotely. So you might need someone to go and do that. But even when that happens, you're not by yourself. You're not sitting there like I did when I worked for a bank in one of my first jobs, three o'clock in the morning saying, wow, our end of day processing is stuck. Who else am I waking up? Right? >> Exactly, yeah. Got to get that cash going. But this is a great use case. I want to get to the customer. What do some of the successful customers say to you for the folks watching that aren't yet a customer of Platform9, what are some of the accolades and comments or anecdotes that you guys hear from customers that you have? >> It just works, which I think is probably one of the best ones you can get. Customers coming back and being able to show to their business that they've delivered growth, like business growth and productivity growth and keeping their organization size the same. So we started on our containerization journey. We went to Kubernetes. We've deployed all these new workloads and our operations team is still six people. We're doing way more with growth less, and I think that's also talking to the strength that we're bringing, 'cause we're, we're augmenting that team. They're spending less time on the really low level stuff and automating a lot of the growth activity that's involved. So when it comes to being able to grow their business, they can just focus on that, not- >> Well you guys do the heavy lifting, keep on top of the Kubernetes, make sure that all the versions are all done. Everything's stable and consistent so they can go on and do the build out and provide their services. That seems to be what you guys are best at. >> Correct, correct. >> And so what's on the roadmap? You have the product, direct product management, you get the keys to the kingdom. What is, what is the focus? What's your focus right now? Obviously Kubernetes is growing up, Containers. We've been hearing a lot at the last KubeCon about the security containers is getting better. You've seen verification, a lot more standards around some things. What are you focused on right now for at a product over there? >> Edge is a really big focus for us. And I think in Edge you can look at it in two ways. The mantra that I drive is Edge must be remote. 
If you can't do something remotely at the Edge, you are using a human being, and that's not Edge. Our Edge management capabilities have been in the market for over two years and are a hundred percent remote. You want to stand up a store, you just ship the server in there, it gets racked, the rest of it's remote. Imagine a store manager in, I don't know, KFC, just plugging in the server, putting in the ethernet cable, pressing the power button. The rest of all that provisioning for that Cloud Native stack, Kubernetes, KubeVirt for virtualization, is done remotely. So we're continuing to focus on that. The next piece that is related to that is allowing people to run Platform9 SaaS in their data centers. So we do air gap today and we've had a really strong focus on telecommunications and the containerized network functions that come along with that. So this next piece is saying, we're bringing what we run as SaaS into your data center, so then you can run it. 'Cause there are many people out there that are saying, we want these capabilities and we want everything that the Platform9 control plane brings and simplifies. But unfortunately, regulatory compliance reasons mean that we can't leverage SaaS. So they might be using a cloud, but they're saying that's still our infrastructure. We still close that network down, or they're still on-prem. So those are two big priorities for us this year. And that on-premises experience is paramount, even to the point that we will be delivering a way that when you run on-premises, you can still say, wait a second, I can send outbound alerts to Platform9. So their support team can still be proactively helping me as much as they could, even though I'm running Platform9's control plane. So it's sort of giving that blend of two experiences. They're big, big priorities. And the third pillar is all around virtualization. It's saying if you have economic pressures, then I think it's important to look at what you're spending today and realistically say, can that be reduced? And I think hypervisors and virtualization are something that should be looked at, because if you can actually reduce that spend, you can bring in some modernization at the same time. Let's take some of those nodes that exist that are two years into their five year hardware life cycle. Let's turn that into a Cloud Native environment, which is enabling your modernization in place. It's giving your engineers and application developers the new toys, the new experiences, and then you can start running some of those virtualized workloads with KubeVirt there. So you're reducing cost and you're modernizing at the same time with your existing infrastructure. >> You know Chris, the topic of this content series that we're doing with you guys is finding the right path, trusting the right path to Cloud Native. What does that mean? I mean, if you had to kind of summarize that phrase, trusting the right path to Cloud Native, what does that mean? Does it mean architecture, is it deployment? Is it operations? What's the underlying main theme of that quote? How would you talk to a customer if someone said, "Hey, what does that right path mean?" >> I think the right path means focusing on what you should be focusing on.
I know I've said it a hundred times, but if your entire operations team is trying to figure out the nuts and bolts of Kubernetes and getting three months into a journey and discovering, ah, I need Metrics Server to make something function. I want to use Horizontal Pod Autoscaler or Vertical Pod Autoscaler and I need this other thing, now I need to manage that. That's not the right path. That's literally learning what other people have been learning for the last five, seven years that have been focused on Kubernetes solely. So the why- >> There's been a lot of grind. People have been grinding it out. I mean, that's what you're talking about here. They've been standing up the, when Kubernetes started, it was all the promise. >> Chris: Yep. >> And essentially manually kind of getting in in the weeds and configuring it. Now it's matured up. They want stability. >> Chris: Yeah. >> Not everyone can get down and dirty with Kubernetes. It's not something that people want to generally do unless you're totally into it, right? Like I mean, I mean ops teams, I mean, yeah. You know what I mean? It's not like it's heavy lifting. Yeah, it's important. Just got to get it going. >> Yeah, I mean if you're deploying with Platform9, your Ops teams can tinker to their hearts content. We're completely compliant upstream Kubernetes. You can go and change an API server flag, let's go and mess with the scheduler, because we want to. You can still do that, but don't, don't have your team investing in all this time to figure it out. It's been figured out. >> John: Got it. >> Get them focused on enabling velocity for your business. >> So it's not build, but run. >> Chris: Correct? >> Or run Kubernetes, not necessarily figure out how to kind of get it all, consume it out. >> You know we've talked to a lot of customers out there that are saying, "I want to be able to deliver a service to my users." Our response is, "Cool, let us run it. You consume it, therefore deliver it." And we're solving that in one hit versus figuring out how to first run it, then operate it, then turn that into a consumable service. >> So the alternative Platform9 is what? They got to do it themselves or use the Cloud or what's the, what's the alternative for the customer for not using Platform9? Hiring more people to kind of work on it? What's the? >> People, building that kind of PaaS experience? Something that I've been very passionate about for the past year is looking at that world of sort of GitOps and what that means. And if you go out there and you sort of start asking the question what's happening? Just generally with Kubernetes as well and GitOps in that scope, then you'll hear some people saying, well, I'm making it PaaS, because Kubernetes is too complicated for my developers and we need to give them something. There's some great material out there from the likes of Intuit and Adobe where for two big contributors to Argo and the Argo projects, they almost have, well they do have, different experiences. One is saying, we went down the PaaS route and it failed. The other one is saying, well we've built a really stable PaaS and it's working. What are they trying to do? They're trying to deliver an outcome to make it easy to use and consume Kubernetes. So you could go out there and say, hey, I'm going to build a Kubernetes cluster. Sounds like Argo CD is a great way to expose that to my developers so they can use Kubernetes without having to use Kubernetes and start automating things. 
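As a sketch of the Argo CD pattern Chris mentions, where developers consume Kubernetes through Git instead of kubectl, an Application object points the cluster at a repo path and keeps it synced. The repo URL, path, and namespaces below are placeholders.

```python
# Hedged sketch: an Argo CD Application that continuously syncs a Git path into a cluster.
# Repo URL, path, and namespaces are placeholders.
import yaml

application = {
    "apiVersion": "argoproj.io/v1alpha1",
    "kind": "Application",
    "metadata": {"name": "orders-service", "namespace": "argocd"},
    "spec": {
        "project": "default",
        "source": {
            "repoURL": "https://git.example.com/platform/apps.git",  # placeholder repo
            "targetRevision": "main",
            "path": "orders-service/overlays/production",
        },
        "destination": {"server": "https://kubernetes.default.svc", "namespace": "orders"},
        "syncPolicy": {"automated": {"prune": True, "selfHeal": True}},  # Git stays the source of truth
    },
}

print(yaml.safe_dump(application, sort_keys=False))  # apply with kubectl, or commit it to the GitOps repo
```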
That is an approach, but you're going to be going completely open source and you're going to have to bring in all the individual components, or you could just lay it down and consume it as a service and not have to- >> And you mentioned Intuit. They were the ones who kind of brought that into the open. >> They did. Intuit is the primary contributor to the Argo set of products. >> How has that been received in the market? I mean, they had the event at the Computer History Museum last fall. What's the momentum there? What's the big takeaway from that project? >> Growth. To me, growth. I mean, go and track the stars on that one. It's just growth. It's unlocking machine learning. Argo Workflows can do more than just make things happen. With Argo CD, I think the approach they're taking is, hey, let's make this simple to use, which I think can be lost. And credit where credit's due, they're really pushing to bring in a lot of capabilities to make it easier to work with applications and microservices on Kubernetes. It's not just, hey, here's a GitOps tool that can take something from a Git repo and deploy it and maybe prioritize it and help you scale your operations from that perspective. It's taking a step back and saying, well, how did we get to production in the first place? And what can be done down there to help as well? I think it's growth and expansion of features. They had a huge release just come out, I think it was 2.6, that brought in things where, as a product manager, I don't often look at really deep technical things and say, wow, that's powerful. But they've got some great features in that release that really do solve real problems. >> And as the product person, who's the target buyer for you? Who's the customer? Who's making that call? You've got decision maker, influencer, and recommender. Take us through the customer persona for you guys. >> So that Platform Ops, DevOps space, right, the people that need to be delivering Containers as a service out to their organization. But then it's also important to say, well, who else are our primary users? And that's developers, engineers, right? They shouldn't have to say, oh well, I have access to a Kubernetes cluster, do I have to use kubectl or do I need to go find some other tool? No, they can just log in to Platform9. It's integrated with your enterprise ID. >> They're the end customer at the end of the day, they're the user. >> Yeah, yeah. They can log in, and they can see the clusters you've given them access to as a Platform Ops Administrator. >> So job well done for you guys. And in your mind the developers are moving fast, coding and happy. >> Chris: Yeah, yeah. >> And from a customer standpoint, you reduce the maintenance cost, because you keep the Ops smoother, so you've got efficiency and maintenance costs kind of reduced, or is that kind of the benefits? >> Yeah, yep, yeah. And at two o'clock in the morning when things go inevitably wrong, they're not there by themselves, and we're proactively working with them. >> And that's the uptime issue. >> That is the uptime issue. And Cloud doesn't solve that, right? Everyone has experienced that Clouds can go down, entire regions can go offline. That's happened to all Cloud providers. And what do you do then? Kubernetes isn't your recovery plan. It's part of it, right, but it's that piece. >> You know Chris, to wrap up this interview, I will say that "theCUBE" is 12 years old now. We've been to OpenStack early days.
We had you guys on when we were covering OpenStack and now Cloud has just been booming. You got AI around the corner, AI Ops, now you got all this new data infrastructure, it's just amazing Cloud growth, Cloud Native, Security Native, Cloud Native, Data Native, AI Native. It's going to be all, this is the new app environment, but there's also existing infrastructure. So going back to OpenStack, rolling our own cloud, building your own cloud, building infrastructure cloud, in a cloud way, is what the pioneers have done. I mean this is what we're at. Now we're at this scale next level, abstracted away and make it operational. It seems to be the key focus. We look at CNCF at KubeCon and what they're doing with the cloud SecurityCon, it's all about operations. >> Chris: Yep, right. >> Ops and you know, that's going to sound counterintuitive 'cause it's a developer open source environment, but you're starting to see that Ops focus in a good way. >> Chris: Yeah, yeah, yeah. >> Infrastructure as code way. >> Chris: Yep. >> What's your reaction to that? How would you summarize where we are in the industry relative to, am I getting, am I getting it right there? Is that the right view? What am I missing? What's the current state of the next level, NextGen infrastructure? >> It's a good question. When I think back to sort of late 2019, I sort of had this aha moment as I saw what really truly is delivering infrastructure as code happening at Platform9. There's an open source project Ironic, which is now also available within Kubernetes that is Metal Kubed that automates Bare Metal as code, which means you can go from an empty server, lay down your operating system, lay down Kubernetes, and you've just done everything delivered to your customer as code with a Cloud Native platform. That to me was sort of the biggest realization that I had as I was moving into this industry was, wait, it's there. This can be done. And the evolution of tooling and operations is getting to the point where that can be achieved and it's focused on by a number of different open source projects. Not just Ironic and and Metal Kubed, but that's a huge win. That is truly getting your infrastructure. >> John: That's an inflection point, really. >> Yeah. >> If you think about it, 'cause that's one of the problems. We had with the Bare Metal piece was the automation and also making it Cloud Ops, cloud operations. >> Right, yeah. I mean, one of the things that I think Ironic did really well was saying let's just treat that piece of Bare Metal like a Cloud VM or an instance. If you got a problem with it, just give the person using it or whatever's using it, a new one and reimage it. Just tell it to reimage itself and it'll just (snaps fingers) go. You can do self-service with it. In Platform9, if you log in to our SaaS Ironic, you can go and say, I want that physical server to myself, because I've got a giant workload, or let's turn it into a Kubernetes cluster. That whole thing is automated. To me that's infrastructure as code. I think one of the other important things that's happening at the same time is we're seeing GitOps, we're seeing things like Terraform. I think it's important for organizations to look at what they have and ask, am I using tools that are fit for tomorrow or am I using tools that are yesterday's tools to solve tomorrow's problems? And when especially it comes to modernizing infrastructure as code, I think that's a big piece to look at. >> Do you see Terraform as old or new? >> I see Terraform as old. 
It's a fantastic tool, capable of many great things and it can work with basically every single provider out there on the planet. It is able to do things. Is it best fit to run in a GitOps methodology? I don't think it is quite at that point. In fact, if you went and looked at Flux, Flux has ways that make Terraform GitOps compliant, which is absolutely fantastic. It's using two tools, the best of breeds, which is solving that tomorrow problem with tomorrow solutions. >> Is the new solutions old versus new. I like this old way, new way. I mean, Terraform is not that old and it's been around for about eight years or so, whatever. But HashiCorp is doing a great job with that. I mean, so okay with Terraform, what's the new address? Is it more complex environments? Because Terraform made sense when you had basic DevOps, but now it sounds like there's a whole another level of complexity. >> I got to say. >> New tools. >> That kind of amalgamation of that application into infrastructure. Now my app team is paying way more attention to that manifest file, which is what GitOps is trying to solve. Let's templatize things. Let's version control our manifest, be it helm, customize, or just a straight up Kubernetes manifest file, plain and boring. Let's get that version controlled. Let's make sure that we know what is there, why it was changed. Let's get some auditability and things like that. And then let's get that deployment all automated. So that's predicated on the cluster existing. Well why can't we do the same thing with the cluster, the inception problem. So even if you're in public cloud, the question is like, well what's calling that API to call that thing to happen? Where is that file living? How well can I manage that in a large team? Oh my God, something just changed. Who changed it? Where is that file? And I think that's one of big, the big pieces to be sold. >> Yeah, and you talk about Edge too and on-premises. I think one of the things I'm observing and certainly when DevOps was rocking and rolling and infrastructures code was like the real push, it was pretty much the public cloud, right? >> Chris: Yep. >> And you did Cloud Native and you had stuff on-premises. Yeah you did some lifting and shifting in the cloud, but the cool stuff was going in the public cloud and you ran DevOps. Okay, now you got on-premise cloud operation and Edge. Is that the new DevOps? I mean 'cause what you're kind of getting at with old new, old new Terraform example is an interesting point, because you're pointing out potentially that that was good DevOps back in the day or it still is. >> Chris: It is, I was going to say. >> But depending on how you define what DevOps is. So if you say, I got the new DevOps with public on-premise and Edge, that's just not all public cloud, that's essentially distributed Cloud Native. >> Correct. Is that the new DevOps in your mind or is that? How would you, or is that oversimplifying it? >> Or is that that term where everyone's saying Platform Ops, right? Has it shifted? >> Well you bring up a good point about Terraform. I mean Terraform is well proven. People love it. It's got great use cases and now there seems to be new things happening. We call things like super cloud emerging, which is multicloud and abstraction layers. So you're starting to see stuff being abstracted away for the benefits of moving to the next level, so teams don't get stuck doing the same old thing. They can move on. 
Like what you guys are doing with Platform9 is providing a service so that teams don't have to do it. >> Correct, yeah. >> That makes a lot of sense, So you just, now it's running and then they move on to the next thing. >> Chris: Yeah, right. >> So what is that next thing? >> I think Edge is a big part of that next thing. The propensity for someone to put up with a delay, I think it's gone. For some reason, we've all become fairly short-tempered, Short fused. You know, I click the button, it should happen now, type people. And for better or worse, hopefully it gets better and we all become a bit more patient. But how do I get more effective and efficient at delivering that to that really demanding- >> I think you bring up a great point. I mean, it's not just people are getting short-tempered. I think it's more of applications are being deployed faster, security is more exposed if they don't see things quicker. You got data now infrastructure scaling up massively. So, there's a double-edged swords to scale. >> Chris: Yeah, yeah. I mean, maintenance, downtime, uptime, security. So yeah, I think there's a tension around, and one hand enthusiasm around pushing a lot of code and new apps. But is the confidence truly there? It's interesting one little, (snaps finger) supply chain software, look at Container Security for instance. >> Yeah, yeah. It's big. I mean it was codified. >> Do you agree that people, that's kind of an issue right now. >> Yeah, and it was, I mean even the supply chain has been codified by the US federal government saying there's things we need to improve. We don't want to see software being a point of vulnerability, and software includes that whole process of getting it to a running point. >> It's funny you mentioned remote and one of the thing things that you're passionate about, certainly Edge has to be remote. You don't want to roll a truck or labor at the Edge. But I was doing a conversation with, at Rebars last year about space. It's hard to do brake fix on space. It's hard to do a, to roll a someone to configure satellite, right? Right? >> Chris: Yeah. >> So Kubernetes is in space. We're seeing a lot of Cloud Native stuff in apps, in space, so just an example. This highlights the fact that it's got to be automated. Is there a machine learning AI angle with all this ChatGPT talk going on? You see all the AI going the next level. Some pretty cool stuff and it's only, I know it's the beginning, but I've heard people using some of the new machine learning, large language models, large foundational models in areas I've never heard of. Machine learning and data centers, machine learning and configuration management, a lot of different ways. How do you see as the product person, you incorporating the AI piece into the products for Platform9? >> I think that's a lot about looking at the telemetry and the information that we get back and to use one of those like old idle terms, that continuous improvement loop to feed it back in. And I think that's really where machine learning to start with comes into effect. As we run across all these customers, our system that helps at two o'clock in the morning has that telemetry, it's got that data. We can see what's changing and what's happening. So it's writing the right algorithms, creating the right machine learning to- >> So training will work for you guys. You have enough data and the telemetry to do get that training data. 
>> Yeah, obviously there's a lot of investment required to get there, but that is something that ultimately could be achieved with what we see in operating people's environments. >> Great. Chris, great to have you here in the studio. Going wide ranging conversation on Kubernetes and Platform9. I guess my final question would be how do you look at the next five years out there? Because you got to run the product management, you got to have that 20 mile steer, you got to look at the customers, you got to look at what's going on in the engineering and you got to kind of have that arc. This is the right path kind of view. What's the five year arc look like for you guys? How do you see this playing out? 'Cause KubeCon is coming up and we're seeing Kubernetes kind of break away with security. They didn't call it KubeCon Security, they call it CloudNativeSecurityCon, and they just had the inaugural event in Seattle, which seemed to go well. So security is kind of breaking out and you got Kubernetes. It's getting bigger. Certainly not going away, but what's your five year arc of how Platform9 and Kubernetes and Ops evolve? >> To stay on that theme, it's focusing on what is most important to our users and getting them to a point where they can just consume it, so they're not having to operate it. So it's finding those big items and bringing that into our platform. It's something that's consumable, that's just taken care of, that's tested with each release. So it's simplifying operations more and more. We've always said freedom in cloud computing. Well, we started on OpenStack and made that simple. Stable, easy, you just have it, it works. We're doing that with Kubernetes. We're expanding out that user, right, we're saying bring your developers in, they can download their kubeconfig. They can see those Containers that are running there. They can access the events, the log files. They can log in and build a VM using KubeVirt. They're self servicing. So it's alleviating pressures off of the Ops team, removing the help desk systems that people still seem to rely on. So it's like what comes into that field that is the next biggest issue? Is it things like CI/CD? Is it simplifying GitOps? Is it bringing in security capabilities to talk to that? Or is that a piece that is a best of breed? Is there a reason that it's been spun out to its own conference? Is this something that deserves a focus that should be a specialized capability, instead of tooling and vendors that we work with, that we partner with, that could be brought in as a service? I think it's looking at those trends and making sure that what we bring in has the biggest impact to our users. >> That's awesome. Thanks for coming in. I'll give you the last word. Put a plug in for Platform9 for the people who are watching. What should they know about Platform9 that they might not know about it? When should they call you guys and when should they engage? Take a minute to give the plug. >> The plug. I think it's, if your operations team is focused on building Kubernetes, stop. That shouldn't be happening in the cloud. That shouldn't be at the Edge, that shouldn't be at the data center. They should be consuming it. If your engineering teams are all trying different ways and doing different things to use and consume Cloud Native services and Kubernetes, they shouldn't be. You want consistency. That's how you get economies of scale. 
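For a sense of what that self-service looks like once a developer has pulled their kubeconfig, building a VM through KubeVirt is just another manifest they can apply themselves. This is a hedged sketch, not anything Platform9-specific: the name, sizing, and demo image are placeholders, and the exact fields depend on the KubeVirt version installed.

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: dev-vm                        # placeholder name
spec:
  running: true                       # start the VM as soon as it is created
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 1Gi               # placeholder sizing
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo   # demo image
```

Applied with kubectl apply -f, the VM shows up alongside the developer's other workloads, and the same kubectl they already use gives them the events and logs mentioned above.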
Provide them with a simple platform that's integrated with all of your enterprise identity where they can just start consuming instead of having to solve these problems themselves. It's those two personas, right? That's where the problems manifest. What are my operations teams doing, and are they delivering to my company or are they building infrastructure again? And are my engineers sprinting or crawling? 'Cause if they're not sprinting, you should be asking the question, do I have the right Cloud Native tooling in my environment and how can I get them back? >> I think it's developer productivity, uptime, security are the tell signs. You get that done. That's the goal of what you guys are doing, your mission. >> Chris: Yep. >> Great to have you on, Chris. Thanks for coming on. Appreciate it. >> Chris: Thanks very much. >> Okay, this is "theCUBE" here, finding the right path to Cloud Native. I'm John Furrier, host of "theCUBE." Thanks for watching. (upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Chris | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Chris Jones | PERSON | 0.99+ |
12 gig | QUANTITY | 0.99+ |
five year | QUANTITY | 0.99+ |
John Furrier | PERSON | 0.99+ |
two years | QUANTITY | 0.99+ |
six people | QUANTITY | 0.99+ |
two personas | QUANTITY | 0.99+ |
Adobe | ORGANIZATION | 0.99+ |
Java | TITLE | 0.99+ |
three months | QUANTITY | 0.99+ |
20 mile | QUANTITY | 0.99+ |
Palo Alto, California | LOCATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Seattle | LOCATION | 0.99+ |
two tools | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
eight cores | QUANTITY | 0.99+ |
KubeCon | EVENT | 0.99+ |
last year | DATE | 0.99+ |
GitOps | TITLE | 0.99+ |
one | QUANTITY | 0.99+ |
tomorrow | DATE | 0.99+ |
over two years | QUANTITY | 0.99+ |
HashiCorp | ORGANIZATION | 0.99+ |
Terraform | ORGANIZATION | 0.99+ |
two separate platforms | QUANTITY | 0.99+ |
24 | QUANTITY | 0.99+ |
first | QUANTITY | 0.99+ |
today | DATE | 0.98+ |
two ways | QUANTITY | 0.98+ |
third alternative | QUANTITY | 0.98+ |
each release | QUANTITY | 0.98+ |
Intuit | ORGANIZATION | 0.98+ |
third pillar | QUANTITY | 0.98+ |
2:00am | DATE | 0.98+ |
first jobs | QUANTITY | 0.98+ |
Mobile World Congress | EVENT | 0.98+ |
Cloud Native | TITLE | 0.98+ |
this year | DATE | 0.98+ |
late 2019 | DATE | 0.98+ |
Platform9 | TITLE | 0.98+ |
one environment | QUANTITY | 0.98+ |
last fall | DATE | 0.97+ |
Kubernetes | TITLE | 0.97+ |
yesterday | DATE | 0.97+ |
two experiences | QUANTITY | 0.97+ |
about eight years | QUANTITY | 0.97+ |
DevSecOps | TITLE | 0.97+ |
Git | TITLE | 0.97+ |
Flux | ORGANIZATION | 0.96+ |
CNCF | ORGANIZATION | 0.96+ |
two big contributors | QUANTITY | 0.96+ |
Cloud Native | TITLE | 0.96+ |
DevOps | TITLE | 0.96+ |
re:MARS | EVENT | 0.95+ |
Austin Parker, Lightstep | AWS re:Invent 2022
(lively music) >> Good afternoon cloud community and welcome back to beautiful Las Vegas, Nevada. We are here at AWS re:Invent, day four of our wall to wall coverage. It is day four in the afternoon and we are holding strong. I'm Savannah Peterson, joined by my fabulous co-host Paul Gillen. Paul, how you doing? >> I'm doing well, fine Savannah. You? >> You look great. >> We're in the home stretch here. >> Yeah, (laughs) we are. >> You still look fresh as a daisy. I don't know how you do it. >> (laughs) You're too kind. You're too kind, but I'm vain enough to take that compliment. I'm very excited about the conversation that we're going to have up next. We get to get a little DevRel and we got a little swagger on the stage. Welcome, Austin. How you doing? >> Hey, great to be here. Thanks for having me. >> Savannah: Yeah, it's our pleasure. How's the show been for you so far? >> Busy, exciting. Feels a lot like, you know it used to be right? >> Yeah, I know. A little reminiscent of the before times. >> Well, before times. >> Before we dig into the technical stuff, you're the most intriguingly dressed person we've had on the show this week. >> Austin: I feel extremely underdressed. >> Well, and we were talking about developer fancy. Talk to me a little bit about your approach to fashion. Wasn't expecting to lead with this, but I like this but I like this actually. >> No, it's actually good with my PR. You're going to love it. My approach, here's the thing, I give free advice all the time about developer relations, about things that work, have worked, and don't work in community and all that stuff. I love talking about that. Someone came up to me and said, "Where do you get your fashion tips from? What's the secret Discord server that I need to go on?" I'm like, "I will never tell." >> Oh, okay. >> This is an actual trait secret. >> Top secret. Wow! Talk about. >> If someone else starts wearing the hat, then everyone's going to be like, "There's so many white guys." Look, I'm a white guy with a beard that works in technology. >> Savannah: I've never met one of those. >> Exactly, there's none of them at all. So, you have to do something to kind stand out from the crowd a little bit. >> I love it, and it's a talk trigger. We're talking about it now. Production team loved it. It's fantastic. >> It's great. >> So your DevRel for Lightstep, in case the audience isn't familiar tell us about Lightstep. >> So Lightstep is a cloud native observability platform built at planet scale, and it powers observability at some places you've heard of like Spotify, GitHub, right? We're designed to really help developers that are working in the cloud with Kubernetes, with these huge distributed systems, understand application performance and being able to find problems, fix problems. We're also part of the ServiceNow family and as we all know ServiceNow is on a mission to help the world of work work better by powering digital transformation around IT and customer experiences for their many, many, many global 2000 customers. We love them very much. >> You know, it's a big love fest here. A lot of people have talked about the collaboration, so many companies working together. You mentioned unified observability. What is unified observability? >> So if you think about a tradition, or if you've heard about this traditional idea of observability where you have three pillars, right? You have metrics, and you have logs, and you have traces. All those three things are different data sources. 
They're picked up by different tools. They're analyzed by different people for different purposes. What we believe and what we're working to accomplish right now is to take all that and if you think those pillars, flip 'em on their side and think of them as streams of data. If we can take those streams and integrate them together and let you treat traces and metrics and logs not as these kind of inviolate experiences where you're kind of paging between things and going between tab A to tab B to tab C, and give you a standard way to query this, a standard way to display this, and letting you kind of find the most relevant data, then it really unlocks a lot of power for like developers and SREs to spend less time like managing tools. You know, figuring out where to build their query or what dashboard to check, more just being able to like kind of ask a question, get an answer. When you have an incident or an outage that's the most important thing, right? How quickly can you get those answers that you need so that you can restore system health? >> You don't want to be looking in multiple spots to figure out what's going on. >> Absolutely. I mean, some people hear unified observability and they go to like tool consolidation, right? That's something I hear from a lot of our users and a lot of people in re:Invent. I'll talk to SREs, they're like, "Yeah, we've got like six or seven different metrics products alone, just on services that they cover." It is important to kind of consolidate that but we're really taking it a step lower. We're looking at the data layer and trying to say, "Okay, if the data is all consistent and vendor neutral then that gives you flexibility not only from a tool consolidation perspective but also you know, a consistency, reliability. You could have a single way to deploy your observability out regardless of what cloud you're on, regardless if you're using Kubernetes or Fargate or whatever else. or even just Bare Metal or EC2 Bare Metal, right? There's been so much historically in this space. There's been a lot of silos and we think that unify diversability means that we kind of break down those silos, right? The way that we're doing it primarily is through a project called OpenTelemetry which you might have heard of. You want to talk about that in a minute? . >> Savannah: Yeah, let's talk about it right now. Why don't you tell us about it? Keep going, you're great. You're on a roll. >> I am. >> Savannah: We'll just hang out over here. >> It's day four. I'm going to ask the questions and answer the questions. (Savannah laughs) >> Yes, you're right. >> I do yeah. >> Open Tele- >> OpenTelemetry . >> Explain what OpenTelemetry is first. >> OpenTelemetry is a CNCF project, Cloud Native Computing Foundation. The goal is to make telemetry data, high quality telemetry data, a builtin feature of cloud native software right? So right now if you wanted to get logging data out, depending on your application stack, depending on your application run time, depending on language, depending on your deployment environment. You might have a lot... You have to make a lot of choices, right? About like, what am I going to use? >> Savannah: So many different choices, and the players are changing all the time. >> Exactly, and a lot of times what people will do is they'll go and they'll say like, "We have to use this commercial solution because they have a proprietary agent that can do a lot of this for us." You know? 
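One way to picture those pillars flipped on their side into streams is an OpenTelemetry Collector pipeline: one process receives traces, metrics, and logs over the same protocol (and can scrape existing Prometheus endpoints) and forwards them to whatever backend you choose. This is a hedged, minimal sketch; the endpoints and scrape target are placeholders, and the Prometheus receiver ships in the collector's contrib distribution.

```yaml
receivers:
  otlp:                       # traces, metrics, and logs over one protocol
    protocols:
      grpc:
      http:
  prometheus:                 # pull in existing Prometheus-style metrics
    config:
      scrape_configs:
        - job_name: app
          static_configs:
            - targets: ["app:9464"]       # placeholder target
processors:
  batch: {}
exporters:
  otlp:
    endpoint: backend.example.com:4317    # placeholder backend
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
    metrics:
      receivers: [otlp, prometheus]
      processors: [batch]
      exporters: [otlp]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
```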
And if you look at all those proprietary agents, what you find very quickly is it's very commodified right? There's no real difference in what they're doing at a code level and what's stopped the industry from really adopting a standard way to create this logs and metrics and traces, is simply just the fact that there was no standard. And so, OpenTelemetry is that standard, right? We've got dozens of companies many of them like very, many of them here right? Competitors all the same, working together to build this open standard and implementation of telemetry data for cloud native software and really any software right? Like we support over 12 languages. We support Kubernetes, Amazon. AWS is a huge contributor actually and we're doing some really exciting stuff with them on their Amazon distribution of OpenTelemetry. So it's been extremely interesting to see it over the past like couple years go from like, "Hey, here's this like new thing that we're doing over here," to really it's a generalized acceptance that this is the way of the future. This is what we should have been doing all along. >> Yeah. >> My opinion is there is a perception out there that observability is kind of a commodity now that all the players have the same set of tools, same set of 15 or 17 or whatever tools, and that there's very little distinction in functionality. Would you agree with that? >> I don't know if I would characterize it that way entirely. I do think that there's a lot of duplicated effort that happens and part of the reason is because of this telemetry data problem, right? Because you have to wind up... You know, there's this idea of table stakes monitoring that we talk about right? Table stakes monitoring is the stuff that you're having to do every single day to kind of make sure your system is healthy to be able to... When there's an alert, gets triggered, to see why it got triggered and to go fix it, right? Because everyone has the kind of work on that table stake stuff and then build all these integrations, there's very little time for innovation on top of that right? Because you're spending all your time just like working on keeping up with technology. >> Savannah: Doing the boring stuff to make sure the wheels don't fall off, basically. >> Austin: Right? What I think the real advantage of OpenTelemetry is that it really, from like a vendor perspective, like it unblocks us from having to kind of do all this repetitive commodified work. It lets us help move that out to the community level so that... Instead of having to kind of build, your Kubernetes integration for example, you can just have like, "Hey, OpenTelemetry is integrated into Kubernetes and you just have this data now." If you are a commercial product, or if you're even someone that's interested in fixing a, scratching a particular itch about observability. It's like, "I have this specific way that I'm doing Kubernetes and I need something to help me really analyze that data. Well, I've got the data now I can just go create a project. I can create an analysis tool." I think that's what you'll see over time as OpenTelemetry promulgates out into the ecosystem is more people building interesting analysis features, people using things like machine learning to analyze this large amount, large and consistent amount of OpenTelemetry data. It's going to be a big shakeup I think, but it has the potential to really unlock a lot of value for our customers. >> Well, so you're, you're a developer relations guy. 
What are developers asking for right now out of their observability platforms? >> Austin: That's a great question. I think there's two things. The first is that they want it to just work. It's actually the biggest thing, right? There's so many kind of... This goes back to the tool proliferation, right? People have too much data in too many different places, and getting that data out can still be really challenging. And so, the biggest thing they want is just like, "I want something that I can... I want a lot of these questions I have to ask, answered already and OpenTelemetry is going towards it." Keep in mind it's the project's only three years old, so we obviously have room to grow but there are people running it in production and it works really well for them but there's more that we can do. The second thing is, and this isn't what really is interesting to me, is it's less what they're asking for and more what they're not asking for. Because a lot of the stuff that you see people, saying around, "Oh, we need this like very specific sort of lower level telemetry data, or we need this kind of universal thing." People really just want to be able to get questions or get questions answered, right? They want tools that kind of have these workflows where you don't have to be an expert because a lot of times this tooling gets locked behind sort of is gate kept almost in a organization where there are teams that's like, "We're responsible for this and we're going to set it up and manage it for you, and we won't let you do things outside of it because that would mess up- >> Savannah: Here's your sandbox and- >> Right, this is your sandbox you can play in and a lot of times that's really useful and very tuned for the problems that you saw yesterday, but people are looking at like what are the problems I'm going to get tomorrow? We're deploying more rapidly. We have more and more intentional change happening in the system. Like it's not enough to have this reactive sort of approach where our SRE teams are kind of like or this observability team is building a platform for us. Developers want to be able to get in and have these kind of guided workflows really that say like, "Hey, here's where you're starting at. Let's get you to an answer. Let's help you find the needle in the haystack as it were, without you having to become a master of six different or seven different tools." >> Savannah: Right, and it shouldn't be that complicated. >> It shouldn't be. I mean we've certainly... We've been working on this problem for many years now, starting with a lot of our team that started at Google and helped build Google's planet scale monitoring systems. So we have a lot of experience in the field. It's actually one... An interesting story that our founder or now general manager tells BHS, Ben Sigelman, and he told me this story once and it's like... He had built this really cool thing called Dapper that was a tracing system at Google, and people weren't using it. Because they were like, "This is really cool, but I don't know how to... but it's not relevant to me." And he's like, the one thing that we did to get to increase usage 20 times over was we just put a link. So we went to the place that people were already looking for that data and we added a link that says, "Hey, go over here and look at this." It's those simple connections being able to kind of draw people from like point A to point B, take them from familiar workflows into unfamiliar ones. You know, that's how we think about these problems right? 
How is this becoming a daily part of someone's usage? How is this helping them solve problems faster and really improve their their life? >> Savannah: Yeah, exactly. It comes down to quality of life. >> Warner made the case this morning that computer architecture should be inherently event-driven and that we are moving toward a world where the person matters less than what the software does, right? The software is triggering events. Does this complicate observability or simplify it? >> Austin: I think that at the end of the day, it's about getting the... Observability to me in a lot of ways is about modeling your system, right? It's about you as a developer being able to say this is what I expect the system to do and I don't think the actual application architecture really matters that much, right? Because it's about you. You are building a system, right? It can be event driven, can be support request response, can be whatever it is. You have to be able to say, "This is what I expect to... For these given inputs, this is the expected output." Now maybe there's a lot of stuff that happens in the middle that you don't really care about. And then, I talk to people here and everyone's talking about serverless right? Everyone... You can see there's obviously some amazing statistics about how many people are using Lambda, and it's very exciting. There's a lot of stuff that you shouldn't have to care about as a developer, but you should care about those inputs and outputs. You will need to have that kind of intermediate information and understand like, what was the exact path that I took through this invented system? What was the actual resources that were being used? Because even if you trust that all this magic behind the scenes is just going to work forever, sometimes it's still really useful to have that sort of lower level abstraction, to say like, "Well, this is what actually happened so that I can figure out when I deployed a new change, did I make performance better or worse?" Or being able to kind of segregate your data out and say like... Doing AB testing, right? Doing canary releases, doing all of these things that you hear about as best practices or well architected applications. Observability is at the core of all that. You need observability to kind of do any of, ask any of those higher level interesting questions. >> Savannah: We are here at ReInvent. Tell us a little bit more about the partnership with AWS. >> So I would have to actually probably refer you to someone at Service Now on that. I know that we are a partner. We collaborate with them on various things. But really at Lightstep, we're very focused on kind of the open source part of this. So we work with AWS through the OpenTelemetry project, on things like the AWS distribution for OpenTelemetry which is really... It's OpenTelemetry, again is really designed to be like a neutral standard but we know that there are going to be integrators and implementers that need to package up and bundle it in a certain way to make it easy for their end users to consume it. So that's what Amazon has done with ADOT which is the shortening for it. So it's available in several different ways. You can use it as like an SDK and drop it into your application. There's Lambda layers. If you want to get Lambda observability, you just add this extension in and then suddenly you're getting OpenTelemetry data on the other side. So it's really cool. It's been a really exciting to kind of work with people on the AWS side over the past several years. 
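For the Lambda case Austin mentions, "adding the extension" amounts to attaching the ADOT layer and pointing the runtime at its wrapper. Here is a hedged sketch as an AWS SAM snippet: the layer ARN, runtime, wrapper path, and collector endpoint all vary by region and release, so treat them as placeholders rather than the exact values AWS publishes.

```yaml
Transform: AWS::Serverless-2016-10-31
Resources:
  InstrumentedFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler              # placeholder handler
      Runtime: python3.9
      CodeUri: ./src                    # placeholder code location
      Layers:
        # Placeholder ARN; use the ADOT layer published for your runtime and region.
        - arn:aws:lambda:us-east-1:123456789012:layer:aws-otel-python:1
      Environment:
        Variables:
          AWS_LAMBDA_EXEC_WRAPPER: /opt/otel-instrument              # ADOT wrapper path (assumed, varies by runtime)
          OTEL_EXPORTER_OTLP_ENDPOINT: http://collector.example.com:4318   # placeholder backend
```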
>> Savannah: It's exciting. >> I've personally seen just a lot of change. I was talking to a PM earlier this week... It's like, "Hey, two years ago I came and talked to you about OpenTelemetry and here we are today. You're still talking about OpenTelemetry." And they're like, "What changed?" Our customers have started coming to us asking for OpenTelemetry and we see the same thing now. >> Savannah: Timing is right. >> Timing is right, but we see the same thing... Even talking to ServiceNow customers who are... These very big enterprises, banks, finance, healthcare, whatever, telcos, it used to be... You'd have to go to them and say like, "Let me tell you about distributed tracing. Let me tell you about OpenTelemetry. Let me tell you about observability." Now they're coming in and saying, "Yeah, so we're standardizing on it." If you think about Kubernetes, a lot of enterprises have spent the past five or six years standardizing on it as the way to deploy and manage containerized applications. They're doing the same journey now with OpenTelemetry, where they're saying, "This is what we're betting on and we want partners, we want people to help us go along that way." >> I love it, and they work hand in hand in all CNCF projects as well that you're talking about. >> Austin: Right, so we're integrated into Kubernetes. You can find OpenTelemetry in things like Keptn, which is around application standards. And over time, it'll just promulgate out from there. So it's really exciting times. >> A bunch of CNCF projects in this area right? Prometheus. >> Prometheus, yeah. Yeah, so we inter-operate with Prometheus as well. So if you have Prometheus metrics, then OpenTelemetry can read those. OpenTelemetry metrics are like a superset of Prometheus. We've been working with the Prometheus community for quite a while to make sure that there's really good compatibility, because so many people use Prometheus, you know? >> Yeah. All right, so last question. New tradition for us here on theCUBE. We're looking for your 30-second hot take, Instagram reel, biggest theme, biggest buzz for those not here on the show floor. >> Oh gosh. >> Savannah: It could be for you too. It could be whatever for... >> I think the two things that are really striking to me: one is serverless. Like I see... I thought people were done talking about servers, and they're talking about them more than ever. Two, I really think it is observability, right? Like we've gone from observability being kind of a niche. >> Savannah: Not that you're biased. >> Huh? >> Savannah: Not that you're biased. >> Not that I'm biased. It used to be a niche thing where I'd have to go and explain what this is to people, and now people are coming up and it's like, "Yeah, yeah, we're using OpenTelemetry." It's very cool. I've been involved with OpenTelemetry since the jump, since it was started really. It's been very exciting to see and gratifying to see how much adoption we've gotten even in a short amount of time. >> Yeah, absolutely. It's a pretty... Yeah, it's been a lot. That was great. Perfect soundbite for us. >> Austin: Thanks, I love soundbites. >> Savannah: Yeah. Awesome. We love your hat and your soundbites equally. Thank you so much for being on the show with us today. >> Thank you for having me. >> Savannah: Hey, anytime, anytime. Will we see you in Amsterdam, speaking of KubeCon? Awesome, we'll be there. >> There's some real exciting OpenTelemetry stuff coming up for KubeCon. 
>> Well, we'll have to get you back on theCUBE. (talking simultaneously) Love that for us. Thank you all for tuning in two hour wall to wall coverage here, day four at AWS re:Invent in fabulous Las Vegas, Nevada, with Paul Gillin. I'm Savannah Peterson and you're watching theCUBE, the leader in high tech coverage. (lively music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Peter Burris | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Michael Dell | PERSON | 0.99+ |
Rebecca Knight | PERSON | 0.99+ |
Michael | PERSON | 0.99+ |
Comcast | ORGANIZATION | 0.99+ |
Elizabeth | PERSON | 0.99+ |
Paul Gillan | PERSON | 0.99+ |
Jeff Clark | PERSON | 0.99+ |
Paul Gillin | PERSON | 0.99+ |
Nokia | ORGANIZATION | 0.99+ |
Savannah | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Richard | PERSON | 0.99+ |
Micheal | PERSON | 0.99+ |
Carolyn Rodz | PERSON | 0.99+ |
Dave Vallante | PERSON | 0.99+ |
Verizon | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Eric Seidman | PERSON | 0.99+ |
Paul | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Keith | PERSON | 0.99+ |
Chris McNabb | PERSON | 0.99+ |
Joe | PERSON | 0.99+ |
Carolyn | PERSON | 0.99+ |
Qualcomm | ORGANIZATION | 0.99+ |
Alice | PERSON | 0.99+ |
2006 | DATE | 0.99+ |
John | PERSON | 0.99+ |
Netflix | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
congress | ORGANIZATION | 0.99+ |
Ericsson | ORGANIZATION | 0.99+ |
AT&T | ORGANIZATION | 0.99+ |
Elizabeth Gore | PERSON | 0.99+ |
Paul Gillen | PERSON | 0.99+ |
Madhu Kutty | PERSON | 0.99+ |
1999 | DATE | 0.99+ |
Michael Conlan | PERSON | 0.99+ |
2013 | DATE | 0.99+ |
Michael Candolim | PERSON | 0.99+ |
Pat | PERSON | 0.99+ |
Yvonne Wassenaar | PERSON | 0.99+ |
Mark Krzysko | PERSON | 0.99+ |
Boston | LOCATION | 0.99+ |
Pat Gelsinger | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Willie Lu | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Yvonne | PERSON | 0.99+ |
Hertz | ORGANIZATION | 0.99+ |
Andy | PERSON | 0.99+ |
2012 | DATE | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Rajiv Ramaswami, Nutanix | Supercloud22
[digital Music] >> Okay, welcome back to "theCUBE," Supercloud 22. I'm John Furrier, host of "theCUBE." We got a very special distinguished CUBE alumni here, Rajiv Ramaswami, CEO of Nutanix. Great to see you. Thanks for coming by the show. >> Good to be here, John. >> We've had many conversations in the past about what you guys have done. Again, the perfect storm is coming, innovation. You guys are in an interesting position and the Supercloud kind of points this out. We've been discussing about how multi-cloud is coming. Everyone has multiple clouds, but there's real structural change happening right now in customers. Now there's been change that's happened, cloud computing, cloud operations, developers are doing great, but now something magical's happening in the industry. We wanted to get your thoughts on that, that's called Supercloud. >> Indeed. >> How do you see this shift? I mean, devs are doing great. Ops and security are trying to get cloud native. What's happening in your opinion? >> Yeah, in fact, we've been talking about something very, very similar. I like the term supercloud. We've been calling it hybrid multicloud essentially, but the point being, companies are running their applications and managing their data. This is lifeblood for them. And where do they sit? Of course, some of these will sit in the public cloud. Some of these are going to sit inside their data centers and some of these applications increasingly are going to run in edges. And now what most companies struggle with is every cloud is different, their on-prem is different, their edge is different and they then have a scarcity of staff. Operating models are different. Security is different. Everything about it is different. So to your point, people are using multiple clouds and multiple locations. But you need to think about cloud as an operating model and what the supercloud or hyper multicloud delivers is really a consistent model, consistent operating model. One way for IT teams to operate across all of these environments and deliver an agile infrastructure as a service model to their developers. So that from a company's managed point of view, they can run their stuff wherever they want to, completely with consistency, and the IT teams can help support that easily. >> You know, it's interesting. You see a lot of transformation, certainly from customers, they were paying a lot of operating costs for IT. Now CapEx is covered by, I mean, CapEx now is covered by the cloud, so it's OpEx. They're getting core competencies and they're becoming very fluent in cloud technologies. And at the same time the vendors are saying, "Hey, you know, buy our stuff." And so you have the change over, how people relate to each other, vendors and customers, where there's a shared model where, okay, you got use cases for the cloud and use cases on-premise, both CapEx, both technology. You mentioned that operating model, Where's the gap? 'Cause nobody wants complexity, and you know, the enterprise, people love to add, solve complexity with more complexity. >> That's exactly the problem. You just hit the nail on the head, which is enterprise software tends to be very complex. And fundamentally complexity has been a friend for vendors, but the point being, it's not a friend for a company that's trying to manage their IT infrastructure. It's an an enemy because complexity means you need to train your staff, you need very specialized teams, and guess what? Talent is perhaps the most scarce thing out there, right? 
People talk about, you know, in IT, they always talk about people, process, technology. There's plenty of technology out there, but right now there's a big scarcity of people, and I think that talent is a major issue. And not only that, you know, it's not that we have as many specialized people who know storage, who know compute, who know networking. Instead, what you're getting is a bunch of new college grads coming in, who have generalized skill sets, who are used to having a consumer like experience with their experience with software and applications, and they want to see that from their enterprise software vendors. >> You know, it's just so you mentioned that when the hyper converged, we saw that movie that was bringing things together. Now you're seeing the commoditization of compute storage and networking, but yet the advancement of higher level services and things like Kubernetes for orchestration, that's an operating opportunity for people to get more orchestration, but that's a trade off. So we're seeing a new trend in the supercloud where it's not all Kubernetes all the time. It's not all AWS all the time. It's the new architecture, where there's trade offs. How do you see some of these key trade offs? I know you talked to a lot of your customers, they're kind of bringing things together, putting things together, kind of a day zero mentality. What are some of those key trade offs and architectural decision points? >> So there's a couple of points there, I think. First is that most customers are on a journey of thoughts and their journey is, well, they want to have a modern infrastructure. Many of them have on-prem footprints, and they're looking to modernize that infrastructure. They're looking to adopt cloud operating models. They're looking to figure out how they can extend and leverage these public clouds appropriately. The problem is when they start doing this, they find that everything is different. Every little piece, every cloud is different, their on-prem is different, and this results in a lot of complexity. In some ways, we at Nutanix solved this problem within data centers by converging separate silos of high computer storage and network. That's what we did with HCI. And now this notion of supercloud is just simply about converging different clouds and different data. >> Kind of the same thing. >> And on-prem and edges, right? Trying to bring all of these together rather than having separate teams, separate processes, separate technologies for every one of these, try to create consistency, and it makes life a lot simpler and easier. >> Yeah, I wanted to connect those dots because I think this is kind of interesting with the supercloud was, you get good at something in one cloud, then you bring that best practice and figure out how to make that work across edge and on-premise, which is, I mean, basically cloud operations. >> Exactly. It's cloud operations, which is why we say it's a cloud is an operating model. It's a way you operate your environment, but that environment could be anywhere. You're not restricted to it being in the public cloud. It's in your data center, that's in the edges. >> Okay, so when I hear about substrates, abstraction layers, I think two things, innovation cause you extract away complexity, then I also think about from the customer's perspective, maybe, lock-in. >> Yes. >> Whoa, oh, promises, promises. Lock in is a fear and ops teams and security teams, they know the downside of lock-in. >> Yes. >> Choice is obviously important. 
Devs don't care. I mean, like, whatever runs the software, go faster, but ops and security teams, they want choice, but they want functionality. So, what's that trade off? Talk about this lock-in dynamic, and how to get around. >> Yeah. >> And I think that's been some of the fundamental tenants of what we do. I mean, of course, people don't like lock-in, but they also want simplicity. And we provide both. Our philosophy is we want to make things as simple as possible. And that's one of the big differentiators that we have compared to other players. Our whole mission inside the company is to make things simple. But at the same time, we also want to provide customers with that flexibility and every layer in the stack, you don't want to lock to your point. So, if at the very bottom hardware, choice of hardware. Choice of hardware could be any of the vendors you work with or public cloud, Bare Metal. When you look at hypervisor, lots of choices. You got VMware, you got our own Ahv, which is KBM-based open source hypervisor, no lock-in there, provide complete flexibility. Then we have a storage stack, a distributor storage stack, which we provide. And then of course layers about that. Kubernetes, pick your Kubernetes, runtime of choice. Pick your Kubernetes, orchestrator and management of choice. So our whole goal is to provide that flexibility at every layer in the stack, allowing the customer to make the choice. They can decide how much they want to go with the full stack or how much they want to go piecemeal it, and there's a trade off there. And they get more flexibility, but at the cost of a little bit more complexity, and that, I think, is the trade off that each customer has to weigh. >> Okay, you guys have been transforming for many, many years. We've been covering on SiliconANGLE and theCUBE to software. >> Yes. >> I know you have hardware as well, but also software services. And you've been on the cloud bandwagon years ago, and now you made a lot of progress. What's the current strategy for you guys? How do you fit in? 'Cause public cloud has great use cases, great examples of success there, but that's not the only game in town. You've got on-premise and edge. What are you guys doing? What specifically are customers leaning on you for? How are you providing that value? What's the innovation strategy? >> Very simply, we provide a cloud software platform today. We don't actually sell anymore hardware. They're not on our books anymore. We're a pure software company. So we sell a cloud soft platform on top of which our customers can run all their applications, including the most mission critical applications. And they can use our platform wherever, to your point, on the supercloud. I keep coming back to that. We started out with our on-prem genes. That's where we started. We've extended that to Azure and AWS. And we are extending, of course, we've always been very strong when it came to the edge and extending that out to the edge. And so today we have a cloud platform that allows our customers to run these apps, whatever the apps may be, and manage all their data because we provide structured and unstructured data, blocks, files, objects, are all part of the platform. And we provide that in a consistent way across all of these locations, and we deliver the cloud operating model. >> So on the hardware thing, you guys don't have hardware anymore. >> We don't sell hardware anymore. We work with a whole range of hardware partners, HP, Dell, Supermicro, name it, Lenovo. 
>> Okay, so if I'm like a Telco and I want to build a data center at my tower, which could be only a few boxes, who do I buy that from? >> So you buy the software from us and you can buy the hardware from your choice of hardware partners. >> So yeah, whoever's selling the servers at that point. >> Yeah. >> Okay, so you send on the server. >> Yeah, we send on the server. >> Yeah, sound's good. So no hardware, so back to software that could transfer. How's that going, good? >> It's gone very well because, you know, we made two transformations. One is of course we were selling appliances when we started out, and then we started selling software, and now it's all fully subscription. So we're 100% subscription company. So our customers are buying subscriptions. They have the flexibility to get whatever duration they want. Again, to your philosophy, there's no lock-in. There is no long term lock-in here. We are happy if a customer chooses us for a year versus three years, whatever they like. >> I know that you've been on the road with customers this summer. It's been great to get out and see people in person. What are you learning? What are they viewing? What's their new Instagram picture of Nutanix? How do they see you? And how do you want them to see you? >> What they've seen us in the past has been, we created this whole category of HCI, Hyperconverged Infrastructure. They see us as a leader there and they see us as running some of their applications, not necessarily all their applications, especially at the very big customers. In the smaller customers, they run everything on us, but in the bigger customers, they run some workload, some applications on us. And now what they see is that we are now, if taking them on the journey, not only to run all their applications, whatever, they may be, including the most mission critical database workloads or analytics workloads on our platform, but also help them extend that journey into the public cloud. And so that's the journey we are on, modernized infrastructure. And this is what most of our customers are on. Modernizing the infrastructure, which we help and then creating a cloud operating model, and making that available everywhere. >> Yeah, and I think one, that's a great, and again, that's a great segue to supercloud, which I want to get your thoughts on because AWS, for example, spent all that CapEx, they're called the hyperscaler. They got H in there and that's a hyperscale in there. And now you can leverage that CapEx by bringing Nutanix in, you're a hyperscale-like solution on-premise and edge. So you take advantage of both. >> Absolutely. >> The success. >> Exactly. >> And a trajectory of cloud, so your customers, if I get this right, have all the economies of scale of cloud, plus the benefits of the HCI software kind of vibe. >> Absolutely. And I'll give you some examples how this plays out in the real world based on all my travels here. >> Yeah, please do. So we just put out a case study on a customer called FSP. They're a betting company, online betting company based out of the UK. And they run on our platform on-prem, but what they saw was they had to expand their operations to Asia and they went to Taiwan. And the problem for them was, they were told they had to get in business in Taiwan within a matter of a month, and they didn't know how to do it. And then they realized that they could just take the exact same software that they were running on our platform, and run it in an AWS region sitting in Taiwan. 
And they were up in business in less than a month, and they had now operations ready to go in Asia. I mean, that's a compelling business value. >> That's agile, that's agile. >> Agile. >> That's agile and a great... >> Versus the alternative would be weeks, months. >> Months, first of all, I mean, just think about, they have to open a data center, which probably takes them, they have to buy the hardware, which, you know, with supply chain deliveries, >> Supply chain. and God knows how long that takes. >> Oh God, yeah. >> So compared to all that here, they were up and running within a matter of a month. It's a, just one example of a very compelling value proposition. >> So you feel good about where you guys are right now relative to these big waves coming? >> Yeah, I think so. Well, I mean, you know, there's a lot of big waves coming and. >> What are the biggest ones that you see? >> Well, I mean, I think there's clearly one of the big ones, of course, out there is Broadcom buying VMware or potentially buying VMware and great company. I used to work there for many years and I have a lot of respect for what VMware has done for the industry in terms of virtualization of servers and creating their entire portfolio. >> Is it true you're hiring a lot of VMware folks? >> Yes, I mean a lot of them coming over now in anticipation, we've been hiring our fair share, but they're going other places too. >> A lot of VMware alumni at Nutanix now. >> Yes, there are certainly, we have our share of VMware alumni. We also have a share of alumni from others. >> We call the V mafia, by the way. (laughs) >> I dunno about the V mafia, but. But it's a great company, but I think right now a lot of customers are wondering what's going to happen, and therefore, they are looking at potentially what are the other alternatives? And we are very much front and center in that discussions. >> Well, Dave Alante and I, and the team have been very bullish on on-premise cloud operations. You guys are doing there. How would you describe the supercloud concept to a customer when they say, "Hey, what's the supercloud? "It's becoming a thing. "How would you describe what it is and the benefits?" >> Yeah, and I think the first thing is to tell them, what problem are you looking to solve? And the problem for them is, they have applications everywhere. They have data everywhere. How do their teams run and deal with all of this? And what they find is the way they're doing it today is different operating platform for every one of these. If you're on Amazon, it's one platform. If you're an Azure, it's another. If you're on-prim, it's a third. If you want to go to the edge, probably fourth, and it's a messy, complex thing for their IT teams. What a supercloud does is essentially unify all of these into a consistent operating model. You get a cloud operating model, you get the agility and the benefits, but with one way of handling your compute storage network needs, one way of handling your security policies, and security constructs, and giving you that, so such a dramatic simplification on the one side, and it's a dramatic enabler because it now enables you to run these applications wherever you want completely free. >> Yeah. It really bridges the cloud native. It kind of the interplay on the cloud between SAS and IAS, solves a lot of problems, highly integrated, that takes that model to the complexity of multiple environments. >> Exactly. >> That's a super cool environment. >> (John speaks over Rajiv) Across any environment, wherever. 
It's changing this thing from cloud being associated with the public cloud to cloud being available everywhere in a consistent way. >> And that's essentially the goodness of cloud, going everywhere. >> Yeah. >> Yeah, but that extension is what you call a supercloud. >> Rajiv, thank you so much for your time. I know you're super valuable, and you got a company to run. One final question for you. The edge is exploding. >> Yes. >> It's super dynamic. We kind of all know it's there. The industrial edge. You got the IOT edge and just the edge in general. On-premise, I think, is hybrid, it's the steady state, looking good. Everything's good. It's getting better, of course, things with cloud native and all that good stuff. What's your view of the edge? It's super dynamic, a lot of shifting, OT, IT, that's actually transformed. >> Yes, absolutely. >> Huge industrial thing. Amazon is buying, you know, industrial robots now. >> Yes. >> Space is around the corner, a lot of industrial advance with machine learning and the software side of things, so the edge is exploding. >> Yeah, you know, and I think one of the interesting things about that exploding edge is that it tends to be both compute and data heavy. It's not this notion of very thin edges. Yes, you've got thin edges too, of course, which may just be sensors on the one hand, but you're seeing an increased need for compute and storage at the edges, because a lot of these are crunching, crunching applications that require a crunch and generate a lot of data, crunch a lot of data. There's latency requirements that require you and there's even people deploying GPUs at the edges for image recognition and so forth, right? So this is. >> The edge is the data center now. >> Exactly. Think of the edge starting to look at the edge of the mini data center, but one that needs to be highly automated. You're not going to be able to put people at every one of these locations. You've got to be able to do all your services, lifecycle management, everything completely remove. >> Self-healing, all this good stuffs. >> Exactly. It has to be completely automated and self-healing and upgradeable and you know, life cycle managed from the cloud, so to speak. And so there's going to be this interlinkage between the edge and the cloud, and you're going to actually, essentially what you need is a cloud managed edge. >> Yeah, and this is where the super cloud extends, where you can extend the value of what you're building to these dynamically new emerging, and it's just the beginning. There'll be more. >> Oh, there's a ton of new applications emerging there. And I think that's going to be, I mean, there's people out there who code that half of data is going to be generated at the edge in a couple of years. >> Well, Rajiv, I am excited that you can bring the depth of technical architectural knowledge to the table on supercloud, as well as run a company. Congratulations on your success, and thanks for sharing with us and being part of our community. >> No, thank you, John, for having me on your show. >> Okay. Supercloud 22, we're continuing to open up the conversation. There is structural change happening. We're going to watch it. We're going to make it an open conversation. We're not going to make a decision. We're going to just let everyone discuss it and see how it evolves and on the best in the business discussing it, and we're going to keep it going. Thanks for watching. (digital music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
John | PERSON | 0.99+ |
Rajiv Ramaswami | PERSON | 0.99+ |
Taiwan | LOCATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Rajiv | PERSON | 0.99+ |
Asia | LOCATION | 0.99+ |
Nutanix | ORGANIZATION | 0.99+ |
Dave Alante | PERSON | 0.99+ |
Telco | ORGANIZATION | 0.99+ |
Lenovo | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
UK | LOCATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
100% | QUANTITY | 0.99+ |
three years | QUANTITY | 0.99+ |
HP | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Supermicro | ORGANIZATION | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
First | QUANTITY | 0.99+ |
less than a month | QUANTITY | 0.99+ |
Broadcom | ORGANIZATION | 0.99+ |
CapEx | ORGANIZATION | 0.99+ |
One | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
CUBE | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.99+ |
each customer | QUANTITY | 0.99+ |
two things | QUANTITY | 0.99+ |
a year | QUANTITY | 0.99+ |
one platform | QUANTITY | 0.99+ |
Bare Metal | ORGANIZATION | 0.98+ |
fourth | QUANTITY | 0.98+ |
one cloud | QUANTITY | 0.98+ |
two transformations | QUANTITY | 0.98+ |
Supercloud | ORGANIZATION | 0.98+ |
agile | TITLE | 0.97+ |
one way | QUANTITY | 0.97+ |
theCUBE | ORGANIZATION | 0.97+ |
ORGANIZATION | 0.97+ | |
FSP | ORGANIZATION | 0.97+ |
first thing | QUANTITY | 0.97+ |
SiliconANGLE | ORGANIZATION | 0.96+ |
supercloud | ORGANIZATION | 0.96+ |
Agile | TITLE | 0.95+ |
a month | QUANTITY | 0.95+ |
third | QUANTITY | 0.95+ |
OpEx | ORGANIZATION | 0.95+ |
One final question | QUANTITY | 0.94+ |
HCI | ORGANIZATION | 0.94+ |
one side | QUANTITY | 0.93+ |
Supercloud22 | ORGANIZATION | 0.91+ |
One way | QUANTITY | 0.9+ |
Hyperconverged Infrastructure | ORGANIZATION | 0.9+ |
big | EVENT | 0.9+ |
one example | QUANTITY | 0.89+ |
Supercloud 22 | ORGANIZATION | 0.87+ |
big waves | EVENT | 0.8+ |
Azure | TITLE | 0.79+ |
Spotlight Track | HPE GreenLake Day 2021
(bright upbeat music) >> Announcer: We are entering an age of insight where data moves freely between environments to work together powerfully, from wherever it lives. A new era driven by next generation cloud services. It's freedom that accelerates innovation and digital transformation, but it's only for those who dare to propel their business toward a new future that pushes beyond the usual barriers. To a place that unites all information under a fluid yet consistent operating model, across all your applications and data. To a place called HPE GreenLake. HPE GreenLake pushes beyond the obstacles and limitations found in today's infrastructure because application entanglements, data gravity, security, compliance, and cost issues simply aren't solved by current cloud options. Instead, HPE GreenLake is the cloud that comes to you, bringing with it, increased agility, broad visibility, and open governance across your entire enterprise. This is digital transformation unlocked, incompatibility solved, data decentralized, and insights amplified. For those thinkers, makers and doers who want to create on the fly scale up or down with a single click, stand up new ideas without risk, and view it all as a single agile system of systems. HPE GreenLake is here and all are invited. >> The definition of cloud is evolving and now clearly comprises hybrid and on-prem cloud. These trends are top of mind for every CIO and the space is heating up as every major vendor has been talking about as-a-Service models and making moves to better accommodate customer needs. HPE was the first to market with its GreenLake brand, and continues to make new announcements designed to bring the cloud experience to far more customers. Come here from HPE and its partners about the momentum that they're seeing with this trend and what actions you can take to stay ahead of the competition in this fast moving market. (bright soft music) Okay, we're with Keith White, Senior Vice President and General Manager for GreenLake at HPE, and George Hope, who's the Worldwide Head of Partner Sales at Hewlett Packard Enterprise. Welcome gentlemen, good to see you. >> Awesome to be here. >> Yeah. Thanks so much. >> You're welcome, Keith, last we spoke, we talked about how you guys were enabling high performance computing workloads to get green-late right for enterprise markets. And you got some news today, which we're going to get to but you guys, you put out a pretty bold position with GreenLake, basically staking a claim if you will, the edge, cloud as-a-Service all in. How are you thinking about its impacts for your customers so far? >> You know, the impact's been amazing and, you know, in essence, I think the pandemic has really brought forward this real need to accelerate our customer's digital transformation, their modernization efforts, and you know, frankly help them solve what was amounting to a bunch of new business problems. And so, you know, this manifests itself in a set of workloads, set of solutions, and across all industries, across all customer types. And as you mentioned, you know GreenLake is really bringing that value to them. It brings the cloud to the customer in their data center, in their colo, or at the edge. And so frankly, being able to do that with that full cloud experience. All is a pay per use, you know, fully consumption-based scenario, all managed for them so they get that as I mentioned, true cloud experience. It's really sort of landing really well with customers and we continue to see accelerated growth. 
We're adding new customers, we're adding new technology. And we're adding a whole new set of partner ecosystem folks as well that we'll talk about. >> Well, you know, it's interesting you mentioned that just cause as a quick aside it's, the definition of cloud is evolving and it's because customers, it's the way customers look at it. It's not just vendor marketing. It's what customers want, that experience across cloud, edge, you know, multiclouds, on-prem. So George, what's your take? Anything you'd add to Keith's response? >> I would, you've heard Antonio Neri say it several times and you probably saying it for yourself. The cloud is an experience, it's not a destination. The digital transformation is pushing new business models and that demands more flexible IT. And the first round of digital transformation focused on a cloud first strategy. For our customers we're looking to get more agility. As Keith mentioned, the next phase of transformation will be characterized by bringing the cloud speed and agility to all apps and data, regardless of where they live, According to IDC, by the end of 2021, 80% of the businesses will have some mechanism in place to shift the cloud centric, infrastructure and apps and twice as fast as before the pandemic. So the pandemic has actually accelerated the impact of the digital divide, specifically, in the small and medium companies which are adapting to technology change even faster and emerging stronger as a result. You know, the analysts agree cloud computing and digitalization will be key differentiators for small and medium business in years to come. And speed and automation will be pivotal as well. And by 2022, at least 30% of the lagging SMBs will accelerate digitalization. But the fair focus will be on internal processes and operations. The digital leaders, however, will differentiate by delivering their customers, a dynamic experience. And with our partner ecosystem, we're helping our customers embrace our as-a-Service vision and stand out wherever they are. on their transformation journey. >> Well, thanks for those stats, I always liked the data. I mean, look, if you're not a digital business today I feel like you're out of business only 'cause.... I'm sure there's some exceptions, but you got to get on the digital bandwagon. I think pre-pandemic, a lot of times people really didn't know what it meant. We know now what it means. Okay, Keith, let's get into the news when we do these things. I love that you guys always have something new to share. What do you have? >> No, you got it. And you know, as we said, the world is hybrid and the world is multicloud. And so, customers are expecting these solutions. And so, we're continuing to really drive up the innovation and we're adding additional cloud services to GreenLake. We just recently went to General AVailability of our MLOps, Machine Learning Operations, and our containers for cloud services along with our virtual desktop which has become very big in a pandemic world where a lot more people are working from home. And then we have shipped our SAP HEC, customer edition, which allows SAP customers to run on their premise whether it's the data center or the colo. And then today we're introducing our new Bare Metal capabilities as well as containers on Bare Metal as a Service, for those folks that are running cloud native applications that don't require any sort of hypervisor. So we're really excited about that. 
And then second, I'd say similar to that HPC as a Service experience we talked about before, where we were bringing HPC down to a broader set of customers. We're expanding the entry point for our private cloud, which is virtual machines, containers, storage, compute type capabilities in workload optimized systems. So again, this is one of the key benefits that HPE brings is it combines all of the best of our hardware, software, third-party software, and our services, and financial services into a package. And we've workload optimized this for small, medium, large and extra-large. So we have a real sort of broader base for our customers to take advantage of and to really get that cloud experience through HPE GreenLake. And, you know, from a partner standpoint we also want to make sure that we continue to make this super easy. So we're adding self-service capabilities we're integrating into our distributors marketplaces through a core set of APIs to make sure that it plugs in for a very smooth customer experience. And this expands our reach to over 100,000 additional value-added resellers. And, you know, we saw just fantastic growth in the channel in Q1, over 118% year over year growth for GreenLake Cloud Services through the channel. And we're continuing to expand, extend and expand our partner ecosystem with additional key partnerships like our colos. The colocation centers are really key. So Equinix, CyrusOne and others that we're working with and I'll let George talk more about. >> Yeah, I wonder if you could pick up on that George. I mean, look, if I'm a partner and and I mean, I see an opportunity here.. Maybe, you know, I made a lot of money in the old days moving iron. But I got to move, I got to pivot my business. You know, COVID's actually, you know, accelerating a lot of those changes, but there's a lot of complexity out there and partners can be critical in helping customers make that journey. What do you see this meaning to partners, George? >> So I completely agree with Keith and through and with our partners we give our customers choice. Right, they don't have to worry about security or cost as they would with public cloud or the hyperscalers. We're driving special initiatives via Cloud28 which we run, which is the world's largest cloud aggregator. And also, in collaboration with our distributors in their marketplaces as Keith mentioned. In addition, customers can leverage our expertise and support of our service provider ecosystem, our SI's, our ISV's, to find the right mix of hybrid IT and decide where each application or workload should be hosted. 'Cause customers are now demanding robust ecosystems, cloud adjacency, and efficient low latency networks. And the modern workload demands, secure, compliant, highly available, and cost optimized environments. And Keith touched on colocation. We're partnering with colocation facilities to provide our customers with the ability to expand bandwidth, reduce latency, and get access to a robust ecosystem of adjacent providers. We touched on Equinix a bit as one of them, but we're partnering with them to enable customers to connect to multiple clouds with private on-demand interconnections from hundreds of data center locations around the globe. We continue to invest in the partner and customer experience, you know, making ourselves easier to do business with. We've now fully integrated partners in GreenLake Central, and could provide their customers end to end support and managing the entire hybrid IT estate. 
And lastly, we're providing partners with dedicated and exclusive enablement opportunities so customers can rely on both HPE and partner experts. And we have a competent team of specialists that can help them transform and differentiate themselves. >> Yeah, so, I'm hearing a theme of simplicity. You know, I talked earlier about this being customer-driven. To me what the customer wants is they want to come in, they want simple, like you mentioned, self-serve. I don't care if it's on-prem, in the cloud, across clouds, at the edge, abstract, all that complexity away from me. Make it simple to do, not only the technology to work, you figure out where the workload should run and let the metadata decide and that's a bold vision. And then, make it easy to do business. Let me buy as-a-Service if that's the way I want to consume. And partners are all about, you know, reducing friction and driving that. So, anyway guys, final thoughts, maybe Keith, you can close it out here and maybe George can call it timeout. >> Yeah, you summed it up really nice. You know, we're excited to continue to provide what we view as the largest and most flexible hybrid cloud for our customers' apps, data, workloads, and solutions. And really being that leading on-prem solution to meet our customer's needs. At the same time, we're going to continue to innovate and our ears are wide open, and we're listening to our customers on what their needs are, what their requirements are. So we're going to expand the use cases, expand the solution sets that we provide in these workload optimized offerings to a very very broad set of customers as they drive forward with that digital transformation and modernization efforts. >> Right, George, any final thoughts? >> Yeah, I would say, you know, with our partners we work as one team and continue to hone our skills and embrace our competence. We're looking to help them evolve their businesses and thrive, and we're here to help now more than ever. So, you know, please reach out to our team and our partners and we can show you where we've already been successful together. >> That's great, we're seeing the expanding GreenLake portfolio, partners key part of it. We're seeing new tools for them and then this ecosystem evolution and build out and expansion. Guys, thanks so much. >> Yeah, you bet, thank you. >> Thank you, appreciate it. >> You're welcome. (bright soft music) >> Okay, we're here with Jo Peterson the VP of Cloud & Security at Clarify360. Hello, Jo, welcome to theCUBE. >> Hello. >> Great to see you. >> Thanks for having me. >> You're welcome, all right, let's get right into it. How do you think about cloud where we are today in 2021? The definitions evolve, but where do you see it today and where do you see it going? >> Well, that's such an interesting question and is so relevant because the labels are disappearing. So over the last 10 years, we've sort of found ourselves defining whether an environment was public or whether it was private or whether it was hybrid. Here's the deal, cloud is infrastructure and infrastructure is cloud. So at the end of the day cloud in whatever form it's taking is a platform, and ultimately, this enablement tool for the business. Customers are consuming cloud in the best way that works for their businesses. So let's also point out that cloud is not a destination, it's this journey. And clients are finding themselves at different places on that road. And sometimes they need help getting to the next milestone. 
>> Right, and they're really looking for that consistent experience. Well, what are the big waves and trends that you're seeing around cloud out there in the marketplace? >> So I think that this hybrid reality is happening in most organizations. Their actual IT portfolios include a mix of on-premise and cloud infrastructure, and we're seeing this blurred line happening between the public cloud and the traditional data center. Customers want a bridge that easily connects one environment to the other environment, and they want end-to-end visibility. Customers are becoming more intentional and strategic about their cloud roadmaps. So some of them are intentionally and strategically selecting hybrid environments because they feel that it affords them more control, cost, balance, comfort level around their security. In a way, cloud itself is becoming borderless. The major tech providers are extending their platforms in an infrastructure agnostic manner and that's to work across hybrid environments, whether they be hosted in the data center, whether it includes multiple cloud providers. As cloud matures, workload environments fit is becoming more of a priority. So forward thinking where the organizations are matching workloads to the best environment. And it's sort of application rationalization on this case by case basis and it really makes sense. >> Yeah, it does makes sense. Okay, well, let's talk about HPE GreenLake. They just announced some new solutions. What do you think it means for customers? >> I think that HPE has stepped up. They've listened to not only their customers but their partners. Customers want consumable infrastructure, they've made that really clear. And HPE has expanded the cloud service portfolio for clients. They're offering more choices to not only enterprise customers but they're expanding that offering to attract this mid-market client base. And they provided additional tools for partners to make selling GreenLake easier. This is all helping to drive channel sales. >> Yeah, so better granularity, just so it increases the candidates, better optionality for customers. And this thing is evolving pretty quickly. We're seeing a number of customers that we talked to interested in this model, trying to understand it better and ultimately, I think they're going to really lean in hard. Jo, I wonder if you could maybe think about or share with us which companies are, I got to say, getting it right? And I'm really interested in the partner piece, because if you think about the partner business, it's really, it's changing a lot, right? It's gone from this notion of moving boxes and there was a lot of money to be made over the decades in doing that, but they have to now become value-add suppliers and really around cloud services. And in the early days of cloud, I think the channel was a little bit freaked out, saying, uh-oh, they're going to cut out the middleman. But what's actually happened is those smart agile partners are adding substantial value, they've got deep relationships with customers and they're serving as really trusted advisors and executors of cloud strategies. What do you see happening in the partner community? >> Well, I think it's been a learning curve and everything that you said was spot on. It's a two way street, right? In order for VARs to sell residual services, monthly recurring services, there has to have been some incentive to do that and HPE really got it right. Because they, again listened to that partner community, and they said, you know what? 
We've got to incentivize these guys to start selling this way. This is a partnership and we expect it to be a partnership. And the tech companies that are getting right are doing that same sort of thing, they're figuring out ways to make it palatable to that VAR, to help them along that journey. They're giving them tools, they're giving them self-serve tools, they're incentivizing them financially to make that shift. That's what's going to matter. >> Well, that's a key point you're making, I mean, the financial incentives, that's new and different. Paying, you know, incentivizing for as-a-Service models versus again, moving hardware and paying for, you know, installing iron. That's a shift in mindset, isn't it? >> It definitely is. And HPE, I think is getting it right because I didn't notice but I learned this, 70% of their annual sales are actually transacted through their channel. And they've seen this 116% increase in HPE GreenLake orders in Q1, from partners. So what they're doing is working. >> Yeah, I think you're right. And you know, the partner channel it becomes super critical. It's funny, Jo, I mean, again, in the early days of cloud, the channel was feeling like they were going to get disrupted. I don't know about you, but I mean, we've both been analysts for awhile and the more things get simple, the more they get complicated, right? I mean the consumerization of IT, the cloud, swipe your credit card, but actually applying that to your business is not easy. And so, I see that as great opportunities for the channel. Give you the last word. >> Absolutely, and what's going to matter is the tech companies that step up and realize we've got this chance, this opportunity to build that bridge and provide visibility, end-to-end visibility for clients. That's what going to matter. >> Yeah, I like how you're talking about that bridge, because that's what everybody wants. They want that bridge from on-prem to the public cloud, across clouds, going to to be moving out to the edge. And that is to your point, a journey that's going to evolve over the better part of this coming decade. Jo, great to see you. Thanks so much for coming on theCUBE today. >> Thanks for having me. (bright soft music) >> Okay, now we're going to into the GreenLake power panel to talk about the cloud landscape, hybrid cloud, and how the partner ecosystem and customers are thinking about cloud, hybrid cloud as a Service and of course, GreenLake. And with me are C.R. Howdyshell, President of Advizex. Ron Nemecek, who's the Business Alliance Manager at CBTS. Harry Zarek is President of Compugen. And Benjamin Klay is VP of Sales and Alliances at Arrow Electronics. Great to see you guys, thanks so much for coming on theCUBE. >> Thanks for having us. >> Good to be here. >> Okay, here's the deal. So I'm going to ask you guys each to introduce yourselves and your companies, add a little color to my brief intro, and then answer the following question. How do you and your customers think about hybrid cloud? And think about it in the context of where we are today and where we're going, not just the snapshot but where we are today and where we're going. C.R., why don't you start please? >> Sure, thanks a lot, Dave, appreciate it. And again, C.R. Howdyshell, President of Advizex. I've been with the company for 18 years, the last four years as president. So had the great opportunity here to lead a 45 year old company with a very strong brand and great culture. 
As it relates to Advizex and where we're headed to with hybrid cloud is it's a journey. So we're excited to be leading that journey for the company as well as HPE. We're very excited about where HPE is going with GreenLake. We believe it's a very strong solution when it comes to hybrid cloud. Have been an HPE partner since, well since 1980. So for 40 years, it's our longest standing OEM relationship. And we're really excited about where HPE is going with GreenLake. From a hybrid cloud perspective, we feel like we've been doing the hybrid cloud solutions, the past few years with everything that we've focused on from a VMware perspective. But now with where HPE is going, we think, probably changing the game. And it really comes down to giving customers that cloud experience with the on-prem solution with GreenLake. And we've had great response for customers and we think we're going to continue to see that kind of increased activity and reception. >> Great, thank you C.R., and yeah, I totally agree. It is a journey and we've seen it really come a long way in the last decade. Ron, I wonder if you could kickoff your little first intro there please. >> Sure Dave, thanks for having me today and it's a pleasure being here with all of you. My name is Ron Nemecek, I'm a Business Alliance manager at CBTS. In my role, I'm responsible for our HPE GreenLake relationship globally. I've enjoyed a 33 year career in the IT industry. I'm thankful for the opportunity to serve in multiple functional and senior leadership roles that have helped me gather a great deal of education and experience that could be used to aid our customers with their evolving needs, for business outcomes to best position them for sustainable and long-term success. I'm honored to be part of the CBTS and OnX Canada organization. CBTS stands for Consult Build Transform and Support. We have a 35 year relationship with HPE. We're a platinum and inner circle partner. We're headquartered in Cincinnati, Ohio. We service 3000 customers generating over a billion dollars in revenue and we have over 2000 associates across the globe. Our focus is partnering with our customers to deliver innovative solutions and business results through thought leadership. We drive this innovation via our team of the best and brightest technology professionals in the industry that have secured over 2,800 technical certifications, 260 specifically with HPE. And in our hybrid cloud business, we have clearly found that technology, new market demands for instant responses and experiences, evolving economic considerations with detailed financial evaluation, and of course the global pandemic, have challenged each of our customers across all industries to develop an optimal cloud strategy. We now play an enhanced strategic role for our customers as their technology advisor and their guide to the right mix of cloud experiences that will maximize their organizational success with predictable outcomes. Our conversations have really moved from product roadmaps and speeds and feeds to return on investment, return on capital, and financial statements, ratios, and metrics. We collaborate regularly with our customers at all levels and all departments to find an effective comprehensive cloud strategy for their workloads and applications ensuring proper alignment and cost with financial return. >> Great, thank you, Ron. Yeah, today it's all about the business value. Harry, please. >> Hi Dave, thanks for the opportunity and greetings from the Great White North. 
We're a Canadian-based company headquartered in Toronto with offices across the country. We've been in the tech industry for a very long time. We're what we would call a solution provider. How hard for my mother to understand what that means but what our goal is to help our customers realize the business value of their technology investments. Just to give you an example of what it is we try and do. We just finished a build out of a new networking endpoint and data center technology for a brand new hospital. It's now being mobilized for COVID high-risk patients. So talk about our all being in an essential industry, providing essential services across the whole spectrum of technology. Now, in terms of what's happening in the marketplace, our customers are confused. No question about it. They hear about cloud, I mean, cloud first, and everyone goes to the cloud, but the reality is there's lots of technology, lots of applications that actually still have to run on premises for a whole bunch of reasons. And what customers want is solid senior serious advice as to how they leverage what they already have in terms of their existing infrastructure, but modernize it, update it, so it looks and feels a lot like the cloud. But they have the security, they have the protection that they need to have for reasons that are dependent on their industry and business to allow them to run on-prem. And so, the GreenLake philosophy is perfect. That allows customers to actually have one foot in the cloud, one foot in their traditional data center but modernize it so it actually looks like one enterprise entity. And it's that kind of flexibility that gives us an opportunity collectively, ourselves, our partners, HPE, to really demonstrate that we understand how to optimize the use of technology across all of the business applications they need to run. >> You know Harry, it's interesting about what you said is, the cloud it is kind of chaotic my word, not yours. But there is a lot of confusion out there, I mean, what's cloud, right? Is it public cloud, is it private cloud, the hybrid cloud? Now, it's the edge and of course the answer is all of the above. Ben, what's your perspective on all this? >> From a cloud perspective, you know, I think as an industry, I think we we've all accepted that public cloud is not necessarily going to win the day and we're in fact, in a hybrid world. There's certainly been some commentary and press that was sort of validate that. Not that it necessarily needs any validation but I think is the linkages between on-prem and cloud-based services have increased. It's paved the way for customers more effectively, deploy hybrid solutions in in the model that they want or that they desire. You know, Harry was commenting on that a moment ago. As the trend continues, it becomes much easier for solution providers and service providers to drive their services initiatives, you know, in particular managed services. >> From an Arrow perspective is we think about how we can help scale in particular from a GreenLake perspective. We've got the ability to stand up some cloud capabilities through our ArrowSphere platform that can really help customers adopt GreenLake and to benefit from some alliances opportunities, as well. And I'll talk more about that as we go through. >> And Ben, I didn't mean to squeeze you on Arrow. I mean, Arrow has been around longer than computers. I mean, if you Google the history of Arrow it'll blow your mind, but give us a little quick commercial. >> Yeah, absolutely. 
So I've been with Arrow for about 20 years. I've got responsibility for Alliance organization in North America, We're a global value added distribution, business consulting and channel enablement company. And we bring scope, scale and and expertise as it relates to the IT industry. I love the fast pace that comes with the market that we're all in. And I love helping customers and suppliers both, be positioned for long-term success. And you know, the subject matter here today is just a great example of that. So I'm happy to be here and look forward to the discussion. >> All right, we got some good brain power in the room. Let's cut right to the chase. Ron, where's the pain? What are the main problems that CBTS I love what it stands for, Consult Build Transform and Support. What's the main pain point that customers are asking you to solve when it comes to their cloud strategies? >> Sure, Dave. Our customers' concerns and associated risks come from the market demands to deliver their products, services, and experiences instantaneously. And then the challenge is how do they meet those demands because they have aging infrastructure, processes, and fiscal constraints. Our customers really need us now more than ever to be excellent listeners so we can collaborate on an effective map with the strategic placement of workloads and applications in that spectrum of cloud experiences while managing their costs, and of course, mitigating risks to their business. This collaboration with our customers, often identify significant costs that have to be evaluated, justified or eliminated. We find significant development, migration, and egress charges in their current public cloud experience, coupled with significant over provisioning, maintenance, operational, and stranded asset costs in their on-premise infrastructure environment. When we look at all these costs holistically, through our customized workshops and assessments, we can identify the optimal cloud experience for the respective workloads and applications. Through our partnership with HPE and the availability of the HPE GreenLake solutions, our customers now have a choice to deliver SLA's, economics, and business outcomes for their workloads and applications that best reside on-premise in a private cloud and have that experience. This is a rock solid solution that eliminates, the development costs that they experience and the egress charges that are associated with the public cloud while utilizing HPE GreenLake to eliminate over provisioning costs and the maintenance costs on aging infrastructure hardware. Lastly, our customers only have to pay for actual infrastructure usage with no upfront capital expense. And now, that achieves true utilization to cost economics, you know, with HPE GreenLake solutions from CBTS. >> I love focus on the business case, 'cause it's measurable and it's sort of follow the money. That's where the opportunity is. Okay, C.R., so question for you. Thinking about Advizex customers, how are they, are they leaning into GreenLake? What are they telling you is the business impact when they experience GreenLake? >> Well, I think it goes back to what Ron was talking about. We had to solve the business challenges first and so far, the reception's been positive. When I say that is customers are open. Everybody wants to, the C-suite wants to hear about cloud and hybrid cloud fits. 
But what we hear and what we're seeing from our customers is we're seeing more adoption from customers that it may be their first foot in, if you will, but as important, we're able to share other customers with our potentially new clients that say, what's the first thing that happens with regard to GreenLake? Well, number one, it works. It works as advertised and as-a-Service, that's a big step. There are a lot of people out there dabbling today but when you can say we have a proven solution it's working in our environment today, that's key. I think the second thing is,, is flexibility. You know, when customers are looking for this hybrid solution, you got to be flexible for, again, I think Ron said (indistinct). You don't have a big capital outlay but also what customers want to be able to do is we want to build for growth but we don't want to pay for it. So we'll pay as we grow not as we have to use, as we used to do, it was upfront, the capital expenditure. Now we'll just pay as we grow, and that really facilitates in another great example as you'll hear from a customer, this afternoon. But you'll hear where one of the biggest benefits they just acquired a $570 million company and their integration is going to be very seamless because of their investment in GreenLake. They're looking at the flexibility to add to GreenLake as a big opportunity to integrate for acquisitions. And finally is really, we see, it really brings the cloud experience and as-a-Service to our customers. And with HPE GreenLake, it brings the best of breed. So it's not just what HPE has to offer. When you look at Hyperconverged, they have Nutanix, they have Cohesity. So, I really believe it brings best of breeds. So, to net it out and close it out with our customers, thus far, the customer experience has been exceptional. I mean, with GreenLake Central, as interface, customers have had a lot of success. We just had our first customer from about a year and a half ago just reopened, it was a highly competitive situation, but they just said, look, it's proven, it works, and it gives us that cloud experience so. Had a lot of great success thus far and looking forward to more. >> Thank you, so Harry, I want to pick up on something C.R. said and get your perspectives. So when I talk to the C-suite, they do all want to hear about, you know, cloud, they have a cloud agenda. And what they tell me is it's not just about their IT transformation. They want that but they also want to transform their business. So I wonder if you could talk, Harry, about Compugen's perspective on the potential business impact of GreenLake. And also, I'm interested in how you guys are thinking about workloads, how to manage work, you know, how to cost optimize in IT, but also, the business value that comes out of that capability. >> Yeah, so Dave, you know if you were to talk to CFO and I have the good fortune to talk to lots of CFOs, they want to pay the costs when they generate the revenue. They don't want to have all the costs upfront and then wait for the revenue to come through. A good example of where that's happening right now is you know, related to the pandemic, employees that used to work at the office have now moved to working from home. And now, they have to connect remotely to run the same application. So use this thing called VDI, virtual interfacing to allow them to connect to the applications that they need to run in the office. 
I don't want to get into too much detail but to be able to support that from an an at-home environment, they needed to buy a lot more computing capacity to handle this. Now, there's an expectation that hopefully six months from now, maybe sooner than that, people will start returning to the office. They may not need that capacity so they can turn down on the costs. And so, the idea of having the capacity available when you need it, but then turning it off when you don't need it, is really a benefit of the variable cost model. Another example that I would use is one in new development. If a customer is going to implement a new, let's say, line of business application. SAP is very very popular. You know, it actually, unfortunately, takes six months to two years to actually get that application set up, installed, validated, tested, then moves through production. You know, what used to happen before? They would buy all that capacity upfront, and it would basically sit there for two years, and then when they finally went to full production, then they were really value out of that investment. But they actually lost a couple of years of technology, literally sitting almost sidle. And so, from a CFO perspective, his ability to support the development of those applications as he scales it, perfect. GreenLake is the ideal solution that allows him to do that. >> You know, technology has saved businesses in this pandemic. There's no question about it. When Harry was just talking about with regard to VDI, you think about that, there's the dialing up and dialing down piece which is awesome from an IT perspective. And then the business impact there is the productivity of the end users. And most C-suite executives I've talked to said productivity actually went up during COVID with work from home, which is kind of astounding if you think about it. Ben, we said Arrow's been around for a long, long time. Certainly, before all of us were born and it's gone through many many industry transitions during our lifetimes. How does Arrow and how do your partners think about building cloud experiences and where does GreenLake fit in from your perspective? >> Great question. So from an Arrow perspective, when you think about cloud experience in of course us taking a view as a distribution partner, we want to be able to provide scale and efficiency to our network of partners. So we do that through our ArrowSphere platform. Just a bit of, you know, a bit of a commercial. I mean, you get single quote, single bill, auto provision, multi supplier, if you will, subscription management, utilization reporting from the platform itself. So if we pivot that directly to HPE, you're going to get a bit of a scoop here, Dave. And we're excited today to have GreenLake live in our platform available for our partner community to consume. In particular, the Swift solutions that HPE has announced so we're very excited to share that today. Maybe a little bit more on GreenLake. I think at this point in time, that it's differentiated in a sense that, if you think about some of the other offerings in the market today and further with having the the solutions themselves available in ArrowSphere. So, I would say, that we identify the uniqueness and quickly partner with HPE to work with our ArrowSphere platform. One other sort of unique thing is, when you think about platform itself, you've got to give a consistent experience. 
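Harry's point about paying for capacity only when it generates revenue can be made concrete with a toy cost comparison. The figures below are entirely hypothetical illustrations, not HPE GreenLake pricing; they simply contrast buying peak capacity up front with a pay-as-you-grow model over the same ramp-up period.

```python
# Toy comparison of upfront capacity purchase vs. pay-as-you-grow consumption.
# All figures are hypothetical illustrations, not HPE GreenLake pricing.

months = 24
full_capacity_cost = 100_000       # assumed upfront price for peak capacity
unit_cost_per_month = 5_500        # assumed monthly rate for the same peak capacity
                                   # (consumption models typically carry a premium)

# Assume utilization ramps linearly from 10% to 100% of peak over two years.
utilization = [0.10 + 0.90 * m / (months - 1) for m in range(months)]

upfront_total = full_capacity_cost                              # paid on day one, regardless of use
payg_total = sum(u * unit_cost_per_month for u in utilization)  # paid as capacity is actually used

print(f"Upfront capex:        ${upfront_total:,.0f}")
print(f"Pay-as-you-grow opex: ${payg_total:,.0f}")
```

With a ramp like this, the consumption model spends less over the ramp even at a higher effective unit rate, and the spend tracks utilization instead of landing on day one, which is the CFO argument being made here.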
The different geographies around the world so, you know, we're available in North of 20 countries, there's thousands of resellers and transacting on the platform on a regular basis. And frankly, hundreds of thousands end customers. that are leveraging today. So that creates an opportunity for both Arrow, HPE and our partner community. So we're excited. >> You know, I just want to open it up. We don't have much time left, but thoughts on differentiation. Some people ask me, okay, what's really different about HPE and GreenLake? These others, you know, are doing things with as-a-Service. To me, I always say cultural, it starts from the top with Antonio, and it's like the company's all in. But I wonder from your perspectives, 'cause you guys are hands on. Are there other differentiable factors that you would point to? Let me just open that up to the group. >> Yeah, if I could make a comment. GreenLake is really just the latest invocation of the as-a-Service model. And what does that mean? What that actually means is you have a continuous ongoing relationship with the customer. It's not a sell and forget. Not that we ever forget about customers but there are highlights. Customer buys, it gets installed, and then for two or three years you may have an occasional engagement with them but it's not continuous. When you move to our GreenLake model, you're actually helping them manage that. You are in the core, in the heart of their business. No better place to be if you want to be sticky and you want to be relevant and you want to be always there for them. >> You know, I wonder if somebody else could add to it in your remarks. From your perspective as a partner, 'cause you know, hey, a lot of people made a lot of money selling boxes, but those days are pretty much gone. I mean, you have to transform into a services mindset, but other thoughts? >> I think to add to that Dave. I think Harry's right on. The way he positioned it it's exactly where he did own the customer. I think even another step back for us is, we're able to have the business conversation without leading with what you just said. You don't have to leave with a storage solution, you don't have to lead with compute. You know, you can really have step back, have a business conversation. And we've done that where you don't even bring up HPE GreenLake until you get to the point where the customer says, so you can give me an on-prem cloud solution that gives me scalability, flexibility, all the things you're talking about. How does that work? Then you bring up, it's all through this HPE GreenLake tool. And it really gives you the ability to have a business conversation. And you're solving the business problems versus trying to have a technology conversation. And to me, that's clear differentiation for HPE GreenLake. >> All right guys, C.R., Ron, Harry, Ben. Great discussion, thank you so much for coming on the program. Really appreciate it. >> Thanks for having us, Dave. >> Appreciate it Dave. >> All right, keep it right there for more great content at GreenLake Day, be right back. (bright soft music) (upbeat music) (upbeat electronic music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave | PERSON | 0.99+ |
George | PERSON | 0.99+ |
Jo Peterson | PERSON | 0.99+ |
CBTS | ORGANIZATION | 0.99+ |
Keith | PERSON | 0.99+ |
Ron Nemecek | PERSON | 0.99+ |
Ron | PERSON | 0.99+ |
Harry | PERSON | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
GreenLake | ORGANIZATION | 0.99+ |
Ben | PERSON | 0.99+ |
Toronto | LOCATION | 0.99+ |
Harry Zarek | PERSON | 0.99+ |
Keith White | PERSON | 0.99+ |
OnX | ORGANIZATION | 0.99+ |
George Hope | PERSON | 0.99+ |
Benjamin Klay | PERSON | 0.99+ |
two years | QUANTITY | 0.99+ |
C.R. Howdyshell | PERSON | 0.99+ |
18 years | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
$570 million | QUANTITY | 0.99+ |
Equinix | ORGANIZATION | 0.99+ |
six months | QUANTITY | 0.99+ |
Advizex | ORGANIZATION | 0.99+ |
one foot | QUANTITY | 0.99+ |
116% | QUANTITY | 0.99+ |
2021 | DATE | 0.99+ |
80% | QUANTITY | 0.99+ |
Cincinnati, Ohio | LOCATION | 0.99+ |
70% | QUANTITY | 0.99+ |
35 year | QUANTITY | 0.99+ |
Clarify360 | ORGANIZATION | 0.99+ |
three years | QUANTITY | 0.99+ |
Hewlett Packard Enterprise | ORGANIZATION | 0.99+ |
40 years | QUANTITY | 0.99+ |
2022 | DATE | 0.99+ |
33 year | QUANTITY | 0.99+ |
Arrow | ORGANIZATION | 0.99+ |
Arrow Electronics | ORGANIZATION | 0.99+ |
first round | QUANTITY | 0.99+ |
Geor | PERSON | 0.99+ |
one | QUANTITY | 0.99+ |
Compugen | ORGANIZATION | 0.99+ |
HPE GreenLake | TITLE | 0.99+ |
Reliance Jio: OpenStack for Mobile Telecom Services
>>Hi, everyone. My name is Mayank, Mayank Kapoor. I work with Jio, Reliance Jio, in India. We call ourselves Jio Platforms now, and we've been in the news recently - we've raised a lot of funding from some of the largest tech companies in the world. I'm here to talk about Jio's cloud journey and our Mirantis partnership. I've titled it the story of an underdog becoming the largest telecom company in India within four years, which is really special, and we were, of course, helped by the cloud. So, a quick disclaimer: the content shared here is only for informational purposes and only for this event. If you want to share it outside, especially on social media platforms, we need permission from Jio Platforms Limited. Okay, a quick intro about myself. I am a VP of engineering at Jio. I lead the Cloud Services and Platforms team within Jio, and I've been at Jio since the beginning, since it started, and I've seen our cloud footprint grow from a handful of bare metals to now eight large application data centers across three regions in India. We'll talk about how we got here. All right, let me give you an introduction to Jio, right, on how we became the largest telecom company in India within four years, from 0 to 400 million subscribers. There are a lot of events that defined Jio, and they will give you an understanding of how we do things and what we did to overcome massive problems in India. So the slide I want to talk to is this one, and the headline I've given is that Jio is the fastest growing tech company in the world, which is not an overstatement - it's actually quite literally true, because very few companies in the world have grown from zero to 400 million paying subscribers within four years. I consider Jio's growth in three phases, which I have shown on top. The first phase we'll talk about is how Jio grew in the smartphone market in India, and what we did to really disrupt the telecom space in India in that market. Then we'll talk about the feature phone phase and how Jio grew in the feature phone market in India, and then we'll talk about what we're doing now, which we call the Jio Platforms phase. Right. So Jio is by default a 4G LTE network. There's no 2G or 3G network that Jio has - it's a state-of-the-art 4G LTE, Voice over LTE network, and because it was designed fresh, without any 2G and 3G legacy technologies, there were also a lot of challenges for Jio when we were starting up. One of the main challenges was that the smartphones being sold in India when Jio was launching, in 2016, did not have the Voice over LTE chipset embedded, because that chipset is far costlier to embed in smartphones and India is a very price-sensitive market. So none of the manufacturers were embedding the 4G VoLTE chipset in their smartphones. But Jio is only a VoLTE network - voice over the LTE network. So we faced a massive problem where we said, look, there are no smartphones that can support Jio, so how will we grow Jio? In order to solve that problem, we launched our own brand of smartphones called the LYF smartphones, and those phones were really high-value devices. They were $50, and for $50, at that time, you got 4 GB of storage space, a nice big four-inch display, dual cameras, and, most importantly, they had VoLTE chipsets embedded in them.
Right? And that got us our initial customers, the launch customers, when we launched. But more importantly, what that forced the other OEMs to do is that they also had to launch similar, competing smartphones with VoLTE chipsets embedded, in the same price range. So within a few months, 3 to 4 months, all the other OEMs, all the other smartphone manufacturers - the Samsungs, the Micromaxes (Micromax is an Indian brand) - they all had VoLTE smartphones out in the market, right? And I think that was one key step we took, launching our own brand of smartphones, LYF, that helped us overcome the problem that no smartphone in India had VoLTE chipsets. And then, when we were launching, there were about 13 telecom companies in India. It was a very crowded space, and in order to gain a foothold in that market, we really made a few decisions, a few key product announcements, that disrupted this entire industry. So Jio is by default a 4G LTE network - an all-IP network, Internet Protocol in everything. It's an all-data network, and everything from voice to data to Internet traffic goes over Internet Protocol, and the cost to carry voice on our network is very low, right? The bandwidth voice consumes is very low in the entire LTE band. So what we did, in order to gain a foothold in the market, was make voice completely free. We said you will not pay anything for voice, and across India we will not charge any roaming charges. So we made voice completely free and we offered the lowest data rates in the world. We could do that because we had the largest capacity to carry data in India of all the telecom operators. And these data rates were unheard of in the world: when we launched, we offered a $2 per month or $3 per month plan with unlimited data - you could consume 10 gigabytes of data a day if you wanted to, and some of our subscribers do. So that's the first phase of our growth, in smartphones, and that really disrupted the market. We hit 100 million subscribers in 170 days, which was very, very fast. And then, after the smartphone phase, we found that India still had 500 million feature phones, and in order to grow in that market, we launched our own phone, the JioPhone, and we made it free. So if you took a Jio subscription and you stayed with us for three years, we would make this phone free for you - refund the initial deposit that you paid for the phone. This phone also had quite a few innovations tailored for the Indian market. It had all of our digital services for free, which I will talk about soon, and, for example, you could take a cable - an RCA or HDMI cable - plug it into the JioPhone, and watch TV on your big-screen TV from the JioPhone. You didn't need a separate cable subscription to watch TV, right? So that really helped us grow, and the JioPhone is now the largest-selling feature phone in India - there are about 100 million feature phones in India now. So now we're in what I call the Jio Platforms phase. We're growing our JioFiber fiber-to-the-home and fiber-to-the-office space, we've also launched our new commerce and e-commerce initiatives, and we're steadily building platforms that other companies can leverage, that other companies can use, in the Jio cloud. Right?
So this is how a small startup - well, not a small startup, but a startup nonetheless - reached 400 million subscribers within four years, the fastest growing tech company in the world. Next, Jio also helped a systemic change in India, and this is massive. A lot of startups are building on this India Stack, as people call it, and I consider this India Stack to be made up of three things, and the acronym I use is the JAM Trinity, right. So, in India, systemic change happened recently because the Indian government made bank accounts free for all one billion Indians - there were no service charges to store money in bank accounts. These are called the Jan Dhan bank accounts; that's the J of JAM. Then, India is one of the few countries in the world to have a digital biometric identity, Aadhaar, which can be used to verify anyone online, which is huge. So you can simply go online and say, I am Mayank Kapoor, and verify that it is indeed me who is doing this transaction. This is the A in JAM. And the last M stands for mobiles, which was helped by Jio mobile Internet. As a plus, there is also something called UPI, the Unified Payments Interface. This was launched by the Indian government, and with it you can carry out digital transactions for free. You can transfer money from one person to another essentially for free, for no fee - so I can transfer one rupee, even a single Indian rupee, to my friend without paying any charges. That is huge, right? So you have a country now with a billion people who have bank accounts, money in the bank, who you can verify online, and who can pay online without any problems through their mobile connections, helped by Jio. So suddenly our market, our Internet market, exploded from a few million users to now 500 to 600 million mobile Internet users. That, I think, was a massive systemic change that happened in India. There are some really large numbers for this India Stack: there were 1.6 billion UPI transactions in the last month alone, which is phenomenal. So next, what is the impact of Jio in India? Before Jio started, we were 155th in the world in terms of mobile broadband data consumption. But after Jio, India went from 155th to first in the world in terms of broadband data consumption, largely consumed on mobile devices - we are a mobile-first country, right? We have a habit of skipping technology generations, so we skipped fixed-line broadband and basically consume the Internet on our mobile phones. On average, Jio subscribers consume 12 gigabytes of data per month, which is one of the highest rates in the world. So Jio had a huge role to play in making India the number one country in terms of broadband data consumption, and Jio is responsible for quite a few industry firsts in the telecom space, and in fact in the India space, I would say. Before Jio, to get a SIM card you had to fill out a physical paper form. It used to go to a local distributor, and that local distributor used to check that the form was filled in correctly for your SIM card, then it used to go to the head office, and everything took about 48 hours or so to get your SIM card, and sometimes there were problems there as well. With Aadhaar biometric authentication, we enabled - India enabled - something called eKYC, electronic Know Your Customer.
We took a fingerprint scan at our point of sale, the Reliance Digital stores, and within a few seconds we could verify that the person is indeed Mayank, right, buying the SIM card, electronically, and we activated the SIM card within 15 minutes. That was a massive deal for our growth initially, to onboard 100 million customers within 170 days. We couldn't have done it without eKYC. That was a massive deal for us, and it is huge for any company starting a business or a startup in India. We also made voice free, with no roaming charges, and offered the lowest data rates in the world. Plus, we gave a full suite of cloud services for free to all Jio customers. For example, we give JioTV essentially for free, which people, when we were launching, told us no one would use, because Indians like watching TV in the living room, with the family, on a big-screen television. But when we actually launched, we found that JioTV is one of our most used apps - it has something like 70 to 80 million monthly active users - and now we've basically been changing culture in India, where culture is on demand. You can watch TV on the go, pause it, and resume whenever you have some free time. So it really changed culture in India, and we help people live a digital life online. So that was massive. Now I'd like to talk about our cloud journey and our Mirantis partnership. We've been partners since 2014, since the beginning. Jio has been using OpenStack since 2014, when we started with a 14-node cluster - one production environment, right? That was what I call the first wave of our cloud, where we were just understanding OpenStack, understanding the capabilities, understanding what it could do. Now we're in our second wave, where we're at about 4,000 bare metal servers in our OpenStack cloud across multiple regions, and that's around 100,000 CPU cores. So it's one of the bigger clouds in the world, I would say, and almost all teams within Jio are leveraging the cloud, and soon I think we're going to hit about 10,000 bare metals in our cloud, which is massive. Just to give you a sense of the scale of our network and data center footprint: our network infrastructure is about 30 network data centers that carry just network traffic across India, and we're at about eight application data centers across three regions. A data center is like a five-story building filled with servers. So we're talking really significant scale in India. And we had to do this because, when we were launching, there was a government regulation - TRAI, the Telecom Regulatory Authority of India, mandates that any telecom company has to store customer data inside India - and none of the other cloud providers were big enough to host our clouds. So we built all this infrastructure ourselves, and we're still growing. Next, I'd love to show you how we've grown together with Mirantis. We started in 2014 with the Fuel deployment pipelines, and then we went on to newer deployment pipelines as our cloud started growing. We started understanding the clouds, and we picked up MCP, which has really been a game changer for us in automation. And now we are on the latest release, MCP 2019.2, on OpenStack Queens, to which we've just upgraded all of our clouds over the last couple of months, 2 to 3 months.
So we've done about nine production clouds, and there are about 50 internal teams consuming cloud, which we call our tenants, right. We have OpenStack clouds and we have Kubernetes clusters running on top of OpenStack. There are several production-grade workloads that run on this cloud. The JioPhone, for example, runs on our private cloud. JioCloud, which is a backup service like Google Drive plus a collaboration service, runs out of our cloud. Jio has JioGST, which is a tax filing system for small and medium enterprises, and our retail POS service; all of these production services run on our private clouds. We're also empaneled with the government of India to provide cloud services to the government, to any state department that needs cloud services. So we were empaneled by MeitY, right, in their cloud initiative. And our clouds are also ISO 20000-1 certified for software processes and ISO 27001 and ISO 27017/18 certified for security processes. Our data centers are also TIA-942 certified. So significant effort and investment have gone into these data centers. Next, this is where I think we've really valued the partnership with Mirantis. Mirantis has trained us on using the concepts of GitOps and infra as code, right, and automated deployments and the toolchain that comes with the MCP Mirantis product. So one of the key things that has happened from a couple of years ago to today is that the time to deploy a new 100-node production cloud has decreased for us from about 55 days in 2015 to about five days now, after the bare metals are racked and stacked and the physical network is configured, right? So after that, our automated pipelines can deploy a 100-node cloud in five days flat, which is a massive deal for a company that is adding bare metals to its infrastructure this fast, right? It helps us utilize our investment, our assets, really well. The time it takes to deploy a cloud control plane for us is about 19 hours, it takes us two hours to deploy a compute rack, and it takes us three hours to deploy a storage rack. And we really leverage the Reclass model of MCP. We've configured the Reclass model to suit almost every type of cloud that we have, right, and we've kept it fairly generic. It can be tailored to deploy any type of cloud, any type of storage node, or any type of compute node. And it just helps us automate our deployments by putting every configuration, everything that we have, into Git and using infrastructure as code, right? Plus, MCP also comes with pipelines that help us run automated tests, automated validation pipelines, on our cloud. We also have Tempest pipelines running every few hours, every three hours if I recall correctly, which run integration tests on our clouds to make sure the clouds are running properly, right, and that is also automated. The Reclass model and the pipelines help us automate day 2 operations and changes as well. There are very few Sev 1 incidents now compared to a few years ago; it's very rare, it's actually the exception, and when it happens it is mainly some user error as opposed to a cloud problem. We have also built auto-healing with Prometheus and Alertmanager, and we integrate Prometheus and Alertmanager with our event-driven automation framework.
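As a rough illustration of the auto-healing flow described above, here is a minimal sketch of a webhook receiver that reacts to Prometheus Alertmanager notifications and kicks off a remediation step. The alert name, labels, and restart action are invented placeholders, and this is not Jio's actual integration, which routes these events into the event-driven automation framework named next.

```python
# Minimal sketch: Alertmanager posts grouped alerts to a webhook; firing
# "ServiceDown" alerts trigger a (placeholder) remediation action.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def restart_service(instance: str, service: str) -> None:
    # Placeholder remediation; a real setup would invoke an automation
    # framework action instead of printing.
    print(f"auto-heal: restarting {service} on {instance}")

class AlertWebhook(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        for alert in payload.get("alerts", []):  # Alertmanager batches alerts
            labels = alert.get("labels", {})
            if alert.get("status") == "firing" and labels.get("alertname") == "ServiceDown":
                restart_service(labels.get("instance", "unknown"),
                                labels.get("service", "unknown"))
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 9000), AlertWebhook).serve_forever()
```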
Currently we're using StackStorm, but you could use any event-driven automation framework out there, and it integrates really well. It helps us step away from constantly monitoring our cloud control planes and clouds. So this has been very fruitful for us, and it has actually upskilled our engineers to use these best-in-class practices like GitOps, like infra as code. So just to give you a flavor of what stacks our internal teams are running on these clouds: we have a multi-data-center OpenStack cloud, and on top of that, teams use automation tools like Terraform to create their environments. They also create their own Kubernetes clusters, and you'll see in the next slide that we have our own Kubernetes-as-a-service platform that we built on top of OpenStack to give developers and development teams in Jio easy-to-create and easy-to-destroy Kubernetes environments, and they sometimes leverage the Murano application catalog and Heat templates to deploy their own stacks. Jio is largely a microservices-driven company. So all of our applications are microservices, multiple microservices talking to each other, and they leverage DevOps toolsets like Ansible, Prometheus, and StackStorm for auto-healing and event-driven automation. Big data stacks are already there, Kafka, Apache Spark, Cassandra, and other tools as well. We're also now using service meshes. Almost everything now uses a service mesh; sometimes they use Linkerd, sometimes they're experimenting with Istio. So this is where we are, and we have multiple clients with Jio, so our products and services are available on Android, iOS, our own JioPhone, Windows, Mac, web, and mobile web. So you can use our services from any client, and there's no lock-in. It's always open with Jio, so our services have to be really good to compete on the open Internet. And last but not least, I'd love to talk to you about our container journey. So a couple of years ago, almost every team started experimenting with containers and Kubernetes, and there was demand for us, as the platform team, to provide Kubernetes as a service, a managed service, right? For us it was much more comfortable, much easier, to build this on top of OpenStack with cloud APIs as opposed to doing it on bare metal. So we built a fully managed Kubernetes as a service, which was a self-service portal where you could click a button and get a Kubernetes cluster deployed in your own tenant. And the things that we did are quite interesting; we also handle some Jio-specific use cases. Because it was a managed service, we deployed the CI/CD nodes in our own management tenant, right? We didn't give the customer access to those nodes. We deployed the master, the control plane nodes, in the customer's tenant, but we didn't give them access to the masters; we didn't give them the SSH keys. The workers, our customers had full access to. And because people in Jio are learning and experimenting, we gave them full admin rights to the Kubernetes clusters as well. So that really helped onboard Kubernetes within Jio, and now we have something like 15 different teams running multiple Kubernetes clusters on top of our OpenStack clouds. We even handle the fact that there are separate non-production IP pools and separate production IP pools in Jio.
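For the tenancy model just described, giving a team full admin rights on its cluster comes down to an RBAC binding. Below is a hedged sketch using the Kubernetes Python client with a plain-dict body; the group and binding names are hypothetical, and Jio's actual portal may implement this quite differently.

```python
# Sketch: bind a team's identity group to the built-in cluster-admin role.
from kubernetes import client, config

def grant_team_admin(team_group: str, binding_name: str) -> None:
    config.load_kube_config()  # assumes a kubeconfig for the tenant's cluster
    binding = {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "ClusterRoleBinding",
        "metadata": {"name": binding_name},
        "roleRef": {"apiGroup": "rbac.authorization.k8s.io",
                    "kind": "ClusterRole", "name": "cluster-admin"},
        "subjects": [{"apiGroup": "rbac.authorization.k8s.io",
                      "kind": "Group", "name": team_group}],
    }
    client.RbacAuthorizationV1Api().create_cluster_role_binding(body=binding)

if __name__ == "__main__":
    grant_team_admin("team-a-developers", "team-a-cluster-admin")  # hypothetical names
```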
So you can create these clusters in whatever environment you need, a non-prod environment with more open access or a prod environment with more limited access. So we had to handle these Jio-specific cases as well in this Kubernetes as a service. On the whole, I think OpenStack, because of the isolation it provides, made a lot of sense for us as the place to run Kubernetes as a service. We even did it on bare metal, but not many people use the Kubernetes-as-a-service on bare metal, because it is just so much easier to work with cloud APIs to provision virtual machines and Kubernetes clusters. That's it from me. I think I've said a mouthful, and now I'd love to have your questions. If you want to reach out to me, my email is mayank dot kapoor at ril dot com, and you can also message me on Twitter at mayankkapoor. So thank you, it was a pleasure talking to you, Andre. Let me hear your questions.
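To make the "cloud APIs versus bare metal" point concrete, here is a small sketch of provisioning a VM through the OpenStack SDK, the kind of call a Kubernetes-as-a-service portal can automate. The cloud name, image, flavor, and network are placeholders, not Jio's real configuration.

```python
# Sketch: boot a worker VM via OpenStack APIs (all names are placeholders).
import openstack

def boot_worker(name: str) -> None:
    conn = openstack.connect(cloud="private-cloud")   # defined in clouds.yaml
    image = conn.compute.find_image("ubuntu-18.04")
    flavor = conn.compute.find_flavor("m1.large")
    network = conn.network.find_network("tenant-net")
    server = conn.compute.create_server(
        name=name, image_id=image.id, flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    server = conn.compute.wait_for_server(server)     # block until ACTIVE
    print(f"{server.name} is {server.status}")

if __name__ == "__main__":
    boot_worker("k8s-worker-01")
```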
Danny Allan & Anton Gostev, Veeam | VeeamON 2020
(upbeat music) >> From around the globe, it's theCUBE. With digital coverage of VeeamON 2020. Brought to you by Veeam. Everybody, we're back. This is Dave Vellante, and you're watching theCUBE's continuous coverage of VeeamON 2020, Veeam Online 2020. And Danny Allan is here, he's the CTO and Senior Vice President of Product Strategy, and he's joined by Anton Gostev, who's the Senior Vice President of Product Management. Gentlemen, good to see you again. Wish we were face-to-face, but thanks for coming on, virtually. >> Thanks Dave for having us. >> Always love being on with you. Thank you. >> So Danny, I want to start with you. In your keynote, you talked about a great quote by Satya Nadella. He said "We basically compressed two years of digital transformation into two months." And so, I'm interested in what that meant for Veeam, but also specifically, for your customers and how you help. >> Yeah, I think about that in two different ways. So digital transformation is obviously the word that he used. But I think of this a lot about being remote. So in two months, every organization, ourselves included, has gone from in-person operations, going into the office doing things, to enabling remote operations. And so, I'm working from home today, Anton's working from home today. We're all working from home today. And so remote operations is a big part of that. And it's not just working from home, it's how do I actually conduct my operations, my backup, my archiving, my tiering, all of those things remotely. It's actually changed the way organizations think about their data management. Not just operations in the sense of internal processes, but also external processes as well. But I also think about this as remote offerings. So organizations say, "How can I take where we are today "in the world and turn this into competitive advantage? "How can I take the services that I offered today, "and help my customers be more successful remotely?" And so, it has those two aspects to it, remote operations, remote offerings. And of course, all driven by data, which we back up. >> So Anton, you know there's a saying "It's better to be lucky than good." And I say, "It's best to be lucky and good." So Danny was talking about some of the external processes, a lot of those processes were unknown. And people were kind of making them up as they went along, with things that we've never seen before. So, I wonder if we could talk about your product suite, and how well you were able to adapt to some of these unknowns. >> Well it's more customers using our product in creative ways. But, one piece of feedback we got most recently in our annual user survey is that one of the customers was using tape as the off-site backup. And they had a process where obviously someone had to physically come to the office, pick up the exported tapes and put them on the truck and move them to some off-site location. And so this process was basically completely broken with COVID because of the lockdown. And in that particular country, it was stricter on the ground than in most, and they were physically unable to leave home. So they basically looked at, luckily they had already upgraded to version 10, and they looked at what version 10 has to offer. And they were able to switch from using tape to fully automating this off-site backup and going directly to the public cloud, to object storage. So, they still have the same off-site backups, that are effectively air-gapped because of the capability we provide in version 10 for immutable backups.
As soon as they created that they automatically ship to object storage, completely replacing this manual off-site process. So I don't know how long it will take them, if not COVID, to move to this process. Now they love it because it's so much better than what they did before. That's amazing. >> Yeah I bet, there's no doubt. That's interesting, that's an interesting use case. Do you see, others use cases that popped up. Again, I was saying that these processes were new. I mean, and I'm interested in from a product standpoint, how you guys were able to adapt to that. >> Well, another use case that seems to be on the rise is that the ability for customers to deploy the new machines to procure new hardware is severely limited now. Not only their supply chain issues, but also again, bring something into your data center. You have to physically be there and collaborate with other workers and doing installing the, whatever new hardware you purchase. So, we see a significant pick up of the functionality where that, we had in the product for a while, which we called direct resorts to cloud. So we support taking any backup, physical virtual machine. And restoring directory into cloud machine. So we see really the big uptick of migration, maybe a lot of migrations, maybe, not necessarily permanent migrations, but when people want to basically this, some of the applications start to struggle on their sources and they're unable to update the underlying hardware. So what they do is that they schedule the downtime, and then migrate, restore that latest backup into the cloud and continue using the machine in the cloud on much more powerful hardware. That's a lifesaver for them obviously in this situation. >> Yeah so the cloud, Danny is becoming a linchpin of these new models. In your keynote you talked about your vision. And it's interesting to note, I mean, VeeamON, last year, you actually talked about, what I call getting back to the basic of, backup, you kind of embrace backup, where a lot of the new entrants are like, "No no backup's, just one small part, it's data management." And, so I'd love to get your thoughts on that. But the vision you laid out was, backup and cloud data management. Maybe you could, unpack that a little bit. >> Yeah, the way I think about this is step one, in every infrastructure, it doesn't matter whether you're talking about on-prem or in the cloud. Step one is, to protect your data. So this is ingesting the data, whether be backup, whether it be replication, whether it be, long term retention. We have to do that, not only do we have to do that, but as you go to new cycles of infrastructure, it happens all over again. So, we backed up physical first and then virtual, and then we did, cloud and in some ways, containers we're going towards, we're not going backwards but people who are running containers on-prem so we always go back to the starting point of protect the data. And then of course, after you protect it then you, want to effectively begin to manage it. And that's exactly what Anton said. How do you automate the operational procedures to be able to make this part of the DNA of the organization and so, it doesn't matter whether it's on-prem or whether it's in the cloud, that protection of data and then the effective management and integration with existing processes, is fundamental for every infrastructure and will continue to be so into the future, including the cloud. 
And it's only then when you have this effective protection and management of it, can you begin to unleash the power of data, as you look out into the future, because you can reuse the data for additional purposes, you can move it to the optimal location, but we always start with protection and management of the data. >> So Anton, I want to come back to you on this notion of cloud being a portion of that, when you talk about security people say you layer, how should we think about the cloud? Is it a another layer of protection? And then Danny just said, "It doesn't really matter whether it's on-prem "or in the cloud, it well, it doesn't matter "if you can ensure the same experience." If it's a totally different experience well then it's problematic though. I wonder if you could address, both the layers. Is cloud just another layer and is the management of that, actually, how do you make it, quote, unquote, "Seamless"? I know it's an overused word, but from a product name? >> Well, for larger customers, it's not necessarily a new challenge, because it's rare when the customer had a single data center. And they had this challenge for always. How do I manage my multiple data centers with a single pane of glass? And, I will say public cloud does not necessarily mean that some new perspective in that sense. Yeah, maybe it even makes it easier because you no longer have to manage the physical aspect, the most important aspect of security, which is physical security. So someone else manages it for you and probably much better than most companies could ever afford. In terms of security answer, so then data center. But as far as networking security and how those multiple data centers interact with each other, that's essentially not a new challenge. It is a new challenge for smaller customers for SMBs that are just starting. So they have their own small data center, small world and now they are starting to move some workloads into the cloud. And I would say the biggest problem there is networking and VeeamON, sure provides some free tools to call Veeam PN to make it easier for them to make this step of, securing the networking aspect of public cloud and the private property also that they are in now as workloads move to the cloud, but also keeping some workloads on-prem. >> The other piece of cloud Danny, is SaaS. You weren't the first you were one of the first to offer SaaS back up particularly for Office 365. And a lot of people just, I think, rely on the SaaS vendor, "Hey, they've got me covered. "They've got me backed up", and maybe they do have them backed up, but they might not have them recovered. How is that market shaping up? What are the trends that you're seeing there? >> Well, you're absolutely right Dave. That the, focus here is not just on back up, but on recovery, and it's one of the things that Veeam is known for we don't just do the backup, but we have an Explorer for Exchange , an Explorer for SharePoint, an Explorer for OneDrive. You saw on stage today we demoed the Explorer for Microsoft Teams. So, it's not just about protecting the data, but getting back the specific element of data that you need for operations. And that is critically important. And our customers expect to need that. If you're depending on the SaaS vendor themselves to do that, and I won't, be derogatory or specific about any SaaS vendor, but what they'll often do is, take the entire data set from seven days ago, we'll say, and merge it back into the current data set. 
And that just results in complete chaos in your inbox, if that's what they're merging together. So having specific granularity to pull back that data, exactly the data you need when you need it, is critical. And that's why we're adding it, and the focus on Microsoft Teams now obviously is because, as we have more intellectual property in collaboration tools for remote operations, exactly what we're doing now, that only becomes more critical for the business. So, when you think about SaaS for backup, we also think about it for recovery. And one thing that I'll credit Anton and the product management team for: we build all of this in-house. We don't give this to a third party to build it on our behalf, because you need it to work, and not only need it to work, but need it to work well, completely integrated with the underlying cloud data management platform. >> So Anton, I wonder if I could ask you about that. So, from a recovery standpoint, there's one thing, as Danny was saying, you've got to have the granularity, you've got to be able to have a really relatively simple way to recover. But because it's the cloud, there's latency involved, and how are you, from a product standpoint, dealing with making that recovery as consistent and predictable and reliable as you have for a decade on-prem? >> So you mean recovery in the cloud or back to on-prem? >> Yeah, so, recovery from data that lives in the cloud. >> Okay. So basically, the most important feature of any cloud is the price of whatever you do. So, whenever we design anything, we always look at the costs even more than anything else. But it in turn always translates into better performance as well. To give you an example, with our functionality we can take the on-prem backup and make a copy in public object storage for disaster recovery purposes, so that for example, when a hacker or ransomware wipes out your entire data center, you have those backups in the cloud, and you can restore from them. So when you perform the restore from cloud backups, we are actually smart enough to understand that we need to pull this and that block from the cloud backup, but many of those blocks are actually shared with backups of other machines that are in your on-prem backup repository. So we do this on-the-fly analysis, and we say, instead of pulling the 10 terabytes of the entire backup from the cloud, we can actually pull only 100 gigabytes of unique blocks. And the rest of the blocks we can take from on-prem repositories that have still survived the disaster. So, it not only reduces the cost 20 times or whatever; the performance, obviously, of restoring from on-prem data versus pulling everything from the cloud through the internet links is dramatic. So again, we started from the cost, how do we reduce the cost of restore, because that's where cloud vendors, quote unquote, "get you." But in the end, it resulted in much better performance as well. >> Excellent. Anton, as well, in your keynote you talked about the Veeam Availability Suite, gave a little sneak preview. You talked about continuous data protection, Cloud Tier, NAS recovery, which is oftentimes very challenging. What should we take away from that sneak peek? >> Three main directions, basically. The first is Veeam CDP; we keep investing a lot in on-prem data protection and disaster recovery. VMware is a clear leader of on-prem virtualization.
So we keep building these new ways to protect VMware with lower RPOs and RTOs that were never possible before with the classic snapshotting technologies. So that's one thing, we keep investing on-prem. Second thing, we do major investments in the cloud, in object storage specifically. In that regard, again, as announced in the keynote, we're adding Google Cloud support. And we're adding the ability to work with the coldest tiers of object storage, which are Amazon Glacier Deep Archive and Microsoft Azure Blob Storage archive tier. So that's the second big area of investment. And third, instant recovery. Veeam has always been extremely well known for its instant recovery capabilities, and this release is going to be the biggest in terms of new instant recovery capabilities, with as many as three major new capabilities introduced there. >> So, Danny, I wonder if I could ask you. I'm interested in how you go from product strategy to actual product management and bring things to market. I mean, in the early days, Veeam was very, very specific to virtualization. Then of course, with Bare Metal, you've got a number of permutations and product capabilities. How do you guys work together in terms of assessing the market potential, the degree of difficulty, prioritizing, how does that all come to your customer value? >> Well, first of all, Anton and I spend a lot of time together on the phone and collaborating, just on a weekly basis, about where we're going, what we're going to do. I always say there are four directions that we look at for the product strategy and what we're building. You look behind you: we have 375,000 customers, and so those are the tailwinds that are pushing you forward. We talk to them in all segments. What is it that you want? I say we look left and right: to the left are our alliances, we have a rich ecosystem of partners and channel that we look to collect feedback from. Looking right, we look out at the competitors in this space, what are they doing, to make sure that we're not missing anything that we should be including. And then we look forward. A big focus of Veeam has always been not just creating checkboxes and making sure that we have the required features, but innovation. And you saw that on stage today when Anton was showing the NAS Instant Recovery and the database instant recovery and the capabilities that we have. We have a big focus on not just checking a box but actually doing things better and differently than everyone else in the industry, and that has served us incredibly well. >> So I love that framework. But so now when you think about this pandemic, you look behind you, your customers have obviously been affected, your partners have been affected. Let's put your competitors to the side for a minute, we'll see how they respond. But then looking forward, to the future, as I've said many times, we're not just going back to 2019. We're in a new decade, and really digital transformation is becoming real, for real this time around. So as you think about the pandemic and looking at those four dimensions, what initial conclusions are you drawing? >> Well, the first one would be that Veeam is well positioned to win, to continue to win, and to win into the future. And the reason for that, I would argue, is that we're software defined. Our whole model is based on being simple to use, obviously, but software defined, and because of the pandemic, as Anton said, you can't go into the office anymore to switch your tapes from one system to another.
And so being software defined sets us apart and positions us well for the future. And so, make it simple, make it flexible. And ultimately, what our customers care about is the reliability of this, and to the credit of research and development and Anton's team, we have a product that, as everyone says, just works. >> So Anton, I wonder if I could ask you kind of a similar question. How has the pandemic affected your thinking along those dimensions, and maybe some of your initial thinking on changes that you'll implement? >> Yes, sorry, I wanted to add exactly on that. I will say that the pandemic accelerated our vision becoming the reality. Basically, the vision we had, and I said a few years ago, is that one day Veeam will become the biggest storage vendor without selling a single storage box. And this is just becoming the reality. We support a number of object storage providers today. Only a few of them actually track the consumption that is generated by different vendors, and just for those few who do track that and report numbers to us, we are already managing over hundreds of petabytes of data in the cloud. And we only just started a couple of years ago with object storage support. So that's the power of software defined: we don't need to sell you any storage to eventually be the biggest storage player on the market. And the pandemic has clearly accelerated that. In the last three months we see the adoption, it was already like a hockey stick, but it's accelerating further, because of the issues customers are facing today, unable to actually physically go back to the office and do the backup handling the way they normally do it. >> Well guys, it's been really fun the last decade watching the ascendancy of Veeam. We've reported on it and talked about it a lot. And as you guys have both said, things have been accelerated. It's actually very exciting to see a company with a rich legacy, but also very competitive with some of the new products and new companies that are hitting the market. So, congratulations, I know you've got a lot more to do here. You guys have been, for a private company, pretty transparent, more transparent than most, and I have to say as an analyst, we appreciate that, and appreciate the partnership with theCUBE. So thanks very much for coming on. >> Thank you, Dave. Always a pleasure. >> Thanks Dave. >> All right, and thank you for watching everybody. This is Dave Vellante for theCUBE in our coverage of VeeamON 2020, Veeam Online. Keep it right there, I'll be right back. (upbeat music)
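Before moving on, two short, hedged sketches of ideas from this segment. First, the off-site, immutable cloud copy Anton described with version 10: the sketch below writes a backup file to S3-compatible object storage with a compliance-mode retention window. The bucket, key, and retention period are invented, the bucket must already have Object Lock enabled, and this is not Veeam's own code path.

```python
# Sketch: upload an off-site backup copy that cannot be deleted or altered
# until the retention date passes (S3 Object Lock, compliance mode).
from datetime import datetime, timedelta, timezone
import boto3

def upload_immutable_copy(path: str, bucket: str, key: str, days: int = 30) -> None:
    s3 = boto3.client("s3")
    retain_until = datetime.now(timezone.utc) + timedelta(days=days)
    with open(path, "rb") as f:
        s3.put_object(
            Bucket=bucket, Key=key, Body=f,
            ObjectLockMode="COMPLIANCE",
            ObjectLockRetainUntilDate=retain_until,
        )

if __name__ == "__main__":
    upload_immutable_copy("backups/vm001.vbk", "offsite-backups", "vm001/weekly.vbk")
```

Second, a rough, simplified sketch of the block-level restore optimization Anton described: pull from the cloud only the blocks that no surviving on-prem repository already holds. The manifest format and fetch callable are invented for illustration and do not reflect how Veeam's repositories actually store data.

```python
# Sketch: dedup-aware restore that prefers locally available blocks.
from typing import Callable, Dict, Iterable

def restore_blocks(manifest: Iterable[str],
                   local_index: Dict[str, bytes],
                   fetch_from_cloud: Callable[[str], bytes]) -> Dict[str, bytes]:
    """manifest: ordered block hashes of the machine being restored.
    local_index: hash -> block data found in surviving on-prem repositories.
    fetch_from_cloud: slow, metered retrieval of a single block by hash."""
    restored, cloud_fetches = {}, 0
    for block_hash in manifest:
        if block_hash in local_index:          # shared block, reuse it locally
            restored[block_hash] = local_index[block_hash]
        else:                                  # unique block, pay cloud egress once
            restored[block_hash] = fetch_from_cloud(block_hash)
            cloud_fetches += 1
    print(f"pulled {cloud_fetches} blocks from the cloud, "
          f"reused {len(restored) - cloud_fetches} locally")
    return restored
```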
Joe Fernandes, Red Hat | Red Hat Summit 2020
>> From around the globe, it's the CUBE with digital coverage of Red Hat Summit 2020 brought to you by Red Hat. >> Hi, I'm Stu Miniman, and this is the CUBE's coverage of a Red Hat Summit 2020 happening digitally. We're connecting with Red Hat executives, thought leaders, practitioners, wherever they are around the globe, bringing them remotely into this online event. Happy to welcome back to the program, Joe Fernandez, who's the Vice President and General Manager, of Core Cloud Platforms with Red Hat. Joe, thanks so much for joining us. >> Yeah, thanks for having me. Glad to be here. >> All right, so, Joe, you know, Cloud, of course, has been a conversation we've been having for a lot of years. When I went to Red Hat Summit last year, when I went to IBM, I think last year, there was discussion of moving from kind of chapter one, if you will, to chapter two. Some of the labels that we put on things back in the early days, like Hybrid Cloud and Multicloud, they're coming into a little bit clearer picture. So, let's just give a high level, what you're seeing from your customers when they talk about Hybrid and Multicloud environment? What does that mean to your customers? And therefore, how is Red Hat meeting them where they are? >> Yeah, sure. So, Red Hat obviously, serves an enterprise customer base. And what we've seen in that customer base, really since the start and it's really informed our strategy, is the fact that all their applications aren't going to run in one place, right? So they're really employing a hybrid class strategy, a Hybrid and Multicloud strategy, that spans from their data centers out to a public cloud, typically then out to multiple public clouds as their cloud investments grow, as they move more applications. And now, even out to the edge for many of those customers. So that's the newest footprint that we're getting asked about. So really we think of that as the open hybrid cloud. And you know, our goal is really to provide a consistent platform for applications regardless of where they run across all those environments. >> Yeah. Let's get down a second on that because we've had consistency for quite a while. You look at the largest cloud provider out there, they said, hybrid environment, will give you the exact same hardware that we're running in the public cloud of your bet. You know, that in your environment. Of course, Red Hat's a software company. You've lived across lots of platforms. We're going to Red Hat's entire existence. So, you know, where is that consistency needed? How do you, well, think about how Red Hat does things? Maybe the same and a little different than some of the other players that are then, positioning and even repositioning their hybrid story over the last year or so. >> Yeah. So, we're really excited to see a lot of folks in the industry, including all the major public cloud providers are now talking about Hybrid and talking about these types of initiatives that we've been talking about for quite some time. But yeah, it's a little bit different when we talk about Hybrid Cloud, when we talk about Multicloud, we're talking about being able to run not just in one public cloud and then in a non-premise clients that mirrors that cloud. We're really talking about being able to run across multiple clouds. So having that consistency across, running in, say Amazon to Azure to Google, and then carrying that into your on-premise environments, whether that's on Bare Metal, on VMware, on OpenStack, and then, like I said, out out to the edge, right? 
So that consistency is important for people who are concerned about how their applications are going to operate in these different environments. Because otherwise, they'd have to manage those differences themselves. I'm speaking as part of Red Hat, right? This is what the company was built on, right? In 20 years ago, it was all about Linux bringing consistency for enterprise applications running across x86 hardware, right? So regardless of who your OEM vendor was, as long as you're building to the x86 standard and leveraging Linux as a base, Red Hat Enterprise Linux became that same consistent operating environment for applications, which is important for our software vendors, but also more importantly for customers themselves as they yep those apps into production. >> Yeah, I guess, you know, last question I have for kind of just the landscape out there. We've been talking for a number of years. When you talk to practitioners, they don't get caught up in the labels that we use in the industry. Do they have a cloud strategy? Yes, most companies have a cloud strategy, and if you ask them is their cloud strategy same today, as it was a quarter ago or a year ago, they say, of course not. Everything's changed. We know in today's day and age, what I was doing a month ago is probably very different from what I am doing today. So, I know you've got a survey that was done of enterprise users. I saw it when it came out a month ago. And, you know, some good data in there. So, you know, where are we? And what data do you have to share with us on kind of the customer adoption with (mumbles). >> Yeah, so I think, you know, we put out a survey not too long ago and we started as, I think, over 60% of customers were adopting a hybrid cloud strategy exactly as I described. Thinking about their applications in terms of, in an environment that spans multiple cloud infrastructures, as well as on-premise footprints. And then, you know, going beyond that, we think that number will grow based on what we saw in that survey. That just mirrors the conversations that I've had with customers, that many of us here at Red Hat have been having with those same customers over the years. Because everybody's in a different spot in terms of their transformation efforts, in terms of their adoption of cloud technologies and what it means for their business. So we need to meet customers where they're at, understand that everybody's at a different spot and then make sure that we can help them make that transition. And it's really an evolution, as opposed to , I think, some people in the past might've thought of as a revolution where all the data centers are going to shut down and everything's going to move all at once. And so helping customers evolve. And that transition is really what Red Hat is all about. >> Yeah. And, so often, Joe, when I talk to some of the vendors out there, when you talk about Hybrid, you talk about Multicloud, it's talking about something you mentioned, it's a box, it's a place, it's, you know, the infrastructure discussion. But when I've been having conversations with a lot of your peers of these interviews for Red Hat Summit. We know that, it's the organization and it's the applications that are hugely important as these changes go and happen. So talk a little bit about that. What's happening to the organization? How are you helping the infrastructure team keep up and the app dev team move forward? >> Yeah, so first, I'll start with, that on the technology side, right? 
One of the things that that has enabled this type of consistency and portability has been sort of the advent of Linux containers as a standard packaging format that can span across all these different (mumbles), right? So we know that Linux runs in all these different footprints and Linux containers, as a portable packaging format, enables that. And then Kubernetes enables customers to orchestrate containers at scale. So that's really what OpenShift is focused on, is delivering an enterprise Kubernetes platform. Again, spanning all these environments that leverages container-based packaging, provides enterprise Kubernetes orchestration and management, to manage in all those environments. What that then also does on the people front is bring infrastructure and operations teams together, right? Because Kubernetes containers represents the agility for both sides, right? Or application developers, it represents the ability to pay their application and all their dependencies. And know that when they run it in one environment, it will be consistent with how it runs in other environments. So eliminating that problem of, works on my machine, but it doesn't work, you know, in prod or what have you. So it brings consistency for developers. Infrastructure teams, it gives them the ability to basically make decisions around where the best places to run these applications without having to think about that from a technology perspective, but really from things that should matter more, like cost and convenience to customers and performance and so forth. So, I think we see those teams coming together. That being said, it is an evolution in people and process and culture. So we've done a lot of work. We launched a global transformation office. We had previously launched a Red Hat open innovation labs and have done a lot of work with our consulting services and our partners as well, to help with, sort of, people in process evolutions that need to occur to adopt these types of technologies as well as, to move towards a more cloud native approach. >> All right. So Joe, what one of the announcements that made it the show, it is talking about how OpenShift is working with virtualization. So, I think back to the earliest container days, there was a discussion of, "oh, you know, Docker and containers, "it kills VM." Or you know, Cloud of course. Some Cloud services run on VMs, other run on containers, they're serverless. So there's a lot of confusion out there as to. >> Yep. >> What happened, we know in IT, no technology ever dies, everything's always additive. It's figuring out the right solutions and the right bet. So, help us understand what Red Hat is doing when it comes to virtualization in OpenShift and Kubernetes and, how is your approach different than some of what we've already seen in the marketplace? >> Yeah, so definitely we've seen just explosive adoption of containers technology, right? Which has driven the OpenShift business and Red Hat's business overall. So, we expect that to continue, right? More applications moving towards that container-based, packaging and deployment model and leveraging Kubernetes and OpenShift to manage those environments. That being said, as you mentioned, virtualization has been around for a really long time, right? And, predominantly, most applications, today, are running virtualized. And so some of them have made the transition to containers or were built a container native from the start. 
But many more are still running in VM based environments and may never make that switch. So, what we were looking at is, how do we manage this sort of hybrid environment from the application perspective where you have some applications running in containers, other applications running in VMs? We have platforms like Red Hat, OpenStack, Red Hat Virtualization that leveraged the KVM hypervisor and Red Hat Enterprise Linux to serve apps running in a VM based environment. What we did with Kubernetes is, instead, how could we innovate to have convergence on the orchestration and management fund? And we leveraged the fact that, KVM, you know, a chosen hypervisor, is actually a Linux process that can itself be containerized. And so by running the hypervisor in a container, we can then span VMs that could be managed on that same platform as the containers run. So what you have in OpenShift Virtualization is the ability to use Kubernetes to manage containerized workloads, as well as, standard VM based workloads. And these are full VMs. These aren't micro VMs or, you know, things like Firecracker Kata Container. These are standard VMs that could be, well, Windows guests or Linux guests, running inside those VMs. And so it helps you basically, manage that type of environment where you may be moving to containers and more cloud native approach, but those containers need to interact or work with applications that are still in a VM based deployment environment. And we think it's really exciting, we've demoed it at the last Red Hat Summit. We're going to talk about it even more here, in terms of how we're going to bring those products to market and enable customers. >> Okay, yeah, Joe, let me make sure I understand this because as you said, it is a different approach. So, number one, if I'm moving towards a (mumbles) management solution, this is going to fit natively into what I'm doing. It's not taking some of my traditional management tools and saying, "oh, I also get some visibility containers." There's more, you know, here's my Kubernetes solution. And just some of those containers happen to be virtualized. Did I get that piece right? >> Yeah, I think it's more like... so we know that Kubernetes is going to be in in the environment because we know that, yeah, people are moving application workloads to standard Linux containers. But we also know that virtual machines are going to still exist in that environment. So you can think about it as, how would we enable Kubernetes to manage a virtual machine in the same way that it manages a Linux container? And, what we do there, is we actually, put the VM inside the container, right? So because the VM, specifically with (mumbles) is just a Linux process, and that's what a Linux container is. It's a Linux process, right? So you can run the hypervisor, span the virtual machines, inside of containers. But those virtual machines, are just like any other VM that would run in OpenStack or Red Hat Virtualization or what have you. And you could, vSphere for example. So those are traditional virtual machines, that are now being managed in a Kubernetes environment. And what we're seeing is sort of, this evolution of Kubernetes to take on these new types of workloads. VMs is just one example, of something that you can now manage with Kubernetes. >> Okay. And, help me understand what this means to really the app dev in my application portfolio. 
Because you know, the original promise of virtualization was, I can just stick my application in a VM and I never need to think about it ever again. And well, that was super helpful when windows NT was going end of life. In 2020, we do find that most companies do want to update their applications, and they are talking about, do I refactor them? Do I make them microservices architecture? I don't want to have that iceberg of an application that I'm just dragging along slowly into the new world. So. >> Yeah. >> What is this virtualization integration with Kubernetes? You mean for the AppDev and the applications? >> Yeah, sure, so what we see customers doing, what we see the application development team is doing is modernizing a lot of their existing applications, right? So they're taking traditional monolithic applications or end tier, like the applications that may run in a VM based environment and they're moving them towards more of a distributed architecture leveraging microservices based approach. But that doesn't happen all at once either, right? So, oftentimes what you see is your microservices, are still connected to VM based applications. Or maybe you're breaking down a monolithic application. The core is still running in a VM, but some of those business functions have now been carved out and containerized. So, you're going to end up in a hybrid environment from the application perspective in terms of how these applications are packaged, and deployed. The question is, what does that mean for your deployment architecture? Does it mean you always have to run a virtualization platform and a container platform together? That's how it's done today, right? OpenShift and Kubernetes run on top of vSphere, they run on top of Amazon and Azure and Google bands, and on top of OpenStack. But what if you could actually just run Kubernetes directly on Bare Metal and manage those types of workloads? That's really sort of the idea. A whole bunch of virtualization solution was based on is, let's just merge VMs natively with Kubernetes in the same way that we manage containers. And then, it can facilitate for the application developer. This evolution of apps that are running in one environment towards apps that are running essentially, in a hybrid environment from how they're packaged and deployed. >> Yeah, absolutely, something I've been hearing for the last year or so, that hybrid deployment, pulling apart application, sometimes it's even, the core piece as you said, is on premises and then I might have some of the more transactional pieces happening in the public cloud. So really interesting. So, how long has Red Hat been working on this? My (mumbles), something, you know, I'm familiar with in the CNCF. I believe it has been around for a couple of years. >> Yeah. >> So talk to us about just kind of how long it took to get here and, fully support stateful applications now. What's the overall roadmap look like? >> Yeah, so, so (mumbles) as a open source project was launched more than two years ago now. As you know, Red Hat really drives all of our development upstream in the open source community. So we launched (mumbles) project. We've been collaborating with other vendors and even customers on that. But then, you know, over time we then decided, how do we bring these technologies to market, which technologies make sense to bring the market? So, (mumbles) is the open source project. 
OpenShift and OpenShift Virtualization, which is what this feature is referred to commercially, is the product that then we would ship and support for running this in production environments. The capabilities, right. So, I think, those have been evolving as well. So, virtual machines have a specific requirements in terms of not only how they're deployed and managed, but how they connect to storage, how they connect networking, how do you do things like fencing and all sorts of live migration and that type of thing. We've been building out those types of capabilities. They're certainly still more to do there. But it's something that we're really excited about, not just from the perspective of running VMs, but just even more broadly from the perspective of how Kubernetes is expanding to take on new workloads, right? Because Kubernetes has moved far beyond just running, cloud native applications, today, you can run stateful services in containers. You can run things like AI and machine learning and analytics and IoT type services. But it hasn't come for free, right? This has come through a lot of hard work in the Kubernetes community, in the various associated communities, the container communities, communities like (mumbles). But it's all kind of trying to leverage that same automation, that same platform to just do more things. The cool thing is, it'll not just be Red Hat talking about it, but you'll see that from a lot of customers that are doing sessions at our summit this year and beyond. Talking about how, what it means to them. >> Yeah, that's great. Always love hearing the practitioner viewpoint. All right, Joe, I want to give you the final word when it comes to this whole space things kind of move pretty fast, but also we remember it when we first saw it. So, tell us what the customers who were kind of walking away from Red Hat Summit 2020 should be looking at and understanding that they might not have thought about if they were looking at Kubernetes, a year or two ago? >> Yeah, I think a couple of things. One is, yeah, Kubernetes and this whole container ecosystem is continuing to evolve, continuing to add capabilities and continue to expand the types of workloads, that it can run. Red Hat is right in the center of it. It's all happening in open source. Red Hat as a leading contributor to Kubernetes and open source in general, is driving a lot of this innovation. We're working with some great customers and partners, other vendors, who are working side by side with us as well. And I think the most important thing is we understand that it's an evolution for customers, right? So this evolution towards moving applications to the public cloud, adopting a hybrid cloud approach. This evolution in terms of expanding the types of workloads, and how you run and manage them. And that approach is something that we've always helped customers do and we're doing that today as they move out towards embracing a cloud native. >> All right, well, Joe Fernandez, thank you so much for the updates. Congratulations on the launch of OpenShift Virtualization. I definitely look forward to talking to some the customers in finding out that helping them along their hybrid cloud journey. All right. Lots more coverage from the CUBE at Red Hat Summit. I'm Stu Miniman ,and thank you for watching the CUBE.
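To make the "VM as a Kubernetes object" idea concrete, here is a hedged sketch that creates a virtual machine through the Kubernetes API using the upstream KubeVirt resource model, the community project behind OpenShift Virtualization. The field names follow the kubevirt.io/v1 schema as best recalled, the image and sizes are placeholders, and this is not an official Red Hat or OpenShift Virtualization example.

```python
# Sketch: define and create a KubeVirt VirtualMachine via the Kubernetes API.
from kubernetes import client, config

VM = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "demo-vm", "namespace": "default"},
    "spec": {
        "running": True,
        "template": {"spec": {
            "domain": {
                "devices": {"disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}]},
                "resources": {"requests": {"memory": "2Gi", "cpu": "1"}},
            },
            "volumes": [{"name": "rootdisk",
                         "containerDisk": {"image": "quay.io/kubevirt/fedora-cloud-container-disk-demo"}}],
        }},
    },
}

if __name__ == "__main__":
    config.load_kube_config()
    client.CustomObjectsApi().create_namespaced_custom_object(
        group="kubevirt.io", version="v1", namespace="default",
        plural="virtualmachines", body=VM,
    )
```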
Leslie Minnix-Wolfe & Russ Elsner, ScienceLogic | ScienceLogic Symposium 2019
(energetic music) >> From Washington D.C., It's theCUBE! Covering ScienceLogic Symposium 2019. Brought to you by ScienceLogic. >> Welcome back to TheCUBE's coverage of ScienceLogic Symposium 2019, I'm Stu Miniman, and we're here at the Ritz-Carlton in Washington, D.C. Happy to welcome to the program two first-time guests from ScienceLogic, to my left is Leslie Minnix-Wolfe, who is the Senior Director of Product Marketing. And to her left, is Russ Elsner, who's the Senior Director of Product Strategy. Thank you so much for joining us. >> Thank you sir. >> Good, good to be here. >> All right, so Leslie let's start with you. Talk a lot about the product, a whole lot of announcements, Big Ben on the keynote this morning. Everybody's in, getting a little bit more of injection in the keynote today. Tell us a little bit about your roll, what you work on inside of ScienceLogic. >> Okay, so I am basically responsible for enterprise product marketing. So my job is to spin the story and help our sales guys successfully sell the product. >> All right, and Russ. >> I'm part of the product strategy team. So, I have product management responsibilities. I work a lot with the analytics and applications. And I spend a lot of time in the field with our customers. >> All right so, Leslie let's start with enterprise, the keynote this morning. The themes that I hear at many of the shows, you know we talk about things like digital transformation. But, we know the only constant in our environment is change. You know, it's good. I've actually talked to a couple of your customers and one of them this morning he's like "Look, most people don't like change. "I do, I'm embracing it I'm digging in, It's good." But, you know, we have arguments sometimes in analyst circles. And it's like are customers moving any faster. My peers that have been in the industry longer, they're like, Hogwash Stu. They never move faster they don't want change, we can't get them to move anything. I'm like, come on, if they don't the alternative is often, You're going to be... You know, you're competitors are going to take advantage of data and do things better. So, bring us a little bit of insight as what you're hearing from your customers both here and in your day to day. >> Sure, yeah, change is constant now and so one of the big challenges that our customers are facing is how do I keep up with it. The traditional manual processes that they've had in place for years are just not sufficient anymore. So they're looking for ways to move faster, to automate some of the processes that they've been doing manually. To find ways to free up resources to focus on things that do require a human to be involved. But they really need to have more automation in their day to day operations. >> All right, so Russ when I look at this space you know, tooling, monitoring has been something that in my career, has been a little bit messy. (laughter) Guess a little bit of an understatement even. It's an interesting... When I look at, kind of, that balance between what's happening in the infrastructure space and the application space. I went through, one of your partners over here is like "from legacy to server lists and how many weeks." (laughter) And I'm like okay that sounds good on a slide but, these things take awhile. >> Absolutely. Bring us inside a little bit, kind of the the application space an how that marries with the underlying pieces and monitoring. >> Yeah, you have a lot of transformations happening. 
There's a lot of new technologies and trends happening. You hear about serverless or containers or microservices. And that does represent a part of the application world. There are applications being written with those technologies. But one of the things is that those applications don't live in isolation. It's that they're part of broader business services, and we're not rewriting everything, and so the new shiny application and the new framework has to work with the old legacy application. So, a big piece of what we see is how do we collapse those different silos of information? How do we merge that data into something meaningful? You can have the greatest Kubernetes-based microservice application, but if it requires an SAP instance that's on prem, on bare metal, those things need to work together. So, how do you work with an environment that's like that? Enterprise, just by its nature, is incredibly heterogeneous, lots of different technologies, and that's not going to change. >> Yeah. It's going to be that way. >> You're preaching to the choir, here. You know, IT always seems additive, the answer is always "and." And, unfortunately, nothing ever dies. By the way, you want to run that wonderful Kubernetes Docker stuff and everything. I could do it on a mainframe with Z Linux. So, from that environment to the latest greatest hypercloud environment. >> Right. >> Talk a little bit about your customers. Most of them probably have hundreds of applications. They're working through that portfolio. What goes where, how do I manage all of those various pieces, and not kill my staff? (laughter) >> One of the things we're spending a lot of time with is that, obviously, we come from a background of infrastructure management. So, we understand the different technologies, different layers, and the heterogeneous nature, and on top of that runs the application. So they have their own data, and there's the APM space. So we're seeing a lot of interest in the work we're doing with taking our view of the infrastructure and marrying it to the application view that we're getting from tools like AppDynamics or Dynatrace or New Relic. And so, we're able to take that data and leverage it on top of the infrastructure to give you a single view, which aids in root cause analysis, capacity planning, and all the different things that people want to do. Which leads us to automation. So, this idea of merging data from lots of sources is a big theme for us. >> All right so, Leslie, who are some of the key constituents that you're talking to, messaging to? In the industry we've talked about silos for so long. And now it's like, oh, we're going to get architects and generalists. And you know, cloud changes everything, yes and no. (laughter) We understand where budgets sit for most CIOs today. So, bring us inside what you're seeing. >> Sure. Yeah, we're seeing a tremendous change. Where before we used to talk more to the infrastructure team, to the folks managing the servers, the storage, the network. We're really seeing a broader audience. And multiple constituents. We're looking at directors, VPs, CIOs, CEOs, architects. We're starting to see more people that are tools managers, folks that are involved in the application side of the house. So, it's really diverged. So, you're not going in and talking to one person, you're talking to lots of different teams, lots of different organizations that need to work together. To Russ's point about being able to bring all this data together.
As you bring it together, those different stakeholders have more visibility into each other's areas. And they also have a better understanding of what the impact is when something goes down in the infrastructure, how it affects the app, and vice versa. >> Leslie, the other thing I'm wondering if you can help me squint through, when I look at the landscape, it's, you know, my ITSM, I've got my logging, I've got all my various tools and silos. When I hear something like, actually, your CEO Dave just said, "Oh, we just had a customer that replaced 50 tools," it's like, how do you target that? How does a customer know that they have a challenge that you fit? Because, you understand, you can't be all things to all people. You've got certain partners that might claim that kind of thing. >> Right. >> But where you fit in the marketplace, how do you balance that? >> Well, so I think what we're seeing now is that there have been some big players for a long time. What we refer to fondly as the Big Four. And those companies really haven't evolved to the extent that they can support the latest technology. Certainly at the speed with which organizations are adopting them. So, they might be able to support some of the legacy, but they've really become so cumbersome, so complicated and difficult to maintain that people are wanting to move away from them. I would say five years ago, most organizations weren't willing to move down that path. But with some of the recent acquisitions, the Broadcom acquisition, the Micro Focus acquisition, you're seeing that more organizations are looking to replace those tools in their entirety. And as a result of that, they're looking at how can I minimize my tool set. I'm not going to get rid of everything and only have one vendor. But how do I pick the right tools and bring them together? And this is one of the areas where we do extremely well, in that we can bring in data, we can integrate with other tools, we can give you the full picture. But we're kind of that hub, that central point. And I think we heard that earlier today from Bailey at Cisco, where he talked about how ScienceLogic is really the core to their monitoring and management environment, because we're bringing the data in and we're feeding the data into other systems as well as managing it within ScienceLogic. >> Russ, I actually heard data emphasized more than I expected. I know enough about the management and monitoring space. We understand data was important to that, I'm a networking guy by background, we've been talking about leveraging the data for the network and using some automation and things like that, but it's a little bit different. Can you talk some about those relationships to data? We understand data's going to be everywhere, and for customers, actually wrapping my arms around it, making sure I can manage it, compliance, and hopefully getting value out of it is one of the most important things today. >> Absolutely, so one of the things we stress a lot when we talk about data, it used to be that data was hard to come by. We were data poor, and so how do we get... We don't have a probe there, so how do we get this data, do we need an agent? That's different now, data is... We are drowning in data, we have so much data. So, really the key is to give that data context. And so for us that means a lot of structure, and topology and dependencies across the layers of abstraction, across the application.
And we think that's really the key to taking this just vast, unstructured mess of data that isn't useful to the business and actually being able to take... apply analytics, and actually take action, and ultimately drive automation, by learning and maintaining that structure in real time, automatically, because that's something a human can't do. So, you need machine help, you need to automate that. >> So, Leslie, there was in the keynote this morning the start of a discussion of the AIOps maturity model. >> Right. >> And one of the things that struck me is there was not a single person in the poll that said, yes, I've gone fully automated. And first, there's the maturity of the technology, the term, and where we are. But there's also, let's put it on the table, that fear sometimes of "Oh my gosh, the machines are taking our jobs." (laughter) You know, we laugh, but it is something that needs to be addressed. How are you addressing that? Where are your customers with at least that willingness? Because I used to run operations for a number of years, and I told my team, look, you're going to have more work next year, and you're going to have more things change, so if you can't simplify, automate, get rid of things, I've got to have somebody helping me, and boy, those robots would be a good help there. >> What we're seeing is, I mean, let's be real, people don't like to do the mundane tasks, right. So you think about, when you report an issue to the service desk, do you really want to open that ticket? Do you want to enter in all that information yourself? Do you want to provide all the details that they need in order to help you? No. People don't do it, they put in the bare minimum, and then what ends up happening is there's this back and forth as they try to collect more information. It's things like that that you want to automate. You want to be able to take that burden off of the individuals and do the things, or at least allow them to do the things, that they really need to do. The things that require their intelligence. So, we can do things like clean up storage disk space when you're starting to run out of disk space. Or we can restart a service, or we might apply a configuration change that we know is inconsistent in the environment. So, there's lots of things like that that you can automate without actually replacing the individual. You're just freeing them up to do more high level thinking. >> Russ, anything else along the automation line? Great customer examples or any successes that you've seen that are worth sharing? >> Yeah, automation also comes in the form of connecting the breadcrumbs. So, we have a great example. A customer we worked with, they had an APM tool, one of the great ones, you know, top of the magic quadrant kind of thing, and it kept on reporting code problems. The application's going down, affecting revenue, huge visibility. And it's saying code problem, code problem, code problem. But the problem is jumping around. Sometimes it's here, sometimes it's there. So, it seemed like a ghost. So, when we connected that data, the APM data with the vCenter data and the network data, what it turned out was, there was packet loss in the hypervisor. So, it was actually a network outage that was manifesting itself as a code problem, and as soon as they saw that, they said, what's causing that network problem? They immediately found a big spike of traffic and were able to solve it. They always had the data. They had the network data, they had the VMware data, they had the JVM data.
They didn't know to connect the dots. And so, by us putting it right next to each other, we connected the dots, and it was a human ultimately that said, I know what's wrong, I can fix that. But it took them 30 seconds to solve a problem that they had been chasing after for months. That's a form of automation too, getting the information to the human so that they can make a smart decision. That's automation just as much as rebooting a-- >> Exactly. >> --server or cleaning a disk. >> Well right, it's The Hitchhiker's Guide to the Galaxy. Sometimes the answers are easy if I know what question to ask. >> Exactly, yes. (laughter) >> And that's something we've seen from data scientists too. That's what their expertise is, is to help find that. All right, Leslie, give us a little view forward. We heard a little bit, so many integrations, the AIOps journey. What should customers be looking for going forward? What are they asking you to help bring them along that journey? >> Oh gosh. They're asking us to make it easier on all counts. Whether it's easier to collect the data, easier to add the context to the data, easier to analyze the data. So, we're putting more and more analytics into our platform, so that they're not having to do a lot of the analysis themselves. As you said earlier, there are the folks that are afraid they're going to lose their job because the robots or the machines are taking over. That's not really where I see it. It's just that we're bringing in the automation and the analytics for the things that they don't want to have to do, so that they can look at it and solve the really gnarly problems and start focusing on areas that are not necessarily going to be automatable or predictable. It's the things that are unusual that they're going to have to get involved in, as opposed to the things that are traditional and constant. >> So, Russ, I'd love for you to comment on the same question. And just a little bit of feedback I got talking to some of the customers is they like directionally where it's going, but the term they threw out was dynamic. Because if you talk about cloud, you talk about containers, down the road, things like serverless. If it polls every five minutes, it's probably out of date. >> Oh, absolutely. I remember back when we talked big data, real time was one of those misnomers that got thrown out there. Really, what we always said is what real time needs to mean is the data in the right place to the right people to solve the issue. >> Absolutely. >> Exactly. So, where do you guys see this directionally, and how do you get more dynamic? >> Well see, dynamic exists in a bunch of different ways. How immediate is the data? How accurate is the dependency map, and that's changing and shifting all the time. So, we have to keep that up to date automatically in our product. It's also the analytics that get applied, the recommendations you make. And one of the things is, you can talk to data scientists and they can build a model, train a model, test a model, and find something. But if they find something that was true three weeks ago, it's irrelevant. So, we need to build systems that can do this in real time. That they can, in real time, meaning gather data in real time, understand the context in real time, recognize the behavior and make a recommendation or take an action. There's a lot of stuff that we have to do to get there. We have a lot of the pieces in place. It's a really cool time in the industry right now because we have the tools, we have the technology.
And it's a need that needs to be filled. That's really where we're spending our energy: completing that loop, a closed loop system that can help humans do their jobs better and in a more automated way. >> Awesome. Well, Leslie and Russ, thanks so much for sharing your visibility into what customers are doing and the progress with your platforms. >> All right, thank you, Stu. >> And we'll be back with more coverage here from ScienceLogic Symposium 2019. I'm Stu Miniman, and thank you for watching theCUBE. (energetic music)
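To make the cross-layer correlation and the low-risk remediation discussed above a little more concrete, here is a minimal, hypothetical Python sketch. The dependency map, event sources, component names, and clean-up action are all invented for illustration; this is not ScienceLogic's data model or API, just the general idea of grouping events from separate tools by a shared topology and gating a safe automated action behind a threshold.

```python
"""
Hypothetical sketch: correlate events from separate tools (APM, virtualization,
network) through a shared dependency map, then trigger a low-risk automated
remediation such as a disk clean-up. All names are made up for illustration.
"""
from collections import defaultdict

# Dependency map: each component lists what it depends on (runs on / connects through).
DEPENDS_ON = {
    "checkout-app": ["web-vm-01"],
    "web-vm-01": ["esx-host-03"],
    "esx-host-03": ["leaf-switch-07"],
}

def upstream(component):
    """Return the component plus everything it transitively depends on."""
    seen, stack = [], [component]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.append(node)
            stack.extend(DEPENDS_ON.get(node, []))
    return seen

# Events as they might arrive from different monitoring silos.
events = [
    {"source": "apm",     "component": "checkout-app",   "symptom": "slow response"},
    {"source": "vcenter", "component": "esx-host-03",    "symptom": "packet loss"},
    {"source": "network", "component": "leaf-switch-07", "symptom": "traffic spike"},
]

# Group events whose components sit on the app's dependency chain.
related = defaultdict(list)
app_chain = set(upstream("checkout-app"))
for ev in events:
    if ev["component"] in app_chain:
        related["checkout-app"].append(ev)

for app, evs in related.items():
    print(f"Probable single incident affecting {app}:")
    for ev in evs:
        print(f"  {ev['source']:8s} reports {ev['symptom']} on {ev['component']}")

# A low-risk remediation rule of the kind mentioned in the interview.
def remediate_disk(host, used_pct, threshold=90):
    """Placeholder action; a real system would call its runbook/automation engine."""
    if used_pct >= threshold:
        print(f"{host}: disk at {used_pct}% -- triggering temp-file clean-up job")

remediate_disk("web-vm-01", used_pct=93)
```

The point of the sketch is simply that once events from APM, virtualization, and network tools share a dependency map, tying them into one probable incident, the way the packet-loss example played out, becomes a small amount of logic rather than a manual hunt.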
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Leslie | PERSON | 0.99+ |
Leslie Minnix | PERSON | 0.99+ |
Russ | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Russ Elsner | PERSON | 0.99+ |
50 tools | QUANTITY | 0.99+ |
Leslie Minnix-Wolfe | PERSON | 0.99+ |
30 seconds | QUANTITY | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
Washington, D.C. | LOCATION | 0.99+ |
one | QUANTITY | 0.99+ |
Washington D.C. | LOCATION | 0.99+ |
ScienceLogic | ORGANIZATION | 0.99+ |
one vendor | QUANTITY | 0.99+ |
Dave | PERSON | 0.99+ |
today | DATE | 0.99+ |
first | QUANTITY | 0.98+ |
next year | DATE | 0.98+ |
TheCUBE | ORGANIZATION | 0.98+ |
five years ago | DATE | 0.98+ |
three weeks ago | DATE | 0.98+ |
Z Linux | TITLE | 0.98+ |
ScienceLogic Symposium 2019 | EVENT | 0.97+ |
both | QUANTITY | 0.97+ |
single person | QUANTITY | 0.95+ |
one person | QUANTITY | 0.94+ |
One | QUANTITY | 0.93+ |
this morning | DATE | 0.93+ |
Stu | PERSON | 0.93+ |
earlier today | DATE | 0.92+ |
hundreds of applications | QUANTITY | 0.92+ |
New Relic | ORGANIZATION | 0.91+ |
The Hitchhikers Guide to the Galaxy | TITLE | 0.9+ |
single view | QUANTITY | 0.89+ |
two first-time guests | QUANTITY | 0.89+ |
ScienceLogic Symposium | EVENT | 0.89+ |
Kubernetes Docker | TITLE | 0.88+ |
Dynatrace | ORGANIZATION | 0.86+ |
Bailey | PERSON | 0.86+ |
VMware | ORGANIZATION | 0.83+ |
every five minutes | QUANTITY | 0.78+ |
Appdynamics | ORGANIZATION | 0.73+ |
Bare Metal | TITLE | 0.71+ |
Carlton | ORGANIZATION | 0.67+ |
Big Ben | PERSON | 0.66+ |
Microfocus | ORGANIZATION | 0.64+ |
Ritz- | LOCATION | 0.6+ |
Kubernetes | TITLE | 0.6+ |
the areas | QUANTITY | 0.6+ |
CEO | PERSON | 0.59+ |
2019 | DATE | 0.59+ |
years | QUANTITY | 0.58+ |
Broadcom | ORGANIZATION | 0.57+ |
Wolfe | PERSON | 0.55+ |
each | QUANTITY | 0.5+ |
couple | QUANTITY | 0.49+ |
SAP | ORGANIZATION | 0.48+ |
Four | EVENT | 0.46+ |
Brian Pawlowski, DriveScale | CUBEConversation, Sept 2018
(intense orchestral music) >> Hey, welcome back everybody, Jeff Frick here with theCUBE. We're having a CUBE Conversation in our Palo Alto studios, getting a short little break between the madness of the conference season, which is fully upon us, and we're excited to have a long-time industry veteran, Brian Pawlowski, the CTO of DriveScale, joining us to talk about some of the crazy developments that continue to happen in this world that just advances, advances. Brian, great to see you. >> Good morning, Jeff, it's great to be here. I'm a bit, still trying to get used to the timezone after a long, long trip in Europe, but I'm glad to be here, I'm glad we finally were able to schedule this. >> Yes, it's never easy, (laughs) one of the secrets of our business is everyone is actually all together at conferences, it's hard to get 'em together when there's not that catalyst of a conference to bring everybody together. So give us the 101 on DriveScale. >> So, DriveScale. Let me start with, what is composable infrastructure? DriveScale provides a product for orchestrating disaggregated components on a high-performance fabric to allow you to spin up essentially your own private cloud, your own clusters, for these modern applications, scale out applications. And I just said a bunch of gobbledygook, what does that mean? The DriveScale software is essentially an orchestration package that provides the ability to take compute nodes and storage nodes on a high-performance fabric and securely form multi-tenant architectures, much like you would in a cloud. When we think of application deployment, we think of a hundred nodes or 500 nodes. The applications we're looking at are things that people are using for big data, machine learning, or AI, or these scale out databases. Things like Vertica, Aerospike is important, DRAM-based databases, and this is an alternative to the standard way of deploying applications in a very static nature onto fixed physical resources, or into network storage coming from the likes of Network Appliance, sorry, NetApp, and Dell EMC. It's the modern applications we're after, the big data applications for analytics. >> Right. So it's software that basically manages the orchestration of hardware, I mean of compute, storage, and network, so you can deploy big data analytics applications? >> Yes. >> Ah, at scale. >> It's absolutely focused on the orchestration part. The typical way the applications that we're in pursuit of right now are deployed is on 500 physical bare metal nodes from, pick your vendor, of compute and storage that is all bundled together and then laid out into a physical deployment on a network. What we do is you essentially disaggregate, separate compute, pure compute, no disks at all, storage into another layer, have the fabric, and we inventory it all, and much like vCenter for virtualization, for doing software deployment of applications, we do software deployment of scale out applications onto a scale out cluster. >> Right. So you talked about using industry standard servers, industry standard storage, does the system accommodate different types of compute and CPUs, different types of storage? Whether it's high performance disks, or it's Flash, how does it accommodate those things? And if I'm trying to set up my big stack of hardware to then deploy your software to get it configured, what're some of the things I should be thinkin' about? >> That's actually a great question, I'm going to try to hit three points.
(clears throat) Absolutely. In fact, a core part of our orchestration layer is to essentially generalize the compute and storage components and the networking components of your data center, and do rule-based, constraint-based selection when creating a cluster. From your perspective, when creating a cluster, (coughs) you say, "I want a hundred nodes, and I'm going to run this application on it, and I need this environment for the application." And this application is running local, it thinks it's running on local bare metal. So you say, "A hundred nodes, eight cores each minimum, and I want 64 gig of memory minimum." It'll go out and look at the inventory and do a best match of the components there. You could have different products out there, we are compute agnostic, storage agnostic, you could have mix and match, we will basically do a best fit match of all of your available resources and then propose back to you in a couple seconds the cluster you want, and then you just hit go, and it forms a cluster in a couple seconds. >> A virtual cluster within that inventory of assets that I-- >> A virtual cluster that-- Yes, out of the inventory of assets, except from the perspective of the application it looks like a physical cluster. This is the critical part of what we do, is that, somebody told me, "It's like we have an extension cord between the storage and the compute nodes." They used this analogy yesterday and I said I was going to reuse it, so if they listen to this: Hey, I stole your analogy! We basically provide a long extension cord to the direct-attached storage, except we've separated out the storage from the compute. What's really cool about that, it was the second point of what you said, is that you can mix and match. The mix and match occurs because one of the things you're doing with your compute and storage is refreshing your compute and storage at three to five year cycles, separately. When you have the old style model of combining compute and storage in what I'd call a captive DAS scenario, you are forced to do refreshes of both compute and persistent storage at the same time. It just becomes an unmanageable position to be in, and separating out the components provides you a lot of flexibility for mixing and matching different types of components, doing rolling upgrades of the compute separate from the storage, and then also having different storage tiers that you can combine. The biggest tiers today are SSD storage and spinning disk storage, being able to either provide spinning disk, SSDs, solid-state storage, or a mixture of both for a hybrid deployment for an application, without having to worry at purchase time about having to configure your box that way, we just basically do it on the fly. >> Right. So, and then obviously I can run multiple applications against that big stack of assets, and it's going to go ahead and parse the pieces out that I need for each application. >> We didn't even practice this beforehand, that was a great one too! (laughs) A key part of this is actually providing a secure multi-tenant environment, that's the phrase I use, because it's a common phrase. Our target customer is running multiple applications. In 2010, when somebody was deploying big data, they were deploying Hadoop. Quickly, (snaps) think, what were the other things then? Nothing. It was Hadoop. Today it's 10 applications, all scale out, all having different requirements for the reference architecture, for the amount of compute and storage.
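As a rough illustration of the rule-based, best-fit matching described above (ask for N nodes with minimum cores and memory, match against an inventory, reserve what fits), here is a minimal Python sketch. The node names, fields, and selection policy are assumptions made for the example; they are not DriveScale's actual API or matching algorithm.

```python
"""
Hypothetical sketch (not DriveScale's actual API): pick compute nodes from an
inventory that satisfy minimum constraints, the way a constraint-based cluster
request is described in the conversation.
"""
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cores: int
    mem_gb: int
    in_use: bool = False

inventory = [
    Node("c220-01", cores=8,  mem_gb=64),
    Node("c220-02", cores=16, mem_gb=128),
    Node("c220-03", cores=8,  mem_gb=32),                # too little memory
    Node("c220-04", cores=12, mem_gb=96, in_use=True),   # already allocated
    Node("c220-05", cores=8,  mem_gb=64),
]

def compose_cluster(inventory, count, min_cores, min_mem_gb):
    """Best fit: prefer the smallest nodes that still meet the constraints,
    so larger nodes stay free for more demanding requests."""
    candidates = [n for n in inventory
                  if not n.in_use and n.cores >= min_cores and n.mem_gb >= min_mem_gb]
    candidates.sort(key=lambda n: (n.cores, n.mem_gb))
    if len(candidates) < count:
        raise RuntimeError(f"only {len(candidates)} nodes match, need {count}")
    chosen = candidates[:count]
    for n in chosen:
        n.in_use = True          # reserve them for this virtual cluster
    return chosen

# "I want N nodes, eight cores each minimum, 64 gig of memory minimum."
cluster = compose_cluster(inventory, count=2, min_cores=8, min_mem_gb=64)
print("cluster members:", [n.name for n in cluster])
```

Choosing the smallest nodes that still satisfy the constraints is just one reasonable policy; a real orchestrator would also weigh fabric locality, storage attachment, and failure domains before forming the virtual cluster.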
So, our orchestration layer basically allows you to provision separate virtual physical clusters in a secure, multi-tenant way, cryptographically secure, and you could encrypt the data too if you wanted, you could turn on over-the-wire encryption along with data-at-rest encryption, think GDPR and stuff like that. But the different clusters cannot interfere with each other's workloads, and because you're on a fully switched Ethernet fabric, they don't interfere with performance either. But that secure multi-tenant part is critical for the orchestration and management of multiple scale out clusters. >> So then, (light laugh) so in theory, if I'm doing this well, I can continually add capacity, I can upgrade my drives to SSDs, I can put in new CPUs as new great things come out, into my big cloud, not my cloud, but my big bucket of resources, and then using your software continue to deploy those against applications as is most appropriate? >> Could we switch seats? (both laugh) Let me ask the questions. (laughing) No, because it's-- >> It sounds great, I just keep adding capacity, and then it redeploys based on the optimum, right? >> That's a great summary, because the thing that we're-- the basic problem we're trying to solve is that... This is like the lesson from VMware, right? One lesson from VMware was, first it was, we had unused CPU resources, let's get those unused CPU cycles back. No CPU cycle shall go unused! Right? >> I thought that they needed to keep 50% overhead, just to make sure they didn't bump against the roof. But that's a different conversation. >> That's a little detail, (both laugh) that's a little detail. But anyway. The secondary effect was way more important. Once people decoupled their applications from physical purchase decisions and rolling out physical hardware, they stopped caring about any particular piece of hardware, they then found that the simplified management, the one button push software application deployment, was a critical enabler for business operations and business agility. So, we're trying to do what VMware did for that kind of captive legacy application deployment, we're trying to do that for essentially what has been, historically, bare metal big data application deployment, where people were... Seriously, in 2012, 2010, 2012, after virtualization took over the data center, and the IT manager had his cup of coffee and he's layin' back goin' "Man, this is great, I have nothing else to worry about." Then there's a (knocks) and the guy comes in his office, or his cube, and goes "Whaddya want?!" and he goes "Well, I'd like you to deploy 500 bare metal nodes to run this thing called Hadoop." And he goes "Well, I'll just give you 500 virtualized instances." And he goes "Nope, not good enough! I want to start going back to bare metal." And since then it's gotten worse. So what we're trying to do is restore the balance in the universe, and apply for the scale out clusters what virtualization did for the legacy applications. Does that make a little bit of sense? >> Yeah! And is it heading in the other direction, right, towards the atomic? So if you're trying to break the units of compute and storage down to the base, you've got a unified baseline that you can apply more volume to, rather than maybe a particular feature set in a particular CPU, or a particular characteristic of a particular type of storage? >> Right.
>> This way you're doing it in software, and leveraging a whole bunch of it to satisfy, as you said, kind of the meets-min for that particular application. >> Yeah, absolutely. And I think, kind of critical about the timing of all this is that virtualization drove, very much, a model of commoditization of CPUs. Once VMware hit, people weren't deploying applications on particular platforms, they were deploying applications on a virtualized hardware model, and that was how applications were always thought about from then on. A lot of these scale out applications, not a lot of them, all of them, are designed to be hardware agnostic. They want to run on bare metal 'cause they're designed to run, when you deploy a bare metal application for scale out, Apache Spark, it uses all of the CPU on the machine, you don't need virtualization because it will use all the CPU, it will use all the bandwidth and the disks underneath it. What we're doing is separating it out to provide lifecycle management between the two of them, but also allow you to change the configurations dynamically over time. But this word of atomic kinda's a-- the disaggregation part is the first step for composability. You want to break it out, and I'll go here and say that the enterprise storage vendors got it right at one point, I mean, they did something good. When they broke out captive storage to the network and provided a separation of compute and storage, before virtualization, that was a step towards gaining control and a sane management approach to what are essentially very different technologies evolving at very different speeds. And then your comment about "So what if you want to basically replace spinning disks with SSDs?" That's easily done in a composable infrastructure because it's a virtual function, you're just using software, software-defined data center, you're using software, except for the set of applications that just slip past what was being done in the virtualized infrastructure and the network storage infrastructure. >> Right. And this really supports kind of the trend that we see, which is the new age, which is "No, don't tell me what infrastructure I have, and then I'll build an app and try and make it fit." It's really app first, and the infrastructure has to support the app, and I don't really care, as a developer and as a competitive business trying to get apps to satisfy my marketplace, the infrastructure, I'm just now assuming, is going to support whatever I build. This is how you enable that. >> Right. And very importantly, the people that are writing all of these apps, the tons of those apps, Apache-- by the way, there's so many Apache things, Apache Kafka, (laughing) Apache Spark, the Hadoops of the world, the NoSQL databases, >> Flink, and Oracle, >> Cassandra, Vertica, things that we consider-- >> MongoDB, you got 'em all. MongoDB, right. Let's just keep rolling these things off our tongue. >> They're all CUBE alumni, so we've talked to 'em all. >> Oh, this is great. >> It's awesome. (laughs) >> And they're all brilliant technologists, right? And they have defined applications that are so, so good at what they do, but they didn't all get together beforehand and say, "Hey, by the way, how can we work together to make sure that when this is all deployed, and operating in pipelines, and in parallel, that from an IT management perspective, it all just plays well together?" They solved their particular problems, and when it was just one application being deployed, no harm, no foul, right?
When it's 10 applications being deployed, and all of a sudden the line item for big data applications starts creeping past five, six, approaching 10%, people start to get a little bit nervous about the operational cost, the management cost, deployability, I talked about lifecycle management, refreshes, tech refreshes, expansion, all these things that when it's a small thing over there in the corner, okay, I'll just ignore it for a while. Yeah. Do you remember the old adventure game pieces? (Jeff laughs) I'm dating myself. >> What's an adventure game? I don't know. (laughs) >> Yeah, when you watered a plant, "Water, please! Water, please!" The plant, the plant in there looked pitiful, you gave it water and then it goes "Water! Water! Give me water!" Then it starts to attack, but. >> I'll have to look that one up. (both laugh) Alright so, before I let you go, you've been at this for a while, you've seen a lot of iterations. As you kind of look forward over the next little while, kind of what do you see as some of the next kind of big movements or kind of big developments as kind of the IT evolution, and every company's now an IT company, or software company, continues? >> So, let's just say that this is a great time, why I joined DriveScale actually, a couple reasons. This is a great time for composable infrastructure. It's like, "Why is composable infrastructure important now?" It does solve a lot of problems, you can deploy legacy applications over it and stuff, but they don't have any pain points per se, they're running in their virtualization infrastructure over here, the enterprise storage over here. >> And IBM still sells mainframes, right? So there's still stuff-- >> IBM still sells mainframes. >> There's still stuff runnin' on those boxes. >> Yes there is. (laughs) >> Just let it be, let it run. >> This came up in Europe. (laughs) >> And just let it run, but there's no pain point there. Where there is, is these increasingly deployed scale out applications. In 2004, when the clock speed wall was hit, everything went multi-core, and then parallel applications became the norm, and then it became scale out applications for the Facebooks of the world, the Googles of the world, whatever. >> Amazon, et cetera. >> For their applications, that scale out is becoming the norm moving forward for application architecture and application deployment. The more data that you process, the more scale out you need, and composable infrastructure is becoming a-- is a critical part of getting that under control, and getting you the flexibility and manageability to allow you to actually make sense of that deployment, in the IT center, at large. And the second thing I want to mention is that Flash has emerged, and that's driven something called NVMe over Fabrics, essentially a high-performance fabric interconnect for providing essentially local latency to remote resources; that is part of the composable infrastructure story today, and you're basically accessing, with the speed of local access to solid state memory, you're accessing it over the fabric, and all these things are coming together driving a set of applications that are becoming both increasingly important and increasingly expensive to deploy. And composable infrastructure allows you to get a handle on controlling those costs, and making it a lot more manageable. >> That's a great summary. And clearly, the amount of data that's going to be coming into these things is only going up, up, up, so.
Great conversation, Brian, again, we still got to go meet at Terún later, so. >> Yeah, we have to go, yes. >> We will make that happen with ya. >> Great restaurant in Palo Alto. >> Thanks for stoppin' by, and, really appreciate the conversation. >> Yeah, and if you need to buy DriveScale, I'm your guy. (both laughing) >> Alright, he's Brian, I'm Jeff, you're watching the CUBE Conversation from our Palo Alto studios. Thanks for watchin', we'll see you at a conference soon, I'm sure. See ya next time. (intense orchestral music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Brian Pawlowski | PERSON | 0.99+ |
Jeff | PERSON | 0.99+ |
Jeff Frick | PERSON | 0.99+ |
Brian | PERSON | 0.99+ |
50% | QUANTITY | 0.99+ |
Europe | LOCATION | 0.99+ |
10 applications | QUANTITY | 0.99+ |
2012 | DATE | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
two | QUANTITY | 0.99+ |
2010 | DATE | 0.99+ |
Sept 2018 | DATE | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
2004 | DATE | 0.99+ |
five year | QUANTITY | 0.99+ |
three | QUANTITY | 0.99+ |
500 nodes | QUANTITY | 0.99+ |
One lesson | QUANTITY | 0.99+ |
MongoDB | TITLE | 0.99+ |
both | QUANTITY | 0.99+ |
six | QUANTITY | 0.99+ |
yesterday | DATE | 0.99+ |
64 gig | QUANTITY | 0.99+ |
eight cores | QUANTITY | 0.99+ |
10% | QUANTITY | 0.99+ |
Network Appliance | ORGANIZATION | 0.98+ |
one application | QUANTITY | 0.98+ |
first step | QUANTITY | 0.98+ |
five | QUANTITY | 0.98+ |
each application | QUANTITY | 0.98+ |
second point | QUANTITY | 0.98+ |
VMware | ORGANIZATION | 0.97+ |
DriveScale | ORGANIZATION | 0.97+ |
GDPR | TITLE | 0.97+ |
101 | QUANTITY | 0.97+ |
today | DATE | 0.97+ |
Cassandra | TITLE | 0.97+ |
Today | DATE | 0.96+ |
second thing | QUANTITY | 0.96+ |
CUBE | ORGANIZATION | 0.96+ |
one | QUANTITY | 0.96+ |
NoSQL | TITLE | 0.96+ |
each | QUANTITY | 0.96+ |
Facebooks | ORGANIZATION | 0.96+ |
one thing | QUANTITY | 0.95+ |
one point | QUANTITY | 0.95+ |
both laugh | QUANTITY | 0.95+ |
first | QUANTITY | 0.94+ |
Googles | ORGANIZATION | 0.94+ |
Dell EMC | ORGANIZATION | 0.94+ |
NetApp | ORGANIZATION | 0.93+ |
Apache | ORGANIZATION | 0.91+ |
three points | QUANTITY | 0.91+ |
DriveScale | TITLE | 0.88+ |
Terún | ORGANIZATION | 0.88+ |
500 bare metal nodes | QUANTITY | 0.88+ |
Flinks | TITLE | 0.87+ |
Vertica | TITLE | 0.86+ |
a hundred nodes | QUANTITY | 0.85+ |
vCenter | TITLE | 0.84+ |
CUBEConversation | EVENT | 0.83+ |
couple seconds | QUANTITY | 0.83+ |
500 physical bare metal nodes | QUANTITY | 0.81+ |
couple | QUANTITY | 0.81+ |
Aerospike | TITLE | 0.78+ |
500 virtualized | QUANTITY | 0.77+ |
hundred nodes | QUANTITY | 0.76+ |
secondary | QUANTITY | 0.76+ |
one button | QUANTITY | 0.72+ |
Spark | TITLE | 0.68+ |
Siva Sivakumar, Cisco and Rajiev Rajavasireddy, Pure Storage | Pure Storage Accelerate 2018
>> Announcer: Live from the Bill Graham Auditorium in San Francisco, it's The Cube, covering Pure Storage Accelerate 2018. Brought to you by Pure Storage. (upbeat techno music) >> Welcome back to The Cube, we are live at Pure Accelerate 2018 at the Bill Graham Civic Auditorium in San Francisco. I'm Lisa Martin, moonlighting as Prince today, joined by Dave Vellante, moonlighting as The Who. Should we call you Roger? >> Yeah, Roger. Keith. (all chuckling) I have a moon bat. (laughing) >> It's a very cool concert venue, in case you don't know that. We are joined by a couple of guests, Cube alumni, welcoming them back to The Cube. Rajiev Rajavasireddy, the VP of Product Management and Solutions at Pure Storage, and Siva Sivakumar, the Senior Director of Data Center Solutions at Cisco. Gentlemen, welcome back. >> Thank you. >> Thank you. >> Rajiev: Happy to be here. >> So talk to us about, you know, lots of announcements this morning, Cisco and Pure have been partners for a long time. What's the current status of the Cisco-Pure partnership? What are some of the things that excite you about where you are in this partnership today? >> You want to take that, Siva, or you want me to take it? >> Sure, sure. I think if you look back at what brought us together, obviously both of us are looking at the market transitions and some of the ways that customers were adopting technologies from our side. The converged infrastructure is truly how the partnership started. We literally saw that the customers wanted simplification, wanted much more of a cloud-like experience. They wanted to see infrastructure come together in a much easier fashion. That we bring the IT, make it easier for them, and we started, and of course, the best of breed technology on both sides, being a Flash leader from their side, networking and compute leader on our side, we truly felt the partnership brought the best value out of both of us. So it's a journey that started that way, and we look back now and we say that this is absolutely going great and the best is yet to come. >> So from my side, basically Pure had started what we now call FlashStack, a converged infrastructure offering, roughly about four years ago. And about two and a half years ago, Cisco started investing a lot in this partnership. We're very thankful to them, because they kind of believed in us. We were growing, obviously. But we were not quite as big as we are right now. But they saw the potential early. So about roughly two-and-a-half years ago, I talked about them investing in us. I'm not sure how many people know about what a Cisco Validated Design is. It's a pretty exhaustive document. It takes a lot of work on Cisco's side to come up with one of those. And usually, a single CVD takes about two or three of their TMEs, highly technical resources, and roughly three to six months to build. >> Per CVD? >> Per CVD. >> Wow. >> Like I said, it's very exhaustive, I mean you get your building materials, your versions, your interoperability, even the commands that you actually use to stand up that infrastructure and the applications, so on and so forth. So in a nine-month span, they kind of did seven CVDs for us. That was phenomenal. We were very, very thankful that they did that. And over time, that investment paid off. There was a lot of good market investment that Cisco and Pure jointly made, and all those investments paid off really well in terms of customer adoption and acquisition.
And essentially we are at a really good point right now. When we came out with our FlashArray X70 last April, Cisco was about the same time, they were coming out with the M5 servers. And so they invested again, and gave us five more CVDs. And just recently they've added FlashBlade to that portfolio. As you know, FlashBlade is a new product offering. Well, not so new, but a relatively new product offering from Pure, so we have a new CVD that just got released that includes FlashArray and FlashBlade for Oracle. So FlashArray does the online transaction processing, FlashBlade does data warehousing, obviously Cisco networking and Cisco servers do everything, OLTP and data warehouse, it's an end-to-end architecture. So that was what Matt Burr had talked about on stage today. We are also excited to announce that we had introduced AIRI, AI-ready infrastructure, along with Nvidia at their expo recently. We are excited to say that Cisco is now part of that AIRI infrastructure that Matt Burr had talked about on stage as well. So as you can tell, in a two and a half year period we've come a really long way. We have a lot of customer adoption every quarter. We keep adding a ton of customers and we are mutually benefiting from this partnership. >> So I want to ask you about, follow up on the Oracle solution. Oracle would obviously say, "Okay, you buy our database, buy our SaaS, buy the Red Stack, single throat to choke, you're going to run better, take advantage of all the hooks we have." You've heard it before. And it's an industry discussion. >> Rajiev: Of course. >> Customers have it, Oracle comes in hard. So what's the advantage of working with you guys, versus going with an all-Red Stack? Let's talk about that a little bit. >> Sure. Do you want to do it? >> I think if you look at the Oracle databases being deployed, this really powers many companies. This is really the IT platform. And one of the things is that customers, major customers, standardize on this. Again, if they have a standardization from an Oracle perspective, they have a standardization from an infrastructure perspective. Just a database alone is not necessarily easy to put on a different infrastructure, manage it, operate it, go through the lifecycle. So they look for an architecture. They look for something that's an overall platform for IT. "I want to do some virtualization. I want to run desktop virtualization. I want to do Oracle. I want to do SAP." So the typical IT operates as more of, "I want to manage my infrastructure as a whole. I want to manage my database and data as its own. I want its own way of looking." So while there are ways to make very appliancey behaviors that actually operate better, the approach we took is truly delivering an architecture for the data center. The fact that the network as well as the compute is so programmable makes it easy to expand. It really brings value from a complete perspective. But if you look at Pure again, their FlashArrays truly have world-class performance. So the customer also looks at, "Well, I can get everything from one vendor. Am I getting the best of breed? Am I getting the world-class technology from every one of those aspects and perspectives?" So we certainly think there are a good class of customers who value what we bring to the table and who certainly choose us for what we are. >> And to add to what Siva has just said, right?
So if you looked at pre-Flash, you're mostly right in the sense that, hey, if you built an application, especially if it was a mission-critical application, you wanted it siloed, you didn't want another application jumping in and kind of messing up the performance and response times and all that good stuff, right? So in those kinds of cases, yeah, appliances made sense. But now, when you have all Flash, and you have servers and networking that can actually liberate the performance of Flash, you don't really have to worry about mixing different applications and messing up performance for one at the expense of the other. It's basically a win-win for the customers to have much more of a consolidated platform for multiple applications, as opposed to silos. 'Cause silos are always hard to manage, right? >> Siva, I want to ask you, you know, Pure has been very bullish, really, for many years now. Obviously Cisco works with a lot of other vendors. What was it, a couple years ago? 'Cause you talked about the significant resource investment that Cisco has been making for a couple of years now in Pure Storage. What is it that makes this so, maybe this FlashStack, I'm kind of thinking of the three-legged stool that Charlie talked about this morning. But what were some of the things that you guys saw a few years ago, even before Pure was a public company, that really drove Cisco to make such a big investment in this? >> I think, when you look at how Cisco has evolved our data center portfolio, I mean, we are a very significant part of the enterprise today powered by Cisco, Cisco networking, and then we grew into the compute business. But when you looked at the way we walked into this compute business, the traditional storage as we know today is something we actually led through a variety of partnerships in the industry. And our approach to the partnership is, first of all, technology. The technology choice was very, very critical, that we bring the best of breed for the customers. But also, again, the customers themselves speaking to us, and then our channel partners, who are very critical for our enablement of the business, is very, very critical. So the way we, and when Pure really launched and forayed into all Flash, they created this whole notion that storage means Flash, and that was never the pattern before. That was a game-changing sort of model of offering storage, not just capacity, but Flash as my capacity as well as the performance point. We really realized there was going to be a good set of customers who will absorb that. Some select workloads will absorb that. But as Flash in itself evolved to be much more mainstream, everyday data storage can be on a Flash medium. Customers realized this technology, this partner, has something very unique. They've thought about a future that was coming, which we realized was very critical for us. When we evolved the network from a 10-gig fabric to 40-gig to 100-gig, for these workloads the slowest part of any system is the data movement. So when Flash became faster and easier for data to be moved, the fabric became a very critical element for the eventual success of our customer. We realized a partnership with Pure, with all Flash and the faster network and faster compute, we realized there is something unique that we can bring to bear for the customer. So our partnership minds really said, "This is the next big one that we are going to invest time and energy in." And so we clearly did that and we continue to do that.
I mean, we continue to see huge success in the customer base with the joint solutions. >> This issue of "best of breed" versus kind of integrated stacks, it's been around forever, it's not going to go away. I mean obviously Cisco, in the early days of converged infrastructure, put a lot of emphasis on integrating, and obviously partnerships. Since that time, I dunno what it was, 2009 or whatever it was, things have changed a lot. Y'know, cloud was barely a thought back then. And the cloud has pushed this sort of API economy. Pure talks about platforms and integrating through APIs. How has that changed your ability to integrate "best of breed" more seamlessly? >> Actually, you know, I've been working with UCS since it started, right? And it was perhaps the first server system that was built on an API-first philosophy. So everything in the Cisco UCS system can be, basically, anything you can do through the GUI or the command line, you can do through their XML API, right? It's an open API that they provide. And they kind of emphasized the openness of it. When they built the initial converged infrastructure stacks, right, the challenge was the legacy storage arrays didn't really have the same API-first programmability mentality, right? If you had to do an operation, you had a bunch of, a ton of CLI commands that you had to go through to get to one operation, right? So Pure, having the advantage of being built from scratch, when APIs are what people want to work with, does everything through REST APIs. All functions and features, right? So the huge advantage we have is, with Pure, Pure actually unlocks the potential that UCS always had. To actually be a programmable infrastructure. That was somewhat held back, I don't know if Siva agrees or not, but I will say it. That kind of was held back by legacy hardware that didn't have REST-based APIs or XML or whatever. So for example, they have Python- and PowerShell-based toolkits, based on their XML APIs, that they built around that. We have Python and PowerShell toolkits that we built around our own REST APIs. We have Puppet integration, and all the other stuff that you saw on the stage today. And they have the same things. So if you're a customer, and you've standardized, you've built your automation around any of these things, right, if you have the entire infrastructure completely programmable, the cloud paradigms that you're talking about are mainly because of programmability, right, people like that stuff. So we offer something very similar, the joint-value proposition. >> You're bringing that dev-ops kind of infrastructure-as-code mentality to systems design and architecture. >> Rajiev: Yeah. >> And it does allow you to bring the cloud operating model to your business. >> An aspect of the cloud operating model, right. There's multiple different things that people, >> Yeah, maybe not every single feature, >> Rajiev: Right. >> But the ones that are necessary to be cloud-like. >> Yeah, absolutely. >> Dave: That's kind of what the goal is. >> Let's talk about some customer examples. I think Domino's was on stage last year. >> Right. >> And they were mentioned again this morning about how they're leveraging AI. Are they a customer of FlashStack? Is that maybe something you can kind of dig into? Let's see how the companies that are using this are really benefiting at the business level with this technology. >> I think, absolutely, Domino's is one of our top examples of a FlashStack customer.
They obviously took a journey to actually modernize, consolidate many applications. In fact, interestingly, if you look at many of the customer journeys, the place where we find it much, much more valuable in this space is the customer has got a variety of workloads and he's also looking to say, "I need to be cloud ready. I need to have a cloud-like concept, that I have a hybrid cloud strategy today or I will tomorrow. I need to be ready to take them and put them on the cloud." And the customer also has the mindset that "While I certainly will keep my traditional applications, such as Oracle and others, I also have a very strong interest in the new and modern workloads." Whether it is analytics, or whether it is even things like containers, micro-services, things like that which bring agility. So while they think, "I need to have a variety of things going," then they start asking the question, "How can I standardize on a platform, on an architecture, on something that I can reuse, repeat, and simplify IT with?" That's, by far, it may sound like, you know, you got everything kind of thing, but that is by far the single biggest strength of the architecture. That we are versatile, we are multi-workload, and when you really build and deploy and manage, everything from an architecture, from a platform perspective, looks the same. So they only worry about the applications they are bringing onboard and worry about managing the lifecycle of the apps. And so a variety of customers, so what has happened because of that is, we started with commercial or mid-size customers, to larger commercial. But now we are much more in enterprise. Many large IT shops are starting to standardize on FlashStack, and many of our customers are really measured by the number of repeat purchases they will come back and buy. Because once they like it and they've bought it, they really love it and they come back and buy a lot more. And this is the place where it gets very exciting for all of us, that these customers come back and tell us what they want. Whether we build automation or build management architecture, our customer speaks to us and says, "You guys better get together and do this." That's where we want to see our partners come to us and say, "We love this architecture, but we want these features in there." So our feedback and our evolution really continues to be a journey driven by the demand and the market. Driven by the customers who we have. And that's hugely successful. When you are building and launching something into the marketplace, your best reward is when the customer treats you like that. >> So to basically dovetail into what Siva was talking about, in terms of customers, so he brought up a very valid point. So what customers are really looking for is an entire stack, an infrastructure, that is near invisible. It's programmable, right? And it's, you can kind of cookie-cutter that as you scale. So we have an example of that. I'm not going to use the name of the customer, 'cause I'm sure they're going to be okay with it, but I just don't want to do it without asking their permission. It's a healthcare service provider that has, basically, literally dozens of these FlashStacks that they've standardized on. Basically, they have vertical applications but they also offer VM as a service. So they have cookie-cuttered this with full automation, integration, they roll these out in a very standard way because of a lot of automation that they've done.
And they love the FlashStack just because of the programmability and everything else that Siva was talking about. >> With new workloads coming on, do you see any, you know, architectural limitations? When I say new workloads, data-driven, machine intelligence, AI workloads, do we see any architectural limitations to scale, and how do you see that being addressed in the near future? >> Rajiev: Yeah, that's actually a really good question. So basically, let's start with the, so if you look at bare metal, VMs, and containers, that is one factor. On that factor, we're good because, you know, we support bare metal and so does the entire stack, and when I say we, I'm talking about the entire FlashStack, servers and storage and network, right. VMs, and then also containers. Because you know, most of the containers in the early days were ephemeral, right? >> Yeah. >> Rajiev: Then persistent storage started happening. And a lot of the containers would deploy in the public cloud. Now we are getting to a point where large enterprise customers are basically experimenting with containers on prem. And so, the persistent storage that connects to containers is kind of nascent but it's picking up. So Kubernetes and Docker are the primary components in there, right? And for Docker, we already have Docker native volume plug-ins, and Cisco has done a lot of work with Docker for the networking and server pieces. And Kubernetes has flex volumes, and we have Kubernetes flex volume integration, and Cisco works really well with Kubernetes. So there are no issues on that factor. Now if you're talking about machine learning and artificial intelligence, right? So it depends. So for example, Cisco's servers today are primarily driven by Intel-based CPUs, right? And if you look at the Nvidia DGXs, these are mostly GPUs. Cisco has a great relationship with Nvidia. And I will let Siva speak to the machine learning and artificial intelligence pieces of it, but on the networking piece for sure, we've already announced today that we are working with Cisco in our AIRI stack, right? >> Dave: Right. >> Yeah, no, I think that the next generation workloads, or any newer workloads, always come with a different set of needs; some are just software-level workloads. Typically, for software-type innovation, given the platform architecture is built with programmability and flexibility, adapting our platforms to a newer software paradigm, such as containers and microservices, is something where we certainly can extend the architecture to be able to do that, and we have done that several times. So that's one good area it covers. But when there are new hardware innovations, whether that is interconnect technologies, or new types of flash, or machine-learning GPU-style models, what we look at from a platform perspective is what we can bring from an integrated perspective. That, of course, allows IT to take advantage of the new technology, but keeps the operational and IT costs of doing business the same. That's where our biggest strength is. Of course Nvidia innovates on the GPU factor, but IT doesn't just do GPUs. They have to integrate it into a data center, flow the data into the GPU, run compute along that, and run applications to really get the most out of this information. And then, of course, for any kind of real-time processing, or any decision making for that matter, now you're really talking about bringing it in-house and integrating it into the data center. >> Dave: Right. >> Any time you start in that conversation, that's really where we are. I mean, that's our, we welcome more innovation, but we know when you get into that space, we certainly shine quite well. >> Yeah, it's secured, it's protected, you can move it, all those kinds of things. >> So we love these innovations, but then our charter and what we are doing is all in making this experience, whatever the new thing may be, as seamless as possible for IT to take advantage of. >> Wow, guys, you shared a wealth of information with us. We thank you so much for talking about this Cisco-Pure partnership, what you guys have done with FlashStack, how you're helping customers from pizza delivery with Domino's to healthcare services to really modernize their infrastructures. Thanks for your time. >> Thank you. >> Thank you very much. >> For Dave Vellante and Lisa Martin, you're watching the Cube live from Pure Accelerate 2018. Stick around, we'll be right back.
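For readers who want to see what the Docker native volume plug-in integration Rajiev mentions can look like in practice, here is a minimal sketch using the Docker SDK for Python. The driver name "pure" and the "size" option are illustrative assumptions, not documented plugin defaults.

```python
# Minimal sketch: create a named volume through a vendor's Docker volume plugin
# and attach it to a container. The driver name "pure" and the "size" option
# are assumptions for illustration only.
import docker

client = docker.from_env()

volume = client.volumes.create(
    name="orders-db-data",
    driver="pure",                  # assumed plugin name
    driver_opts={"size": "10GiB"},  # assumed plugin option
)

container = client.containers.run(
    "postgres:10",
    detach=True,
    volumes={volume.name: {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
)
print(container.short_id, volume.name)
```

The same pattern applies on the Kubernetes side, where a flex volume or CSI driver sits behind a StorageClass and the application simply claims the storage it needs.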
Adrian Cockcroft, AWS | KubeCon 2017
>> Announcer: Live from Austin, Texas, it's The Cube. Covering KubeCon 2017 and CloudNativeCon 2017. Brought to you by Red Hat, The Linux Foundation, and The Cube's ecosystem partners. >> Okay, welcome back everyone. Live here in Austin, Texas, this is The Cube's exclusive coverage of the CNCF CloudNativeCon, which was yesterday, and today is KubeCon, the Kubernetes conference, and a little bit tomorrow as well, some sessions. Our next guest is Adrian Cockcroft, VP of Cloud Architecture Strategy at AWS, Amazon Web Services, and my co-host Stu Miniman. Obviously, Adrian, an industry legend on Twitter and in the industry, formerly with Netflix, knows a lot about AWS, now VP of Cloud Architecture, thanks for joining us. Appreciate it. >> Thanks very much. >> This is your first time as an AWS employee on The Cube. You've been verified. >> I've been on The Cube before. >> Many times. You've been verified. What's going on now with you guys, obviously coming off a hugely successful Reinvent, there's a ton of video of me ranting and raving about how you guys are winning, and there's no second place in the rear-view mirror, certainly Amazon's doing great. But CloudNative's got the formula here. This is a cultural shift. What is going on here that's similar to what you guys are doing architecturally, why are you guys here, are you evangelizing, are you recruiting, are you proposing anything? What's the story? >> Yeah, it's really all of those things. We've been doing CloudNative for a long time, and the key thing with AWS, we always listen to our customers, and go wherever they take us. That's a big piece of the way we've always managed to keep on top of everything. And in this case, the whole container industry, there's a whole market there, there's a lot of different pieces, we've been working on that for a long time, and we found more and more people interested in CNCF and Kubernetes, and really started to engage. Part of my role is to host the open source team that does outbound engagement with all the different open source communities. So I've hired a few people, I hired Arun Gupta, who's very active in CNCF, earlier this year, and internally we were looking at it going, we need to join CNCF at some point. We've got to do that eventually, so let's venture in, let's go make it happen. So last summer we just did all the internal paperwork, and running around talking to people, and got everyone on the same page. And then in August we announced, hey, we're joining. So we got that done. I'm on the board of CNCF, Arun's my alternate for the board and on the technical side, running around, and really deeply involved in as much of the technology and everything. And then that was largely so that we could kind of get our contributions from engineering on a clear footing. We were starting to contribute to Kubernetes like an outsider to the whole thing. So that's why we're here, that's what's going on. So getting that in place was like the basis for getting the contributions in place, we start hiring, we get the teams in place, and then getting our ducks in a row, if you like. And then last week at Reinvent, we announced EKS, the EC2 Kubernetes service. And this week, we all had to be here. Like last week after Reinvent, everyone at AWS wants to go and sleep for a week. But no, we're going to go to Austin, we're going to do this. So we have about 20 people here, we came in, I did a little keynote yesterday. 
I could talk through the different topics there, but fundamentally we wanted to be here where we've got the engineering teams here, we've got the engineering managers, they're in full-on hiring mode, because we've got the basic teams in place, but there's a lot more we want to do, and we're just going out and engaging, really getting to know the customers in detail. So that's really what drives it. Customer interactions, a little bit of hiring, and just being present in this community. >> Adrian, you're very well known in the open source community, everything that you've done. Netflix, when you were on the VC side, you evangelized a bunch of it, if I can use the term. Amazon, many of us from the outside looked in, trying to understand. Obviously Amazon used lots of open source, and Amazon's participated in a number of open source projects. MXNet got a lot of attention, joining the CNCF is something, I know this community, it's been very positively received, everybody's been waiting for it. What can you tell us about how Amazon, how do they think about open source? Is that something that fits into the strategy, or is it a tactic? Obviously, you're building out your teams, that sends certain signals to market, but can you help clarify for those of us that are watching what Amazon thinks about when it comes to this space? >> I think we've been, so, we didn't really have a team focused on outbound communication of what we were doing in open source until I started building this team a year ago. I think that was the missing link. We were actually doing a lot more than most people realized. I'd summarize it as saying, we were doing more than most people expected, but less than we probably could have been, given the scale of what we are, the scale that AWS is at. So part of what we're doing is unlocking some internal demand where engineering teams were going, we'd like to open source something, but we don't know how to engage with the communities. We're trying to build trust with these communities, and I've hired a team, I've got several people now, who are mostly from the open source community, and we were also kind of interviewing people like crazy. That was our sourcing for this team. So we get these people in and then we kind of say, all right, we have somebody that understands how to build these communities, how to respond, how to engage with the open source community. It's a little different to a standard customer, enterprise, start up; those are different entities that you'd want to relate to. But from a customer point of view, being customer-obsessed as AWS is, how do we get AWS to listen to an open source community and work with them, and meet all their concerns? So we've been, I think, doing a better job of that now that we've pretty much got the team in place. >> That's your point: customer focus is the ethos there. The communities are your customers in this case. So you're formalizing, you're formalizing that for Amazon, which has been so busy building out, and contributing here and there, so it sounds like there was a lot of activity going on within AWS, it was just kind of like contributing, but so much work on building out cloud ... >> Well there's a lot going on, but if no one was out there telling the story, you didn't know about it. Actually one of the best analogies we have for EKS is actually our EMR, our Hadoop service, which launched in 2010 or something, 2009, we've had it forever. But for the first few years when we did EMR, it was actually in a fork. 
We kept just sort of building our own version of it to do things, but about three or four years ago, we started upstreaming everything, and it's a completely clean, upstreamed version of all the Hadoop and all the related projects. But you make one API call, a cluster appears. Hey, give me a Hadoop cluster. Voom, and I want Spark and I want all these other things on it. And we're basically taking Kubernetes, it's very similar, we're going to reduce that to a single API call, a cluster appears, and it's a fully upstreamed experience. So, in terms of an engineering relationship to open source, we've already got a pretty good success story that nobody really knew about. And we're following a very similar path. >> Adrian, can you help us kind of unpack the Amazon Kubernetes stack a little bit? One of the announcements had a lot of attention, definitely got our attention, Fargate, which kind of sits underneath what Kubernetes is doing, my understanding. Where are you sitting with the service meshes, kind of bring us through the Amazon stack. What does Amazon do on its own versus the open source, and how do those all fit together? >> Yeah, so everyone knows Amazon is a place where you can get virtual machines. It's easy to get me a virtual machine, from ten years ago, everyone gets that, right? And then about three years ago, I think it was three years ago, we announced Lambda - was that two or three years ago? I lose track of how many Reinvents ago it was. But with Lambda it's like, well, just give me a function, as a first class entity: give me a function, here's the code I want you to run. We've now added two new ways that you can deploy to, two things you can deploy to. One of them's bare metal, which is already announced, one of the many, many, many announcements last week that might have slipped by without you noticing, but bare metal as a service. People go, "those machines are really big." Yes, of course they're really big! You get the whole machine and you're able to bring your own virtualization or run whatever you want. You could run Kubernetes on that if you wanted, but we don't really care what you run it on. So we have bare metal, and then we have containers. So Fargate is the container as a first class entity that you deploy to. So: here's my container registry, I point you at it, and you run one of these for me. And you don't have to think about deploying the underlying machines it's running on, you don't have to think about what version of Linux it is, you don't have to build an AMI, all of the agents and fussing around, and you can get it in much smaller chunks. So you can say you get a CPU and half a gig of RAM, and have that as just a small container. So it becomes much more granular, and you can get a broader range of mixes. A lot of our instances are sort of powers-of-two ratios of CPU to memory, and with Fargate you can ask for a much broader ratio. So you can have more CPU, less memory, and go back the other way as well, 'cause we can mix it up more easily at the container level. So it gives you a lot more flexibility, and if you buy into this, basically you'll get to do a lot of cost reduction for the sort of smaller scale things that you're running. Maybe test environments, you could shrink them down to just the containers and not have a lot of wasted space where you have too many instances running that you're trying to fit things into. So it's partly the finer grain giving you more ability to say -- >> John: Or consumption choice. >> Yeah, and the other thing that we did recently was move to per-second billing; after the first minute, it's per-second. So the granularity of cloud is now getting to be extremely fine-grained, and Lambda is per hundred milliseconds, so it's just a little bit -- >> $4.03 for your bill, I mean this is the key thing. You guys have simplified the consumption experience. Bare metal, VMs, containers, and functions. I mean, pick one. >> Or pick all of them, it's fine. And when you look at the way Fargate's deployed in ECS, it's a mixture. It's not all one or all the other, you deploy a number of instances with your containers on them, plus Fargate to deploy some additional containers that maybe didn't fit those instances. Maybe you've got a fleet of GPU-enhanced machines, but you want to run a bit of logic around it, some other containers in the same execution environment, but these don't need to be on the GPU. That kind of thing, you can mix it up. The other part of the question was, so how does this play into Kubernetes, and the discussions are just that we had to release the thing first, and then we can start talking, okay, how does this fit. Parts of the model fit into Kubernetes, parts don't. So we have to expose some more functionality in Fargate for this to make sense, 'cause we've got a really minimal initial release right now; we're going to expose it and add some more features. And then we possibly have to look at ways that we mutate Kubernetes a little bit for it to fit. So the initial EKS release won't include Fargate, because we're just trying to get it out based on what everyone knows today, we'd rather get that out earlier. But we'll be doing development work in the meantime, so in a subsequent release we'll have done the integration work, which will all happen in public, in discussion with the community, and we'll have a debate about, okay, these are the features Fargate needs to properly integrate into Kubernetes, and there are other similar services from other providers that want to integrate to the same API. So it's all going to be done as a public development, how we architect this. >> I saw a tweet here, I want to hear your comments on it, it's from your keynote, someone retweeted, "managing over 100,000 clusters on ECS, hashtag Fargate." What is that hundred thousand number? Is that the total number, is that an example? On Elastic Container Service, what does that mean? >> So ECS is a very large scale, multi-tenant container orchestration service that we've had for several years. It's in production, and if you compare it to Kubernetes, it's running much larger clusters, and it's been running at production grade for longer. So it's a little bit more robust and secure and all those kinds of things. I think it's missing some Kubernetes features, and there's a few places where we want to bring in capabilities from Kubernetes and make ECS a better experience for people. Think of Kubernetes as somewhat optimized for the developer experience, and ECS for more the operations experience, and we're trying to bring all this together. It is operating over a hundred thousand clusters of containers, over a hundred thousand clusters. And I think the other number was hundreds of millions of new containers are launched every week, or something like that. I think it was hundreds of millions a week. So it's a very large scale system that is already deployed, and we're running some extremely large customers on it, like Expedia and Mapbox. Some of these people are running tens of thousands of containers in production; we have single clusters in the tens of thousands range. So it's a different beast, right? And it meets a certain need, and we're going to evolve it forwards, and Kubernetes is serving a very different purpose. If you look at our data science space, if you want exactly the same Hadoop thing, you can get that on prem, you can run EMR. But we have Athena and Redshift and all these other ways that are more native to the way we think, where we can go iterate and build something very specific to AWS, so you blend these two together and it depends on what you're trying to achieve. >> Well Adrian, congratulations on a great opportunity, I think the world is excited to have you in your role, if you could clarify and just put the narrative around what's actually happening in AWS, what's been happening, and what you guys are going to do going forward. I'll give you the last minute to let folks know what your job is, what your objective is, who you're looking to hire, and your philosophy on open source for AWS. >> I think there's a couple of other projects, and we've talked, this is really all about containers. The other two key project areas that we've been looking at are deep learning frameworks, since all of the deep learning frameworks are open source. A lot of Kubernetes people are using it to run GPUs and do that kind of stuff. So Apache MXNet is another focus for my team. It went into the incubation phase last January, we're walking it through, helping it on its way. It's something where 30, 40% of that project is AWS contribution. So we're not dominating it, but we're one of its main sponsors, and we're working with other companies. There's joint work with lots of open source projects around here. We're working with Microsoft on Gluon, we're working with Facebook and Microsoft on ONNX, which is an open neural network exchange. There's a whole lot of things going on here. And I have somebody on my team who hasn't started yet, can't tell you who it is, but they're starting pretty soon, who's going to be focusing on that open source deep learning, AI space. And the final area I think is interesting is IoT, serverless, edge, that whole space. One announcement recently is FreeRTOS. So again, we sort of acquired the founder of this thing, this free real-time operating system. Everything you have, you probably personally own hundreds of instances of this without knowing it, it's in everything. Just about every little thing that sits there, that runs itself, every light bulb, probably, in your house that has a processor in it, those are all running FreeRTOS. So it's incredibly pervasive, and we did an open source announcement last week where we switched its license to be a pure MIT license, to be more friendly for the community, and announced an Amazon version of it with better Amazon integration, but also some upgrades to the open source version. So, again, we're pushing an open source platform strategy in the embedded and IoT space as well. >> And enabling people to build great software, take the software engineering hassles out for the application developers, while giving the software engineers more engineering opportunities to create some good stuff. Thanks for coming on The Cube and congratulations on your continued success, and looking forward to following up on the Amazon Web Services open source collaboration, contribution, and of course, innovation. 
The Cube doing its part here with its open source content, three days of coverage of CloudNativeCon and KubeCon. It's our second day, I'm John Furrier with Stu Miniman, we'll be back with more live coverage in Austin, Texas, after this short break. >> Offscreen: Thank you.
Steve Watt, Red Hat | KubeCon 2017
(upbeat music) >> Announcer: Live from Austin, Texas, it's the Cube, covering KubeCon and CloudNativeCon 2017. Brought to you by Red Hat, the Linux Foundation, and the Cube's Ecosystem partners. >> Hello and welcome back to the Cube's exclusive coverage live in Austin, Texas, here for the three-day CloudNativeCon and now two days of KubeCon, the Kubernetes conference. We're at the second annual conference celebrating the evolution and growth of Kubernetes. I'm John Furrier, my cohost Stu Miniman, and our next guest is Steve Watt, Chief Architect of Emerging Technologies at Red Hat. Welcome back to the Cube. Good to see you. >> Thanks for having me, always a pleasure. >> So Red Hat making some good bets, Kubernetes, not a bad call. >> No, Kubernetes has done wonders for our OpenShift business, absolutely. (laughter) >> So how is this all playing out? We were just talking before we came on camera here about just the pace of change. You've been at Red Hat five years. We interviewed you when you were at HP during the big data days, and boy, the world has certainly grown and changed. What has changed the most, in your mind, that people need to understand? >> I think Kubernetes has been the single biggest driving force to shift all enterprise architecture from scale up to scale out, and I think that has just created a whole number of ripple effects across how applications are designed within the enterprise. >> I think that's the big one. >> Yeah. >> So Steve, that whole shift from scale up to scale out has affected lots of parts of the stack, but storage is something you've been working on, something we've been keeping a close eye on, and it was one of the top items we wanted to kind of dig into this week. Maybe bring us inside a little bit, what's happening, what's Red Hat's role? >> Sure. >> Help explain. >> Absolutely, one of my favorite topics. It's kind of counterintuitive. I work in the CTO office, I run the emerging technologies team, which is sort of the team that does the experiments that help shape and inform our long term strategy. And so you might think, well, storage is kind of old news, how does that fit into this CloudNative world? Why does Red Hat care about it so much for their platform? And I think if you look at the CloudNative stack today, you have GKE, the new Amazon Kubernetes service, Azure, et cetera; these are all places where you can run your Kubernetes app, but just in that one place. Red Hat's platform perspective's a little different. We want you to be able to run your platform in an open hybrid cloud, whether that's in Google, in Azure, or on premise, on OpenStack or on bare metal. So you want to be able to run everywhere, but what's the biggest problem to achieving that application portability? It's data lock-in, so storage becomes cool again. (laughter) We've got to solve this problem. >> Because you've got to store the data somewhere. >> Steve: Right. >> And that's in the storage devices. >> Right, exactly. >> In the new way, the architecture. >> The new architecture, right? So the problem is, you've got to be very careful: if you ever want to move, you should think upfront about your persistence platform, so that it gives you the freedom to be able to move around. So Red Hat is investing heavily in trying to solve this problem. We've got a few exploratory prototypes that we're actually showing at this conference. And we work both in Kubernetes, building out the storage subsystem there, but also sort of in our products, for like container-native storage. >> Steve, walk us through a little bit, because we've been talking about this in the Docker ecosystem for a bunch of years: where are we, what's being worked on? What still needs to be kind of sorted out? >> So, yeah, that's interesting. I think we're finally over the hump where everybody's asking, who's solving the persistence problem for containers? It used to drive me crazy; that went on for about three years. I think people finally realize there are solutions. Kubernetes has always had them, actually. And so we've got past sort of the day-one stuff, like being able to dynamically provision, kind of like you'd see with Cinder in OpenStack. We've got a great storage ecosystem, a vibrant, huge storage ecosystem, and at our Kubernetes face-to-face meetings we have 50 people; they're like a mini conference. So we've got broad engagement from the entire storage ecosystem, and that's doing everything that you need sort of on the file level. But there is recent work that we've done in Kubernetes where the Service Broker is now the pattern to sort of provision object storage if you need it, and most importantly, we've just enabled block storage in Kubernetes in the 1.9 release that ships this week. And that is really interesting because it opens up the potential to run virtualization workloads on Kubernetes. >> Where's the action for the projects with storage? I heard some hallway rumbles, just while I was out there, about the Rook project. >> Steve: Yeah. >> Is that something, what projects, if I'm interested in storage, where do I dive in? Where's the most action for moving the needle, for driving the innovation around storage? >> I think if you're a storage vendor it's different than if you're a storage consumer. So Rook is a project that's focused on providing sort of an abstraction for software-defined storage platforms to run inside Kubernetes. Gluster doesn't take that approach; we've used sort of more of the pure Kubernetes approach, to sort of get to the same place. But Rook is definitely an interesting project, in that it's sort of at an inception-level project phase. Then for people that are wanting to consume storage, I think Kubernetes is the king of the pack. I obviously have a strong opinion on it, amongst the other container orchestrators, but the amount of investment is in allowing people to do continually more sophisticated things, you know, snapshots, cloning, things like that. And obviously, I'm sure you've heard a little bit about the container storage interface. >> Yes. >> CSI, and that makes it a lot easier for storage vendors to build one adapter that works across DC/OS, Cloud Foundry, Kubernetes, et cetera. >> What's the biggest surprise here for you? Because we've been looking, trying to read the tea leaves. Obviously Kubernetes has a clear runway, good standardization, seeing some commoditization, great adoption, although people can tailor it. A lot of different versions, still early. >> Steve: Yeah. >> The conference is only two years old. >> I know. >> Three years it's been around. What's surprising you right now? What's jumping out at you? >> I think Amazon's announcement yesterday was very interesting. It's heartening to see that there's pure Kubernetes as a service being offered in Azure, Google, and Amazon. And I think that's quite interesting from a portability standpoint, right. And so I think to me that was a big surprise. Amazon doesn't usually go the pure vanilla open source approach, and also the statements that they're going to contribute back to Kubernetes are, I think, quite interesting as well. So to me that's the one thing that stood out. >> What's going on for the future, too? You mentioned you've got to set the roadmap. You guys obviously have an agenda there, an installed base. >> Steve: Yeah. >> Now you've got OpenShift doing really well. What are you guys looking at? What's on your radar, how do you see this thing unfolding? What's in your mind? >> Yeah, I think there's a couple of really interesting things. Container orchestration is a legitimate disruption to virtualization, in that it solves the same problem and opportunity space but in a fundamentally different manner that reshapes the market. I think the KubeVirt project is something that we're working on at Red Hat; it's another one of our sort of emerging technology focus areas. And when we enable block storage and it enables virtualization, what it gives us the opportunity to do in Kubernetes is have a single deployed platform that can serve both late adopters and early adopters. So the early adopters with pure container orchestration, but if you're wanting to have the same platform and do virtualization too on it, you can have sort of one investment, one shared experience, to be able to do all of those. I think that's pretty cool. (laughter) >> Steve, talk about the customers that are watching or will be hearing over the next few months and a year around how to architecturally package this and think about it in their mind, whether it's a mental model or specifics. 'Cause there's always going to be that time-tested trade-off between performance and security, and so you have, obviously people have VMs, not going away, but containerization, where Google says, hey, we don't really care about VMs, we're a container company. There's always still going to be trade-offs. >> Steve: Yeah. >> Speed, security. >> Steve: Security. >> So security factors in there. How should a practitioner think about getting their arms around this? >> I think this is the tack that OpenShift takes, which is that Kubernetes is a decent project, but despite the huge amount of interest and contributions that we have, and its maturity curve, there are different things in tension, like enterprise use cases versus public cloud use cases. And so we're very focused on our enterprise use cases and sort of enabling that inside OpenShift and bringing OpenShift up as a platform to the sort of enterprise level that our customers would expect. Virtualization platforms are much further along the maturity curve, and so I think that's sort of our approach: we try to meet our customers where they are. Some organizations have teams that are more advanced, some that are less advanced. And so we try to offer, you know, if you want to go virtualization, we've got OpenStack, we've got RHV. If you want, you could use this new-school Kubernetes-based container orchestration, and you've got teams that understand it. (laughter) And if you've got microservices, then we've got a solution for that. >> Well, you know, the whole theme here is infrastructure is boring. Storage, it used to be called "snorage" back in the day. >> Steve: Yeah. >> It's pretty boring but relevant. 
Most people look at, like, Lambda from Amazon and some other serverless trends, and certainly see them here with service mesh and whatnot, the abstracting away of infrastructure; it's almost eliminating storage in the mind of the developer, yet it's changing. How are you guys specifically riding that wave? Because one, it's good for developers. >> Steve: Right. >> The velocity of developers increases, but the role of storage is changing. You mention block, people are like, oh block-- >> Yeah. >> It's dead. I mean, storage has been dead for like 20 years now? >> Steve: Yeah. >> It keeps growing and growing, but now the role changes for the developer, abstracted away, and also more important for automation and some of the dev ops things. What specifically are you guys doing? >> So, I think you said the word role. That's really important, right? Like, to an application developer, what you said is absolutely true: they want to use persistence platforms for storing their data in a cloud native way, okay. However, the maturity curve is also important. Not every application developer team is fully microservice-based and understands all these architectural patterns. It's a journey, right? So we want to basically give them multiple options along their journey. So that's the one around the application persistence. So if they're used to, like, file storage or object storage, et cetera, we have our container-native storage platform that provides that for them at the application persistence level. But from an OpenShift standpoint, OpenShift is our new platform. It's based on RHEL, but it's our new platform, our new surface area to build applications and, most notably, infrastructure services on. So just like with RHEL, where we created the opportunity to have a fertile ecosystem around it, we're doing the same with OpenShift, which means that we've got to enable the companies that are providing those persistence platforms, those message queues, those NoSQL databases, to run on OpenShift. You want to run Cassandra on OpenShift on premise? What do you need underneath the Cassandra? Block storage, direct-attached block storage, which we're building in Kubernetes 1.10. >> Steve, any patterns you're seeing between the customers that are able to embrace this new cloud-native world versus those that are having challenges? Any advice you can give based on customer interactions and what you're seeing? >> That's a good question. I think I just have to fall back on the fact that culture is a hard thing to change. It takes a long time. Institutions are persistent, and so what we sort of say to our customers, our guidance on these topics, is that what we try and give you is choice. Depending on where you are on the journey, we slowly move our customers through that journey and try to give them a variety of different choices along it. I think, personally, like with any new disruption, it usually has like 10x value. Like the one benefit of containers over virtual machines is you don't have to bring the operating system along every time you create a new container, right? You can much more densely pack a server with containers than with virtual machines, get more resource utilization, but it takes a long time for an application development team to, like, fully get there. And so that's the thing, I think: you've just got to be judicious about the right tool at the right time. >> Yeah, the other thing related to that is the pace of change. >> Steve: Yeah. 
>> I've talked to some of the people that created Kubernetes, the people who are running all this, and they're like, I can't keep up with all these projects. What are you finding internally in Red Hat, as well as from your customers? >> Yeah, I think that it's absolutely true. I was just remarking on that a minute ago. You know, I'm walking around and I hear this great quote: why do you come to conferences? Do you come to conferences to learn, or do you come to conferences to learn about what you need to learn? (laughter) >> Yeah. >> And it's the latter for me, right. And the ecosystem, the CloudNative ecosystem, is exploding. And so I think what we try to do at Red Hat, especially our team, our goal in Emerging Technologies, is to look 18 months down the road and pick the winners, from a community vitality standpoint, but also, like, the right technology. And there's this plethora of choices that we need to wade through, and what we tend to do is distill that down into our platform, into something our customers can rely on, something that's reliable and where we've picked the right project. But it's a big challenge. Like, there's so much happening, and even in storage it's becoming challenging. >> Steve Watt, the Chief Architect of Emerging Technologies at Red Hat, thanks for coming on the Cube, appreciate your perspective. It's an architectural game right now. A lot of people putting these new architectures together. It's cultural change. Congratulations on your success with OpenShift and everything else. >> Steve: Yeah, thank you very much. >> Alright, and more coverage here on the Cube after this short break. >> Steve: Thanks. (upbeat music)
Matt Pley, Fortinet | AWS re:Invent
>> Narrator: Live from Las Vegas, it's The Cube, covering AWS Reinvent 2017, presented by AWS, Intel and our ecosystem of partners. >> John: And we are live here at Las Vegas, at the Sands Expo, wrapping up our coverage here. Reinvent, three days strong here with AWS and a number of great partners within their ecosystem. One of those is Fortinet, and we're now joined by Matt Pley, who is the VP of Cloud, Carrier and Service Providers there at Fortinet. Thanks for joining us here, Matt, good to see you, sir. >> Matt: It's a pleasure to be here. Thank you. >> John: Yeah, tell us a little bit about, you know, first of all, what you do as far as the company's concerned, and about your relationship with AWS, and I know you're exhibiting just over our shoulder here, so it's a big week for you and for them. >> Matt: I mean, the energy here is unlike any other event I've been to, it's fantastic, you can't even describe what this feels like, you have to really be here to really appreciate it. So it's just been, it's my first show being here, and it's just absolutely great to see, you know, all the companies collaborating, people getting together and working together. So from the show perspective, I mean, it's just fantastic. So we're just happy to be a part of it, and Fortinet is doing a lot of great things with AWS. I think our synergies are really well aligned, we have a lot of commonality in our DNA and our culture and history. So, you know, we love to, we're innovators. We love technology and we really hang our hat on that. So as a security ISV, you can see the products, that's always important, right, the products that are involved in it, but it's really about the theory or philosophy behind it that we really look towards to accelerate our partnership. >> John: Yeah. I mean, I don't know how much booth time you spend, but I'm just always curious at shows. What's the chatter about? You know, when people come up, what are they most interested in? You know, what's been, in your mind, an overall theme or recurring theme that you're hearing a lot from potential customers here? >> It's really diverse, so it's not just one talk track. It's really a number of different points or elements of what's important. You look around and you see IoT, you see dev-ops, you see a number of different things that are kind of bubbling up, and you saw all the announcements from AWS, and bare metal, so that's gonna change things quite a bit. So it was really surprising to see, you know, some of our announcements this week were really important. >> John: You've had a big week. >> Matt: We had a great week. >> John: Yeah. >> Matt: We had some really special things that we'd been working on that got announced this week. You know, FortiSandbox is on demand now in AWS. So we're the only sandbox available in AWS. I think that's very compelling and that's a pretty useful thing. The WAF rules, that was a launch yesterday that happened, and we're part of the rule set, so you can take some of the rule sets, and it goes out to our FortiGuard, which enforces it. And then finally, they became a fabric partner. So fabric really is, for us, an ecosystem of products. But not only our products; it's really about working with other collaboration partners, and sometimes competitors, and that's okay with us, because we believe that's really the only way security's gonna be effective. >> Justin: We were talking before and you were explaining some of the portfolio approach that Fortinet takes to security. 
We've always been talking about defense in depth as being a thing that you should do with security and really there is no one magic silver bullet that you can use. You have to have different tools for different use cases. And you've got lots of different products that all work well together but they also work well with other products. Which is quite interesting, that fabric concept. Could you maybe give us a bit more color on what the fabric is and some of the portfolio products that plug in? >> Absolutely. So to your first point, we have eight products in AWS and available. It's really about creating a security stack of enforcement because one product necessarily won't do the entire job you need it to do. So we have complementary products, we have, you know, bespoke products, or pointed products. So we have, like I was saying, eight. That's the most out of any other security ISV in the marketplace today. So I think that's a huge competitive advantage. And really what's important is that you really need to see, have a single pane of glass console to look to look at your environments. >> Yeah. >> Statistics say around 65% of organizations or enterprises will have a hybrid environment. So kinda the legacy bespoke, or the legacy traditional networks, and then they're gonna have obviously AWS in instances and it's really important for security that correlation and automation to see across your entire network and footprint that you have. Really all the products to us are all the same whether you deploy them on-premise, off-premise, in the cloud, private cloud, public cloud, really doesn't matter to us. It's all the same sort of look and feel for our products. >> Yeah, I am hearing from all the security people both vendor and on the customer side that I speak to that there is a real collaboration going on in security right now. And we were talking just before we went to air that the security has just blown up in the last sort of four or five years. What used to be a bit of an afterthought is now front of mind for a lot of customers. So they're using some of the products like Fortinet to be able to say "well I want to solve this and this is something I need to do today but I also need it to work with other things that I'm doing". So it's interesting that Fortinet's chosen to take that partnering approach particularly something like your relationship with AWS with web application firewalls that you're doing. That's a real partnership approach where you're saying "we do something quite well but Amazon can use this to give us access to more and more customers". Is that part of Fortinet's core way of doing things? Has that always been the case? >> Yeah, I think, you know the history and sort of DNA of Fortinet it was kinda founded on let's do it ourselves. Let's build it because we believe we can build it the best. And so kinda through the generation of that like you said, you know, security is one of the things that it was sort of geeky and kinda cool sorta specific like people didn't really understand security all that well but now it's headline news and changes market cap literally overnight. Right, we see that a lot in the news and unfortunately some nasty things happen to peoples' information. So if you look at that our CFO talked about digital trust a number of years ago. And really, you want to do business with companies you trust and that's so important. So when you give your credit card information, your social security number, that's important, right. 
You want somebody to take caution when jotting down that information, right. So, for us, we saw it as a competitive advantage because how it really started for us before the fabric was threat information sharing. So we have an initiative where we work with others in the marketplace who are security vendors to share threat data and to make that more useful for companies because really that's what's gonna win. And sharing and looking at the portfolio it really goes back Fortiguard platform. Everything kind of points back to that as far as the threat vectors, right. >> You mention that there'd been problems, obviously, there's a headline a week, right. And that's kind of the point of the question I want to get at here with you. In a way, from a consumer standpoint, we're almost desensitized a little bit because oh god another one, right. Another breach, another problem so what kind of mind set are you fighting in terms of you can have 1000 wins but one loss or a million wins but one loss it's another headline it's another problem and it's another barrier for you. I mean, how do you look at that from a philosophical approach as a company and a mind set approach as a company? >> That's a great question, right. So there is kind of this, I think in the industry, there's this consensus that it's not a question of if it's a question of when. And so, that's a little hard to stomach, right. >> John: Cause you wanna win them all. >> You're saying, hey look, right you wanna win 100% of the time and that's just the reality of life, right. Yeah, of course, right. So, of course we look at it as a layered approach. So if you, I'll use a very simple analogy but I think it's sort of effective. If you lock your front door, if you lock your windows, if you put on your alarm system, you have cameras, and then ideally you live in a gated neighborhood. They're just layers to ensure that if someone comes by to look at your environment, and they go "man that's too hard, it's just too much work. There's cameras there, I can tell they have a dog, it's way too complicated I'm getting into that". And that's what security really should be. Security should be a multi layer approach that uses complementary products that coordinate and orchestrate together and automate and those are really important things when it comes to security and keeping the bad guys out. So you sort of want to have this security posture that's just so many layers of defense that it's very hard to penetrate. >> So when's someone's thinking about what they've currently got in there like looking at their threat model that they might have and what their risk would be at the moment. How would you help customers to evaluate well, what should I do next? We were talking with someone else on the cube earlier today about well okay, you need to do the basics first. You need to brush your teeth. How do you help customers identify what is the 'locking your front door'? What is the 'I need to buy a dog'? What is the 'I need to make sure I've got all my windows locked'? >> Matt: Right. >> So how do you help customers figure that out? >> That's a great question. Always when we talk with customers we evaluate their environment, very bespoke and sort of custom-tailored. That's very important to understand exactly what you're trying to solve. However, just like your credit score, we believe that there should be a security competency score. 
So it's sort of an in depth evaluation of your network, the holistic security posture, how that looks, and so we're now offering that in our new platform. To come in and offer sort of a security threat score, if you will, to say how effective we think you are. So I think that helps. >> Justin: It's like a maturity model. >> What's that? >> It's like a maturity model. >> Matt: Yep, exactly so I think that's gonna help a lot of people make sense out of it. And there's different parameters and how we report that back to our customers that sort of makes sense and then we say, "well we believe these products should be the products that you choose and this is the kind of security posture you have". And, you know the reality is, if you're connected to the internet you have to get out and things have to get it. >> [One Of The Hosts] Yeah. >> So you just have to have that layered approach. >> John: It used to be like it was a cost decision way back. Now reputation's on the line, you said market cap as you pointed out. I mean, the risks and the exposure and the damage is exponentially worse today than it was just three, four years ago, right. >> I think it even seems different from like six months ago. You know, it's really insane to think that companies could disappear based upon what happens with that information. We don't want to be the ambulance chaser, that's not our philosophy but it's about being protected and it's about being you know. I think with with security, retrospectively going back, is not all that useful. With security, it's better to kinda take the work and do it up front and we believe that that's what's really changing in the market that pivot to when you design a network security is arm and arm with networking or you know in this case, in AWS' case, how you move workloads elastically to the cloud. I mean those are all considerations now and I think you see that in the marketplace and in AWS' launch, like we said in the WAF rules. I think that's one step closer to marrying security with the function of what you're trying to do in the cloud. >> Justin: Yeah, and it needs to evolve over time as well. So again, you start with the basics, and even as your business changes you might be in completely on-site today but if you go to cloud, well now I need to do cloud-based security things. You have to start thinking about it in a different way. So hopefully customers are looking at things and looking at the portfolio approach and saying "okay, today I might need one product but tomorrow, well, I'm gonna need something else, I'm gonna need two or three or four". And as you say, if I've got something that plays well with others, then I can make a decision yesterday that well, actually I can still use this. I don't suddenly have to throw everything I did a year ago, throw it all away and start again from scratch. >> Matt: Sure, absolutely and that's the fun of what we do. Working in technology, it moves so fast. That's why we all do it I think. Because it's, to be close to it and to be a part of it I mean, that's why I get up everyday it's the coolest thing we get to do. And so because of that, you're right. We don't even know what's gonna happen in 2019. I mean think about the change in just one year's time. The velocity, and what we're seeing that AWS is accomplishing in short periods of time you really need to move quickly. And that's our commitment absolutely as a security company. >> John: And fortunately, you're busy. 
Right, and fortunately you're busy but because the risks are greater the threats are bigger so all the more reason for the success that you've had obviously and the continued success. We wish you that and thank you for being here on The Cube. We appreciate it. >> Matt: Thanks for having me. >> John: Yeah, always good to see you. That wraps up our coverage here on the cube for my colleague, Mr. Warren, and all of us behind the scenes. Thank you for joining us here, live from AWS Reinvent in Las Vegas. (light music)
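The layered-defense and scoring ideas above lend themselves to a small illustration. Fortinet has not published a formula behind the "security competency score" Matt describes, so the following Python sketch is purely hypothetical: the layer names, weights, and ratings are invented, and it only shows the general shape of rolling a layered posture up into one credit-score-style number.

```python
# Purely hypothetical sketch of a "security competency score": each defensive
# layer gets a weight and a 0-100 effectiveness rating, and the weighted
# average is reported back as a single number. All names and weights are
# invented for illustration and are not Fortinet parameters.

LAYER_WEIGHTS = {
    "perimeter_firewall": 0.25,   # "lock your front door"
    "endpoint_protection": 0.20,  # "lock your windows"
    "network_monitoring": 0.20,   # "alarm system and cameras"
    "email_web_filtering": 0.15,
    "segmentation": 0.10,         # "gated neighborhood"
    "backup_and_recovery": 0.10,
}

def competency_score(ratings: dict[str, float]) -> float:
    """Weighted average of per-layer ratings (0-100); missing layers score 0."""
    return sum(weight * ratings.get(layer, 0.0)
               for layer, weight in LAYER_WEIGHTS.items())

if __name__ == "__main__":
    customer = {
        "perimeter_firewall": 90,
        "endpoint_protection": 70,
        "network_monitoring": 40,
        "email_web_filtering": 80,
        "segmentation": 0,        # no segmentation in place yet
        "backup_and_recovery": 60,
    }
    print(f"Security competency score: {competency_score(customer):.1f}/100")
```

In practice the parameters reported back to a customer would come from an actual assessment of the environment rather than hard-coded weights.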
SUMMARY :
Matt Pley of Fortinet joins theCUBE at AWS re:Invent in Las Vegas to talk about how Fortinet approaches security as a portfolio of complementary products that coordinate and automate, anchored by the FortiGuard platform and threat-information sharing with other vendors. He discusses the layered-defense mindset behind the industry's "not if, but when" view of breaches, the idea of a security competency score for evaluating a customer's posture, and how developments such as AWS's WAF rules bring security closer to where cloud workloads run.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Warren | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Justin | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Matt | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Matt Pley | PERSON | 0.99+ |
AWS' | ORGANIZATION | 0.99+ |
100% | QUANTITY | 0.99+ |
Matt Play | PERSON | 0.99+ |
Fortinet | ORGANIZATION | 0.99+ |
2019 | DATE | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
tomorrow | DATE | 0.99+ |
1000 wins | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
two | QUANTITY | 0.99+ |
one year | QUANTITY | 0.99+ |
yesterday | DATE | 0.99+ |
one loss | QUANTITY | 0.99+ |
one product | QUANTITY | 0.99+ |
a year ago | DATE | 0.99+ |
one loss | QUANTITY | 0.99+ |
four | QUANTITY | 0.99+ |
first point | QUANTITY | 0.99+ |
three | QUANTITY | 0.99+ |
this week | DATE | 0.99+ |
six months ago | DATE | 0.99+ |
eight | QUANTITY | 0.98+ |
Sands Expo | EVENT | 0.98+ |
three days | QUANTITY | 0.98+ |
five years | QUANTITY | 0.98+ |
Intel | ORGANIZATION | 0.98+ |
Bare-Metal | ORGANIZATION | 0.98+ |
four years ago | DATE | 0.97+ |
one | QUANTITY | 0.97+ |
around 65% | QUANTITY | 0.96+ |
eight products | QUANTITY | 0.95+ |
first show | QUANTITY | 0.95+ |
One | QUANTITY | 0.94+ |
a week | QUANTITY | 0.94+ |
three | DATE | 0.94+ |
earlier today | DATE | 0.93+ |
single pane | QUANTITY | 0.92+ |
WAF | TITLE | 0.91+ |
a million wins | QUANTITY | 0.9+ |
IOT | TITLE | 0.86+ |
FortiSandboxes | ORGANIZATION | 0.84+ |
One Of The Hosts | QUANTITY | 0.84+ |
one talk | QUANTITY | 0.82+ |
AWS Reinvent | ORGANIZATION | 0.81+ |
The Cube | TITLE | 0.79+ |
Ross Turk, Red Hat | Open Source Summit 2017
(upbeat music) >> Announcer: Live from Los Angeles, it's theCUBE covering Open Source Summit, North America 2017, brought to you by the Linux Foundation and Red Hat. >> Okay, welcome back everyone. Live here in Los Angeles, is theCUBE's exclusive coverage of the Open Source Summit, North America. I'm John Furrier, your host, with my cohost, Stu Miniman with Wikibon. Our next guest is Ross Turk, Director of Evangelism at Red Hat. Welcome to theCUBE, good to see you again. >> Good to see you. >> So, evangelizing is now going to be super more important as Open Source Summit, formerly called LinuxCon, the Linux kernel show. So, Linux is really now the foundation. So, now all these new products are emerging, hence the new name Open Source Summit. You guys are in the middle of it. >> Ross: Mm-hmm. >> What are the themes that you guys are pumping out there right now from an evangelist standpoint? Give me the order of operations in terms of priorities. >> Well, gosh, we're trying to tell stories about how people operate infrastructure in today's modern world, right, which is a lot about making sure that, you know, dealing with ephemeral infrastructure, dealing with containerized applications, and that sort of thing. It gives a lot more flexibility to people who are doing modern operations. It's about applications that spill over across multiple machines and doing so in a way that doesn't require a lot of heavy lifting or wiring things up by hand. So, there's this whole modern operations experience thing we talk about, but we also talk about a modern developer experience. What does it mean to build applications today? And, of course, you combine those together, it turns into Dev Ops, right. But, the large companies still work in these two separate worlds. But, people are building technology differently than they ever did in the past, and they're deploying it differently than they did in the past. So, there's lots of stories that can come out of that. >> Well, let's start with the story that we love. Stu and I were talking about serverless at the beginning, because you have the Dev Ops movement certainly is going mainstream. You're seeing a lot of enterprises looking at that as viable. Now, they're operationalizing it, and they need to have that industrial-strength Red Hat Linux. But, now Kubernetes and serverless, the younger developers, they just want an infrastructure as code. >> That seems to be a very hot story here, and Kubernetes serverless is kind of in the hallway conversations. How do you guys bring that to bear? >> Well, I think that what Red Hat does is we give an operating environment that can sit underneath all of it with RHEL and everything else we build that is stable and secure and reliable. And, you need that in order to have all of this chaos happening above it, with developers deploying microservices and moving things around, and demands changing and all these other things, you need to have something really stable and reliable underneath that, something that you know can be, if the applications and virtual machines and containers aren't long running, what sits underneath of them is long running, and it still needs to be stable and reliable. So, a lot of the work we've been doing for the past 20 years around Linux engineering, I think, contributes to making this stable environment for a modern developer. >> Yeah, Ross, one of the challenges in scaling is usually I've got to worry about things like storage.
You know, state is there, you know, data gravity is something we need to be concerned about. It's great to say ephemeral and I want everything anywhere, and I can put it in this cloud or use it in that application, but at the end of the day it's tough to build some of these pieces. How's Red Hat helping there as containerization and scale, how does that fit into kind of this storage discussion moving on? >> It's a real struggle, right, because you can talk to people and they say oh, every single one of the microservices fails over and they scale out, and all this, and they talk about this really elaborate infrastructure, like well, where is all your data being stored? Oh, it's sitting in Oracle, you know, so you find this sort of dissonance between how data is managed and how applications are managed. At Red Hat, we believe that storage should be another microservice alongside all the other microservices that make up an application. So, that's why we put a lot of engineering effort into making things like Ceph and Red Hat Gluster Storage work well alongside OpenShift, so that a developer can provision storage as needed without having to go to an ops person, and that when that storage gets provisioned it's in containers alongside other containers that are providing the other things that your application needs. >> Software defined storage was the answer, it's the Holy Grail. We've heard software defined data center. We've been covering this also in the VM world, heard an awful lot about that. But, that still is a key part of the software, and now you have hardware stacks, so IoT and Cloud are opening up these new use cases for enterprises where whoa, we actually kind of didn't test that hardware with that software, so it's kind of an interesting dynamic because software defined is still super important. What's your view on software defined storage, in particular, is that an answer, is it stable, what's your thoughts? >> Well, I think it's an answer, but it depends on what the question is, just to be kind of-- >> What is software defined storage? Let's start with that one. >> Well, so, what is software defined storage? Software defined storage is, okay, so I'll say it in more like what it isn't. >> The traditional storage, traditional storage solutions get deployed as appliances, which are vertically integrated hardware and software solutions that are built to do one thing, and to do that one thing well. And, that one thing is to store data. They're kind of like big refrigerator-sized things that you bring into your data center with a forklift, and it's a big operation, and then they provide storage for any number of applications. What software defined storage does is it implements those same services and those same capabilities, but it does it entirely in software. So, instead of being this vertically integrated software, hardware solution, you end up with software that lets you build it on any hardware, and that hardware can be physical hardware, so you can build a storage cluster made up of 1,000 bare metal servers, or you could build that same cluster on a thousand VMs inside of a public cloud. So, in making storage no longer a hardware problem, like it used to be, I mean fundamentally it's a hardware problem, when you get down to it bits are stored somewhere, but the management of storage is no longer a hardware concern, it's a software concern now, and that means it's a little bit more flexible. You can containerize it. You can deploy it in the public cloud. You can deploy it in VMs.
You can deploy it on bare metal. So, that's what software defined storage is doing, is it's changing things around, but it requires different skills. >> Come on Ross, I want a storageless environment, can we get on that? >> A storageless environment? Sure, I guess. Storage has become somebody else's problem at that point. >> Absolutely, how about, how is containers changing that whole discussion? You know, it took us like a decade to kind of get storage working in a virtualized environment, networking seems to be really tackling the container piece, storage seems a little further behind, you know, what are you seeing as some of the big challenges there and how are we looking to solve that? >> Well, when you look at containers and storage, there are really two things to consider. The first is how do you make storage such that a containerized environment can consume it easily, right. This is what at Red Hat we call container ready. So, we call a storage solution container ready, and what it means is that your container platform knows how to consume it. Most storage is container ready, all it takes is a Kubernetes volume driver to be container ready, and that's one half of it, and that's really, really important. It's the same kind of thing we had to do with virtualization, making sure every hypervisor could talk to every storage system. Now, we're making sure every container platform can talk to every storage system. That's important, but it's only half the puzzle, 'cause the other half is now that you have storage as a software thing, a distributed software thing, you can actually deploy that storage inside the same containers that you're using, that are driving the demand for that storage. So, it's this kind of weird, you know, snake eating its own tail thing where you as a developer, let's say I'm deploying an application, I need a database, I need a web server, blah, blah, and a bunch of other things, and I need a scale-out storage system, I can deploy that in containers just alongside everything else, and it uses the local storage of each of the container hosts to build that shared storage that then is used to provide services to other containerized applications. So, it's the ability to have storage in containers, which is really strange. We call that container native storage. >> It's interesting, the market's going pretty crazy, so if you kind of take the Dev Ops and say assume for a minute infrastructure is programmable. >> Mm-hmm. >> But, then you look at the developer action right now on the App side, we've seen all kinds of new stuff. Apple has their announcement today with the new iPhone 8. We've been covering that on siliconangle.com. Forbes has got great stories as well. New AR kit, so augmented reality is a huge deal, virtual reality obviously still hyped up, is still promised, those are going to require new chips. That's going to require consumer behavior change, so the developers are staring at a different market than worrying about provisioning storage, right. So, but, these are now new pressures. New hardware, new opportunities, as a developer, advocate, and evangelist, and an industry participant, and user, how do you look at that, and how is that impacting the developer market, because Android's got good stuff coming down, too, not just Apple, Samsung? >> Ross: Yeah. >> It's all multimedia, I mean.
>> Well, what's interesting about AR kit is that if you go just back five years that same capability required a very, very particular type of phone, you know, like the project Tango stuff required all these depth cameras and like connect style stuff to do the AR kit, and Apple was able to solve a lot of that in software just using two cameras, right, and in software. And, I think that's really-- >> John: On a phone? >> On a phone, on a phone no less, and I think what's amazing about that is all of the capabilities that we walk around with in our pocket now were really hard to get a long time ago. >> Well, this is interesting, your point, let's stay on this because this really illustrates the point. AR kit, for example is proving that the iPhone now is smart enough and with software, enough horsepower to do that kind of thing, but that's replicable across all devices now as an IOT device. The Internet of Things is going to be a freight train coming down the tracks, security, endpoint security, whether it's, I mean all kinds of coolness, but yet threats are there. So, software has to do all this, right. So, how's that going to impact the cloud game, your business, you guys you have to move faster on hardening things, be more organic on the innovation side, not business-wise, but technical strategy. >> Well, I think a lot of it is enabling developers to work more quickly and build features more quickly, also, educating developers on the security and privacy ramifications of the things that they build because it's really easy to just go out in front and advance and innovate and forget about all of that stuff. So, it's about changing developer culture so that you consider security and privacy first, as opposed to later. And, also, maybe you want to consider storage as well if you're talking about machine learning or IOT and all of these types of things, you're -- >> Videos, I mean this is video, software rendering. That's a storage nightmare. >> It's all got to live somewhere, and once you put it in that place where it lives, it's really hard to move it. So, this is a thing you want to plan from the very beginning. >> And, I think that's what's cool about AI, too, and self-driving cars it's a consumer, you know, flashy, coolness that can say hey, this is happening. I mean how fast is happening, but the developer is now bringing it to the businesses and say, okay, we don't have an AR virtual reality strategy for our retail, for instance, you potentially could be out of business. So, these are the kind of thoughts that are going at the C-level that now are going into what used to be IT, but all of IT, how do you handle this? This is an architectural question, so your thoughts on that, because that seems to be a conversation we see a lot. Architectural that's going to solve problems today, not foreclose future opportunities. >> Well, it's cultural, too, inside of the company, like everywhere inside of a company there used to be Internet teams in companies, remember. We used to be like oh, go talk to the Internet team because something's wrong with the Web or whatever, now, there's no Internet team, everybody's the Internet team, Every single team in an organization is thinking about how to leverage the Internet to make their job more effective. The same is going to be true for everything that we're talking about, you know. Security, interestingly enough, so many people always thought security was somebody else's problem. 
but just this week, we were reminded that it's everybody's problem, hundreds of millions of people's problem, security. So, I think that as these things kind of advance-- >> John: Security first, and privacy first is critical. >> It is absolutely critical, and there used to be, I mean, I think at some point maybe there won't be a security team inside of a company because everybody's going to be the security team, but it's like everybody's the Internet team now, and I always felt the same way about open source communities. I thought there would never, you know, always everybody-- >> Well, people are rolling their own security now. You have these LifeLock or whatever they call them, these services for password protection, because you can't trust even all these databases that are out there. You have blockchain with immutability, yeah, certainly the wallets are not there yet, but I mean certainly this is where it might be a future scenario. >> Yeah, and I think for all of these things agility is going to be key. The ability to go down a path a certain distance and realize whoa, I've run into a privacy problem, back up a little bit, continue down another path. I think that the faster we can make the development process, I think the less risky we make going into all these new frontiers. >> Yeah, Ross, one of the things we've really liked watching the last kind of five years or so is storage turning into a discussion of data and how can we leverage that data, real-time data, you know, decisions at the edge, analytics, what's exciting you the most about kind of the storage world these days? >> Oh boy. Well, you know, I just spent about five years in the storage infrastructure world, so a lot of what kind of kept me going day and night was saving people money, making things faster, making things easier, but also, giving storage platforms that were elastic enough to handle all of this really interesting stuff that happens on top of them. So, there's all kinds of new big data stacks that I find particularly interesting, a lot of the real time analysis stuff like Apache Spark and things like that. There's so much going into visualization right now, as well, how you handle large amounts of time series data and that sort of thing. There's been a lot of advancements in exactly that. Personally, I'm really excited lately in all the data science stuff, all the ways you can extract meaning from all this data, you know, the ways that you can give it a business context that allows you to make better decisions with it. >> Not a lot of data conversations here at this conference, as it's open source software, but I mean data, I mean, I've said, and I wrote a blog post in 2008, Dave, Dave Vellante, always jokes with me because I always reference it, I said data is the new development kit, meaning data is going to be part of the software development model, and it actually is with big data, but, you're not hearing a lot of it here because most people are talking about their communities, their projects, but the role of data is fundamental at the edge. >> Ross: Absolutely. >> And, so, how is that going to change some of these conversations and can data be developed on, and is data now part of the software development life cycle that's coming to fruition in the new way.
>> Interesting, I think that's an interesting observation, that as we see sort of Dev and Ops coming together, right, the world of the operator and the world of the developer coming together, I think we'll probably, at some point, see the world of the developer come together with the world of the data scientist, because as I kind of rack my brain I'm thinking okay, what type of future developer wouldn't have to be dealing with large amounts of data, wouldn't have to have that kind of skill to be able to deal with it. So, I think we're going to start to see more software developers getting more involved in big data, machine learning, data analytics, and things like that for sure. >> Well, either way, this open source growth that's coming is going to be exponential. Data is already there. I mean we have a joke in our office, software is eating the world as Marc Andreessen would say years ago, but data is eating software. So, in terms of how you look at it someone's eating somebody, but this becomes interesting for the IoT developer, or the industrial developer. Those systems were never connected to IT in the past. It was like they ran their own stuff from their own terminals. >> And, there's this idea that everybody's heard that data has gravity, right. And, I actually was talking to somebody about this and they said, well, actually the data has inertia, and I'm like, no, that's not really it 'cause once it's moving it's not hard to stop it. The idea that data has gravity means that let's say I'm putting together this new IoT application, or whatever, I'm gathering data from a bunch of sensors or whatever, and I've got the data in that place. Now, having all that data in that place is more meaningful to me than most of the software that I wrote. You know, it's like that is the value, the kernel of the data is there, and data having gravity means that it's hard to move once it's in a certain place, but it also means that it attracts workloads to it, right. So, it used to be that software was king, and software created data and managed data, and now data is king, and it brings software to it, I think. >> I totally agree with you, and I think they might even call this the open data summit soon, but it's beyond open source. Now, this is going to be great. They work hand in hand. Software and data are going to be great. Stu, what's your thoughts on the role of data, that's not being talked about much here? >> Yeah, John, at Amazon last year, when we talked to Andy Jassy, it was the customers were the flywheel, and I think data's going to be that next flywheel of really feeding into that data gravity discussion that you were having, Ross. You know, when Hadoop came out it was like oh, we're going to bring the code to the data. Well, we know if I'm going to have more data I'm going to have my data sources, I'm going to have third party data sources that I want to be able to work and interact with, so, data, absolutely huge opportunities there, and the companies that can leverage that and get more value out of it is going to be a-- >> Well, we already see it's a competitive advantage, no doubt, but it's the privacy issue still the big debate like we know in our immediate businesses. Look at Facebook, I've got a free App, I get to see all my friends' photos, their vacations, everyone's living a great life on Facebook, but then all of a sudden I give my data away for free for the privilege to use that App, but all the sudden they start injecting fake news at me.
I don't want that anymore, and you're still making money off of my data, so that's interesting. Facebook makes money off of my data. >> Yeah, that's-- >> That's my contract with them. >> Yeah, if you ask what their asset is, one person might think it's traffic, you know, or eyeballs, but I think it's data. >> So, they're using data, I might not like it, so that might be an opportunity for somebody else, so to your point, Stu, if you start thinking about it differently, data decisions are going to be an architectural challenge. >> Yeah, absolutely. I think enterprise architecture thinking, even today, you're seeing enterprise architects thinking more and more and more about data than they have in the past. >> Ross, what do you think about the show, final word in the segment, what's going on at Open Source Summit, share with the folks that are watching? >> The vibe here, it's now a new name, but it's still the same game, multiple events come together. >> Yeah, multiple events together. I like Open Source Summit as a name. I think it's a good name. It's properly named for what's going on here. It's been an interesting experience for me because I've been in this community for a really long time. So, I come here and I run into all kinds of old friends, the hallway track's always a good track for me. The content is fantastic, but the hallway track is always really good, and I can't think of anywhere else where you can go and get this selection of people, right. You have people who are working on all layers of the problem, and they can all come together and talk. So, I don't know-- >> It's really a cross-fertilization, cross-pollination, whatever word you want to use. I think this event's going to be in the 30,000s pretty quickly. I mean this is going to be. Well, if you look at the growth, the numbers, you know, presented on stage, Jim Zemlin was pointing out the growth, by 2026, 400 million libraries. I mean people still think that's underestimated. >> Yeah. >> So, that's a lot of growth. >> I think it could get there, and I think these folks organize great shows, so I look forward to seeing them scale up to 30,000. >> Ross, thanks for your commentary, appreciate the perspective, and the insight here on theCUBE. >> Thank you. >> Thanks for joining us. This is theCUBE live coverage from Open Source Summit, North America. I'm John, for Stu Miniman, back with more after this short break. (upbeat music)
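Ross's point earlier in the conversation, that a storage system only needs a Kubernetes volume driver to be "container ready," can be sketched in a few lines. The example below uses the official Kubernetes Python client to request a volume from a Ceph-backed storage class; the class name ("ceph-rbd"), the namespace, and the size are assumptions made for illustration, not details from the interview.

```python
# Hypothetical sketch: a developer provisions "container ready" storage by
# claiming a volume from a storage class, without going to an ops person.
# The storage class name "ceph-rbd" is assumed to exist in the cluster and to
# be backed by a Ceph volume driver.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="ceph-rbd",  # Ceph exposed through a volume driver
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
print("Requested 10Gi of Ceph-backed storage as an ordinary Kubernetes object.")
```

The same claim works whether the storage cluster runs on dedicated hardware or, in the container-native model Ross describes, in containers alongside the application itself.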
SUMMARY :
Ross Turk, Director of Evangelism at Red Hat, joins theCUBE at Open Source Summit North America in Los Angeles to discuss modern operations and developer experience, running Kubernetes and serverless workloads on a stable RHEL foundation, and software defined storage. He explains Red Hat's container ready and container native storage approach with Ceph, Gluster and OpenShift, and closes on how data gravity, analytics and machine learning are pulling developers and data scientists closer together.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jim Zemlin | PERSON | 0.99+ |
Marc Andreessen | PERSON | 0.99+ |
Ross | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Samsung | ORGANIZATION | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Ross Turk | PERSON | 0.99+ |
2008 | DATE | 0.99+ |
ORGANIZATION | 0.99+ | |
Andy Jassy | PERSON | 0.99+ |
Linux Foundation | ORGANIZATION | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
Los Angeles | LOCATION | 0.99+ |
iPhone | COMMERCIAL_ITEM | 0.99+ |
last year | DATE | 0.99+ |
iPhone 8 | COMMERCIAL_ITEM | 0.99+ |
Stu | PERSON | 0.99+ |
Open Source Summit | EVENT | 0.99+ |
two cameras | QUANTITY | 0.99+ |
two things | QUANTITY | 0.99+ |
theCUBE | ORGANIZATION | 0.99+ |
first | QUANTITY | 0.99+ |
Oracle | ORGANIZATION | 0.98+ |
400 million libraries | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
North America | LOCATION | 0.97+ |
five years | QUANTITY | 0.97+ |
Linux | TITLE | 0.96+ |
Androids | TITLE | 0.96+ |
Dave | PERSON | 0.96+ |
about five years | QUANTITY | 0.96+ |
this week | DATE | 0.96+ |
Open Source Summit 2017 | EVENT | 0.96+ |
Linux kernel | TITLE | 0.94+ |
2026 | DATE | 0.93+ |
two separate worlds | QUANTITY | 0.93+ |
siliconangle.com | OTHER | 0.93+ |
one person | QUANTITY | 0.92+ |
one | QUANTITY | 0.91+ |
one thing | QUANTITY | 0.91+ |
each | QUANTITY | 0.91+ |
hundreds of millions of people | QUANTITY | 0.91+ |
up to 30,000 | QUANTITY | 0.89+ |
America | LOCATION | 0.88+ |
half | QUANTITY | 0.88+ |
Kubernetes | ORGANIZATION | 0.88+ |
The Linux Con | EVENT | 0.86+ |
30,000 | QUANTITY | 0.86+ |
Wikibon | ORGANIZATION | 0.86+ |
2017 | EVENT | 0.85+ |
one thing | QUANTITY | 0.84+ |
Forbes | ORGANIZATION | 0.83+ |
1,000 bare metal servers | QUANTITY | 0.82+ |
one half | QUANTITY | 0.8+ |
Frank Palumbo, Cisco Systems & Andy Vandeveld, Veeam - VeeamOn 2017 - #VeeamOn - #theCUBE
>> Voiceover: Live from New Orleans, it's the Cube covering VeeamON 2017 brought to you by Veeam. >> Welcome back to New Orleans everybody. This is the Cube, the leader in live tech coverage. We go out to the events and extract the signal from the noise. My name is Dave Vellante, and I'm here with my cohost Stu Miniman. Frank Palumbo is here. He's the senior vice president at Cisco Systems. And Andy Vandeveld is the vice president of Global Alliances at Veeam Software. Gents, welcome to The Cube. >> How we doing? >> Thank you. >> It's great to be here. >> Good, Frank, hot off the keynote. It was great, Yankees fan, love it. The rivalry continues. Of course you guys know the Cube, Red Sox fans, some of us. Stu's not. >> Not all of us. >> So we love it. We love the action, and it's always fun. But Frank we had to cut out a little bit before your keynote because we had to get ready to do the Cube. But you put up a slide that was awesome. We could do an hour on The Cube on that, and it's all about the apps, I mean really. But you had this great slide with apps and microservices and virtualization and bare metal and OnPrim and really laying out the complexity today. And you guys are at the heart of that. Maybe give us a quick summary of how you guys see the world. >> When you're talking about the applications, the application profile, it's important, the network kind of brings this together because we do touch everything. Where people are in this kind of application history is some of them are on legacy, mainframe. Some of them are on RISC processors. But as a network provider, we have to bring those in too even with the more modern applications. So you look at what the platforms or workloads are on so move those in. And then you're looking at workload placement, on Prim or in the Cloud. Do we put data in a colo? Do we put the application in the Cloud? There's different hybrid mentalities to do that. Then you get into the systems management where there's just too much stuff out there. Humans can't manage it anymore so the machines and the software have to manage the machines and the software. We'd like to think we're right in the middle of that because of the way we bring things together with the network. >> So Andy, I look at the... Stu and I walked the floor before we come in here, the ecosystem is really quite impressive-- >> Andy: Thank you. >> for a relatively small company. I mean not that small anymore. It didn't just happen overnight. Maybe you could talk a little bit about themes and philosophy with partnerships and some of the things that you're doing with Alliances generally and specifically get into the Cisco partnership. >> Well I think partnerships have been in our DNA since the beginning of the company. We're a 100% channel-lead company. We don't have a direct sales force. That's an important piece of the company's philosophy. These alliances are really key for us because as we start to move into markets that are maybe a little bit higher than where we've been into the large enterprise and mid-enterprise and large enterprise, we really look at partnerships like the one with Cisco that are going to benefit Veeam and the customers by us being together doing joint developments. Some of the things that Frank talked about in his keynote speech, those are the sorts of things that create solutions for that level of customer where Cisco's been resident for many, many years. 
So we look at these partnerships as really central to where Veeam wants to go as a company and where we think customers want Veeam to participate with the partners. >> What's the specific nature of the partnership? Can you unpack that a little bit for us? >> From my side, certainly we have a robust go-to-market relationship in terms of when we're positioning UCS or Hyperflex, our server and hyperconverged platforms, now we can bring to bear the Veeam value proposition as we go forward with customers. And customers look to Cisco really to complete the story and offer an end-to-end solution. We weren't able to complete it without the Veeam technology. Then on the development side, some of the things that we're doing, we've integrated, so now the Veeam software can work with our snap technology in hyperconverged. So you're starting to see it come together at the screen level with the bits and bytes in terms of the integration. >> Dave: So there's a greater degree of technical integration as well. >> Frank: Yes. >> It's not just go-to-market, I mean that's important because a lot of times backup data protection is kind of an afterthought. It's a bolt-on. But if you're going to be a complete solution provider, that's fundamental and it's becoming more important. >> I think, you know, I was just mentioning to Frank back in the green room before we came out here, I look at the start of this partnership as really being about 18 months ago. Although we'd had a partnership for a while, we really kind of started about 18 months ago in this meeting that we had at their partner conference in Maui. And Ratmir and I sat down with Frank and kind of explained why we thought data protection was a solution that Cisco could get behind, particularly now that they were coming out with their S-Series devices. But that's just the start of it. It has to come with integration as well. Then we started with Hyperflex. It was a new product for them, 1.0 version. With the 2.0 version, we got integrated with snapshot technology like Frank mentioned. I look at this short runway of time in this relationship that kicked off with our meeting with Frank and he got it right away. We didn't have to explain it. >> Dave: It resonated. >> Frank: Oh, no question. We're very proud of our S-Series storage server. The hardware is nice. The infrastructure piece is nice, but it really doesn't come together unless you've got the application to run with it. That's where Veeam just jumps in and fills that gap perfectly for us. >> Frank, I think back to when virtualization really took off. Networking was one of the things that we had to fix. It put a lot of stress on the network. It's one of the reasons Cisco created UCS, and backup also creates a lot of strain on the network. So it seems a natural fit. Can you talk about all the complexities that are coming and how you're going to be, what can we expect to see from jointly going forward? >> I think we've learned a lot from Veeam in terms of they've been able to really attack complex issues in a very simple fashion. Simplicity is the game with customers right now. Things are moving so fast. If you can't be simple, you're going to have a tough time out there. So I think that's where it's really come together for us in that vein. But when you look at the value of data and whether it's a second old or two years old or a year old, there's so many different more paradigms coming out about what you can do with this data.
And customers and even customers of customers have now found ways to use this data either to make better decisions, monetize it, to stay away from things. So that's why this whole lifecycle for us is so important. This is where Veeam and us can really do some nice things for customers. >> Andy, can you build on that about the multi-Cloud position that Veeam has? How many of those, do you know, touch what Cisco's doing here and how does the partnership help drive that value of data type offering? >> For Veeam, our message is all about availability, availability of the data which makes the applications available and which basically makes the business stay up and running. One analogy we use is a cell phone. When you're cell phone dies, you can't get access to your email. You can't get access to your instant messages. >> Dave: You freak bascially. >> You feel like you're lost, right? >> Frank: It's getting kind of pathetic. >> Yeah. >> Dave: It is pretty bad. >> So think about not being able to get access to your data or access to your applications because of some outage, not being able to backup and recover. Your business could go out of business. Working with Cisco on solutions that are on premise, that are in the Cloud, that are multi-Cloud is really the value of the partnership that we have that we bring together. It's just at the beginning. We've got solutions that we're building now. We got solutions that are on the horizon. We've got a very strong go-to-market partnership in a very short period of time that are targeting enterprise customers, service providers, the whole gamut. It's really that sort of relationship that you find in an industry every so often. When it comes together like it has with us and Cisco, it's really a very strong, strong value prop. >> Well Veeam capitalized on the original virtualization trend with VMware that was a big transformation, the server infrastructure. You're seeing a huge network transformation now. There are so many forces affecting the network that I wonder, Frank, if you could comment on. You got ScaleOut. There's Flash. There's Cloud. There's Microservice. There's DevOps makes everything go faster. The flattening of the network. Describe what's happening and then maybe you can talk about how your ecosystem is going to take advantage of that. From what were the challenges the network has is exactly like you said. You have certainly the virtualized workloads now. The Microservices containerize workloads. I think the one people forget about is there's still a ton of bare metal out there, right? You look at the Hadoop workloads and such. A lot of these are bare metal oriented, right? Quite frankly, moving a VM around a fabric is actually pretty easy to do. But when you got to move a bare metal workload around a fabric, and that's something we can do with UCS the way we do it statelessly, that's much harder. That's why we have the extraction layer with what we call the fabric interconnection with UCS to do that kind of stuff. I think that's sometimes lost in the translation in terms of how you're going to handle all these different workloads. >> If I understand it, the link then to the opportunity for you guys, Andy, is that the stakes are just much higher now, right? You could do so much more around the networks. Stakes are so much higher. That increases the need for your products and services. Carry that through if you would. >> Well, it is. 
As we make our way up-market into the enterprise, the amount of data that businesses are spinning off of their infrastructure and their data center, or from ROBO offices or wherever, is growing immensely. Being able to have a partnership with an infrastructure provider like Cisco, where we can put solutions together that really give the customers the rock solid base for backing up their data and making sure that it's available, is really critical for us as we move into those larger enterprise and larger environments. So this is an essential relationship I would say. >> I think, too, if I could mention, this is something our channel wanted to see, too. We're the same. We're at about 98% of our business goes through the channel. So they're selling our full line of infrastructure products. This completes the story for them. So we got a lot of guidance from them saying, "Hey, yes, Cisco. "We'd like to see you come together with Veeam "so we can start bundling offers out there in the market "and be that kind of end-to-end supplier, too." That was a big impetus especially from mid-market up to enterprise customers. >> Excellent, well, we got to wrap there. The partnerships give you huge leverage as a small, again not so small company anymore. The fact that you can get somebody like Frank to come down, talk about the partnership, is a testament to what you guys have built. So congratulations. Really appreciate you guys coming on The Cube. >> No, my pleasure, our pleasure. >> All right, keep it right there, everybody. We'll be back with our next guest. This is The Cube. We're live from New Orleans, VeeamON 2017. We'll be right back. (tinkling music)
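Andy's framing of availability, that the business is only as available as its last good backup, can be illustrated with a small, vendor-neutral sketch. Nothing below uses an actual Veeam or Cisco API; the workload names, timestamps, and the 15-minute target are assumptions invented for the example.

```python
# Hypothetical sketch: check each workload's most recent restore point against
# a target RPO and flag anything whose data would be unavailable beyond it.
from datetime import datetime, timedelta, timezone

RPO = timedelta(minutes=15)  # assumed availability target, not a product default

# Invented example data: workload name -> time of last successful backup
last_restore_point = {
    "erp-database": datetime.now(timezone.utc) - timedelta(minutes=5),
    "file-share": datetime.now(timezone.utc) - timedelta(hours=3),
    "web-frontend": datetime.now(timezone.utc) - timedelta(minutes=12),
}

def rpo_violations(restore_points, rpo):
    """Return workloads whose newest restore point is older than the RPO."""
    now = datetime.now(timezone.utc)
    return {name: now - ts for name, ts in restore_points.items() if now - ts > rpo}

for workload, age in rpo_violations(last_restore_point, RPO).items():
    print(f"ALERT: {workload} last protected {age} ago, exceeds {RPO} RPO")
```

In a real deployment the restore-point data would come from the backup platform itself rather than a hard-coded dictionary; the point is only to show availability expressed as a measurable objective.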
SUMMARY :
Frank Palumbo of Cisco and Andy Vandeveld of Veeam join theCUBE at VeeamON 2017 in New Orleans to discuss the Cisco-Veeam alliance: a channel-driven go-to-market around UCS, the S-Series storage server and Hyperflex, plus deeper technical integration such as Veeam working with Hyperflex snapshot technology. They talk about the growing value of data across its lifecycle, availability as the core Veeam message, and handling virtualized, containerized and bare metal workloads across multi-cloud environments.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Frank Palumbo | PERSON | 0.99+ |
Andy Vandeveld | PERSON | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
Frank | PERSON | 0.99+ |
Andy | PERSON | 0.99+ |
Red Sox | ORGANIZATION | 0.99+ |
New Orleans | LOCATION | 0.99+ |
100% | QUANTITY | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Cisco Systems | ORGANIZATION | 0.99+ |
Maui | LOCATION | 0.99+ |
Veeam Software | ORGANIZATION | 0.99+ |
Veeam | ORGANIZATION | 0.99+ |
a year | QUANTITY | 0.99+ |
Stu | PERSON | 0.99+ |
S-Series | COMMERCIAL_ITEM | 0.99+ |
UCS | ORGANIZATION | 0.99+ |
Yankees | ORGANIZATION | 0.99+ |
Cube | COMMERCIAL_ITEM | 0.98+ |
one | QUANTITY | 0.98+ |
Hyperflex | ORGANIZATION | 0.98+ |
The Cube | ORGANIZATION | 0.98+ |
about 98% | QUANTITY | 0.97+ |
an hour | QUANTITY | 0.96+ |
One analogy | QUANTITY | 0.95+ |
today | DATE | 0.94+ |
Cube | ORGANIZATION | 0.93+ |
about 18 months ago | DATE | 0.93+ |
Global Alliances | ORGANIZATION | 0.9+ |
DevOps | TITLE | 0.87+ |
ScaleOut | TITLE | 0.83+ |
VeeamON 2017 | EVENT | 0.82+ |
second old | QUANTITY | 0.81+ |
two years old | QUANTITY | 0.79+ |
Ratmir | PERSON | 0.75+ |
Veeam | PERSON | 0.73+ |
1.0 | QUANTITY | 0.73+ |
a ton of bare metal | QUANTITY | 0.7+ |
VMware | TITLE | 0.59+ |
2.0 | QUANTITY | 0.57+ |
Mark Baker, Canonical - OpenStackSummit 2017 - #OpenStackSummit - #theCUBE
(upbeat music) >> Narrator: Live from Boston, Massachusetts it's The CUBE covering OpenStack Summit 2017, brought to you by the OpenStack Foundation, Red Hat, an additional ecosystem of support. >> Welcome back, I'm Stu Miniman with my co-host John Troyer. Happy to welcome back to the program. It's been a couple of years but Mark Baker, who is the Ubuntu Product Manager for OpenStack at Canonical. Thanks so much for joining us. >> Oh, you're welcome, it's a pleasure to be back on. >> All right so you said you've been coming to these shows for over six years now. You sit on the OpenStack Foundation. We've been talking this week. There's all that fuzz and misinformation and God what does (faint) say this morning? It's like fear is one of the most powerful weapons out there. Sometimes there's just misinformation out there but for you, OpenStack today where you see it in general and in your role with Canonical? >> Sure so OpenStack is one of the cornerstones of our business. It's certainly a big revenue generator for us. We continue to grow customers in that space, and that mirrors what we see in the OpenStack community. So all of the numbers you'll have seen in the OpenStack survey showed that adoption continues to grow. Sure, there is, I don't know if I want to call it fake news out there but there's definitely a meme is going that okay, OpenStack is perhaps declining in popularity. That's not what we see in adoption. We see adoption continuing to grow, more customers coming onto the platform, more revenue is coming from those customers. >> Yeah Mark any data you can share? We did have we had Heidi Joy on from the foundation to talk about the survey. I mean big you know adoption over 74% of deployments are outside of the US. We talked to Mark and Jonathan this morning. They said well that's where more than 74% of the population of the world lives outside of the US on any trends or data points specifically about a bunch of customers. >> Sure so we we definitely have big customers outside the US. You look at perhaps one of our best well-known is Deutsche Telekom, obviously a global telco that's situated in Europe that's deploying OpenStack. Really at the core of their network and I was going into multiple countries, and we see not only more customers but also those existing customers growing their estate and we've got other engagements as well in the Nordics with Tele2, another telco that has a larger stake too. And increasingly out in Asia too. So we definitely see this as being a global trend towards adoption. >> All right and Mark, there was you know for years, it was okay. How many distributions are there out there? How many do we need on out there? Why do customers turn to Ubuntu when they want OpenStack? >> So the challenge of operating infrastructure is scale. It's not can I deploy it? It's not so much even you know how performant is it? It's really kind of boils down to economics, and a large part of that economics is how are you able to operate that cloud efficiently? We've proven time and time again that a lot of the work that we've put in since the very beginning around tooling, around operations is what allows people to stand up these clouds, operate them at scale, upgrade them, apply patches, do all of those things but operate them efficiently at scale without having to scale the number of staff they require to operate that cloud, yeah. 
>> I think back to the stat that's been around for at least 15 years, that companies spend 70 or 80% or even more of their budget on keeping the lights on, running around the data center doing that. Anything you could tell us about OpenStack and how that shifts those economics for the data center? >> Sure, so OpenStack has gone through a typical sort of evolution that many technologies go through, and we liken it to Linux obviously, we're a Linux company. In the beginning with Linux many people would build their own distributions, they'd compile their own kernels, they'd make modifications. A lot of the big lighthouse users of OpenStack went through that process. We are seeing the adoption changing now. So people are coming to companies like us with an OpenStack distribution that's off-the-shelf, ready and packaged with reference architectures, proven methodologies for implementing this successfully, and consuming it much more like that. Without that package, this free software can actually be very expensive to operate. So getting those economics right comes from having those packages for people to be able to deploy, manage it and scale it efficiently on-site. >> So you've been involved with OpenStack throughout the whole evolution. Is there anything you see now, in 2017, at this summit? This is my first summit. I'm very impressed as an outsider. Again, we started off talking about what you hear from the outside, talking to people here at the show, people standing up their very first clouds this year, very bullish, very kind of conscious of okay, this is not a winner-take-all world. There's a place for OpenStack. >> Mark: Yep. That's actually very clear and fits very well. Do you see a difference in the customers that you're working with now in 2017, their maturity level, their expectations, than perhaps you did a few years ago? >> So yes, certainly, customers have complex and diverse requirements, and so they want to deliver different styles of applications in different ways, and OpenStack is a great way of delivering machines, whether it's virtual machines or container machines, to applications, and provides a very robust and agile environment for doing that. But other styles of application may need to run natively on bare metal. OpenStack can do some of that, and do a lot of that, but we're seeing, certainly seeing customers understanding okay, OpenStack has a role, public cloud has a role, container technologies have a role. A lot of these intersect together. Then it's really our objective to help them, whether they're choosing container platforms and OpenStack, whether they're using public cloud, to ensure that they're able to manage this in an efficient way to deliver value to their business. >> You talked about operability and we talked with Mark Shuttleworth. He was also, we were remarking that Ubuntu, the operating system, is by far the majority choice in OpenStack and in a lot of cloud projects. Can you talk a little bit more about operability? Again, the traditional dig from outside the project a few years ago was: science project, hard to use, need to have computer scientists to even get it running, which as a former Linux person myself, I think I find that a little bit insulting. It's rocket science but it's not that, it's not that complicated. >> (faint) Were involved in the beginning. >> That is true.
But can you just talk a little bit about operability in terms of what you're seeing, in terms of either private cloud or of people standing up, the operations team needed, the maintainability, the day-to-day operation, that sort of thing in a modern OpenStack environment? >> Yeah, so OpenStack is becoming, certainly for a lot of the enterprise customers that we're working with now, another platform that will sit alongside VMware. There may be some intersection of that. Our goal is to have common operations. So if I want to deploy applications into containers, I could do that into Kubernetes or just running on VMware, I could do that on OpenStack, I could do it in public cloud, to have common tooling and common operations across as much of the estate as we can, because that's where I'll get efficiencies. It's where I'll get smart economics and smart operations. So, well, definitely, people are looking for those solutions. They know they're going to have diverse environments. They're looking for commonality that runs across those diverse environments, and Ubuntu provides a great deal of commonality across. >> Mark, can you speak to Canonical's involvement in some of the projects? I know you have a lot of contributors but where particularly did your company spend the most focus? >> So, OpenStack, the initial challenge with OpenStack was to deliver capability and functionality. Canonical was one of those contributors in the early days. It was helping drive new features, helping drive new capabilities in OpenStack. More or less, we've switched to addressing that operations problem. There are many clouds out there that are stuck on older versions. For OpenStack to succeed as it moves forward, we need to be able to show you can upgrade gracefully without service interruption. We're demonstrating that with customers. So a lot of the work that we've been doing is how we streamline these operations, how we crowdsource, if you like, best practice for operating these clouds at scale to deliver efficient value to the business. >> Oh, another interesting conversation here at the show has been about containers. >> Yeah. >> Both Kubernetes and I know Canonical's been involved with LXD. So can you talk a little bit about the interrelation of containers with OpenStack and how you're seeing that play out? >> Yes, absolutely, so containers is all over OpenStack. We do smile somewhat when people talk about containers being a new thing with OpenStack, as we've been deploying OpenStack inside LXD containers for several years now. So many of our customers are running containerized OpenStack today in production, but there's certainly this great intersection of that running Kubernetes on top of OpenStack. For example, we're seeing a lot of interest in that. We deploy, as they say, our OpenStack services in containers to give flexibility around architectural choices. We're very happy to run Canonical's distribution of Kubernetes inside of OpenStack, which we do, and we have customers doing that. So there are also people looking at how you can containerize the control plane in other ways. We're certainly keeping tabs on that, and you know, exploring that with some customers, but containers are all across the OpenStack ecosystem. They're not competitive. They're very much sort of building a higher level of value for customers so they have choice in how they deploy their applications. >> All right, Mark, anything new this week surprised you or any interesting conversations that you'd want to share?
>> So I came into this knowing that there was going to be a lot of discussion around containerized applications in OpenStack and containers perhaps, and the control plane. The thing that has surprised me actually has been the speed with which people are looking at OpenStack for edge cloud. Cloud on the edge, it's kind of a telco thing, but cloud on the edge is how I can deliver capabilities and services, infrastructure services, in an environment, in a mobile environment, it could be attached to a cell phone mast for example. It's not a traditional big data center but you need to deliver content and data out to mobile devices. So there's a lot of discussion especially today, within the telco community here at OpenStack Summit, about how OpenStack can deliver those kinds of capabilities on the edge. That's been interesting and a surprise for me to see how quickly it's come up. >> All right Mark, want to give you the final word as to what you want people taking away about Ubuntu's participation in OpenStack. >> Well, some of this talk about OpenStack, you know, is that it's had its day in the sun, there are other things now taking over. I think people out there need to understand that OpenStack is deeply embedded inside big companies like AT&T, and like Deutsche Telekom. It's going to be there for a decade or more, right. So OpenStack is definitely here to stay. We continue to see our business growing. The number of customers Canonical is working with deploying OpenStack continues to grow. Ubuntu as a platform for OpenStack continues to grow. So it's definitely going to be part of the infrastructure as we roll forward. Yes, you'll see it working more in conjunction with those container technologies and application platforms, PaaS for example, but it's here. It's just no longer quite the bright new shiny thing it used to be. It's kind of getting to be part of regular infrastructure. >> All right, well Mark, not everything could be as bright and shiny as the Ubuntu orange shirt. So thank you so much for joining us again. We'll be back with more coverage here. From Boston, Massachusetts, you're watching The CUBE. (upbeat music)
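Mark's description of OpenStack as a way of delivering machines, virtual or otherwise, to applications with common tooling can be sketched with the openstacksdk Python library. The cloud name, image, flavor, and network below are placeholders that would have to exist in a real cloud; this is an illustrative snippet, not a depiction of Canonical's own tooling.

```python
# Hypothetical sketch: provision a VM ("deliver a machine to an application")
# through OpenStack's API using openstacksdk. All names are placeholders:
# "example-cloud" must be an entry in clouds.yaml, and the image, flavor,
# and network must already exist in the target cloud.
import openstack

conn = openstack.connect(cloud="example-cloud")

image = conn.compute.find_image("ubuntu-16.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private")

server = conn.compute.create_server(
    name="demo-instance",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)  # block until ACTIVE or error
print(f"Instance {server.name} is {server.status}")
```

The same few calls work against a private OpenStack cloud or a public one, which is the common-operations point being made in the interview.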
Harry Mower, Red Hat | Red Hat Summit 2017
>> Host: Live from Boston, Massachusetts, it's The Cube, covering Red Hat Summit 2017, brought to you by Red Hat.
>> Welcome back to The Cube's coverage of the Red Hat Summit here in Boston, Massachusetts. I'm your host Rebecca Knight, along with my co-host, Stu Miniman. We are joined by Harry Mower. He is the senior director of Programs and Tools here at Red Hat. Thanks so much for joining us.
>> Thanks for having me.
>> So, I want to start out by talking about the product launch that you are announcing this week, a new set of developer tools, OpenShift.io. What does it do? What does it not do? Break it down.
>> Sure, absolutely. So on the first day of the summit we announced probably one of the largest developer tools launches we've had in a long time, and it's a brand new product. It's a hosted online environment for building cloud services, whether you choose to do that as a microservice or a monolith or whatever architectural pattern you choose. We provide end-to-end tools for development teams to build them and host them on OpenShift Online. When I say end to end, what that means is it comes with everything development teams need to plan, code, analyze, and deploy their applications. If this were the '90s, we would have called it a new ALM platform, but now it's DevOps, right? It's our new approach to DevOps. It completes the OpenShift experience, and makes it easier for development teams and developers to build those applications and host them on OpenShift Online.
>> Why did we need a new approach to DevOps?
>> Yes, exactly. So with this release we were really trying to solve three fundamental problems. The first is we see a lot of our customers spending probably too much time and money to integrate and maintain their tool chains. We know customers have entire teams dedicated just to integrating all the tools that they need and keeping them up and running. We wanted to take that off the table. We wanted to make it really simple for our customers just to get coding and not have to worry about creating this entire end-to-end environment. We feel like a lot of this stuff has been commoditized in some way, and it's not really differentiating. If you can integrate your tool chain better than mine, it doesn't really help you produce better code at the end of the day, so we just wanted to make that simple for our customers. The second thing we wanted to do was make it really easy for developers to use containers in development, and help them get started faster. Developers can spend as much as 50% of their time just maintaining their local environment to do dev and test. What we wanted to do was make it simple: one click, automatically create containerized development, testing, and staging environments without the need to type Docker commands or learn Kubernetes files; make it super simple for developers. And then the third thing we wanted to do, which we think is really unique, is help developers make better decisions. This is one of the things that gets overlooked in the whole DevOps process: developers have a lot of freedom to choose things, basically anything off the internet that they want to use, and a lot of times development teams and developers aren't quite sure if it's the right decision. So we're taking an analytics-based approach to helping solve that problem. We've created a new AI service that's built into the platform that analyzes the packages that they choose, based on 15 years of history that we have working on open source projects, plus other data that we use.
And we help developers make better decisions, because we recommend packages based on that information. So if we see a package that they chose that might have a known vulnerability, or one that developers frequently don't use, we flag it for them and offer suggestions for better ones to use.
>> Nudging them toward the right decision.
>> Harry: Yes.
>> Harry, I've been to a lot of shows where we're talking about digital transformation. It's kind of a trope these days that says software's leading the world and every company's becoming a software company.
>> Harry: Or is a software company.
>> Or is a software company, everything from the banks to whatnot. Do you have some examples of early customers that have been playing with OpenShift.io, how this helps them along that way, so people can learn from their peers and know when to jump in?
>> Sure. We don't have any customers on it now; this is one of those projects that we have been developing over the past year, and we really just announced it today. But we did take a lot of feedback from customers and saw what they were doing. Probably one of the obvious ones that we look at are automotive companies. The four wheels and the engine is the commodity part of the car, sort of, today. Much of the decisions you make are based on the technology that you choose. So it's really important for them to differentiate at the technology level. And you can only go so far with hardware; it's really software that powers everything else. And so you could think of most car companies now, that's how they become software companies. It goes down the line. If you think of banking, if you don't have a mobile banking app, is that a bank you're going to choose? There are pretty obvious examples of companies that are now software companies.
>> So let's say I'm an automotive company, and I'm saying, "Okay, I've got to worry about autonomous vehicles and all the competition." How will OpenShift.io help them move forward faster?
>> Sure. Building software is building software, no matter where you deploy it. And so the process that you go through to get your team to envision the project, to set up the project, and then divvy out the work and have the work be done: OpenShift.io provides all the tools to do that. And then once the developers get working on actually coding and doing the testing, and everything that developers do, one of the things that we provide is, like I said... every developer struggles, whether you're developing for something in a car or somewhere else, with setting up their local environment, setting up their data environment. Like I said, OpenShift.io makes it really simple for those developers, because we can let them choose pre-defined technology stacks. So in the case of the automotive maker, they can set a corporate standard for what type of technology stack they want to use, developers choose those stacks, and then we automatically create a containerized environment for them to work off of. Where they're working doesn't have to be their local machine; we host it for them in the cloud, so they never have to install anything. And again, another thing they don't have to worry about is whether it's mismatched from everybody else working on that software. So we ensure consistency across the team, and with what's going into production. So we minimize the risks there.
And it doesn't matter if you're building a banking application or an embedded application, the steps are the same, and that's why we feel like it's commoditized at this point. It really is non-differentiating, so if we can streamline that whole process, we feel like it's the right decision for all developers.
>> We want to talk big picture here about this space that you are in. Before the cameras were rolling, you were telling us about your prior career at Microsoft, but you've been in this developer evangelism space, as you call it, for a long time. Can you tell us how it's changed over the years?
>> Yeah. So the obvious generations of going through the technology fads is one thing; now we're back to multiple microservice-type architectures and those sorts of things, so the technology trends and fads always come and go. But I think there's one fundamental shift that is sticking more, and it's not necessarily about the individual developer. It's about development teams. It's how do you get the entire team to function well? How do you build not just better code but better applications? And how do you fix that end-to-end experience? Because at the end of the day, the way developers add value to your business isn't by writing another line of code that doesn't necessarily have a bug, it's how do they ship better software faster?
>> And so this focus on teams, and the end-to-end process, I think is a fundamental shift that we've-- I wouldn't say it's a shift, maybe it's a maturity that I've seen over the almost 20 years that I've been doing this. And so that's why we've really honed in on that. And I think another thing people ask me questions about: we see these new modern trends in application development, mostly containers and microservices, and they usually put them together. And I try to tell people not to do that, because they're two separate things, and I think the one thing the industry has made a decision on is containers. I think that is the new, I call it the atomic unit of app execution. No matter where they're going to execute, their app's going to be in a container. Now, whatever pattern they choose to use inside that container, I think it's still up for debate, whether it's microservices or some other sort of pattern they want to use. So I think focus on teams and the shift to containers, and a new level of isolation, I think are two big--
>> And just to be clear, you're saying that if I'm choosing microservices, I'm probably going to use containers, but just because I'm using containers doesn't mean I'm using microservices?
>> Harry: Exactly. And even in the case of microservices, it depends on how many containers you're going to use. The debate is, do I put a service per container, or some level of services per container? I think there's a whole set of technology there to help manage people moving into that space, 'cause complexity grows pretty quickly when you start to get into that world, and we're going to focus on the tools for that as well.
>> I want to get your opinion; the question is also, how much does the developer-- where in the stack do they need to worry about? Can they just focus on writing the application? How far underneath it do they have to worry about? What are your thoughts on things like... we talked about containers, Kubernetes, the whole serverless development, function as a service. How do those fit into your thinking?
>> So our approach in OpenShift.io is to have developers worry up to the framework level. Everything below the framework, don't worry about it anymore, including containers. If you saw the demos in the summit keynote, all of that was containerized, and we never once typed a Docker command, you never saw a Kubernetes file, you never saw anything about containers; we just did it all for you. What you did see is choices around the frameworks and the components that I want to use inside my application, and how I express myself in code. And that's kind of where, at least in OpenShift.io, we see the delineation. I don't want developers to have to worry about containers and everything below that. It should just come for free. Especially when we get into the world of serverless, where it's debatable what you're ever going to have to worry about at that point. That's the way we see it.
>> When you're talking about workplace culture, you said that there's a really big emphasis on teams and helping teams make better decisions and collaborate more effectively. Red Hat is known for having such a powerful culture, a culture of candor, a culture of risk-taking, a culture of openness and transparency. How does that translate into the kinds of tools that you are coming out with?
>> Yeah, so one of the first things we knew we had to do, and decided we had to do, is that we're going to build OpenShift.io with OpenShift.io. So our first customers are us, ourselves.
>> Rebecca: You're the guinea pig.
>> We're the guinea pig, and if anybody knows anything about Red Hat, it's exactly what you said: we have a very diverse, very geographically dispersed, very opinionated set of people at Red Hat, right? And so we had to take all that into account when building the application to satisfy our team first. So I would say that the product we're building today is a direct reflection of the culture of Red Hat, because if it can work for Red Hat, it can work for many, if not most, companies, let me tell you. (laughs)
>> Can you help connect the dots between OpenShift.io and what's happening with OpenShift and adoption there? I think it speaks to the maturity and the adoption of OpenShift itself that led you to this new tool.
>> Yeah, when we first started to build the product, which was a little over a year ago, we wanted to build a product that was going to service the entire Red Hat portfolio, which included bare metal, RHEL, and other platforms. But as we went through the process of building the application, we really did realize that OpenShift is becoming our default platform, especially for containers and applications and what developers want to do. So we decided to maximize our efforts around building the best experience for OpenShift, because it is the future for Red Hat. So the name shifted at that point; we went from a Red Hat-branded name to an OpenShift-branded name. I think right now the OpenShift.io name, I will admit, is a little confusing for people, and it is intended to be kind of one of the first of a family of OpenShift products. Over time, it may merge and be part of OpenShift overall. But right now, it's meant to complement OpenShift Online, and it's the developer experience for OpenShift Online.
>> And it's free, OpenShift.io? Eventually, some of what you create there ends up in OpenShift, which would be something they pay for, right?
>> Yeah, and we're trying to figure out what that model is right now.
I think right now it is all free; we don't have any intentions to charge for the tools themselves. I think as developers use it and consume more resources on OpenShift Online, we'll start to charge for the resources on OpenShift Online; that's probably the most obvious model. But that's still all stuff we're trying to work out as a company.
>> It's a work in progress.
>> Harry: Work in progress, definitely.
>> Thanks so much for your time, Harry.
>> Thanks for having me, it was great.
>> From Rebecca Knight and Stu Miniman, we hope to see you back here again for more from Red Hat Summit. (electronic jingle)
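To make the package-analysis idea described above a little more concrete, here is a minimal sketch of that style of check: a project's declared dependencies are compared against a list of known advisories, and an alternative is suggested where one is recorded. The package names, versions, and advisory data are invented for illustration; they are not OpenShift.io's actual data or API.

```python
# Minimal sketch of an analytics-style dependency check: flag declared
# packages that have a recorded advisory and suggest an alternative where
# one exists. All package names, versions, and advisories are invented.
from dataclasses import dataclass
from typing import Dict, List, Optional, Set


@dataclass
class Advisory:
    package: str
    affected_versions: Set[str]
    note: str
    suggestion: Optional[str] = None


# Stand-in for the curated data a real service would maintain.
ADVISORIES: List[Advisory] = [
    Advisory("example-json-parser", {"1.0.2", "1.0.3"},
             "known deserialization vulnerability",
             "example-json-parser >= 1.1.0"),
    Advisory("left-padding-util", {"0.9.0"},
             "rarely used alongside the rest of this stack",
             "the standard library's str.rjust"),
]


def analyze(dependencies: Dict[str, str]) -> List[str]:
    """Return human-readable flags for a {package: version} manifest."""
    findings = []
    for advisory in ADVISORIES:
        version = dependencies.get(advisory.package)
        if version in advisory.affected_versions:
            message = f"{advisory.package} {version}: {advisory.note}"
            if advisory.suggestion:
                message += f"; consider {advisory.suggestion}"
            findings.append(message)
    return findings


if __name__ == "__main__":
    manifest = {"example-json-parser": "1.0.2", "requests": "2.27.1"}
    for finding in analyze(manifest):
        print(finding)
```

A real service would of course pull advisories and usage statistics from live data rather than a hard-coded list; the sketch only shows the shape of the recommendation step.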
Paul Cormier, Red Hat | Red Hat Summit 2017
>> Announcer: Live from Boston, Massachusetts, it's The Cube, covering Red Hat Summit 2017. Brought to you by Red Hat. (electronic music)
>> Welcome back to The Cube's coverage of the Red Hat Summit, Boston, Massachusetts. I'm your host Rebecca Knight, along with my cohost Stu Miniman. We are joined by Paul Cormier. He is the executive vice president and president of products and technologies here at Red Hat. Thanks so much for joining us.
>> Thank you.
>> I want to ask you about a point you made earlier in your keynote. You talked about the challenges the customer is facing. You talked about how last year the three big ones were cost, security, and automation. This year it's all about cloud strategy and about the pace of innovation. What is driving this shift in customer priorities and challenges?
>> I think the big thing that's driving it is that over the previous years, people were really test-driving a lot of the cloud and the hybrid technologies. And now, as they actually start to move to the next phase and they actually have to stitch it into their environment, that's where we get real. And that's actually why we see a lot of customers here, because that's what we've done over the last 12 to 18 months: worked with our customers on getting this into their environment. Cloud as part of their IT environment, and not the entire IT environment. So I think that's what's driving it. We're solving real-world problems now, and I think that's what we do best, and I think that's what open source does best.
>> Paul, I thought it was a great point. I loved to see that cloud strategy was the number one thing, because it is what I've been hearing when I've been talking to practitioners the last year or two. I had a T-shirt that said, "Blah blah cloud," because we spent so many years talking about it. In the industry it's always, oh, there's this cool new thing and, customer, you need to get on it. Now, having a cloud strategy is critical for any IT department to understand how they're going forward, where they deploy resources, where they go to their partners, like yourself, to be able to change and shift many of the things that they're doing.
>> Well, what we've found, even in my own shop, right, even my own development shop, is you had a lot of departmental groups going out to the public cloud. And now, because you're spending so much there and pieces are going out, the CIO gets involved, and now they want to look at it. How is this going to fit into my overall strategy? And so, at that point, the only way is hybrid. The CIO doesn't want five islands of different operating environments, they want one. A little operating group really doesn't care, they want their own thing, but when the CIO is now looking at an overall structure for the entire company, that's what's really driving hybrid right now. And that's really driving these implementations, and frankly, that's what's driving a lot of the desire to have this common operating environment that we've been talking about for a long time, and implementing for a long time.
>> So how do you do it? You talked about these five separate islands, but those five islands now need to work together and communicate and collaborate and come up with a unified strategy. How do you do it?
>> Two things. First of all, because so much has moved to Linux, RHEL is that platform. The cloud is about the application.
One of the points that I made in my keynote this morning, I made it a little subtly, so maybe it didn't come through: we're not building infrastructure for the sake of building infrastructure. We're building infrastructure for the applications. And so, that's the really important part. The applications run on Linux, so the first step is really getting a common operating environment for the application. We did that 15 years ago with RHEL. So now, when you see RHEL on bare metal, RHEL as a virtual machine on us, on VMware, or on Microsoft, RHEL as a container in a private cloud, RHEL in one of the public clouds, it's the same RHEL. So, whether we do 7.1 or 7.2, it's 7.1 or 7.2; we upgrade in the same way, with the same bits. When we have a security update for 7.2, it's the same thing. So now the application, with RHEL, really gets that consistency. Then, with OpenShift, we bring the infrastructure to maintain it, support it, deploy it, and manage it. And so that's where the light bulb's going on for a lot of CIOs as they've seen OpenShift, and OpenStack as well, because we're making this hybrid world manageable and secure. But RHEL's been the key, because that's the application layer. Frankly, that is the piece that VMware didn't have, right? VMware didn't have any pieces that touched the app. Apps don't run on hypervisors, they run on operating systems. And even containers, it's just a Linux OS sliced up in a different way. So that's really been the key. We've been at this for 15 years. Really, if you look at it that way, we've evolved this over 15 years.
>> Alright, Paul, you mentioned briefly in your keynote an announcement with AWS. I know the keynote tomorrow is going to go into more detail, but we think it's a pretty big deal. I've been talking to some of the press, and we talked to one of your customers, Optum, who's one of the keynote speakers. I mean, he said game changer. He uses OpenShift, loves what he can do with this. You were just talking about application affinity, and that's what infrastructure's for. Can you connect that with what we're talking about with AWS here?
>> I think why this is a game changer for all of us, and mostly the customer, is because, prior to this, invoking an Amazon service for an application would mean that it could only be invoked from that infrastructure at AWS; it could only run there, frankly. And it really was limiting. Now, bringing the connection points back into OpenShift, the application can invoke that Amazon service from on Amazon, or even on premise. And it really extends the reach of Amazon to come in and really now build a hybrid environment. And I also think it's significant that our customers are telling both of us, both Red Hat and Amazon, that they want to run in a hybrid world. So, that's the game changer. It really extends both of our reaches that way, while keeping that consistent operating environment with the RHEL base.
>> And that's different than just saying, oh, I can run a VM in an Amazon environment.
>> Right, because you're running a VM as an island. Now, you're running an actual system that's spanning across the hybrid world, being managed and orchestrated from one place.
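One way to picture the hybrid point being made here: the application code that calls an Amazon service can stay the same whether the pod runs on AWS or on an on-premise OpenShift cluster, as long as the platform injects the right credentials and configuration. The sketch below uses boto3 against S3 with a placeholder bucket; it is an illustration of the idea, not the integration that was announced.

```python
# Illustration of the hybrid idea: the same code invokes an AWS service
# (S3 here) whether it runs in a pod on AWS or on premise. Credentials and
# region come from the environment the platform injects; the bucket name
# and object key are placeholders.
import os

import boto3


def publish_report(payload: bytes) -> str:
    s3 = boto3.client("s3", region_name=os.environ.get("AWS_REGION", "us-east-1"))
    bucket = os.environ.get("REPORT_BUCKET", "example-reports-bucket")
    key = "reports/latest.json"
    s3.put_object(Bucket=bucket, Key=key, Body=payload)
    return f"s3://{bucket}/{key}"


if __name__ == "__main__":
    print(publish_report(b'{"status": "ok"}'))
```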
>> I want to talk to you about your approach to the product design and development process. In the past you have talked about the virtues of patience and how you do not build a multi-million dollar product overnight. It takes years. And yet, on the other hand, there is this desire and hunger for fast innovation and change. How do you strike that balance with your team and also with customers?
>> My wife wouldn't say I had that much patience. (laughing)
>> But at least you appreciate that it's a good thing.
>> No, I mean, frankly, our company, even all the way to our board of directors, has been very, very supportive of that. I mean, the first thing we do is we start in the upstream communities. And really, what we are doing now is integrating multiple communities together. When it was just the OS, people used to say all the time, there is no one Linux community, there are multiple communities, and our job is to bring it all together. Right now, it's that on steroids. We try to pick the right technologies and drive them. I mean, I'll give you a great example. We bought a company a few years back, Qumranet. At the time Xen was the hypervisor, and the community was going to KVM. We bought the company, they had zero revenue, we had zero additional revenue because it was a hypervisor. We bought it so we could get behind the community, bolster it, and know it would go in the right direction. That is the key that no one else has really figured out: to place yourself in these communities over the years, and drive it, drive it, drive it, and then bring that innovation into a product. I call it the difference between a project and a product. Our products are really an amalgamation of many communities put together in a platform to solve a real-world problem. But you have to have the patience. RHEL has been such a successful product for us that, frankly, it's fueled us financially and given us the ability to have the patience for all these next-generation platforms. That's what's done it for us, really.
>> Your CEO Jim Whitehurst, in his book, talked about how, from an acquisition standpoint, everything you do has got to be open sourced. Does that hamper you at all, or are there certain technology areas, where things are moving so fast, that you would buy something and keep it internal for a while until it was open sourced? How do you handle something like that?
>> The last five or six acquisitions were not open source, so we open sourced them.
>> Stu: Okay.
>> It's just in our DNA. Frankly, I think it's forced us to do it the right way, because we couldn't have a closed source product now if we tried. If Jim and I said we're going to have a closed source product, we'd be in the office alone. It's in the DNA, and it's really forced us to build better software, because we never ever think, here's the line and everything below is open and above is closed. We never have to think that. It's all open. And it just forces that innovation. The landscape is littered with companies that have tried to have that line. It just doesn't work. You confuse your engineers, you confuse your market, you confuse your customers, you confuse your partners. It's all open. And that's what really drives the innovation.
>> Let's talk about recruitment and this war for talent that we're seeing in the tech industry. Red Hat's based in North Carolina. You're based here in Boston. Of course we have people here from 70 different countries, as your CEO mentioned in his opening remarks. What are you seeing? What are the trends? What do the best and brightest developers want out of an employer? And how are you giving it to them?
>> A couple things. Up here in Boston, the products group is headquartered here.
Sales group is headquartered up here. So we sort of live together. One of the things we've just did, we just announced we're opening an office right across the street here, for both R&D and our customer briefing center. So one thing is-- >> Congratulations. We're excited for that. Of course you'd had the Westford facility with lots of engineers. But Boston, a block away from where GE's new headquarters going to be. >> A block away. It's about collaborating with the universities, collaborating with the students to come out of the universities. I see it around the world. No, but they want to be in the city. >> Rebecca: Yeah. >> They want to be in the city. That's the first thing. We have a thousand engineers in the Czech Republic that are core to our product. They build many of the products in the Czech Republic. We're near universities. The reason why we did Boston for the R&D is universities, just as the Czech Republic. Because now what's taught in engineering and computer science programs is Linux and open source. So when students can get out, go work for a company, we give them the freedom to really drive where the technology needs to go, that's really our recruiting draw. I would never go into our engineers and say you will implement this this way. They implement it the right way. >> Rebecca: So autonomy? >> Autonomy. >> Rebecca: And cities. (laughing) >> Paul: Well, autonomy and cities in the right places. >> Right, right. >> We're really looking for the talent that really wants to innovate. And they're coming out of the universities now doing that. So that's what's been successful for us. >> Alright, Paul we were talking about this is the 13th year of the show, it's the fourth year we've done it. The Cloud piece has really matured a lot. If you looked forward, if we come back a year from now, what do you kind of see as some of the major things that we'll want to have accomplished? What's on your plate for the next 12 months? >> One of the things that we're looking at now, I sort of ended it up in my keynote, is we really think that we've really abstracted the differences for the application layer, storage layer, application layer, management layer, across the hybrid world, but there's a lot of pieces of the infrastructure that the operations people have to deal with every day. The network stacks, the really underneath and the plumbing storage stacks. Sort of the difference between OpenShift and OpenStack. VM's being orchestrated beside containers. So we really starting to see those pieces come together. Really that application layer and that infrastructure layer coming together. We think of OpenStack as bringing the infrastructure to the hybrid world and OpenShift as bringing the application to the hybrid world. Starting to bring those pieces together. And I think that's what you'll see more of next year. Is commonality around management, orchestration, networking, storage, just more of that, and more ease of plug and play. >> Great, well Paul Cormier thank you so much for joining us. This is Rebecca Knight along with Stu Miniman. Thank you for joining us at Red Hat Summit 2017. We'll be back just after this. (electronic music)