On Demand: API Gateways, Ingress, and Service Mesh | Mirantis Launchpad 2020


 

>> Thank you, everyone, for joining. I'm here today to talk about ingress controllers, API gateways, and service mesh on Kubernetes, three very hot topics that are also frequently confusing. I'm Richard Li, founder/CEO of Ambassador Labs, formerly known as Datawire. We sponsor a number of popular open source projects that are part of the Cloud Native Computing Foundation, including Telepresence and Ambassador, which is a Kubernetes-native API gateway. Most of what I'm going to talk about today is related to our work around Ambassador. So I want to start by talking about application architecture and workflow on Kubernetes, and how applications being built on Kubernetes really differ from how they used to be built. When you're building applications on Kubernetes, the traditional architecture is the very famous monolith. The monolith is a central piece of software: one giant thing that you build, deploy, and run. The value of a monolith is that it's really simple. And if you think about the monolithic development process, what's more important is that the architecture is really reflected in the workflow. With a monolith, you have a very centralized development process. You tend not to release too frequently, because you have all these different development teams working on different features; you decide in advance when you're going to release that particular piece of software, and everyone works towards that release train. And you have specialized teams: a development team with all your developers, a QA team, a release team, an operations team. So that's your typical development organization and workflow with a monolithic application. As organizations shift to microservices, they adopt a very different development paradigm. 
It's a decentralized development paradigm where you have lots of different independent teams simultaneously working on different parts of the application, and those application components are shipped as independent services. And so you really have a continuous release cycle, because instead of synchronizing all your teams around one particular release vehicle, you have so many different release vehicles that each team is able to ship as soon as they're ready. We call this full cycle development, because each team is responsible not just for the coding of its microservice, but also for the testing, release, and operations of that service. So this is a huge change, particularly with workflow, and there are a lot of implications. I have a diagram here that tries to visualize the difference in organization. With the monolith, everyone works on the monolith. With microservices, the yellow folks work on the yellow microservice, the purple folks work on the purple microservice, and maybe just one person works on the orange microservice, and so forth. So there's a lot more diversity across your teams and your microservices, and it lets you really adjust the granularity of your development to your specific business needs. So how do users actually access your microservices? With a monolith, it's pretty straightforward: you have one big thing, so you just tell the internet, I have this one big thing; make sure you send all your traffic to the big thing. But when you have a bunch of different microservices, how do users actually access them? The solution is an API gateway. The API gateway consolidates all access to your microservices. Requests come in from the internet and go to your API gateway. 
The API gateway looks at these requests and, based on their nature, routes them to the appropriate microservice. And because the API gateway centralizes access to all of the microservices, it also really helps you simplify authentication, observability, routing, and all these other cross-cutting concerns: instead of implementing authentication in each of your microservices, which would be a maintenance nightmare and a security nightmare, you put all of your authentication in your API gateway. So in this world of microservices, API gateways are a really important, really necessary part of your infrastructure, whereas pre-microservices, pre-Kubernetes, an API gateway, while valuable, was much more optional. That's one of the really big things to recognize: with a microservices architecture, you really need to start thinking much more about an API gateway. The other consideration with an API gateway is your management workflow, because, as I mentioned, each team is responsible for its own microservice, which also means each team needs to be able to independently manage the gateway. Team A, working on its microservice, needs to be able to tell the API gateway, this is how I want you to route requests to my microservice, and the purple team needs to be able to say something different for how purple requests get routed to the purple microservice. So that's also a really important consideration as you think about API gateways and how they fit in your architecture, because it's not just about your architecture, it's also about your workflow. So let me talk about API gateways on Kubernetes, starting with ingress. Ingress is the process of getting traffic from the internet to services inside the cluster. Kubernetes, from an architectural perspective, actually has a requirement that all the different pods in a Kubernetes cluster need to be able to communicate with each other. 
As a consequence, Kubernetes creates its own private network space for all these pods, and each pod gets its own IP address. This makes things very, very simple for inter-pod communication. Kubernetes, on the other hand, does not say very much about how traffic should actually get into the cluster. There's a lot of detail, and Kubernetes is very opinionated, about how traffic gets routed around once it's inside the cluster, but for getting traffic into the cluster there are multiple strategies: there's pod IP, there's Ingress, there's LoadBalancer resources, there's NodePort. I'm not going to go into exhaustive detail on all these different options; I'm just going to talk about the most common approach that most organizations take today. The most common strategy for routing is coupling an external load balancer with an ingress controller. An external load balancer can be a hardware load balancer, a virtual machine, or a cloud load balancer, but the key requirement for an external load balancer is the ability to attach a stable IP address, so that you can map a domain name in DNS to that particular load balancer. The external load balancer usually, but not always, then passes traffic straight through to your ingress controller, and your ingress controller routes it internally inside Kubernetes to the various pods that are running your microservices. There are other approaches, but this is the most common, and the reason is that the alternative approaches really require each of your microservices to be exposed outside of the cluster, which causes a lot of challenges around management, deployment, and maintenance that you generally want to avoid. So I've been talking about an ingress controller. What exactly is an ingress controller? 
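The external-load-balancer half of this pattern is typically wired up with a Service of type LoadBalancer in front of the controller's pods. Here is a minimal sketch, assuming a hypothetical ingress controller Deployment labeled `app: my-ingress-controller`; the names, namespace, and ports are illustrative, not from the talk:

```yaml
# Asks the environment (cloud provider, MetalLB, etc.) for an external
# load balancer with a stable IP and forwards its traffic to the
# ingress controller's pods.
apiVersion: v1
kind: Service
metadata:
  name: ingress-controller
  namespace: ingress
spec:
  type: LoadBalancer
  selector:
    app: my-ingress-controller   # matches the controller Deployment's pods
  ports:
    - name: http
      port: 80          # port exposed on the external load balancer
      targetPort: 8080  # port the controller process listens on
    - name: https
      port: 443
      targetPort: 8443
```

DNS for your domain then points at the load balancer's stable IP, and everything behind that IP is routed by the ingress controller.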
An ingress controller is an application that can process rules according to the Kubernetes ingress specification. Strangely, Kubernetes does not actually ship with a built-in ingress controller. I say strangely because you'd think getting traffic into a cluster is a pretty common requirement, and it is; it turns out this is complex enough that there's no one-size-fits-all ingress controller. So there's a set of ingress rules, part of the Kubernetes ingress specification, that specify how traffic gets routed into the cluster, and then you need a proxy that can actually route this traffic to these different pods. An ingress controller really translates between the Kubernetes configuration and the proxy configuration, and common proxies for ingress controllers include HAProxy, Envoy Proxy, and NGINX. So let me talk a little bit more about these common proxies. There are many other proxies; I'm just highlighting what I consider to be probably the three most well-established: HAProxy, NGINX, and Envoy Proxy. HAProxy is managed by HAProxy Technologies and started in 2001. The HAProxy organization actually creates an ingress controller, and before they did, there was an open source project called Voyager which built an ingress controller on HAProxy. NGINX is managed by NGINX, Inc., subsequently acquired by F5. Also open source, the proxy started a little bit later, in 2004. There's ingress-nginx, the community project, which is the most popular, as well as the NGINX, Inc. kubernetes-ingress project, which is maintained by the company. This is a common source of confusion, because sometimes people will say they're using the NGINX ingress controller, and it's not clear whether they mean the commercially supported version or the open source version. Although the two have very similar names, they actually have different functionality. 
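To make those routing rules concrete, here is a minimal sketch of an Ingress resource; the host, path, and service names are made up for illustration, and it uses the `networking.k8s.io/v1` API (older clusters used `extensions/v1beta1`):

```yaml
# Route HTTP requests for api.example.com/orders to a hypothetical
# "orders" Service; an ingress controller turns this into proxy config.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-routes
spec:
  rules:
    - host: api.example.com      # hypothetical domain
      http:
        paths:
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: orders     # hypothetical microservice Service
                port:
                  number: 80
```

The controller watches for resources like this and translates them into the equivalent HAProxy, NGINX, or Envoy configuration.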
Finally, Envoy Proxy is the newest entrant to the proxy market, originally developed by engineers at Lyft, the ride-sharing company, who subsequently donated it to the Cloud Native Computing Foundation. Envoy has become probably the most popular cloud native proxy. It's used by Ambassador, the API gateway; it's used in the Istio service mesh; it's used in VMware's Contour; it's used by Amazon in App Mesh. It's probably the most common proxy in the cloud native world. So, as I mentioned, there are a lot of different options for ingress controllers. The most common is the NGINX ingress controller, not the one maintained by NGINX, Inc., but the one that's part of the Kubernetes project. Ambassador is the most popular Envoy-based option. Another common option is the Istio Gateway, which is directly integrated with the Istio mesh, and that's actually part of Docker Enterprise. So with all these choices of ingress controller, how do you actually decide? Well, the reality is that the ingress specification is very limited. The reason is that there's a lot of nuance in how you want to get traffic into a cluster, and it turns out to be very challenging to create a generic, one-size-fits-all specification because of the vast diversity of implementations and choices available to end users. So you don't see ingress specifying anything around resilience: if you want to specify a timeout or rate limiting, it's not possible. Ingress is really limited to support for HTTP, so if you're using gRPC or WebSockets, you can't use the ingress specification. Different ways of routing, authentication, the list goes on and on. So what happens is that different ingress controllers extend the core ingress specification to support these use cases in different ways. 
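As one concrete illustration of such an extension, the community NGINX ingress controller lets you bolt a timeout, which the core spec cannot express, onto a standard Ingress via an annotation. A sketch with hypothetical host and service names; the annotation key follows the kubernetes/ingress-nginx convention:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: orders-routes
  annotations:
    # Controller-specific extension: not part of the ingress spec, and
    # silently ignored by any other ingress controller.
    nginx.ingress.kubernetes.io/proxy-read-timeout: "30"
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: orders
                port:
                  number: 80
```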
NGINX ingress uses a combination of ConfigMaps and the ingress resource plus custom annotations that extend the ingress to let you configure a lot of the additional extensions exposed in NGINX ingress. With Ambassador, we actually use custom resource definitions, different CRDs that extend Kubernetes itself, to configure Ambassador. One of the benefits of the CRD approach is that we can create a standard schema that's actually validated by Kubernetes: when you do a kubectl apply of an Ambassador CRD, kubectl can immediately validate it and tell you whether you're applying a valid schema and format for your Ambassador configuration. And, as I previously mentioned, Ambassador is built on Envoy Proxy. Istio Gateway also uses CRDs; these can be seen as an extension of the service mesh CRDs, as opposed to dedicated gateway CRDs. And again, Istio Gateway is built on Envoy Proxy. So I've been talking a lot about ingress controllers, but the title of my talk was really about API gateways and ingress controllers and service mesh. So what's the difference between an ingress controller and an API gateway? To recap, an ingress controller processes Kubernetes ingress routing rules, while an API gateway is a central point for managing all your traffic to Kubernetes services. It typically has additional functionality such as authentication, observability, a developer portal, and so forth. So what you find is that not all API gateways are ingress controllers, because some API gateways don't support Kubernetes at all, so they can't be ingress controllers. And not all ingress controllers support the functionality, such as authentication, observability, and a developer portal, that you would typically associate with an API gateway. Generally speaking, API gateways that run on Kubernetes should be considered a superset of an ingress controller. 
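For contrast with the annotation style, here is a sketch of what the CRD approach looks like, using an Ambassador Mapping; the service name, prefix, and timeout value are illustrative, and the fields follow the getambassador.io/v2 API as I understand it:

```yaml
# A dedicated routing resource with its own Kubernetes-validated schema,
# rather than free-form annotations on an Ingress.
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: orders-mapping
spec:
  prefix: /orders/      # route requests matching this prefix...
  service: orders:80    # ...to this Kubernetes Service
  timeout_ms: 30000     # an extension beyond the core ingress spec
```

Because Mapping is a registered Kubernetes type with a schema, kubectl apply can reject a malformed manifest up front instead of the controller failing at runtime.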
But if the API gateway doesn't run on Kubernetes, then it's an API gateway and not an ingress controller. So what's the difference between a service mesh and an API gateway? An API gateway is really focused on traffic into and out of a cluster; the colloquial term for this is North/South traffic. A service mesh is focused on traffic between services in a cluster: East/West traffic. All service meshes need an API gateway: Istio includes a basic ingress/API gateway called the Istio Gateway, because a service mesh needs traffic from the internet to be routed into the mesh before it can actually do anything. Envoy Proxy, as I mentioned, is the most common proxy for both meshes and gateways. Docker Enterprise provides an Envoy-based solution out of the box, the Istio Gateway. The reason Docker does this is that, as I mentioned, Kubernetes doesn't come packaged with an ingress controller, so it makes sense for Docker Enterprise to provide something that's easy to get going with, no extra steps required: with Docker Enterprise, you can deploy it and get it exposed on the internet without any additional software. The Istio Gateway in Docker Enterprise can also be easily upgraded to Ambassador, because both are built on Envoy, which ensures consistent routing semantics. And with Ambassador you also get greater security, for example for single sign-on; there's a lot of security by default configured directly into Ambassador, better control over TLS, things like that. Finally, there's commercial support available for Ambassador; Istio is an open source project with a very broad community, but no commercial support options. So to recap, ingress controllers and API gateways are critical pieces of your cloud native stack, so make sure you choose something that works well for you. A lot of times, organizations don't think critically enough about the API gateway until they're much further down the Kubernetes journey. 
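For comparison with the ingress examples earlier, here is a minimal sketch of the Istio Gateway resource mentioned above; the host is hypothetical, and the selector targets Istio's default ingress gateway deployment:

```yaml
# Admits North/South HTTP traffic for api.example.com into the mesh.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: example-gateway
spec:
  selector:
    istio: ingressgateway    # Istio's stock Envoy-based gateway pods
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "api.example.com"  # hypothetical domain
```

The Gateway only admits traffic at the edge; routing to individual services is then expressed with Istio's VirtualService resources, the same mechanism used for East/West traffic inside the mesh.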
Considerations for choosing that API gateway include functionality, such as how it handles traffic management and observability and whether it supports the protocols you need, as well as nonfunctional requirements, such as whether it integrates with your workflow and whether you can get commercial support for it. An API gateway is focused on North/South traffic, traffic into and out of your Kubernetes cluster. A service mesh is focused on East/West traffic, traffic between different services inside the same cluster. Docker Enterprise includes the Istio Gateway out of the box: easy to use, but it can also be extended with Ambassador for enhanced functionality and security. So thank you for your time. I hope this was helpful in understanding the difference between API gateways, ingress controllers, and service meshes, and how you should be thinking about them in your Kubernetes deployment.

Published Date : Sep 14 2020


API Gateways Ingress Service Mesh | Mirantis Launchpad 2020


 

>>thank you everyone for joining. I'm here today to talk about English controllers. AP Gateways and service mention communities three very hot topics that are also frequently confusing. So I'm Richard Lee, founder CEO of Ambassador Labs, formerly known as Data Wire. We sponsor a number of popular open source projects that are part of the Cloud Native Computing Foundation, including telepresence and Ambassador, which is a kubernetes native AP gateway. And most of what I'm going to talk about today is related to our work around ambassador. Uh huh. So I want to start by talking about application architecture, er and workflow on kubernetes and how applications that are being built on kubernetes really differ from how they used to be built. So when you're building applications on kubernetes, the traditional architectures is the very famous monolith, and the monolith is a central piece of software. It's one giant thing that you build, deployed run, and the value of a monolith is it's really simple. And if you think about the monolithic development process, more importantly, is the architecture er is really reflecting that workflow. So with the monolith, you have a very centralized development process. You tend not to release too frequently because you have all these different development teams that are working on different features, and then you decide in advance when you're going to release that particular pieces offering. Everyone works towards that release train, and you have specialized teams. You have a development team which has all your developers. You have a Q A team. You have a release team, you have an operations team, so that's your typical development organization and workflow with a monolithic application. As organization shift to micro >>services, they adopt a very different development paradigm. 
It's a decentralized development paradigm where you have lots of different independent teams that are simultaneously working on different parts of the application, and those application components are really shipped as independent services. And so you really have a continuous release cycle because instead of synchronizing all your teams around one particular vehicle, you have so many different release vehicles that each team is able to ship a soon as they're ready. And so we call this full cycle development because that team is >>really responsible, not just for the coding of that micro service, but also the testing and the release and operations of that service. Um, >>so this is a huge change, particularly with workflow. And there's a lot of implications for this, s o. I have a diagram here that just try to visualize a little bit more the difference in organization >>with the monolith. You have everyone who works on this monolith with micro services. You have the yellow folks work on the Yellow Micro Service, and the purple folks work on the Purple Micro Service and maybe just one person work on the Orange Micro Service and so forth. >>So there's a lot more diversity around your teams and your micro services, and it lets you really adjust the granularity of your development to your specific business need. So how do users actually access your micro services? Well, with the monolith, it's pretty straightforward. You have one big thing. So you just tell the Internet while I have this one big thing on the Internet, make sure you send all your travel to the big thing. But when you have micro services and you have a bunch of different micro services, how do users actually access these micro services? So the solution is an AP gateway, so the gateway consolidates all access to your micro services, so requests come from the Internet. They go to your AP gateway. 
The AP Gateway looks at these requests, and based on the nature of these requests, it routes them to the appropriate micro service. And because the AP gateway is centralizing thing access to all the micro services, it also really helps you simplify authentication, observe ability, routing all these different crosscutting concerns. Because instead of implementing authentication in each >>of your micro services, which would be a maintenance nightmare and a security nightmare, you put all your authentication in your AP gateway. So if you look at this world of micro services, AP gateways are really important part of your infrastructure, which are really necessary and pre micro services. Pre kubernetes Unhappy Gateway Well valuable was much more optional. So that's one of the really big things around. Recognizing with the micro services architecture er, you >>really need to start thinking much more about maybe a gateway. The other consideration within a P A gateway is around your management workflow because, as I mentioned, each team is actually response for their own micro service, which also means each team needs to be able to independently manage the gateway. So Team A working on that micro service needs to be able to tell the AP at Gateway. This this is >>how I want you to write. Request to my micro service, and the Purple team needs to be able to say something different for how purple requests get right into the Purple Micro Service. So that's also really important consideration as you think about AP gateways and how it fits in your architecture. Because it's not just about your architecture. It's also about your workflow. So let me talk about a PR gateways on kubernetes. I'm going to start by talking about ingress. So ingress is the process of getting traffic from the Internet to services inside the cluster kubernetes. 
From an architectural perspective, it actually has a requirement that all the different pods in a kubernetes cluster needs to communicate with each other. And as a consequence, what Kubernetes does is it creates its own private network space for all these pods, and each pod gets its own I p address. So this makes things very, very simple for inter pod communication. Cooper in any is, on the other hand, does not say very much around how traffic should actually get into the cluster. So there's a lot of detail around how traffic actually, once it's in the cluster, how you routed around the cluster and it's very opinionated about how this works but getting traffic into the cluster. There's a lot of different options on there's multiple strategies pot i p. There's ingress. There's low bounce of resource is there's no port. >>I'm not gonna go into exhaustive detail on all these different options on. I'm going to just talk about the most common approach that most organizations take today. So the most common strategy for routing is coupling an external load balancer with an ingress controller. And so an external load balancer can be >>ah, Harvard load balancer. It could be a virtual machine. It could be a cloud load balancer. But the key requirement for an external load balancer >>is to be able to attack to stable I people he address so that you can actually map a domain name and DNS to that particular external load balancer and that external load balancer, usually but not always well, then route traffic and pass that traffic straight through to your ingress controller, and then your English controller takes that traffic and then routes it internally inside >>kubernetes to the various pods that are running your micro services. There are >>other approaches, but this is the most common approach. 
And the reason for this is that the alternative approaches really required each of your micro services to be exposed outside of the cluster, which causes a lot of challenges around management and deployment and maintenance that you generally want to avoid. So I've been talking about in English controller. What exactly is an English controller? So in English controller is an application that can process rules according to the kubernetes English specifications. Strangely, Kubernetes is not actually ship with a built in English controller. Um, I say strangely because you think, well, getting traffic into a cluster is probably a pretty common requirement. And it is. It turns out that this is complex enough that there's no one size fits all English controller. And so there is a set of ingress >>rules that are part of the kubernetes English specifications at specified how traffic gets route into the cluster >>and then you need a proxy that can actually route this traffic to these different pods. And so an increase controller really translates between the kubernetes configuration and the >>proxy configuration and common proxies for ingress. Controllers include H a proxy envoy Proxy or Engine X. So >>let me talk a little bit more about these common proxies. So all these proxies and there >>are many other proxies I'm just highlighting what I consider to be probably the most three most well established proxies. Uh, h a proxy, uh, Engine X and envoy proxies. So H a proxy is managed by a plastic technology start in 2000 and one, um, the H a proxy organization actually creates an ingress controller. And before they kept created ingress controller, there was an open source project called Voyager, which built in ingress Controller on >>H a proxy engine X managed by engine. Xing, subsequently acquired by F five Also open source started a little bit later. The proxy in 2004. And there's the engine Xing breast, which is a community project. 
Um, that's the most popular a zwelling the engine Next Inc Kubernetes English project which is maintained by the company. This is a common source of confusion because sometimes people will think that they're using the ingress engine X ingress controller, and it's not clear if they're using this commercially supported version or the open source version, and they actually, although they have very similar names, uh, they actually have different functionality. Finally. Envoy Proxy, the newest entrant to the proxy market originally developed by engineers that lift the ride sharing company. They subsequently donated it to the cloud. Native Computing Foundation Envoy has become probably the most popular cloud native proxy. It's used by Ambassador uh, the A P a. Gateway. It's using the SDO service mash. It's using VM Ware Contour. It's been used by Amazon and at mesh. It's probably the most common proxy in the cloud native world. So, as I mentioned, there's a lot of different options for ingress. Controller is the most common. Is the engine X ingress controller, not the one maintained by Engine X Inc but the one that's part of the Cooper Nannies project? Um, ambassador is the most popular envoy based option. Another common option is the SDO Gateway, which is directly integrated with the SDO mesh, and that's >>actually part of Dr Enterprise. So with all these choices around English controller. How do you actually decide? Well, the reality is the ingress specifications very limited. >>And the reason for this is that getting traffic into the cluster there's a lot of nuance into how you want to do that. And it turns out it's very challenging to create a generic one size fits all specifications because of the vast diversity of implementations and choices that are available to end users. And so you don't see English specifying anything around resilience. So if >>you want to specify a time out or rate limiting, it's not possible in dresses really limited to support for http. 
So if you're using GSPC or Web sockets, you can't use the ingress specifications, um, different ways of routing >>authentication. The list goes on and on. And so what happens is that different English controllers extend the core ingress specifications to support these use cases in different ways. Yeah, so engine X ingress they actually use a combination of config maps and the English Resource is plus custom annotations that extend the ingress to really let you configure a lot of additional extensions. Um, that is exposing the engineers ingress with Ambassador. We actually use custom resource definitions different CRTs that extend kubernetes itself to configure ambassador. And one of the benefits of the CRD approach is that we can create a standard schema that's actually validated by kubernetes. So when you do a coup control apply of an ambassador CRD coop Control can immediately validate and tell >>you if you're actually applying a valid schema in format for your ambassador configuration on As I previously mentioned, ambassadors built on envoy proxy, >>it's the Gateway also uses C R D s they can to use a necks tension of the service match CRD s as opposed to dedicated Gateway C R D s on again sdo Gateway is built on envoy privacy. So I've been talking a lot about English controllers. But the title of my talk was really about AP gateways and English controllers and service smashed. So what's the difference between an English controller and an AP gateway? So to recap, an immigrant controller processes kubernetes English routing rules and a P I. G. Wave is a central point for managing all your traffic to community services. It typically has additional functionality such as authentication, observe, ability, a >>developer portal and so forth. So what you find Is that not all Ap gateways or English controllers? Because some MP gateways don't support kubernetes at all. S o eso you can't make the can't be ingress controllers and not all ingrates. 
Controllers support the functionality such as authentication, observe, ability, developer portal >>that you would typically associate with an AP gateway. So, generally speaking, um, AP gateways that run on kubernetes should be considered a super set oven ingress controller. But if the A p a gateway doesn't run on kubernetes, then it's an AP gateway and not an increase controller. Yeah, so what's the difference between a service Machin and AP Gateway? So an AP gateway is really >>focused on traffic into and out of a cluster, so the political term for this is North South traffic. A service mesh is focused on traffic between services in a cluster East West traffic. All service meshes need >>an AP gateway, so it's Theo includes a basic ingress or a P a gateway called the SDO gateway, because a service mention needs traffic from the Internet to be routed into the mesh >>before it can actually do anything Omelet. Proxy, as I mentioned, is the most common proxy for both mesh and gateways. Dr. Enterprise provides an envoy based solution out of the box. >>Uh, SDO Gateway. The reason Dr does this is because, as I mentioned, kubernetes doesn't come package with an ingress. Uh, it makes sense for Dr Enterprise to provide something that's easy to get going. No extra steps required because with Dr Enterprise, you can deploy it and get going. Get exposed on the Internet without any additional software. Dr. Enterprise can also be easily upgraded to ambassador because they're both built on envoy and interest. Consistent routing. Semantics. It also with Ambassador. You get >>greater security for for single sign on. There's a lot of security by default that's configured directly into Ambassador Better control over TLS. Things like that. Um And then finally, there's commercial support that's actually available for Ambassador. SDO is an open source project that has a has a very broad community but no commercial support options. 
So to recap, ingress controllers and API gateways are critical pieces of your cloud native stack, so make sure that you choose something that works well for you. And I think a lot of times organizations don't think critically enough about the API gateway until they're much further down the Kubernetes journey. Considerations around how to choose that API gateway include functionality: how does it do with traffic management and observability? Does it support the protocols that you need? Also nonfunctional requirements: does it integrate with your workflow? Can you get commercial support for it? An API gateway is focused on north-south traffic, so traffic into and out of your Kubernetes cluster. A service mesh is focused on east-west traffic, so traffic between different services inside the same cluster. Docker Enterprise includes the Istio Gateway out of the box, easy to use, but it can also be extended with Ambassador for enhanced functionality and security. So thank you for your time. I hope this was helpful in understanding the difference between API gateways, ingress controllers, and service meshes, and how you should be thinking about them in your Kubernetes deployment.

Published Date : Sep 12 2020

SUMMARY :

Richard Li of Ambassador Labs walks through the differences between ingress controllers, API gateways, and service meshes on Kubernetes. Moving from a monolith to microservices shifts organizations to a decentralized, full cycle development model, and an API gateway becomes the consolidated entry point for traffic to those microservices. Because the core Kubernetes ingress specification is very limited, features such as timeouts or rate limiting require each ingress controller, such as NGINX or Ambassador, to extend it in its own way. An ingress controller processes Kubernetes ingress routing rules; an API gateway is a central point for managing north-south traffic into the cluster, typically adding authentication, observability, and a developer portal; and a service mesh manages east-west traffic between services inside the cluster. Docker Enterprise ships the Istio Gateway out of the box and can be extended with Ambassador for additional functionality and security.


Prem Balasubramanian and Manoj Narayanan | Hitachi Vantara: Build Your Cloud Center of Excellence


 

(Upbeat music playing) >> Hey everyone, thanks for joining us today. Welcome to this event, Building Your Cloud Center of Excellence with Hitachi Vantara. I'm your host, Lisa Martin. I've got a couple of guests here with me next to talk about redefining cloud operations and application modernization for customers. Please welcome Prem Balasubramanian, the SVP and CTO at Hitachi Vantara, and Manoj Narayanan is here as well, the Managing Director of Technology at GTCR. Guys, thank you so much for joining me today. Excited to have this conversation about redefining CloudOps with you. >> Pleasure to be here. >> Pleasure to be here. >> Prem, let's go ahead and start with you. You have done well over a thousand cloud engagements in your career. I'd love to get your point of view on how the complexity around cloud operations and management has evolved in the last, say, three to four years. >> It's a great question, Lisa. Before we understand the complexity around the management itself: the cloud has evolved significantly over the last decade, from being a backend infrastructure or infrastructure as a service for many companies to becoming the business for many companies. If you think about a lot of these cloud-born companies, cloud is where their entire workload and their business lives. With that as a background for this conversation, if you think about cloud operations, there was a lot of lift and shift happening in the market, where people lifted their workloads or applications and moved them onto the cloud, and they treated cloud significantly as an infrastructure. And the way they started to manage it was, again, the same format in which they were managing their on-prem infrastructure, and they call it I&O, Infrastructure and Operations. That's kind of the way cloud is traditionally managed. In the last few years, we are seeing a significant shift around thinking of cloud more as a workload rather than as just an infrastructure.
And what I mean by workload is: in the cloud, everything is now code. So you are codifying your infrastructure, your application is already code, and your data is also codified as data services. Now, with that context applied, the way you think about managing the cloud has to change significantly, and many companies are moving towards trying to change their models to look at this complex environment as opposed to treating it like a simple infrastructure that is sitting somewhere else. So that's one of the biggest changes and shifts that are causing a lot of complexity and headache for a lot of customers managing environments. The second critical aspect, which even exacerbates the situation, is multicloud environments. Now, there are companies that have got it right, thinking about the right cloud for the right workload. So there are companies that I reach out to and talk with that have got their office applications and emails running on Microsoft 365, which can be on the Azure cloud, whereas they're running their engineering applications, the ones that they build and leverage for their end customers, on Amazon. And to some extent they've got it right, but still they have multiple clouds that they have to go after and maintain. This becomes complex when you have two clouds for the same type of workload. When I have to host applications for my end customers on Amazon as well as Azure, or Azure as well as Google, then I get into security issues, because I have to be consistent across all three. I get into talent, because I need to have people that focus on Amazon as well as Azure, as well as Google, which means I need so much more workforce, so many more skills that I need to build, right? That's becoming the second issue. The third one is around data costs. Can I make these clouds talk to each other? Then you get into the ingress/egress costs, and that creates some complexity.
So bringing all of this together and managing it is really becoming more complex for our customers. And obviously, as a part of this, we will talk about some of the ideas that we can bring in for managing such complex environments, but this is what we are seeing in terms of why the complexity has become a lot more in the last few years. >> Right. A lot of complexity in the last few years. Manoj, let's bring you into the conversation now. Before we dig into your cloud environment, give the audience a little bit of an overview of GTCR. What kind of company are you? What do you guys do? >> Definitely, Lisa. GTCR is a Chicago-based private equity firm. We've been in the market for more than 40 years, and what we do is we invest in companies across different sectors, and then we manage the company, drive it to increase the value, and then, over a period of time, sell it to future buyers. So in a nutshell, we've got a large portfolio of companies that we need to manage and make sure that they perform to expectations. And my role within GTCR is from a technology viewpoint, where I work with all the companies and their technology leadership to make sure that we are getting the best out of technology, and technology today drives everything. So how can technology be a good complement to the business itself? My role is to play that intermediary role, to make sure that there is synergy between the investment thesis and the technology levers that we can pull, and also to work with partners like Hitachi to make sure that it is done in an optimal manner. >> I like that you said, you know, technology needs to really complement the business and vice versa. So Manoj, let's get into the cloud operations environment at GTCR. Talk to me about what the experience has been the last couple of years. Give us an idea of some of the challenges that you were facing with existing cloud ops and the solution that you're using from Hitachi Vantara. >> Absolutely.
In fact, Prem phrased it really well: one of the key things that we're facing is workload management. There are so many choices, so many complexities. We have these companies buying more companies, there is organic growth that is happening. So the variables that we have to deal with are very high, and in such a scenario, making sure that the workload management of each of the companies is done in an optimal manner is becoming an increasing concern. So that's one area where any help we can get, anything we can try to make sure it is done better, becomes a huge value add. A second aspect is financial transparency. We need to know where the money is going, where the money is coming in from, what is the scale, especially in the cloud environment. We are talking about an auto-scale ecosystem. Having that financial transparency and the metrics associated with it, these become very, very critical to ensure that we have a successful presence in the multicloud environment. >> Talk a little bit about the solution that you're using with Hitachi and the challenges that it has eradicated. >> Yeah, so at the end of the day, right, we need to focus on our core competence. We have got a very strong technology leadership team. We've got a very strong presence in the respective domains of each of the portfolio companies. But where Hitachi comes in, and HARC comes in as a solution, is that they allow us to excel in focusing on our core business, and then make sure that we are able to take care of workload management or financial transparency. All of that is taken off the table for us, and Hitachi manages it for us, right? So it's such a perfectly complementary relationship, where they act as two partners, and HARC is a solution that is extremely useful in driving that.
And I'm anticipating that it'll become more important with time, as the complexity of cloud and cloud-associated workloads is only becoming more challenging to manage, not less. >> Right. That's the thing: that complexity is there, and it's also increasing. Prem, you talked about the complexities that exist today with respect to cloud operations and the things that have happened over the last couple of years. What are some of your tips, Prem, for the audience, like the top two or three things that you would say on cloud operations that people need to understand, so that they can manage that complexity and allow their business to be driven and complemented by technology? >> Yeah, a great question again, Lisa, right? And I think Manoj alluded to a few of these things as well. The first one is: in the new world of the cloud, think of migration, modernization, and management as a single continuum to the cloud. There is no lift and shift where somebody else separately manages it afterwards, right? If you do not lift and shift the right applications the right way onto the cloud, you are going to deal with the complexity of managing them, and you'll end up spending more money, time, and effort in managing them. So that's number one: migration, modernization, and management of cloud workloads is a single continuum, not three separate activities. And the second is cost. Cost traditionally has been an afterthought, right? People move the workload to the cloud, and, again, I'll refer back to what Manoj said: once we move it to the cloud and put all this fancy engineering capability around self-provisioning, every developer can go and ask for what he or she wants and get an environment immediately spun up, and so on and so forth. Suddenly the CIO wakes up to a bill that is significantly larger than what he or she expected, right?
And this has become a bit common nowadays, right? The challenge is that we think of cost in the cloud as an afterthought. But consider this example: in the previous world, you buy hardware, you put it in your data center, and you have already amortized the cost as CapEx. So you can write an application, throw it onto the infrastructure, and the application continues to use the infrastructure until you hit a ceiling; you don't care about the money you spent. But if I write a line of code that is inefficient today and I deploy it on the cloud, from minute one I am paying for the inefficiency. So if I realize it after six months, I've already spent the money. So financial discipline, especially when managing the cloud, is no more an afterthought. It is something that you have to include in your engineering practice as much as any other DevOps practice, right? Those are my top two tips, Lisa, from my standpoint: think about cloud workloads this way. And the last one, again, and you will hear me saying this again and again: get into the mindset of everything is code. You don't have a touch-and-feel infrastructure anymore, so you don't really need to have feet on the ground to go manage that infrastructure. It's codified, so your code should be managing it; but think of how that happens, right? That's where we are going as an evolution. >> Everything is code. That's great advice, great tips for the audience there. Manoj, I'll bring you back into the conversation. You know, we can talk about skills gaps in many different facets of technology; the SRE role is a relatively new skillset, and we're hearing a lot about it. SRE-led DevSecOps is probably even more so a new skillset.
If I'm an IT leader or an application leader, how do I ensure that I have the right skillset within my organization to be able to manage my cloud operations and to dial down that complexity, so that I can really operate successfully as a business? >> Yeah. And so, unfortunately, there is no perfect answer, right? It's such a scarce skillset that, any day, if I go and talk to any of the portfolio company CTOs and say, hey, here's a great SRE team member, they'll be more than willing to fight with each other to get the person in, right? It's just that scarce of a skillset. So, a few things we need to look at. One is: how can I build it within, right? Nobody gets born as an SRE; you make a person an SRE. So how do you inculcate that culture? Like Prem said earlier, everything is software. So how do we make sure that everybody inculcates that as part of their operating philosophy? Be they part of the operations team or the development team or the testing team, they need to understand that that is a common guideline and a common objective that we are driving towards. So that skillset and the associated training need to be driven from within the organization, and that, in my mind, is the fastest way to make sure that the role gets propagated across the organization. That is one. The second thing is: rely on the right partners. It's not going to be possible for us to get all of these roles built in-house. So instead, prioritize what roles need to be done from within the organization and what roles we can rely on our partners to drive for us. So that becomes an important consideration for us to look at as well. >> Absolutely.
That partnership angle is incredibly important from the beginning, really kind of weaving these companies together on this journey to redefine cloud operations and build, as we talked about at the beginning of the conversation, a cloud center of excellence that allows the organization to be competitive, successful, and really deliver what the end user is expecting. I want to ask... >> Sorry, Lisa. >> Go ahead. >> May I add something to it, I think? >> Sure. >> Yeah. One of the common things that I tell customers when we talk about SRE, and to Manoj's point, is: don't think of SRE as a skillset, which is the common way the industry tries to solve the problem today. SRE is a mindset, right? Everybody in... >> Well said, yeah. >> So everybody in a company should think of him or her as a site reliability engineer. And everybody has a role in it, right? Even if you take the new process layout from SRE, there are individuals that are responsible, to whom we can go directly when there is a problem, as opposed to going through the traditional ways of: I talk to L1, and L1 escalates; they go to L2 and then L3. So we are trying to move away from an issue escalation model to what we call an issue routing or incident routing model, right? Move away from incident escalation to an incident routing model, so you get routed to the right folks. So again, to sum it up, SRE should not be solved as a skillset, because there are not enough people in the market to solve it that way. If you start solving it as a mindset, I think companies can get a handle on it. >> I love that. I've actually never heard that before, but it makes perfect sense to think about SRE as a mindset rather than a skillset; that will allow organizations to be much more successful. Prem, I wanted to get your thoughts: as enterprises are innovating, they're moving more products and services to the as-a-service model.
Talk about how the dev teams and the ops teams are working together to build and run reliable, cost-efficient services. Are they working better together? >> Again, a very polarizing question, because some customers are getting it right and many customers aren't; there is still a big wall between development and operations, right? Even when you think about DevOps as a terminology, the fundamental principle was to make sure dev and ops work together. But what many companies have achieved today, honestly, is automating the operations for development. For example, as a developer, I can check in code and my code will appear in production without any friction, right? There is automated testing, automated provisioning, and it gets promoted to production. But after production, it goes back into the 20-year-old model of operating the code, right? So there is more work that needs to be done for dev and ops to come closer and work together. And one of the ways that we think this is achievable is not by doing radical org changes, but more by focusing on a product-oriented single backlog approach across development and operations. Again, there is change management involved, but I think that's a way to start embracing the culture of dev and ops coming together much better. Now, again, there are SRE principles as we double-click and understand this more, and Google has done a very good job playing them out for the world. As you think about SRE principles, there are ways and means in that process of how to think about a single backlog. And in HARC, the Hitachi Application Reliability Centers, we've really got a way to look at prioritizing the backlog. What I mean by that is: dev teams try to work on backlog items that come from product managers as features. The SRE and operations teams try to put features into the same backlog for improving stability, availability, and financial optimization of your code.
And there are ways, when you look at your SLOs and error budgets, to really coach the product teams to prioritize your backlog based on what's important for you. So if you understand you're spending more money, then you reduce the product features going in and implement the financial optimization that came from your operations team, right? So you now have the ability to throttle these parameters, and that's where SRE becomes a mindset and a principle as opposed to a skillset, because this is not an individual telling you what to do. This is the company that is embarking on how to prioritize the backlog beyond just user features. >> Right. Great point. Last question for both of you is the same: talk about the takeaway things that you want me to remember. If I am an IT leader at an organization and I am planning on redefining CloudOps for my company, Manoj, we'll start with you and then, Prem, go to you: what are the top two things that you want me to walk away with, understanding how to do that successfully? >> Yeah, so I'll go back to basics. The two things I would say need to be taken care of are: one is customer experience. For all the things that I do, at the end of the day, is it improving the customer experience or not? That's the first metric. The second thing is, for anything that I do, is there an ROI from doing that incremental step or not? Otherwise we might get lost in the technology itself, the new tech, et cetera. But at the end of the day, if the customers are not happy, if there is no ROI, everything else, you just can't do much on top of that. >> Now it's all about the customer experience, right? That's so true. Prem, what are your thoughts, the top things that I need to be taking away if I am a leader planning to redefine cloud at my company? >> Absolutely. And I think, from a company standpoint, Manoj summarized it extremely well, right?
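As an aside, the SLO and error budget mechanism described above can be made concrete with a small sketch. The schema below is hypothetical, invented for illustration rather than taken from any specific tool, but the arithmetic is standard: a 99.9% objective over a 30-day window leaves an error budget of 0.1%, roughly 43 minutes of allowed failure.

```yaml
# Hypothetical SLO definition (illustrative schema, not a real tool's API).
slo:
  name: checkout-availability
  service: checkout
  objective: 99.9        # percent of requests that must succeed
  window: 30d            # error budget: 0.1% of 30 days, ~43 minutes
errorBudgetPolicy:
  # As the budget burns down, reliability work moves up the shared backlog,
  # ahead of new features: the single-backlog idea in practice.
  - budgetConsumed: 50%
    action: prioritize-reliability-backlog
  - budgetConsumed: 100%
    action: freeze-feature-releases
```

The policy section is the part that "coaches" the product team: it ties an observable number (budget consumed) to a backlog decision, so prioritization is a company rule rather than an individual's judgment.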
There is this ROI and there is this customer experience. From my end, again, I'll suggest two more things as a takeaway, right? One, cloud cost is not an afterthought. It's essential for us to think about it upfront. Number two, do not delink migration, modernization, and operations. They are one stream. If you migrate a wrong workload onto the cloud, you're going to be stuck with it for a long time. And an example of a wrong workload, Lisa, for everybody that is listening to this, is: if my cost-per-transaction profile doesn't change, and I am not improving my revenue per transaction for a piece of code that's going to run in production, it's better off running in a data center, where my cost is CapEx and amortized and I have control over when I want to upgrade, as opposed to putting it on a cloud and continuing to pay, unless it gives me more dividends towards improvement. But that's a simple example of how to think about what I should migrate and how it will cause pain when I want to manage it in the longer run. That's something that I'll leave the audience and you with as a takeaway. >> Excellent. Guys, thank you so much for talking to me today about what Hitachi Vantara and GTCR are doing together and how you've really dialed down those complexities, enabling the business and the technology folks to really live harmoniously. We appreciate your insights and your perspectives on building a cloud center of excellence. Thank you both for joining me. >> Thank you. >> For my guests, I'm Lisa Martin. You're watching this event, Building Your Cloud Center of Excellence with Hitachi Vantara. Thanks for watching. (Upbeat music playing)

Published Date : Mar 2 2023

SUMMARY :

Prem Balasubramanian of Hitachi Vantara and Manoj Narayanan of GTCR join host Lisa Martin to discuss redefining cloud operations. Cloud complexity has grown as workloads move from lift-and-shift infrastructure to everything-as-code and multicloud environments, so migration, modernization, and management should be treated as a single continuum, with cost discipline built in from the start rather than as an afterthought. GTCR relies on Hitachi's HARC to handle workload management and financial transparency across its portfolio companies, and both guests argue that SRE is best treated as an organization-wide mindset, supported by SLOs, error budgets, and a single product-oriented backlog, rather than as a scarce skillset. The key takeaways: focus on customer experience and ROI, address cloud cost up front, and do not delink migration, modernization, and operations.


Prem Balasubramanian and Manoj Narayanan | Hitachi Vantara: Build Your Cloud Center of Excellence


 

(Upbeat music playing) >> Hey everyone, thanks for joining us today. Welcome to this event of Building your Cloud Center of Excellence with Hitachi Vantara. I'm your host, Lisa Martin. I've got a couple of guests here with me next to talk about redefining cloud operations and application modernization for customers. Please welcome Prem Balasubramanian the SVP and CTO at Hitachi Vantara, and Manoj Narayanan is here as well, the Managing Director of Technology at GTCR. Guys, thank you so much for joining me today. Excited to have this conversation about redefining CloudOps with you. >> Pleasure to be here. >> Pleasure to be here >> Prem, let's go ahead and start with you. You have done well over a thousand cloud engagements in your career. I'd love to get your point of view on how the complexity around cloud operations and management has evolved in the last, say, three to four years. >> It's a great question, Lisa before we understand the complexity around the management itself, the cloud has evolved over the last decade significantly from being a backend infrastructure or infrastructure as a service for many companies to become the business for many companies. If you think about a lot of these cloud bond companies cloud is where their entire workload and their business wants. With that, as a background for this conversation if you think about the cloud operations, there was a lot of there was a lot of lift and shift happening in the market where people lifted their workloads or applications and moved them onto the cloud where they treated cloud significantly as an infrastructure. And the way they started to manage it was again, the same format they were managing there on-prem infrastructure and they call it I&O, Infrastructure and Operations. That's kind of the way traditionally cloud is managed. In the last few years, we are seeing a significant shift around thinking of cloud more as a workload rather than as just an infrastructure. 
And what I mean by workload is in the cloud, everything is now code. So you are codifying your infrastructure. Your application is already code and your data is also codified as data services. With now that context apply the way you think about managing the cloud has to significantly change and many companies are moving towards trying to change their models to look at this complex environment as opposed to treating it like a simple infrastructure that is sitting somewhere else. So that's one of the biggest changes and shifts that are causing a lot of complexity and headache for actually a lot of customers for managing environments. The second critical aspect is even that, even exasperates the situation is multicloud environments. Now, there are companies that have got it right with things about right cloud for the right workload. So there are companies that I reach out and I talk with. They've got their office applications and emails and stuff running on Microsoft 365 which can be on the Azure cloud whereas they're running their engineering applications the ones that they build and leverage for their end customers on Amazon. And to some extent they've got it right but still they have a multiple cloud that they have to go after and maintain. This becomes complex when you have two clouds for the same type of workload. When I have to host applications for my end customers on Amazon as well as Azure, Azure as well as Google then, I get into security issues that I have to be consistent across all three. I get into talent because I need to have people that focus on Amazon as well as Azure, as well as Google which means I need so much more workforce, I need so many so much more skills that I need to build, right? That's becoming the second issue. The third one is around data costs. Can I make these clouds talk to each other? Then you get into the ingress egress cost and that creates some complexity. 
So bringing all of this together and managing it is really becoming more complex for our customers. And obviously, as part of this, we will talk about some of the ideas that we can bring for managing such complex environments, but this is what we are seeing in terms of why the complexity has grown in the last few years. >> Right. A lot of complexity in the last few years. Manoj, let's bring you into the conversation now. Before we dig into your cloud environment, give the audience a little bit of an overview of GTCR. What kind of company are you? What do you guys do? >> Definitely, Lisa. GTCR is a Chicago-based private equity firm. We've been in the market for more than 40 years, and what we do is invest in companies across different sectors, manage each company, drive it to increase its value, and then, over a period of time, sell it to future buyers. So in a nutshell, we've got a large portfolio of companies that we need to manage and make sure they perform to expectations. My role within GTCR is from a technology viewpoint, so I work with all the companies and their technology leadership to make sure that we are getting the best out of technology, and technology today drives everything. So how can technology be a good complement to the business itself? My role is to play that intermediary role, to make sure there is synergy between the investment thesis and the technology levers that we can pull, and also to work with partners like Hitachi to make sure it is done in an optimal manner. >> I like that you said technology needs to really complement the business and vice versa. So Manoj, let's get into the cloud operations environment at GTCR. Talk to me about what the experience has been the last couple of years. Give us an idea of some of the challenges that you were facing with existing cloud ops, and the solution that you're using from Hitachi Vantara. >> Absolutely.
In fact, Prem phrased it really well. One of the key things we're facing is workload management. There are so many choices, so many complexities. We have these companies buying more companies, and there is organic growth happening. So the variables that we have to deal with are very high, and in such a scenario, making sure that the workload management of each of the companies is done in an optimal manner is becoming an increasing concern. So that's one area where any help we can get, anything we can do to make sure it is done better, becomes a huge value add. A second aspect is financial transparency. We need to know where the money is going, where the money is coming in from, and what the scale is, especially in the cloud environment. We are talking about an auto-scale ecosystem. Having that financial transparency, and the metrics associated with it, becomes very, very critical to ensure that we have a successful presence in the multicloud environment. >> Talk a little bit about the solution that you're using with Hitachi, and the challenges that it has eradicated. >> Yeah. So at the end of the day, we need to focus on our core competence. We have a very strong technology leadership team, and we've got a very strong presence in the respective domains of each of the portfolio companies. But where Hitachi comes in, and HARC comes in as a solution, is that they allow us to excel by focusing on our core business, and then make sure that we are able to take care of workload management or financial transparency. All of that is taken off the table for us, and Hitachi manages it, right? So it's such a perfectly complementary relationship, where they act as true partners, and HARC is a solution that is extremely useful in driving that.
And I'm anticipating that it'll become more important with time, as the complexity of cloud and cloud-associated workloads only becomes more challenging to manage, not less. >> Right, that complexity is there and it's also increasing. Prem, you talked about the complexities that exist today with respect to cloud operations, the things that have happened over the last couple of years. What are some of your tips, Prem, for the audience: the top two or three things on cloud operations that people need to understand, so they can manage that complexity and allow their business to be driven and complemented by technology? >> A great question again, Lisa, and I think Manoj alluded to a few of these things as well. The first one is, in the new world of the cloud, think of migration, modernization and management as a single continuum to the cloud. There is no lift and shift where somebody else separately manages it, right? If you do not lift and shift the right applications the right way onto the cloud, you are going to deal with the complexity of managing them, and you'll end up spending more money, time and effort on management. So that's number one: migration, modernization and management of cloud workloads is a single continuum, not three separate activities. The second is cost. Cost has traditionally been an afterthought. People move the workload to the cloud, and, again, I'll refer back to what Manoj said: once we move it to the cloud and put all this fancy engineering capability around self-provisioning, every developer can go and ask for what he or she wants and get an environment immediately spun up, so on and so forth. Suddenly the CIO wakes up to a bill that is significantly larger than what he or she expected, right?
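The bill-shock dynamic described here comes down to simple arithmetic: self-provisioned capacity is billed from the moment it is spun up. A minimal sketch, where the hourly rate and instance counts are purely illustrative assumptions:

```python
# Sketch of the surprise-bill math: self-provisioned capacity left running is
# billed from minute one. The hourly rate here is an illustrative assumption.
HOURLY_RATE = 0.40  # hypothetical $/hour for one self-provisioned instance

def accrued_cost(instances, months):
    """Spend accumulated before anyone notices the over-provisioning."""
    hours = months * 30 * 24  # approximate a month as 30 days
    return instances * HOURLY_RATE * hours

# Three forgotten dev environments discovered after six months:
print(f"${accrued_cost(3, 6):,.0f}")
```

The point is that the money is already spent by the time the inefficiency is found, which is why cost has to move into the engineering practice itself.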
And this has become quite common nowadays, right? The challenge is that we treat cost in the cloud as an afterthought. But consider this example: in the previous world, you buy hardware, you put it in your data center, and you have already amortized the cost as CapEx. So you can write an application, throw it onto the infrastructure, and the application continues to use the infrastructure until you hit a ceiling; you don't think about the money you spent. But if I write a line of code that is inefficient today and deploy it on the cloud, from minute one I am paying for that inefficiency. So if I realize it after six months, I've already spent the money. Financial discipline in managing the cloud is no longer an afterthought. It is something you have to include in your engineering practice as much as any other DevOps practice, right? Those are my top two tips, Lisa, from my standpoint, on how to think about cloud workloads. And the last one, and you will hear me saying this again and again: get into the mindset of everything is code. You don't have touch-and-feel infrastructure anymore, so you don't really need feet on the ground to manage that infrastructure. It's codified, so your code should be managing it, but think about how that happens, right? That's where we are going as an evolution. >> Everything is code. That's great advice, great tips for the audience there. Manoj, I'll bring you back into the conversation. You know, we can talk about skills gaps in many different facets of technology. The SRE role is a relatively new skillset we're hearing a lot about, and SRE-led DevSecOps is probably even more so a new skillset.
If I'm an IT leader or an application leader, how do I ensure that I have the right skillset within my organization to manage my cloud operations and dial down that complexity, so that I can really operate successfully as a business? >> Yeah. So unfortunately there is no perfect answer. It's such a scarce skillset that, any day, if I go and talk to any of the portfolio company CTOs and say, hey, here's a great SRE team member, they'll be more than willing to fight each other to get the person in, right? It's just that scarce of a skillset. So a few things we need to look at. One is, how can I build it from within? Nobody is born an SRE; you make a person an SRE. So how do you inculcate that culture? Like Prem said earlier, everything is software. So how do we make sure that everybody adopts that as part of their operating philosophy? Whether they're part of the operations team, the development team or the testing team, they need to understand that this is a common guideline and common objective that we are driving towards. So that skillset, and the associated training, needs to be driven from within the organization, and that in my mind is the fastest way to make sure the role gets propagated across the organization. That is one. The second thing is to rely on the right partners. It's not going to be possible for us to build all of these roles in-house. So instead, prioritize which roles need to be filled from within the organization and which roles we can rely on our partners to drive for us. That becomes an important consideration for us to look at as well. >> Absolutely.
That partnership angle is incredibly important from the beginning, really weaving these companies together on this journey to redefine cloud operations and build, as we talked about at the beginning of the conversation, a cloud center of excellence that allows the organization to be competitive, successful, and really deliver what the end user is expecting. I want to ask... >> Sorry, Lisa. >> Go ahead. >> May I add something to it? >> Sure. >> Yeah. One of the common things I tell customers when we talk about SRE, and to Manoj's point, is: don't think of SRE as a skillset, which is the common way the industry tries to solve the problem today. SRE is a mindset, right? >> Well said, yeah. >> So everybody in a company should think of himself or herself as a site reliability engineer, and everybody has a role in it. Even if you take the new process layout from SRE, there are individuals responsible whom we can go to directly when there is a problem, as opposed to going through the traditional way of, I talk to L1, and L1 escalates to L2, and then L3. So we are trying to move away from an issue escalation model to what we call an issue routing, or incident routing, model. Move away from incident escalation to incident routing, so you get routed to the right folks. So again, to sum it up, SRE should not be solved as a skillset, because there are not enough people in the market to solve it that way. If you start solving it as a mindset, I think companies can get a handle on it. >> I love that. I've actually never heard that before, but it makes perfect sense to think about SRE as a mindset rather than a skillset; that will allow organizations to be much more successful. Prem, I wanted to get your thoughts: as enterprises are innovating, they're moving more products and services to the as-a-service model.
Talk about how the dev teams and the ops teams are working together to build and run reliable, cost-efficient services. Are they working better together? >> Again, a very polarizing question, because some customers are getting it right and many customers aren't. There is still a big wall between development and operations, right? Even when you think about DevOps as a terminology, the fundamental principle was to make sure dev and ops work together. But what many companies have achieved today, honestly, is automating the operations for development. For example, as a developer, I can check in code and my code will appear in production without any friction. There is automated testing, automated provisioning, and it gets promoted to production. But after production, it goes back into the 20-year-old model of operating the code, right? So there is more work that needs to be done for dev and ops to come closer and work together. And one of the ways we think this is achievable is not by doing radical org changes, but by focusing on a product-oriented, single-backlog approach across development and operations. There is change management involved, but I think that's a way to start embracing the culture of dev and ops coming together much better. Now, again, as we double-click on SRE principles and understand them more, and Google has done a very good job laying them out for the world, there are ways and means in that process to think about a single backlog. And in HARC, the Hitachi Application Reliability Centers, we've really got a way to prioritize the backlog. What I mean by that is, dev teams work on backlog items that come from product managers as features, while the SRE and operations teams put features into the same backlog for improving stability, availability and financial optimization of your code.
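The single-backlog idea, with SLOs and error budgets acting as the throttle between feature work and reliability work, can be pictured with a small sketch. The SLO, period, and backlog items below are hypothetical:

```python
# Minimal sketch of a single backlog shared by dev and ops, where an exhausted
# error budget pushes reliability items ahead of features. All numbers and
# backlog items are hypothetical.
def error_budget_minutes(slo, period_minutes=30 * 24 * 60):
    """Allowed downtime over the period for an availability SLO such as 0.999."""
    return (1 - slo) * period_minutes

def prioritize(backlog, downtime_minutes, slo=0.999):
    if downtime_minutes >= error_budget_minutes(slo):
        # Budget spent: reliability work jumps ahead of feature work.
        return sorted(backlog, key=lambda item: item["type"] != "reliability")
    return backlog

backlog = [{"name": "new checkout flow", "type": "feature"},
           {"name": "fix retry storms", "type": "reliability"}]
# 99.9% over 30 days allows ~43.2 minutes; 50 minutes of downtime exhausts it:
print([item["name"] for item in prioritize(backlog, downtime_minutes=50)])
```

The ordering decision comes from measured reliability data rather than from any individual, which is the "mindset, not skillset" point in code form.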
And there are ways, when you look at your SLOs and error budgets, to really coach the product teams to prioritize your backlog based on what's important for you. So if you understand you're spending more money, then you reduce the product features going in and implement the financial optimization that came from your operations team, right? You now have the ability to throttle these parameters, and that's where SRE becomes a mindset and a principle as opposed to a skillset, because this is not an individual telling you what to do. This is the company embarking on how to prioritize the backlog beyond just user features. >> Right. Great point. Last question for both of you, and it's the same one: the takeaways you want me to remember. If I'm an IT leader at an organization planning on redefining CloudOps for my company, Manoj, we'll start with you, and then Prem, over to you: what are the top two things you want me to walk away understanding, to do that successfully? >> Yeah, so I'll go back to basics. The two things I would say need to be taken care of: one is customer experience. For all the things that I do, at the end of the day, is it improving the customer experience or not? That's the first metric. The second thing is, for anything that I do, is there an ROI in that incremental step or not? Otherwise we might get lost in the technology, the shiny new tech, et cetera. But at the end of the day, if the customers are not happy, if there is no ROI, everything else... you just can't do much on top of that. >> It's all about the customer experience, right? That's so true. Prem, what are your thoughts, the top things that I need to take away if I'm a leader planning to redefine CloudOps at my company? >> Absolutely. And from a company standpoint, I think Manoj summarized it extremely well, right?
There is the ROI and there is the customer experience. From my end, I'll suggest two more things as takeaways. One, cloud cost is not an afterthought; it's essential to think about it upfront. Number two, do not delink migration, modernization and operations. They are one stream. If you migrate the wrong workload onto the cloud, you're going to be stuck with it for a long time. And an example of a wrong workload, Lisa, for everybody listening: if my cost-per-transaction profile doesn't change, and I am not improving my revenue per transaction for a piece of code that's going to run in production, it's better off running in a data center, where my cost is CapEx and amortized and I have control over when I want to upgrade, as opposed to putting it on a cloud and continuing to pay, unless it gives me more dividends toward improvement. That's a simple example of how to think about what I should migrate and how much it will cost to manage in the longer run. But that's something I'll leave the audience, and you, with as a takeaway. >> Excellent. Guys, thank you so much for talking with me today about what Hitachi Vantara and GTCR are doing together, how you've really dialed down those complexities, enabling the business and the technology folks to really live harmoniously. We appreciate your insights and your perspectives on building a cloud center of excellence. Thank you both for joining me. >> Thank you. >> For my guests, I'm Lisa Martin. You're watching this event, Building Your Cloud Center of Excellence with Hitachi Vantara. Thanks for watching. (Upbeat music playing)

Published Date : Feb 27 2023

Satish Iyer, Dell Technologies | SuperComputing 22


 

>> We're back at Supercomputing 22 in Dallas, winding down the final day here. A big show floor behind me, lots of excitement out there, wouldn't you say, Dave? >> Oh, it's crazy. I mean, any time you have NASA presentations going on and steampunk iterations of cooling systems, you know, it's >> The greatest. I've been to hundreds of trade shows, and I don't think I've ever seen NASA exhibiting at one like they are here. Dave Nicholson, my co-host. I'm Paul Gillin. With us is Satish Iyer. He is the vice president of emerging services at Dell Technologies. Satish, thanks for joining us on theCUBE. >> Thank you, Paul. >> What are emerging services? >> Emerging services are actually the growth areas for Dell. It's telecom, it's cloud, it's edge. We especially focus on all the growth vectors for the company. >> And one of the key areas that comes under your jurisdiction is called Apex. Now, I'm sure there are people who don't know what Apex is. Can you just give us a quick definition? >> Absolutely. Apex is Dell's foray into cloud, and I manage the Apex services business. This is our way of bringing the cloud experience to our customers, on-prem and in colo. >> But it's not a cloud. I mean, you don't have a Dell cloud, right? It's infrastructure as >> A service. It's infrastructure and platform and solutions as a service. We don't have our own public cloud, but, you know, this is a multi-cloud world, so customers want to consume where they want to consume. This is Dell's way of supporting a multi-cloud strategy for our customers. >> You mentioned something just ahead of us going on air, a great way to describe Apex, to contrast Apex with CapEx: there's no C, there's no cash up front necessary. I thought that was great. Explain that a little more.
Well, >> I mean, one of the main things about cloud is the consumption model, right? Customers would like to pay for what they consume; they would like to pay on a subscription; they would like not to prepay CapEx ahead of time. They want that economic option. So I think that's one of the key tenets for anything in cloud, and it's important for us to recognize that. Apex is basically a way by which customers pay for what they consume, right? So that's absolutely a key tenet of how we wanted to design Apex. >> And among those services are high performance computing services. Now, I was not familiar with that as an offering in the Apex line. What constitutes a high performance computing Apex service? >> Yeah, I mean, this conference is great. Like you said, there are so many HPC and high performance computing folks here. But fundamentally, if you look at the high performance computing ecosystem, it is quite complex, right? And when you call something an Apex HPC offer, it brings a lot of the cloud economics and cloud experience to the HPC offer. So fundamentally, it's about the ability for customers to pay for what they consume. It's where Dell takes on a lot of the day-to-day management of the infrastructure, so that customers don't need to do the grunt work of managing it and can really focus on the actual workloads they run on the HPC ecosystem. So it is a high performance computing offer, but instead of them buying the infrastructure and running all of that by themselves, we make it super easy for customers to consume and manage it across proven designs, which Dell always implements across these verticals. >> So what makes it a high performance computing offering, as opposed to a rack of PowerEdge servers? What do you add in to make it
Ah, that's a great question. So, I mean, you know, so this is a platform, right? So we are not just selling infrastructure by the drink. So we actually are fundamentally, it's based on, you know, we, we, we launch two validated designs, one for life science sales, one for manufacturing. So we actually know how these PPO work together, how they actually are validated design tested solution. And we also, it's a platform. So we actually integrate the softwares on the top. So it's just not the infrastructure. So we actually integrate a cluster manager, we integrate a job scheduler, we integrate a contained orchestration layer. So a lot of these things, customers have to do it by themself, right? If they're buy the infrastructure. So by basically we are actually giving a platform or an ecosystem for our customers to run their workloads. So make it easy for them to actually consume those. >>That's Now is this, is this available on premises for customer? >>Yeah, so we, we, we make it available customers both ways. So we make it available OnPrem for customers who want to, you know, kind of, they want to take that, take that economics. We also make it available in a colo environment if the customers want to actually, you know, extend colo as that OnPrem environment. So we do both. >>What are, what are the requirements for a customer before you roll that equipment in? How do they sort of have to set the groundwork for, >>For Well, I think, you know, fundamentally it starts off with what the actual use case is, right? So, so if you really look at, you know, the two validated designs we talked about, you know, one for, you know, healthcare life sciences, and one other one for manufacturing, they do have fundamentally different requirements in terms of what you need from those infrastructure systems. 
So the customers initially figure out, okay, do they require something with a lot of memory-intensive loads, or something with a lot of compute power? It all depends on what the workloads require. And then we do the sizing. We have small, medium and large; we have multiple infrastructure options and CPU core options. Sometimes the customer would also say, you know what, along with the regular CPUs, I also want some GPU power on top of that. So those are determinations a customer typically makes as part of the ecosystem, right? And those are things they would talk to us about, to say, okay, what is my best option for the kind of workloads I want to run? Then they can make a determination in terms of how they would actually go. >> So this is probably a particularly interesting time to be looking at something like HPC via Apex, with this season of Rolling Thunder from the various partners that you have, you know? Yep. We're all expecting that Intel is going to be rolling out new CPU sets. You have your 16th generation of PowerEdge servers coming out, PCIe Gen 5, and all of the components from partners like Nvidia and Broadcom, et cetera, plugging into them. Yep. What does that look like from your perch, in terms of talking to customers who maybe are doing things traditionally and are likely to be not on generation-15 servers, probably more like 14? You're offering a pretty huge uplift. Yep. What do those conversations look
I mean, of course Dell, you know, we, we, we don't bring any solutions to the market without really working with all of our partners, whether that's at the infrastructure level, like you talked about, you know, Intel, amd, Broadcom, right? All the chip vendors, all the way to software layer, right? So we have cluster managers, we have communities orchestrators. So we usually what we do is we bring the best in class, whether it's a software player or a hardware player, right? And we bring it together as a solution. So we do give the customers a choice, and the customers always want to pick what you they know actually is awesome, right? So they that, that we actually do that. And, you know, and one of the main aspects of, especially when you talk about these things, bringing it as a service, right? >>We take a lot of guesswork away from our customer, right? You know, one of the good example of HPC is capacity, right? So customers, these are very, you know, I would say very intensive systems. Very complex systems, right? So customers would like to buy certain amount of capacity, they would like to grow and, you know, come back, right? So give, giving them the flexibility to actually consume more if they want, giving them the buffer and coming down. All of those things are very important as we actually design these things, right? And that takes some, you know, customers are given a choice, but it actually, they don't need to worry about, oh, you know, what happens if I actually have a spike, right? There's already buffer capacity built in. So those are awesome things. When we talk about things as a service, >>When customers are doing their ROI analysis, buying CapEx on-prem versus, versus using Apex, is there a point, is there a crossover point typically at which it's probably a better deal for them to, to go OnPrem? >>Yeah, I mean, it it like specifically talking about hpc, right? 
A lot of customers consume high performance compute in the public cloud, and that's not going to go away, right? But there are certain reasons why they would look at on-prem, or, for example, a colo environment. One of the main reasons has purely to do with cost. These are pretty expensive systems; there is a lot of ingress and egress, there is a lot of data going back and forth, right? In the public cloud, it costs money to put data in or pull data back. And the second one is data residency and security requirements. A lot of this is probably proprietary information. We talked about life sciences: there's a lot of research, right? Manufacturing: a lot of it is just-in-time decision making. You are on a factory floor, you've got to be able to decide now; there is a latency requirement. So I think a lot of things play into this outside of just cost: data residency requirements and ingress/egress are big things. And when you're talking about massive amounts of data you want to put in and pull back, they would like to keep it close, keep it local, and, you know, get a price
There are, you know, customers also consume on-prem, the customers also consuming Kohler. And we also have like Dell's amazing piece of software like storage software. You know, we make some of these things available for customers to consume a software IP on their public cloud, right? So, you know, so this is our multi-cloud strategy. So we announced a project in Alpine, in Delta fold. So you know, if you look at those, basically customers are saying, I love your Dell IP on this, on this product, on the storage, can you make it available through, in this public environment, whether, you know, it's any of the hyper skill players. So if we do all of that, right? So I think it's, it shows that, you know, it's not always tied to an infrastructure, right? Customers want to consume the best thumb and if we need to be consumed in hyperscale, we can make it available. >>Do you support containers? >>Yeah, we do support containers on hpc. We have, we have two container orchestrators we have to support. We, we, we have aner similarity, we also have a container options to customers. Both options. >>What kind of customers are you signing up for the, for the HPC offerings? Are they university research centers or is it tend to be smaller >>Companies? It, it's, it's, you know, the last three days, this conference has been great. We probably had like, you know, many, many customers talking to us. But HC somewhere in the range of 40, 50 customers, I would probably say lot of interest from educational institutions, universities research, to your point, a lot of interest from manufacturing, factory floor automation. A lot of customers want to do dynamic simulations on factory floor. That is also quite a bit of interest from life sciences pharmacies because you know, like I said, we have two designs, one on life sciences, one on manufacturing, both with different dynamics on the infrastructure. 
So yeah, quite a bit of interest, definitely from academics, life sciences, and manufacturing. We also have a lot of financials, big banks who want to simulate a lot of brokerage and financial data, because we announced some really optimized hardware at Dell especially for financial services. So there's quite a bit of interest from financial services as well. >> That's great. We often think of Dell as the organization that eventually democratizes all things in IT. And in that context, you know, at Supercomputing 22, HPC is like the little sibling trailing behind the supercomputing trend, but we've definitely seen this move out of purely academia into the business world, and Dell is clearly a leader in that space. How has Apex overall been doing since you rolled out that strategy? It's been a couple of years now, hasn't it? >> Yeah, it's been less than two years. >> How are mainstream Dell customers embracing Apex versus the traditional, maybe 18-month-to-three-year, CapEx upgrade cycle? >> I mean, look, I think there is absolutely strong momentum for Apex. Like Paul pointed out earlier, we started by making the infrastructure and the platforms available for customers to consume as a service. We have options where Dell can fully manage everything end to end and take a lot of the pain points away, because, as we talked about, that means managing a cloud-scale environment for the customer. We also have options where a customer says, you know what, I actually have a pretty sophisticated IT organization; I want Dell to manage the infrastructure, but only up to this layer, up to the guest operating system, and I'll take care of the rest.
So we are seeing customers come to us with various requirements, saying either, I can do everything from here up, but you take all of this pain away from me, or, you do everything for me. It all depends on the customer. So we do have wide interest. Our products and the portfolio set in Apex are expanding, and we are also learning. We're getting a lot of feedback from customers on what they would like to see in some of these offers, like the example we just talked about of making some of the software IP available in a public cloud, where they look at Dell as a software player. That's absolutely critical too. So I think we are giving customers a lot of choices; the choice factor, like you said, we are democratizing, expanding the customer's choices. >> We're almost out of time, but I do want to be sure we get to Dell validated designs, which you've mentioned a couple of times. What's the purpose of these designs, and how specific are they? >> Most of these validated designs start, again, from looking at these industries and understanding exactly how customers use HPC. We have a huge embedded base of customers utilizing HPC across our ecosystem at Dell, a lot of them CapEx customers, and we have an active customer profile. So these validated designs take into account a lot of customer feedback and a lot of partner feedback on how they utilize this. And when you build solutions that are end to end and integrated, you need to start anchoring on something, and a lot of these workloads have different characteristics. So these validated designs give customers a very good jumping-off point. That's the way I look at it.
So a lot of them don't come to the table with a blank sheet of paper; they say, these are the characteristics of what I want, and this is a great point for me to start from. And plus, it's the power of validation, really: we test, validate, and integrate, so they know it works. All of those are hypercritical when you talk to customers. >> And you mentioned healthcare, you mentioned manufacturing. Other validated designs in the works? >> We just announced a validated design for financial services as well, I think a couple of days ago at the event. So yes, we are expanding all those Dell validated designs so that we can give our customers a choice. >> We're out of time. Satish Iyer, thank you so much for joining us. You're at the center of the move to subscription, to everything as a service, where everything is on a subscription basis; you really are on the leading edge of where your industry is going. Thanks for joining us. >> Thank you, Paul. Thank you, Dave. >> Paul Gillin with Dave Nicholson here from Supercomputing 22 in Dallas, wrapping up the show. Stay with us; there's more to come this afternoon.

Published Date : Nov 17 2022



Gunnar Hellekson & Adnan Ijaz | AWS re:Invent 2022


 

>> Hello everyone. Welcome to theCUBE's coverage of AWS re:Invent 22. I'm John Furrier, host of theCUBE. We've got some great coverage here talking about software supply chain and sustainability in the cloud, and a great conversation: Gunnar Hellekson, Vice President and General Manager of the Red Hat Enterprise Linux business unit at Red Hat. Thanks for coming on. And Adnan Ijaz, Director of Product Management for commercial software services at AWS. Gentlemen, thanks for joining me today. >> Oh, it's a pleasure. >> You know, the hottest topic coming out of cloud-native developer communities is supply chain and software sustainability. This is a huge issue. As open source continues to power ahead and fund and grow this next-generation modern development environment, supply chain sustainability is a huge discussion, because you've got to check things out: what's in the code? Open source is great, but now we've got to commercialize it. This is the topic. Gunnar, let's start with you. What are you seeing here, and what are some of the things you're seeing around the sustainability piece of it? Because with containers and Kubernetes, we're seeing that runtime really dominate this new abstraction layer, cloud scale. What are your thoughts? >> Yeah, so it's interesting. Red Hat's been doing this for 20 years, right? Making open source safe to consume in the enterprise. And there was a time when, in order to do that, you needed to have a long-term life cycle and you needed to be very good at remediating security vulnerabilities. And that was the bar you had to climb over.
Nowadays, with the number of vulnerabilities coming through, what people are most worried about is the provenance of the software: making sure that it has been vetted and is safe, and that the things you get from your vendor are more secure than things you've just downloaded off of GitHub, for example. And that's a place where Red Hat's very comfortable living, because we've been doing it for 20 years. I think there's another aspect to this supply chain question as well, especially with the pandemic. These supply chains have been jammed up; the actual physical supply chains have been jammed up. And the two of these issues actually come together, because as we've gone through the pandemic, we've had these digital transformation efforts, which are in large part people creating software in order to better manage their physical supply chain problems. And so, as part of that digital transformation, you have another supply chain problem, which is the software supply chain problem. These two things kind of merge as people try to improve the performance of transportation systems, logistics, et cetera. Ultimately, both supply chain problems boil down to a software problem. >> That is interesting. I want to follow up on that real quick, if you don't mind. Because if you think about the convergence of the software and physical worlds, IoT and also hybrid cloud kind of play into that at scale, and this opens up more surface area for attacks, especially when you're under a lot of pressure. You have a service area on the physical side and you have constraints there, and obviously the pandemic causes problems, but now you've got the software side.
How are you guys handling that? Can you share a little bit more of how you're looking at that at Red Hat? What's the customer challenge? Obviously the skills gap is one, but that's a convergence at the same time as more security problems. >> Yeah, that's right. If we just look at security vulnerabilities themselves, the volume of security vulnerabilities has gone up considerably as more people use the software. And as the software becomes more important to critical infrastructure, more eyeballs are on it, so we're uncovering more problems, which is okay; that's how the world works. So certainly the number of remediations required every year has gone up. But also, as I mentioned before, the customer expectations have changed. People want to be able to show their auditors and their regulators that, no, in fact, I can show the provenance of the software I'm using. I didn't just download something random off the internet; I actually have adults paying attention to how the software gets put together. And honestly, it's still very early days. As an industry, I think we're very good at identifying and remediating vulnerabilities in the aggregate. Things are less clear when we talk about the management of that supply chain: proving the provenance, and creating a resilient supply chain for software. We have lots of tools, but we don't really have lots of shared expectations. So it's going to be interesting over the next few years. I think more rules are going to come out; I see NIST has already published some of them.
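One concrete building block behind the provenance point above is verifying that an artifact you are about to install matches what the vendor actually published. A minimal sketch, assuming a SHA-256 digest published out-of-band (the file name and digest in the usage comment are hypothetical); real supply-chain tooling layers cryptographic signatures, such as GPG or Sigstore, and SBOMs on top of this:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, streaming to bound memory use."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: str, expected_digest: str) -> bool:
    """Reject the artifact unless its digest matches the published one."""
    return sha256_of(path) == expected_digest.lower()

# Hypothetical usage: the expected digest should come from the vendor's
# signed release metadata, not from the same server hosting the artifact.
# ok = verify_artifact("vendor-image.iso", "3a7bd3e2360a3d29eea436fcfb7e44c7...")
```

A plain checksum only proves integrity, not origin; that gap between "the bits match" and "I know who built this and how" is exactly the shared-expectations problem described above.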
And as these new rules come out, the whole industry is going to have to pull together and really rally around some of this shared understanding, so we can all have shared expectations and all speak the same language when we're talking about this problem. >> That's awesome. And Amazon Web Services is obviously the largest cloud platform out there. Through the pandemic, even post-pandemic, with some of these supply chain issues, whether physical or software, you're also an outlet for that. If someone can't buy hardware or something physical, they can always go to the cloud. You guys have great networking, compute, and whatnot, and you've got thousands of ISVs across the globe. How are you helping customers with this supply chain problem? Because whether it's, my networking gear is delayed, I'm going to go to the cloud and get help there, or it's knowing the workloads and what's going on inside them with respect to open source. You've got open source, which is kind of an external forcing function, you've got AWS, and you've got physical compute, storage, networking, et cetera. How are you helping customers with the supply chain challenge, which could be an opportunity? >> Yeah, thanks John. I think there are multiple layers to that. At the most basic level, we are helping customers by abstracting away all the data center constructs they would have to worry about if they were running their own data centers. They would have to figure out the networking gear you talk about, having the right compute, the right physical hardware. By moving to the cloud, they're delegating that problem to AWS and letting us manage it, making sure an instance is available for them whenever they want it.
And if they want to scale, the capacity is there for them to use. So that gives them space to work on the second part of the problem, which is building their own supply chain solutions. And we work with all kinds of customers here at AWS from all different industry segments: automotive, retail, manufacturing. And you see the complexity of the supply chain, with hundreds and thousands of moving pieces; it's very daunting. And then on the other hand, customers need better services, so you need to move fast and build agility into the supply chain itself. That is where Red Hat and AWS come together: we can enable customers to build their supply chain solutions on a platform like Red Hat Enterprise Linux (RHEL) or Red Hat OpenShift on AWS, which we call ROSA. And the benefit there is that you can actually use the services that are relevant for supply chain solutions, like Amazon Managed Blockchain and SageMaker. So you can build predictive analytics, you can improve forecasting, and you can make sure you have solutions that help you identify where you can cut costs. So those are some of the ways we are helping customers figure out how they want to deal with the supply chain challenges we're running into in today's world. >> Yeah, and you mentioned sustainability. Outside of software supply chain sustainability, as people move to the cloud, we've reported on SiliconANGLE here at theCUBE that it's better to have sustainability with the cloud, because then individual data centers aren't using all that energy. So there are all kinds of sustainability advantages, Gunnar, and this is kind of how your relationship with Amazon has expanded. You mentioned ROSA, which is Red Hat OpenShift on AWS.
This is interesting, because one of the biggest discussions is the skills gap, but we were also talking about the fact that humans are a huge part of the talent value. In other words, the humans still need to be involved, and having that relationship with managed services and Red Hat, this piece becomes one of those things that's not talked about much: the talent is increasing in value, the humans, and now you've got managed services on the cloud with scale and human interactions. Can you share how you guys are working together on this piece? Because this brings up the relationship of that operator or developer. >> Yeah. So I think about this in a few dimensions. First, it's difficult to find a customer who is not talking about automation at some level right now. And obviously you can automate the processes and the physical infrastructure that you already have using tools like Ansible. But combine that with the elasticity of a solution like AWS, so you combine the automation with elasticity and convert a lot of the capital expenses into operating expenses, and that's a great way to save labor. Instead of racking hard drives, you can have somebody do something a little more valuable. So, okay, that gives you a platform, and then what do you do with that platform?
That doesn't make a whole lot of sense unless you consider how complex it is to set up, if you have the, the use case here is like industrial workstations, right? So it's animators, people doing computational fluid dynamics, things like this. So these are industries that are extremely data heavy. They have workstations have very large hardware requirements, often with accelerated GPUs and things like this. That is an extremely expensive thing to install on premise anywhere. And if the pandemic taught us anything, it's, if you have a bunch of very expensive talent and they all have to work from a home, it is very difficult to go provide them with, you know, several tens of thousands of dollars worth of worth of worth of workstation equipment. >>And so combine the rail workstation with the AWS infrastructure and now all that workstation computational infrastructure is available on demand and on and available right next to the considerable amount of data that they're analyzing or animating or, or, or working on. So it's a really interesting, it's, it was actually, this is an idea that I was actually born with the pandemic. Yeah. And, and it's kind of a combination of everything that we're talking about, right? It's the supply chain challenges of the customer, It's the lack of lack of talent, making sure that people are being put their best and highest use. And it's also having this kind of elastic, I think, opex heavy infrastructure as opposed to a CapEx heavy infrastructure. >>That's a great example. I think that's illustrates to me what I love about cloud right now is that you can put stuff in, in the cloud and then flex what you need when you need it at in the cloud rather than either ingress or egress data. You, you just more, you get more versatility around the workload needs, whether it's more compute or more storage or other high level services. This is kind of where this NextGen cloud is going. 
This is where, where, where customers want to go once their workloads are up and running. How do you simplify all this and how do you guys look at this from a joint customer perspective? Because that example I think will be something that all companies will be working on, which is put it in the cloud and flex to the, whatever the workload needs and put it closer to the work compute. I wanna put it there. If I wanna leverage more storage and networking, Well, I'll do that too. It's not one thing. It's gotta flex around what's, how are you guys simplifying this? >>Yeah, I think so for, I'll, I'll just give my point of view and then I'm, I'm very curious to hear what a not has to say about it, but the, I think and think about it in a few dimensions, right? So there's, there is a, technically like any solution that aan a nun's team and my team wanna put together needs to be kind of technically coherent, right? The things need to work well together, but that's not the, that's not even most of the job. Most of the job is actually the ensuring and operational consistency and operational simplicity so that everything is the day-to-day operations of these things kind of work well together. And then also all the way to things like support and even acquisition, right? Making sure that all the contracts work together, right? It's a really in what, So when Aon and I think about places of working together, it's very rare that we're just looking at a technical collaboration. It's actually a holistic collaboration across support acquisition as well as all the engineering that we have to do. >>And on your, your view on how you're simplifying it with Red Hat for your joint customers making Collabo >>Yeah. Gun, Yeah. Gunner covered it. Well I think the, the benefit here is that Red Hat has been the leading Linux distribution provider. So they have a lot of experience. AWS has been the leading cloud provider. 
So we have both our own point of views, our own learning from our respective set of customers. So the way we try to simplify and bring these things together is working closely. In fact, I sometimes joke internally that if you see Ghana and my team talking to each other on a call, you cannot really tell who who belongs to which team. Because we're always figuring out, okay, how do we simplify discount experience? How do we simplify programs? How do we simplify go to market? How do we simplify the product pieces? So it's really bringing our, our learning and share our perspective to the table and then really figure out how do we actually help customers make progress. Rosa that we talked about is a great example of that, you know, you know, we, together we figured out, hey, there is a need for customers to have this capability in AWS and we went out and built it. So those are just some of the examples in how both teams are working together to simplify the experience, make it complete, make it more coherent. >>Great. That's awesome. That next question is really around how you help organizations with the sustainability piece, how to support them, simplifying it. But first, before we get into that, what is the core problem around this sustainability discussion we're talking about here, supply chain sustainability, What is the core challenge? Can you both share your thoughts on what that problem is and what the solution looks like and then we can get into advice? >>Yeah. Well from my point of view, it's, I think, you know, one of the lessons of the last three years is every organization is kind of taking a careful look at how resilient it is. Or ever I should say, every organization learned exactly how resilient it was, right? And that comes from both the, the physical challenges and the logistics challenges that everyone had. The talent challenges you mentioned earlier. 
And of course the software challenges, as everyone embarks on this digital transformation journey we've all been talking about. So I really frame it as resilience. And resilience, at bottom, is really about ensuring that you have options and that you have choices. The more choices and options you have, the more resilient you and your organization are going to be. So I know that's how I approach the market, and I'm pretty sure that's how Adnan approaches the market: ensuring that we provide as many options as possible to customers, so they can assemble the right pieces to create a solution that works for their particular set of challenges and their unique context. Adnan, does that sound about right to you? >> Yeah, I think you covered it well. I can speak to another aspect of sustainability, which is becoming increasingly top of mind for our customers: how do they build products, services, and solutions, whether it's supply chain or anything else, that are sustainable, for the long-term good of the planet? And that is where we have been very intentional and focused in how we design our data centers and how we build our cooling systems, so that they are energy efficient. We are on track to power all our operations with renewable energy by 2025, which is five years ahead of our initial commitment. And perhaps the most obvious example of all of this is our work with Arm processors and Graviton3, where we are building our own chip to make sure we are designing energy efficiency into the process. The Graviton3 Arm processor chips are about 60% more energy efficient compared to comparable instances.
So those are all things we are working on, making sure that whatever our customers build on our platform is sustainable long term. That's another dimension of how we are working that into our platform. >> That's awesome. This is a great conversation. The supply chain is on both sides, physical and software, and you're starting to see them come together in great conversations. And certainly, moving workloads to the cloud and running them more efficiently will help on the sustainability side, in my opinion; of course, you talked about that and we've covered it. But now you start getting into how to refactor, and this is a big conversation we've been having lately: as you not just lift and shift but re-platform and refactor, customers are seeing great advantages. So I have to ask you, how are you helping customers and organizations support sustainability and simplify a complex environment that has a lot of potential integrations? Obviously APIs help, of course, but that's the baseline. What's the advice you give customers? Because it can look complex, and it becomes complex, but there's an answer here. What are your thoughts? >> Yeah. When I get questions like this from customers, the first thing I guide them to is the notion we talked about earlier of consistency, and how important that is. One way to solve the problem is to create an entirely new operational model, an entirely new acquisition model, and an entirely new stack of technologies in order to be more sustainable. That is probably not in the cards for most folks. What they want to do is take their existing estate and introduce sustainability into the work they are already doing. They don't need to build another silo in order to create sustainability.
And so there have to be some common threads, some common platforms, across the existing estate and your more sustainable estate. And that's where things like Red Hat Enterprise Linux come in, which can provide not just a common technical substrate but a common operational substrate on which you can build these solutions. If you have a common platform on which you are building solutions, whether it's RHEL or OpenShift or any of our other platforms, that creates options for you underneath. So in some cases maybe you need to run things on premises and some things in the cloud, but you don't have to profoundly change how you work when you're moving from one place to another. >> And Adnan, what are your thoughts on the simplification? >> Yeah, I think that when you talk about re-platforming and refactoring, it is a daunting undertaking, especially in today's fast-paced world. But the good news is you don't have to do it by yourself; customers don't have to do it on their own. Together, AWS and Red Hat have a rich partner ecosystem; AWS has over a hundred thousand partners that can help you take that transformation journey. And within AWS, working with our partners like Red Hat, in my mind there are really three big pillars you have to have so that customers can successfully re-platform and refactor their applications to a modern cloud architecture. You need a rich set of services and tools that meet their different scenarios and use cases, because no one size fits all. You have to have the right programs, because sometimes customers need those incentives and that help in the first step. And last but not least, they need training.
So all of that, we try to cover as we work with our customers and our partners, and that is where, together, we try to help customers take that step, which is a challenging step to take. >> You know, it's great to talk to you guys, both leaders in your field. Obviously Red Hat has a well-storied history; I remember the days back when I was provisioning and loading operating systems onto hardware with CDs, if you remember, Gunnar. But now, with high-level services, if you look at this year's re:Invent, and this is my final question for the segment, which we'll get your reaction to: last year we talked about higher-level services. I sat down with Adam Selipsky, and we talked about that. If you look at what's happened this year, you're starting to see people talk about their environment as their cloud. So Amazon has the gift of the CapEx, all that investment, and people can operate on top of it; they're calling that environment their cloud. For the first time we're seeing this new dynamic where they have a cloud, but Amazon's the CapEx and they're operating it. So you're starting to see the operational visibility, Gunnar, around how to operate this environment. And it's not hybrid-this or hybrid-that; it's just cloud. This is kind of an inflection point. Do you guys agree with that, or do you have a reaction to that statement? Because I think this is kind of the next-gen, supercloud-like capability. We're building the cloud; it's now an environment. It's not about private cloud or this cloud; it's all cloud. What's your reaction? >> Yeah, I think it's very natural. I mean, we used words like hybrid cloud and multi-cloud, and I guess supercloud is what the kids are saying now; it's all describing the same phenomenon.
Which is being able to take advantage of lots of different infrastructure options while still having something that creates some commonality among them, so that you can manage them effectively, right? So that you can have uniform compliance across your estate, and so that you can make the best use of your talent across the estate. It's a very natural thing. >>They're calling it cloud; the estate is the cloud. >>Yeah. So fine: if it means we no longer have to argue about what's multi-cloud and what's hybrid cloud, I think that's great. Let's just call it cloud. >>And what's your reaction? Because this is kind of the next-gen benefit of higher-level services combined with amazing compute and resources at the infrastructure level. What's your view on that? >>Yeah, I think the construct of a unified environment makes sense for customers who have all these use cases that require it, like, for instance, if you are doing some edge computing and you're running AWS Outposts or, you know, Wavelength and these things. And it is fair for customers to think that, hey, this is one environment, with the same set of tooling, that works across all their different environments. That is why we work with partners like Red Hat, so that customers who are running Red Hat Enterprise Linux on premises and who are running in AWS get the same level of support and the same security features, all of that. So in that sense, it actually makes sense for us to build these capabilities in a way that customers don't have to worry about, okay, am I in the AWS data center, or am I running Outposts on premises? It is all one. They just use the same CLI commands and APIs and all of that.
So in that sense, it actually helps customers have that unification, and that consistency of experience helps their workforce be more productive, versus figuring out, okay, what do I do, which tool do I use, where? >>And you just nailed it. This is about supply chain sustainability, moving the workloads into a cloud environment. You mentioned Wavelength; this conversation is gonna continue. We haven't even talked about the edge yet. This is something that's gonna be all about operating these workloads at scale with the cloud services. So thanks for sharing that, and we'll pick up that edge piece later. But for re:Invent right now, this is really the key conversation: how to make the sustainable supply chain work in a complex environment, making it simpler. So thanks for sharing your insights here on theCube. >>Thanks. >>Thanks for having us. >>Okay, this is theCube's coverage of AWS re:Invent 22. I'm John Furrier, your host. Thanks for watching.

Published Date : Nov 3 2022



Bich Le, Platform9 Cloud Native at Scale


 

>>Welcome back, everyone, to this special presentation of Cloud Native at Scale, a theCube and Platform9 special presentation, going in and digging into the next-generation super cloud, infrastructure as code, and the future of application development. We're here with Bich Le, the chief architect and co-founder of Platform9. Great to see you, Cube alumni. We met at an OpenStack event about eight years ago, back when OpenStack was going strong. Great to see you, and congratulations on the success of Platform9. >>Thank you very much. >>Yeah. You guys have been at this for a while, and this is really the year we're seeing the crossover of Kubernetes because of what's happened with containers. Everyone has now realized it, and you've seen what Docker's doing with the new Docker, the open source Docker, now just a success of containerization. Right? And now the Kubernetes layer that we've been working on for years is bearing fruit. This is huge. >>Exactly, yes. >>And so as infrastructure as code comes in: we talked to Bhaskar about super cloud, and I talked with him about Arlon, which you guys just launched. Infrastructure as code is going to another level, and it's always been the DevOps ethos from day one: developers just code. Then you saw the rise of serverless, and you see now multi-cloud on the horizon. Connect the dots for us. What is the state of infrastructure as code today? >>So I'm glad you mentioned it. Everybody, or most people, know about infrastructure as code, but with Kubernetes, I think that project has evolved the concept even further. These days, it's infrastructure as configuration, right? Which is an evolution of infrastructure as code. So instead of telling the system how you want your infrastructure, by telling it, you know, do step A, B, C, and D...
Instead, with Kubernetes, you can describe your desired state declaratively, using things called manifests and resources. And then the system kind of magically figures it out and tries to converge the state towards the one that you specify. So I think it's an even better version of infrastructure as code. >>Yeah. And that really means developers just accessing resources. Okay, declare: give me some compute, stand me up some, turn the lights on, turn 'em off, turn 'em on. That's kind of where we see this going. And I like the configuration piece. Some people say composability. I mean, now with open source so popular, you don't have to write a lot of code; the code is being developed. And so it's integrations and configuration. These are areas where we're starting to see computer science principles around automation and machine learning assisting open source, because you've got a lot of code that's inheriting software supply chain issues. So infrastructure as code has to factor in these new dynamics. Can you share your opinion on these new dynamics? As open source grows, with the glue layers, the configurations, the integration, what are the core issues? >>I think one of the major core issues is that with all that power comes complexity, right? Despite their expressive power, systems like Kubernetes and declarative APIs let you express a lot of complicated and complex stacks, but you're dealing with hundreds, if not thousands, of these YAML files or resources. And so I think the emergence of systems and layers to help you manage that complexity is becoming a key challenge and opportunity in this space.
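The imperative-versus-declarative contrast described here can be sketched in a few lines of Python. This is a toy model with invented names (`converge`, the dict "manifest"); it is not the actual Kubernetes API, which does the equivalent work with controllers acting on API objects:

```python
# Toy model of declarative desired state. You hand the system what you want,
# not the steps to get there; it computes the changes needed to converge.

def converge(actual: dict, desired: dict) -> dict:
    """Reconcile: return a new actual state matching the desired state."""
    new_state = dict(actual)
    for key, value in desired.items():
        new_state[key] = value        # create missing fields, fix drifted ones
    for key in list(new_state):
        if key not in desired:
            del new_state[key]        # remove anything not declared
    return new_state

# The "manifest": what you want, not how to get there.
desired = {"replicas": 3, "image": "web:v2"}
actual = {"replicas": 1, "image": "web:v1", "debug_sidecar": True}

actual = converge(actual, desired)
print(actual)  # {'replicas': 3, 'image': 'web:v2'}
```

The key property is idempotence: applying the same desired state again is a no-op, which is what makes declarative configuration safer than a sequence of imperative steps whose ordering matters.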
That's, >>I wrote a LinkedIn post today, it was comments about, you know, hey, enterprise is the new breed, the trend of SaaS companies moving our consumer comp consumer-like thinking into the enterprise has been happening for a long time, but now more than ever, you're seeing it the old way used to be solve complexity with more complexity and then lock the customer in. Now with open source, it's speed, simplification and integration, right? These are the new dynamic power dynamics for developers. Yeah. So as companies are starting to now deploy and look at Kubernetes, what are the things that need to be in place? Because you have some, I won't say technical debt, but maybe some shortcuts, some scripts here that make it look like infrastructure is code. People have done some things to simulate or or make infrastructure as code happen. Yes. But to do it at scale Yes. Is harder. What's your take on this? What's your >>View? It's hard because there's a per proliferation of methods, tools, technologies. So for example, today it's very common for DevOps and platform engineering tools, I mean, sorry, teams to have to deploy a large number of Kubernetes clusters, but then apply the applications and configurations on top of those clusters. And they're using a wide range of tools to do this, right? For example, maybe Ansible or Terraform or bash scripts to bring up the infrastructure and then the clusters. And then they may use a different set of tools such as Argo CD or other tools to apply configurations and applications on top of the clusters. So you have this sprawl of tools. You, you also have this sprawl of configurations and files because the more objects you're dealing with, the more resources you have to manage. 
And there's a risk of drift, as people call it, where you think you have things under control, but people from various teams make changes here and there, and then before the end of the day systems break and you have no way of tracking them. So I think there's a real need to unify, simplify, and try to solve these problems using a smaller, more unified set of tools and methodologies. And that's something that we tried to do with this new project, Arlon. >>Yeah. So we're gonna get into Arlon in a second. I wanna get into the why of Arlon. You guys announced it at ArgoCon, which was put on here in Silicon Valley, where they had their own little community day over at their headquarters. But before we get there: Bhaskar, your CEO, came on and talked about super cloud at our inaugural event. What's your definition of super cloud? If you had to explain it to someone at a cocktail party, or to someone technical in the industry, how would you look at the super cloud trend that's emerging? It's become a thing. What would be your contribution to that definition or the narrative? >>Well, it's funny, because I actually heard the term for the first time today, speaking to you earlier today. But I think based on what you said, I already get some of the gist and the main concepts. It seems like super cloud, the way I interpret it, is this: clouds and infrastructure, programmable infrastructure, all of those things are becoming commodity in a way. And everyone's got their own flavor, but there's a real opportunity for people to solve real business problems by trying to abstract away all of those various implementations, and then building better abstractions, perhaps business- or application-specific ones, that help companies and businesses solve real business problems. >>Yeah, that's a great definition.
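The configuration drift Bich describes a moment earlier, where declared configuration and live cluster state quietly diverge, can be sketched as a simple fleet check. This is a toy model with invented names (`detect_drift` and the example clusters), not the API of Argo CD or any other tool:

```python
# Toy fleet drift check: compare the configuration declared in Git
# against each cluster's live state and report every divergence.

def detect_drift(declared: dict, live: dict) -> dict:
    """Return {field: (declared_value, live_value)} for every divergence."""
    drift = {}
    for key in declared.keys() | live.keys():
        if declared.get(key) != live.get(key):
            drift[key] = (declared.get(key), live.get(key))
    return drift

declared = {"replicas": 3, "image": "api:v5", "log_level": "info"}

clusters = {
    "us-east": {"replicas": 3, "image": "api:v5", "log_level": "info"},
    "eu-west": {"replicas": 3, "image": "api:v5", "log_level": "debug"},  # hand-edited
    "ap-south": {"replicas": 1, "image": "api:v5", "log_level": "info"},  # scaled down manually
}

for name, live in clusters.items():
    drift = detect_drift(declared, live)
    print(name, "in sync" if not drift else f"DRIFT: {drift}")
```

A GitOps controller runs a loop like this continuously and either flags or auto-corrects the divergence, so "systems break by the end of the day" becomes an alert instead of a surprise.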
I remember, not to date myself, but back in the old days, you know, IBM had a proprietary network operating system, and so did DEC for the minicomputer vendors: SNA and DECnet, respectively. But TCP/IP came out of the OSI era, the open systems interconnect, and remember, Ethernet beat Token Ring out. So, not to get all nerdy for the young kids out there, just look up Token Ring; you've probably never heard of it. The IBM of that era, the connection to the internet at layer two, is Amazon here, the Ethernet, right? So if TCP/IP could be the Kubernetes, and the container abstraction made the industry completely change at that point in history: at every major inflection point where there's been serious industry change, wealth creation, and business value, there's been an abstraction somewhere. >>Yes. >>What's your reaction to that? >>I think there's a saying that's been heard many times in this industry, and I forget who originated it, but it goes: there's no problem that can't be solved with another layer of indirection, right? And we've seen this over and over again, where Amazon and its peers have inserted this layer that has simplified computing and infrastructure management. And I believe this trend is going to continue, right? The next set of problems are going to be solved with these insertions of additional abstraction layers. Yeah, it's gonna continue. >>It's interesting. I wrote another post today on LinkedIn called the Silicon Wars. AMD stock is down; Arm has been on a rise. We've been pointing out for many years now that Arm was gonna be huge, and it has become true. If you look at the success of the infrastructure-as-a-service layer across the clouds, Azure, AWS, Amazon's clearly way ahead of everybody.
The stuff that they're doing with the silicon and the physics and the atoms, you know, this is where the innovation is. They're going so deep and so strong at the ISA level that the more they advance it, the more performance they get. So if you're an app developer, wouldn't you want the best performance, and wouldn't you want the best abstraction layer, one that gives you the most ability to do infrastructure as code, or infrastructure as configuration, for provisioning, for managing services? And you're seeing that today with service meshes; there's a lot of action going on in the service mesh area in this community of KubeCon, which we will be covering. So that brings up the whole question of what's next. You guys just announced Arlon at ArgoCon; Argo itself came out of Intuit. We've had Marianna Tessel at our super cloud event. She's the CTO, you know, they're all in on the cloud. So they contributed that project. Where did Arlon come from? What was the origination? What's the purpose? Why Arlon, why this announcement? >>So the inception of the project was the result of us realizing the problem we spoke about earlier, which is complexity, right? With all of these clouds and this infrastructure, all the variations around, you know, compute, storage, and networks, and the proliferation of tools we talked about, the Ansibles and Terraforms, and Kubernetes itself, which you can think of as another tool: we saw a need to solve that complexity problem, especially for people and users who use Kubernetes at scale. So when you have, you know, hundreds of clusters, thousands of applications, thousands of users spread out over many, many locations, there needs to be a system that helps simplify that management, right? That means fewer tools, more expressive ways of describing the state that you want, and more consistency.
And that's why, you know, we built Arlon, and we built it recognizing that many of these problems, or sub-problems, have already been solved. So Arlon doesn't try to reinvent the wheel; it instead rests on the shoulders of several giants, right? For example, Kubernetes is one building block. GitOps and Argo CD are another, and they provide a very structured way of applying configuration. And then we have projects like Cluster API and Crossplane, which provide APIs for describing infrastructure. So Arlon takes all of those building blocks and builds a thin layer which gives users a very expressive way of defining configuration and desired state. So that's kind of the inception of it. >>And what's the benefit of that? What does that give the developer, the user, in this case? >>The developers, the platform engineering team members, the DevOps engineers: they get a way to provision not just infrastructure and clusters, but also applications and configurations. They get a system for provisioning, configuring, deploying, and doing lifecycle management in a much simpler way. Okay, especially, as I said, if you're dealing with a large number of applications. >>So it's like an operating fabric, if you will. >>Yes. >>For them. Okay, so let's get into what that means above and below this abstraction, or thin layer; below it is the infrastructure. We talked a lot about what's going on below that. Yeah. Above it are the workloads. At the end of the day, I talk to CXOs and IT folks that are now DevOps engineers. They care about the workloads, and they want infrastructure as code to work. They wanna spend their time on the workloads, not in the weeds figuring out what happened when someone made a push; they need observability, and they need to know that it's working. >>That's right. >>And, here are my workloads running effectively. So how do you guys look at the workload side of it?
Because now you have multiple workloads on these fabrics, right? >>So, workloads: Kubernetes has defined kind of a standard way to describe workloads, and you can, you know, tell Kubernetes, I wanna run this container this particular way. Or you can use other projects in the Kubernetes cloud native ecosystem, like Knative, where you can express your application at a higher level, right? But what's also happening is that, in addition to the workloads, DevOps and platform engineering teams very often need to deploy the applications with the clusters themselves. Clusters are becoming a commodity; the cluster is becoming this host for the application, and it kind of comes bundled with it. In many cases it is like an appliance, right? So DevOps teams have to provision clusters at a really incredible rate, and they need to tear them down. Clusters are becoming more... >>It's becoming like an EC2 instance: spin up a cluster. People use words like that. >>That's right. And before Arlon, you kind of had to do all of that using a different set of tools, as I explained. So with Arlon you can express everything together. You can say: I want a cluster with a health monitoring stack and a logging stack and this ingress controller, and I want these applications and these security policies. You can describe all of that using something we call a profile. And then you can stamp out your applications and your clusters and manage them in a very consistent way. >>So essentially it creates a standard mechanism. >>Exactly. >>Standardized, declarative kind of configurations. And it's like a playbook: deploy it. Now, what's the difference between that and, say, a script? I have scripts; I can just automate scripts. >>Or yes, this is where that declarative API and infrastructure as configuration come in, right? Because with scripts, yes, you can automate scripts, but the order in which they run matters, right?
They can break; things can break in the middle, and sometimes you need to debug them. Whereas the declarative way is much more expressive and powerful: you just tell the system what you want, and then the system figures it out. And there are these GitOps controllers which will, in the background, reconcile all the state to converge towards your desired state. It's a much more powerful, expressive, and reliable way of getting things done. >>So infrastructure as configuration is built as kind of a superset of infrastructure as code, because it's... >>An evolution. >>You need infrastructure as code, but then you can configure the code by just saying: do it. You're basically declaring it, saying go, go do that. >>That's right. >>Okay, so, alright, so cloud native at scale: take me through your vision of what that means. Someone says, hey, what does cloud native at scale mean? What does success look like? How does it roll out over the next couple of years? I mean, people are now starting to figure out, okay, it's not as easy as it sounds. Kubernetes has value. We're gonna hear a lot of this at KubeCon this year. What does cloud native at scale mean? >>Yeah, there are different interpretations, but if you ask me, when people think of scale, they think of a large number of deployments, right? Geographies, many, you know, supporting thousands or even millions of users. There's that aspect of scale. There's also an equally important aspect of scale, which is also something that we try to address with Arlon, and that is just complexity for the people operating this or configuring this, right? So in order to describe that desired state, and in order to perform things like upgrades or updates on a very large scale, you want the humans behind that to be able to express and direct the system to do that in relatively simple terms, right?
And so we want the tools and the abstractions and the mechanisms available to the user to be as powerful but as simple as possible. So I think there are going to be, and there have been, a number of CNCF and cloud native projects that are trying to attack that complexity problem as well, and Arlon falls into that category. >>Okay, so I'll put you on the spot. We've got KubeCon coming up, and obviously we'll be shipping this segment out before it. What do you expect to see this year? What's the big story? What's the most important thing happening? Is it in the open source community, and also within a lot of the people jockeying for leadership? I know there's a lot of projects, and there's still some white space in the overall systems map about the different areas: runtime, observability, all these different areas. Where's the action? Where's the smoke? Where's the fire? Where's the peace? Where's the tension? >>Yeah, so I think one thing that has been happening over the past couple of KubeCons, and that I expect to continue, is that the word on the street is Kubernetes is getting boring, right? Which is good, right? >>Boring means simple. >>Well, maybe. >>Invisible. >>No drama, right? So the rate of change of the Kubernetes features and all that has slowed, but in a positive way. But there's still a general sentiment and feeling that there's just too much stuff. If you look at a stack necessary for hosting applications based on Kubernetes, there are just still too many moving parts, too many components, right? Too much complexity. I keep going back to the complexity problem. So I expect KubeCon, and all the vendors and the players and the startups and the people there, to continue to focus on that complexity problem and introduce further simplifications to the stack.
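The "profile" idea described a bit earlier, bundling a cluster's add-ons, applications, and policies into one declarative unit and stamping it out across a fleet, can be sketched as follows. The names here (`Profile`, `stamp`, the example add-ons) are invented for the sketch; this is not Arlon's actual API:

```python
# Toy sketch of a "profile": one declarative bundle of cluster add-ons,
# apps, and policies, materialized identically across many clusters.
from dataclasses import dataclass

@dataclass(frozen=True)
class Profile:
    name: str
    addons: tuple = ()    # e.g. monitoring, logging, ingress controller
    apps: tuple = ()
    policies: tuple = ()

def stamp(profile: Profile, cluster_names: list) -> dict:
    """Materialize the same desired state for every cluster in the list."""
    return {
        cluster: {
            "profile": profile.name,
            "addons": list(profile.addons),
            "apps": list(profile.apps),
            "policies": list(profile.policies),
        }
        for cluster in cluster_names
    }

prod = Profile(
    name="prod-v1",
    addons=("prometheus", "fluentd", "nginx-ingress"),
    apps=("checkout", "catalog"),
    policies=("no-privileged-pods",),
)

fleet = stamp(prod, ["us-east", "eu-west", "ap-south"])
print(len(fleet), fleet["eu-west"]["apps"])  # 3 ['checkout', 'catalog']
```

The point of the pattern is that every cluster gets an identical, versioned desired state; changing the profile and re-stamping is the upgrade path, rather than hand-editing clusters one by one.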
>>Yeah. Bich, you've had a storied career: VMware, over a decade with them, 12 or 14 years or something like that, and co-founder here at Platform9. You've been around for a while in this game. We talked about OpenStack, that project; we interviewed you at one of their events. So OpenStack was the beginning of this new revolution. I remember the early days: it wasn't supposed to be an alternative to Amazon, but it was a way to do more cloud native. I think we had a Clouderati team at that time; we would joke, you know, about the dream. It's happening now, now at Platform9. You guys have been doing this for a while. What are you most excited about as the chief architect? What did you guys double down on? What did you guys pivot from, or did you do any pivots? Did you extend out certain areas? Because you guys are in a good position right now, a lot of DNA in cloud native. What are you most excited about, and what does Platform9 bring to the table for customers and for people in the industry watching this? >>Yeah, so I think our mission really hasn't changed over the years, right? It's always been about taking complex open source software, because open source software is powerful. It solves new problems, you know, every year, and you have new things coming out all the time, right? OpenStack was an example, and then Kubernetes took the world by storm. But there's always that complexity of, you know, just configuring it, deploying it, running it, operating it. And our mission has always been that we will take all that complexity and just make it easy for users to consume, regardless of the technology, right? So the successor to Kubernetes, you know, I don't have a crystal ball, but you have some indications that people are coming up with new and simpler ways of running applications.
There are many projects out there; who knows what's coming next year or the year after that. But Platform9 will be there, and we will, you know, take the innovations from the community, contribute our own innovations, and make all of those things very consumable to customers. >>Simpler, faster, cheaper. >>Exactly. >>Always a good business model, technically, to make that happen. Yes. Yeah. I think reining in the chaos is key, you know. Now we have visibility into the scale. Final question before we depart, yeah, on this segment: what is "at scale"? How many clusters do you see that would be a watermark for an at-scale conversation around an enterprise? Is it workloads we're looking at, or clusters? How would you describe that, when people try to squint through and evaluate what's at scale, what's the at-scale kind of threshold? >>Yeah. And the number of clusters doesn't tell the whole story, because clusters can be small in terms of the number of nodes, or they can be large. But roughly speaking, when we say, you know, large-scale cluster deployments, we're talking about maybe hundreds to two thousand. Yeah.
Abras, Azure, Google, >>You mean from a business perspective, they're, they have their own interests that, you know, that they're, they will keep catering to, They, they will continue to find ways to lock their users into their ecosystem of services and, and APIs. So I don't think that's gonna change, right? They're just gonna keep Well, >>They got great I performance, I mean from a, from a hardware standpoint, yes. That's gonna be key, right? >>Yes. I think the, the move from X 86 being the dominant way and platform to run workloads is changing, right? That, that, that, that, and I think the, the hyperscalers really want to be in the game in terms of, you know, the, the new risk and arm ecosystems and the >>Platforms. Yeah. Not joking aside, Paul Morritz, when he was the CEO of VMware, when he took over once said, I remember our first year doing the cube. Oh, the cloud is one big distributed computer. It's, it's hardware and you got software and you got middleware. And he kinda over, well he kind of tongue in cheek, but really you're talking about large compute and sets of services that is essentially a distributed computer. Yes, >>Exactly. >>It's, we're back in the same game. Thank you for coming on the segment. Appreciate your time. This is cloud native at scale special presentation with Platform nine. Really unpacking super cloud Arlon open source and how to run large scale applications on the cloud, Cloud native develop for developers. And John Feer with the cube. Thanks for Washington. We'll stay tuned for another great segment coming right up.

Published Date : Oct 20 2022



Bich Le, Platform9 | Cloud Native at Scale


 

>> Welcome to this special presentation of Cloud Native at Scale, theCUBE and Platform9 digging into the next-generation supercloud, infrastructure as code, and the future of application development. We're here with Bich Le, Chief Architect and co-founder of Platform9. Bich, great to see you — a CUBE alumni. We met at an OpenStack event about eight years ago, when OpenStack was going strong. Great to see you, and congratulations on the success of Platform9.
>> Thank you very much.
>> You guys have been at this for a while, and this is really the year we're seeing the crossover of Kubernetes because of what's happened with containers. Everyone has now realized it — you've seen what Docker's doing with the new Docker, the open source Docker — just the success of containerization, and now the Kubernetes layer that we've been working on for years is bearing fruit. This is huge.
>> Exactly, yes.
>> And so infrastructure as code comes in. We talked to Bhaskar about supercloud, and about the new Arlon you guys just launched — infrastructure as code is going to another level. It's always been DevOps; infrastructure as code has been the ethos from day one: developers just code. You saw the rise of serverless, and now you see multi-cloud on the horizon. Connect the dots for us: what is the state of infrastructure as code today?
>> I'm glad you mentioned it. Most people know about infrastructure as code, but with Kubernetes I think that project has evolved the concept even further. These days it's infrastructure as configuration, which is an evolution of infrastructure as code. So instead of telling the system "here's how I want my infrastructure" — do steps A, B, C, and D — with Kubernetes you can describe your desired state declaratively, using things called manifests and resources, and
then the system magically figures it out and tries to converge the state towards the one that you specify. So I think it's an even better version of infrastructure as code.
>> And that really means developers are just accessing resources declaratively: give me some compute, stand me up something, turn the lights on, turn them off. That's where we see this going, and I like the configuration piece — some people say composability. Now, with open source so popular, you don't have to write a lot of code. It's code being developed, so it's integration, it's configuration. These are areas where we're starting to see computer science principles around automation and machine learning assisting open source, because you've got a lot of code — that's why you're hearing about software supply chain issues. So infrastructure as code has to factor in these new dynamics. Can you share your opinion on them? As open source grows — the glue layers, the configurations, the integration — what are the core issues?
>> I think one of the major core issues is that with all that power comes complexity. Despite their expressive power, systems like Kubernetes and declarative APIs let you express a lot of complicated and complex stacks, but you're dealing with hundreds if not thousands of these YAML files or resources. So I think the emergence of systems and layers to help you manage that complexity is becoming a key challenge and opportunity in this space.
>> I wrote a LinkedIn post today, and the comments were about, hey, enterprise is the new breed. The trend of SaaS companies moving consumer-like thinking into the enterprise has been happening for a long time, but now more than ever you're seeing it. The old way was to solve complexity with more complexity, and then lock the customer in. Now with open source it's speed, simplification, and integration — these are the new power dynamics for developers. So as
companies are starting to deploy and look at Kubernetes, what are the things that need to be in place? Because you have some — I won't say technical debt, but maybe some shortcuts, some scripts here and there that make it look like infrastructure as code. People have done things to simulate or make infrastructure as code happen.
>> Yes.
>> But to do it at scale is harder. What's your take on this? What's your view?
>> It's hard because there's a proliferation of methods, tools, and technologies. For example, today it's very common for DevOps and platform engineering teams to have to deploy a large number of Kubernetes clusters and then apply the applications and configurations on top of those clusters, and they're using a wide range of tools to do this — maybe Ansible or Terraform or bash scripts to bring up the infrastructure and then the clusters, and then a different set of tools, such as Argo CD, to apply configurations and applications on top of the clusters. So you have this sprawl of tools. You also have this sprawl of configurations and files, because the more objects you're dealing with, the more resources you have to manage. And there's a risk of what people call drift, where you think you have things under control, but people from various teams make changes here and there, and before the end of the day systems break and you have no way of tracking them. So I think there's a real need to unify, simplify, and try to solve these problems using a smaller, more unified set of tools and methodologies, and that's something we try to do with this new project, Arlon.
>> Yeah, so we're going to get to Arlon in a second. You guys announced it at ArgoCon, which was put on here in Silicon Valley at the community meeting by Intuit — they had their own little day over at their headquarters. But before we get there, Bhaskar, your CEO, came on and talked about
supercloud at our inaugural event. What's your definition of supercloud, if you had to explain it to someone at a cocktail party, or to someone technical in the industry? How would you look at the supercloud trend that's emerging? It's become a thing. What would be your contribution to that definition, or to the narrative?
>> Well, it's funny, because I actually heard the term for the first time today, speaking to you earlier today. But based on what you said, I already get some of the gist and the main concepts. Supercloud, the way I interpret it, is that clouds and infrastructure — programmable infrastructure — are all becoming commodity in a way, and everyone's got their own flavor. But there's a real opportunity for people to solve real business problems by abstracting away all of those various implementations and then building better abstractions, perhaps business- or application-specific ones, to help companies and businesses solve real business problems.
>> That's a great definition. I remember — not to date myself, but back in the old days — IBM had its proprietary network operating system, and so did DEC for the minicomputer: vintage DECnet and SNA, respectively. But TCP/IP came out of OSI, the Open Systems Interconnect, and remember, Ethernet beat Token Ring. Not to get all nerdy for the young kids out there — just look up Token Ring; it was IBM's connection to the internet at layer two, and Ethernet won. So if TCP/IP could be the Kubernetes-and-containers abstraction of its day, it made the industry completely change at that point in history. At every major inflection point where there's been serious industry change and wealth creation and business value, there's been an abstraction.
>> Yes.
>> Somewhere. What's your reaction to that?
>> I think this is a saying
that's been heard many times in this industry — I forget who originated it — but the saying goes: there's no problem that can't be solved with another layer of indirection. And we've seen this over and over again, where Amazon and its peers have inserted a layer that has simplified computing and infrastructure management. I believe this trend is going to continue: the next set of problems is going to be solved with these insertions of additional abstraction layers.
>> It's going to continue. It's interesting — I wrote another post today on LinkedIn called the Silicon Wars. AMD stock is down; Arm has been on the rise. We've been reporting for many years that Arm is going to be huge, and it has become true. If you look at the success of the infrastructure-as-a-service layer across the clouds — Azure, AWS — Amazon is clearly way ahead of everybody. The stuff they're doing with the silicon and the physics and the atoms — this is where the innovation is; they're going so deep and so strong at it, and the more they do, the more performance they have. So if you're an app developer, wouldn't you want the best performance, and the best abstraction layer that gives you the most ability to do infrastructure as code — or infrastructure as configuration — for provisioning and managing services? You're seeing that today with service meshes: a lot of action going on in the service mesh area in this KubeCon community, which we'll be covering. So that brings up what's next. You guys just announced Arlon at ArgoCon, which came out of Intuit — we had Mariana Tessel at our supercloud event; she's a CTO, and they're all in on the cloud, so they contributed that project. Where did Arlon come from? What was the origination? What's the purpose? Why Arlon? Why this announcement?
>> Yeah, so the inception of the project was the result of us realizing the
problem that we spoke about earlier, which is complexity — with all of these clouds and infrastructure, all the variations around compute, storage, and networks, and the proliferation of tools we talked about, the Ansibles and Terraforms; and Kubernetes itself you can think of as another tool. We saw a need to solve that complexity problem, especially for people who use Kubernetes at scale. When you have hundreds of clusters, thousands of applications, and thousands of users spread out over many, many locations, there needs to be a system that helps simplify that management: fewer tools, more expressive ways of describing the state that you want, and more consistency. That's why we built Arlon. We built it recognizing that many of these problems, or sub-problems, have already been solved, so Arlon doesn't try to reinvent the wheel; it instead rests on the shoulders of several giants. For example, Kubernetes is one building block; GitOps and Argo CD is another, which provides a very structured way of applying configuration; and then we have projects like Cluster API and Crossplane, which provide APIs for describing infrastructure. So Arlon takes all of those building blocks and builds a thin layer which gives users a very expressive way of defining configuration and desired state. That's the inception.
>> And what's the benefit of that? What does it give the developer, the user, in this case?
>> The developers, the platform engineering team members, the DevOps engineers — they get a way to provision not just infrastructure and clusters but also applications and configurations. They get a system for provisioning, configuring, deploying, and doing lifecycle management in a much simpler way — especially, as I said, if you're dealing with a large number of applications.
>> So it's like an operating fabric, if you will, for them.
>> Yes.
>> Okay, so
let's get into what that means above and below the abstraction, or thin layer. Below is the infrastructure — we talked a lot about what's going on below that. Above are the workloads, at the end of the day. I talk to CxOs and IT folks that are now DevOps engineers: they care about the workloads, and they want infrastructure as code to work. They don't want to spend their time in the weeds figuring out what happened when someone made a push. They need observability, and they need to know that it's working.
>> That's right.
>> And that my workloads are running effectively. So how do you guys look at the workload side? Because now you have multiple workloads on these fabrics.
>> Right. So, workloads — Kubernetes has defined a standard way to describe them: you can tell Kubernetes, I want to run this container this particular way, or you can use other projects in the Kubernetes cloud native ecosystem, like Knative, where you can express your application at a higher level. But what's also happening is that, in addition to the workloads, DevOps and platform engineering teams very often need to deploy the applications with the clusters themselves. Clusters are becoming a commodity — the cluster is becoming a host for the application, and it kind of comes bundled with it; in many cases it's like an appliance. So DevOps teams have to provision clusters at a really incredible rate, and they need to tear them down. Clusters are becoming more like an EC2 instance — "spin up a cluster," we've heard people use words like that.
>> That's right.
>> And before Arlon, you had to do all of that using a different set of tools, as I explained. With Arlon you can express everything together: you can say, I want a cluster with a health monitoring stack and a logging stack and this ingress controller, and I want these applications and these security policies. You can describe all of that using
something we call the profile, and then you can stamp out your applications and your clusters and manage them in an essentially standard way.
>> That creates a mechanism. It's standardized, declarative configuration, and it's like a playbook — you just deploy it. Now, what's the difference between that and, say, a script? I have scripts; I can just automate scripts.
>> Yes — this is where the declarative API and infrastructure-as-configuration come in. You can automate scripts, but the order in which they run matters; things can break in the middle, and sometimes you need to debug them. Whereas the declarative way is much more expressive and powerful: you just tell the system what you want, and the system figures it out. And there are these things called controllers, which in the background reconcile all the state to converge towards your desired state. It's a much more powerful, expressive, and reliable way of getting things done.
>> So infrastructure as configuration is built on — it's a superset of infrastructure as code, because it's a different evolution. You need infrastructure as code, but then you can configure the code by just saying "do it." You're basically declaring: go do that.
>> That's right.
>> Okay, so: cloud native at scale. Take me through your vision of what that means. Someone says, hey, what does cloud native at scale mean? What does success look like? How does it roll out over the next couple of years? People are now starting to figure out that it's not as easy as it sounds. Kubernetes has value — we're going to hear a lot of this at KubeCon this year. What does cloud native at scale mean?
>> There are different interpretations, but if you ask me, when people think of scale they think of a large number of deployments: many geographies, supporting thousands or tens of millions of users. There's that aspect of scale. There's also an equally important aspect of scale,
which is also something we try to address with Arlon, and that is the complexity for the people operating or configuring this. In order to describe that desired state, and to perform things like upgrades or updates on a very large scale, you want the humans behind it to be able to express and direct the system in relatively simple terms. So we want the tools, the abstractions, and the mechanisms available to the user to be as powerful but as simple as possible. I think there have been, and will be, a number of CNCF and cloud native projects trying to attack that complexity problem as well, and Arlon falls in that category.
>> Okay, so I'll put you on the spot. We've got KubeCon coming up, and obviously we'll be shipping this segment out before then. What do you expect to see at KubeCon this year? What's the big story? What's the most important thing happening — is it in the open source community, or in all the people jockeying for leadership? I know there are a lot of projects, and there's still some white space on the overall systems map — GitOps, runtime, observability, all these different areas. Where's the action? Where's the smoke? Where's the fire? Where's the tension?
>> So I think one thing that has been happening over the past couple of KubeCons, and that I expect to continue, is — the word on the street is that Kubernetes is getting boring. Which is good.
>> Or, I mean, simple.
>> Well, maybe. Invisible, no drama, right? The rate of change of Kubernetes features has slowed, but in a positive way. But there's still a general sentiment that there's just too much stuff. If you look at a stack necessary for hosting applications based on Kubernetes, there are still too many moving parts, too many components,
too much complexity — I keep going back to the complexity problem. So I expect KubeCon, and all the vendors and the players and the startups and the people there, to continue to focus on that complexity problem and introduce further simplifications to the stack.
>> Yeah. Bich, you've had a storied career: VMware — 12 years, 14 years, something like that, a big number — and now co-founder here at Platform9, which has been around for a while and is at this game. And we'll talk about OpenStack — that project, we interviewed you at one of their events. OpenStack was the beginning of this new revolution. I remember in the early days it wasn't supposed to be an alternative to Amazon, but it was a way to do more cloud, more cloud native. We had a Colorado team at that time — it's a joke, you know, about the dream. It's happening now. Now, at Platform9, you guys have been doing this for a while. What are you most excited about as the Chief Architect? What did you guys double down on? What did you pivot from — did you do any pivots? Did you extend out certain areas? Because you're in a good position right now, with a lot of DNA in cloud native. What are you most excited about, and what does Platform9 bring to the table for customers, and for people in the industry watching this?
>> So I think our mission really hasn't changed over the years. It's always been about taking complex open source software — because open source software is powerful, it solves new problems every year, and you have new things coming out all the time. OpenStack was an example, then Kubernetes took the world by storm. But there's always that complexity of just configuring it, deploying it, running it, operating it. And our mission has always been to take all that complexity and make it easy for users to consume, regardless of the technology. So, the successor to Kubernetes — I don't
have a crystal ball, but you have some indications that people are coming up with new and simpler ways of running applications. There are many projects out there; who knows what's coming next year, or the year after that. But Platform9 will be there, and we will take the innovations from the community, contribute our own innovations, and make all of those things very consumable to customers.
>> Simpler, faster, cheaper — always a good business model, technically, to make that happen. I think reining in the chaos is key. Now we have visibility into the scale. Final question before we depart this segment: what is that scale? How many clusters do you see as a high watermark for an at-scale conversation around an enterprise? Is it workloads we're looking at, or clusters? How would you describe that threshold when people try to squint through and evaluate what "at scale" is?
>> Yeah, the number of clusters doesn't tell the whole story, because clusters can be small in terms of the number of nodes, or they can be large. But roughly speaking, when we say large-scale cluster deployments, we're talking about maybe hundreds to two thousand.
>> Yeah. And a final, final question: what's the role of the hyperscalers? You've got AWS continuing to do well, but they've got their core IaaS, they've got PaaS; they're not putting too much SaaS out there — they have some SaaS apps, but mostly it's the ecosystem. They have marketplaces doing over two billion dollars, billions of transactions a year, and it's just sitting there — really, they're now innovating on it, but that's going to change ecosystems. What's the role the cloud plays in cloud native at scale — the hyperscalers, AWS, Azure, Google?
>> You mean from a business perspective? They have their own interests that they will keep catering to. They will continue to find
ways to lock their users into their ecosystem of services and APIs, so I don't think that's going to change. They're just going to keep going.
>> Well, they've got great performance — I mean, from a hardware standpoint.
>> Yes, that's going to be key. I think the move away from x86 being the dominant platform to run workloads is happening, and the hyperscalers really want to be in the game in terms of the new RISC and Arm ecosystems and platforms.
>> Yeah. Joking aside, Paul Maritz, when he was the CEO of VMware — I remember it from our first year doing theCUBE — once said the cloud is one big distributed computer: it's hardware, and you've got software, and you've got middleware. He said it kind of tongue-in-cheek, but really you're talking about large compute and sets of services that are essentially a distributed computer.
>> Yes, exactly.
>> We're back in the same game. Bich, thank you for coming on the segment. Appreciate your time. This is Cloud Native at Scale, a special presentation with Platform9, really unpacking supercloud, Arlon, open source, and how to run large-scale applications on the cloud, cloud native, for developers. I'm John Furrier with theCUBE. Thanks for watching, and stay tuned for another great segment coming right up. [Music]
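The declarative, reconcile-to-converge model Bich describes — declare the desired state, and let background controllers converge the actual state toward it — can be sketched in a few lines. This is an illustrative toy in Python, not Arlon's actual API or data model: the profile-like structure bundling a cluster with its add-ons and apps is hypothetical, and a real controller would also handle updates, deletions, ordering, and failure retries.

```python
# Desired state, declared up front (a hypothetical "profile": a cluster
# plus the add-ons and apps that should run on it).
desired_state = {
    "clusters": {
        "prod-west": {
            "addons": ["health-monitoring", "logging", "ingress-controller"],
            "apps": ["billing", "checkout"],
        }
    }
}

actual_state = {"clusters": {}}  # what currently exists


def reconcile(desired, actual):
    """One reconcile pass: create anything declared but missing.

    Returns the list of changes made; an empty list means the actual
    state has converged to the desired state.
    """
    changes = []
    for name, spec in desired["clusters"].items():
        if name not in actual["clusters"]:
            actual["clusters"][name] = {"addons": [], "apps": []}
            changes.append(f"create cluster {name}")
        live = actual["clusters"][name]
        for addon in spec["addons"]:
            if addon not in live["addons"]:
                live["addons"].append(addon)
                changes.append(f"install {addon} on {name}")
        for app in spec["apps"]:
            if app not in live["apps"]:
                live["apps"].append(app)
                changes.append(f"deploy {app} to {name}")
    return changes


# The loop a background controller runs continuously: keep reconciling
# until no changes are needed. Note the user never scripted the steps;
# the order of operations falls out of the declared state.
while True:
    changes = reconcile(desired_state, actual_state)
    if not changes:
        break
    for change in changes:
        print(change)
```

This is the contrast with scripts that Bich draws: if a pass fails partway, the next pass simply picks up whatever is still missing, rather than requiring the operator to debug where the script stopped.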

Published Date : Oct 12 2022



Tim Jefferson & Sinan Eren, Barracuda | AWS re:Inforce 2022


 

>>And welcome back to the cubes coverage of a, of us. Reinforc here in Boston, Massachusetts. I'm John furrier. We're here for a great interview on the next generation topic of state of industrial security. We have two great guests, Tim Jefferson, senior vice president data network and application security at Barracuda. And Cenon Aron vice president of zero trust engineering at Barracuda. Gentlemen. Thanks for coming on the queue. Talk about industrial security. >>Yeah, thanks for having us. >>So one of the, one of the big things that's going on, obviously you got zero trust. You've got trusted, trusted software supply chain challenges. You've got hardware mattering more than ever. You've got software driving everything, and all this is talking about industrial, you know, critical infrastructure. We saw the oil pipeline had a hack and ransomware attack, and that's just constant barrage of threats in the industrial area. And all the data is pointing to that. This area is gonna be fast growth machine learning's kicking in automation is coming in. You see a huge topic, huge growth trend. What is the big story going on here? >>Yeah, I think at a high level, you know, we did a survey and saw that, you know, over 95% of the organizations are experiencing, you know, security challenges in this space. So, you know, the blast radius in the, of the, the interface that this creates so many different devices and things and objects that are getting network connected now create a huge challenge for security teams to kind of get their arms around that. >>Yeah. And I can add that, you know, majority of these incidents that, that these organizations suffer lead to significant downtime, right? And we're talking about operational technology here, you know, lives depend on, on these technologies, right? Our, our wellbeing everyday wellbeing depend on those. 
So that is a key driver of initiatives and projects to secure industrial IoT and operational technologies in these businesses.
>> Well, it's great to have both of you guys on. Tim, you have a background at AWS, and Sinan, you're a startup founder who came to Barracuda — both very experienced, having seen the waves before in this industry. And I'd like to, if you don't mind, talk about three areas: remote access, which we've seen in huge demand with the pandemic and now coming out of it with hybrid — and certainly industrial is a big part of it; then secondly, the trend of clear commitment from enterprises to have a public cloud component; and finally, the secure access edge, you know, with SaaS business models securing these things. These are the three hot areas. Let's go into the first one: remote access. Why is this important? It seems this is the top priority for immediate attention. What's the big challenge here? Is it the most unsecure? Is it the most important? Why is this relevant?
>> Sinan, I'll let you jump in there.
>> Yeah, sure. Happy to. I mean, if you think about it, especially now — we've been through a pandemic, shelter-in-place cycle for almost two years — it becomes essentially a business continuity matter, right? You do need remote access. We've also seen a tremendous shift toward hiring the best talent wherever they are, right? Onboarding them and bringing the talent into businesses that have maybe a lot more distributed environments than traditionally. So you have to account for remote access in every part of everyday life, including industrial technologies. You need remote support, right? You need vendors that might be overseas providing you, you know, guidance and support for these technologies. So remote support is every part of life.
Whether you work from home, you work on the go, or you're getting support from a vendor that happens to be in Germany, you know, teleporting into your environment in Hawaii — all these things are essentially critical parts of everyday life now.
>> Talk about ZTNA — zero trust network access is a major component for companies. Obviously, trust-but-verify is one approach; zero trust is saying, hey, I don't trust you. Take us through why that's important. Why is zero trust network access important in this area?
>> Yeah. I would say that traditionally, remote access — if you think about the infancy of the internet in the nineties, right? — was all about encryption in transit. The internet was vastly clear text; we didn't even have SSL/TLS widely distributed and available. So when VPNs first came out, it was more about preventing sniffing of clear text information from the network, right? It was more about securing the transport. But that created a big security control gap, which implicitly trusted users once they were teleported into a remote network, right? That's the essence of a remote access session: you're brought from wherever you are into an internal network, and it implicitly trusts you. That simply broke down over time, because you are able to compromise endpoints relatively easily using browser exploits. So, through supply chain issues and watering-hole attacks, you leverage the existing VPN tunnels to laterally move into the organization from within the network — you literally move further and further down the network, right? So the VPN needed a significant innovation. It was meant to be securing packets in transit; it was all about an encryption layer. But it had an implicit trust problem. With zero trust, we turn it into an explicit trust problem, right?
An explicit trust concept, ideally. Are you who you say you are? And you are authorized to access only the things you need to access to get the work done. >>So you're talking about granular levels versus the one-time database lookup and you're in. >>That's right. >>Tim, talk about the OT/IT side of this equation for industrial. Because IT is IP-based networking, while OT has been purpose-built, maybe some proprietary technology that connects to the internet but has mainly been kept secure. Those have come together over the years, and now, with no perimeter security, how is this world evolving? Because there's going to be more cloud, more machine learning, more hybrid on-premise. It's almost a reset, if you will. I mean, is it a reset? What's the situation? >>Yeah, I think, in typical human behavior, there's a lot of over-rotation going on. Historically, a lot of security controls were all concentrated in the data center. A lot of enterprises had very large, sophisticated, well-established security stacks in a data center. And as those applications broke down and got rearchitected for the cloud, they got more modular and more distributed, and that centralized security stack became an anti-pattern. So now there's this over-rotation: hey, let's take this stack and put it up in the cloud. There are lots of names for this: secure access service edge, secure service edge. But in the end, you're taking your controls and migrating them into the cloud. And I think ultimately this creates a great opportunity to embrace some security best practices that were difficult to do in some of the legacy architectures, which is being able to push your controls as far out to the edge as possible. >>And the interesting thing about OT now is just how far out the edge is, right?
So instead of being, historically, the branch or user edge, the remote access edge, and Sinan mentioned that you have technologies that can VPN or bring those identities into those networks, now you have all these things: partners, devices. So it's the thing edge, the device edge, the user edge. There's a lot more fidelity and awareness around who users are, because in parallel, a lot of the IdP and IAM platforms have really matured. So marrying those concepts, all that maturity around identity management, with device and behavior management into a common security framework is really exciting. But of course it's very nascent, so it's a difficult time getting your arms around it. >>It's funny, we were joking about the edge. We were just watching the Webb telescope photos come in: deep space, the deep edge. So the edge is continuing to be pushed out. Totally see that. And in fact, one of the things we're going to talk about is this survey you had done by an independent firm, which has a lot of great data. I want to unpack that, but one of the things mentioned in there, and I want to get both your reactions to this, is that virtually all organizations are committing to the public cloud. I think the stat was 96% or so. And if you combine that with the fact that the edge is expanding, the cloud model is evolving at the edge. So for instance, a building: there's a lot behind it. How far does it go? And what is the topology? Because the topology seems to change too. So there's this growth and change where we need cloud operations, DevOps at the edge, and the security. But it's changing. It's not pure cloud, but it's cloud; it has to be compatible. What's your reaction to that, Tim? This is a big part of the growth of industrial. >>Yeah.
I think there are two exciting developments that I would point to. Obviously there's this increase in the attack surface area. People realize it's not just laptops and devices and people that you're trying to secure; now there are refrigerators and robots on manufacturing floors that could be compromised, have their firmware updated, or be ransomwared. So there's a huge increase in surface area. But a lot of those industrial devices weren't built around the concept of network security. So you're bolting it on, thinking through how you can secure who and what ultimately has access to those devices and things, and where the control framework is. And to your point, the control framework now has typically migrated into public cloud. >>These are custom applications, highly distributed, highly available, very modular. So how do you collect the telemetry or control information from these things, and then create secure connections back into these control applications, which again are now migrated to public cloud? So you have this challenge, which we talked about last time we discussed this: how do I secure the infrastructure that I've built in deploying this control application in public cloud, and then connect it with the physical presence I have with these industrial devices, taking telemetry and control information from those devices and bringing it back into the management framework?
And this marries back into the remote access that Sinan was mentioning. Now, with this increased awareness of the efficacy of ransomware, we're definitely seeing attackers going after the management frameworks, which are very vulnerable; they're typically just unprotected web applications. So once you get control of the management framework, regardless of where it's hosted, you can start moving laterally and causing some damage. >>Yeah, that seems to be the common thread. So Sinan, what's your reaction to that? Because if zero trust is evolving and changing, you've got to have zero trust for things you didn't even know were out there that then get connected. How do you solve that problem? Because there's a lot of surface area that's evolving, all the OT stuff and the new IT. What's the perspective and posture that your clients and customers are taking? >>Well, I think they're having this conversation about further mobilizing identity, right? We did start with user identity. That became the first foundational building block for any kind of zero trust implementation. You work with some sort of SSO identity provider, you sync with your user directories, you have a single source of truth for all your users. >>You authenticate them through an identity provider. However, that didn't quite cut it for industrial IoT and OT environments. So now we have the concept of hardware and machine identities; the machine identity has become an important construct, right? The legacy notion of being able to put controls and rules on network constructs doesn't really scale anymore. So you need another abstraction layer of identity that belongs to a service, that belongs to an application, that belongs to a user, that belongs to a piece of hardware.
And then you can build much more scalable controls that understand the trust relationships between these identities and enforce those, rather than trying to say this internal network can talk to this other internal network through a network circuit. Those things are really not scalable in the new distributed landscape we live in today. So identity is basically going to operationalize zero trust and much more secure access going forward. >>And that's why we're seeing the SASE growth, right? That's a main piece of it. Is that what you're seeing too? That seems to be the approach. >>Go ahead, Tim. >>Yeah. I think SASE to me is really about migrating and moving your security infrastructure to the cloud edge, as we talked about. And then, do you funnel all ingress and egress traffic through it? That's potentially an anti-pattern, right? You don't want to create some brittle constraint around who and what has access. So again, as a security best practice, instead of doing all your enforcement in one place, you distribute and push your controls out as far to the edge as possible. A lot of SASE now is really around centralizing policy management, which is one of the big benefits: instead of having all these separate management planes, which always made federated policy difficult, you can consolidate your policy and then decide, mechanism-wise, how you're going to instrument those controls at the edge. >>So I think that's the real promise of the SASE movement. And I think the other big piece, which you touched on earlier, is around analytics, right?
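Sinan's identity-first controls and the centralized policy with edge enforcement Tim describes can be sketched as a tiny default-deny policy engine. This is an illustrative sketch only; the identity model, the rule table, and every name in it are hypothetical, not any vendor's implementation.

```python
from dataclasses import dataclass

# Identities are first-class: users, services, and hardware each get one.
# (Hypothetical model; a real system would use an IdP plus device attestation.)
@dataclass(frozen=True)
class Identity:
    kind: str   # "user", "service", or "machine"
    name: str

# Centralized policy: explicit trust relations between identities,
# instead of "network A may reach network B".
POLICY = {
    (Identity("user", "alice"), Identity("machine", "plc-7")): {"read-telemetry"},
    (Identity("service", "historian"), Identity("machine", "plc-7")): {"read-telemetry", "write-config"},
}

def authorize(subject: Identity, target: Identity, action: str) -> bool:
    """Default-deny: access is granted only if an explicit trust relation allows it."""
    return action in POLICY.get((subject, target), set())

# A VPN-style implicit-trust model would admit anything already "inside" the
# network; here every request is checked against the identity pair.
print(authorize(Identity("user", "alice"), Identity("machine", "plc-7"), "read-telemetry"))  # True
print(authorize(Identity("user", "alice"), Identity("machine", "plc-7"), "write-config"))    # False
```

The policy table lives in one place (the centralized management Tim mentions), while the `authorize` check can run at any enforcement point pushed out to the edge.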
So it creates an opportunity to collect a whole bunch of telemetry from devices and things: behavior, consumption. A big, common best practice once you have SaaS-based tools is that you can instrument a lot of visibility into how users and devices are behaving and being operated. And to Sinan's point, you can marry that with their identity. Then you can start building models around what normal behavior is, and with very fine-grained control, these types of analytics can discover things that humans just can't discover: anomalous behavior, any kind of indicators of compromise. And those can be dynamic policy blockers. >>And I think Sinan's earlier point speaks to the perimeter no longer being secure, so you've got to go to the new way of doing that. Totally relevant; I love that point. Let me ask you guys a question on the macro, if you don't mind. How concerned are you about the current threat landscape and the geopolitical situation in terms of the impact on industrial IoT in this area? >>Sinan, I'll let you go first. >>Yeah. I mean, it's definitely significantly concerning, especially now with the new sanctions; there are at least two more countries being, let's say, restricted from participating in the global economic marketplace. If you look at North Korea as a pattern: they've been isolated and sanctioned for a long time, and they actually doubled down on ransomware to even fund state operations. So now that you have Belarus and Russia being heavily sanctioned due to their activities, we can envision a further increase in ransomware, and in sponsoring state activities through illegal gains, through compromising pipelines and industrial operations and seeking large payouts.
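Earlier, Tim described building models of normal behavior from device telemetry and flagging what falls outside them. A crude statistical baseline illustrates the idea; this is a toy sketch under assumed data (the message-rate numbers are invented), not a product feature, and real platforms use far richer models.

```python
import statistics

def anomalies(baseline, observed, threshold=3.0):
    """Flag observations more than `threshold` standard deviations from the
    baseline mean (a minimal model of 'normal behavior')."""
    mean = statistics.fmean(baseline)
    stdev = statistics.pstdev(baseline)
    return [x for x in observed if abs(x - mean) > threshold * stdev]

# Assume a device normally sends ~100 messages/hour; a compromised device
# suddenly exfiltrating data stands out from the learned baseline.
normal_rates = [98, 102, 101, 99, 100, 97, 103]
todays_rates = [99, 101, 480, 100]
print(anomalies(normal_rates, todays_rates))  # [480]
```

Marrying the flagged reading to an identity (which user, service, or machine produced it) is what lets a finding become the kind of dynamic policy blocker Tim mentions, rather than just an alert.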
So I think the more they're ostracized and pushed out of the global marketplace, the more aggression there will be toward critical infrastructure. >>Oh yeah, I think it's going to ignite more action off the books, so to speak, as we've seen. >>You know, another point there: Barracuda also runs a backup product. We do a purpose-built backup appliance and cloud-to-cloud backup, and we've been running that service for over a decade. Historically, the pace of ransomware escalations we got was very slow; whenever we had a significant one, helping our customers recover from it, it was maybe once a month. But over the last 18 months this is routine for us; it's something we deal with on a daily basis, and it's becoming very common. It's been a well-established, easily monetized route to market for the bad guys. And it's very common now for people to compromise management planes. They use account takeover, and the first thing they're doing is breaking into management planes and looking at control frameworks. And the first thing they'll do is delete, of course, the backups. Which highlights the vulnerability we try to talk to our customers about, and this affects industrial too: the first thing you have to do, among other things, is protect your management planes. And putting in really fine-grained mechanisms like zero trust is a great start. >>Yeah. How good is backup, Tim, if it gets deleted first? It's like no backup. There it is. So, yeah. Air gapping. >>I mean, obviously that's kind of a best practice when the bad guys go in and delete all the backups. >>And then they're in control of everything. Let me ask you about the survey, which pointed out that there's a lot of security incidents happening.
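Tim's point that attackers delete the backups first is why immutability matters: a backup store can refuse deletes until a retention clock expires, even for a compromised admin credential. A minimal sketch of that idea follows; the class and its retention rule are hypothetical, not Barracuda's or Veeam's implementation (real systems use mechanisms like WORM object storage or hardened repositories).

```python
import time

class ImmutableBackupStore:
    """Toy WORM-style store: objects cannot be deleted until their
    retention period has expired, regardless of who asks."""

    def __init__(self, retention_seconds):
        self.retention = retention_seconds
        self._objects = {}  # name -> creation timestamp

    def put(self, name, now=None):
        self._objects[name] = time.time() if now is None else now

    def delete(self, name, now=None):
        now = time.time() if now is None else now
        if now - self._objects[name] < self.retention:
            return False  # still under retention: delete is refused
        del self._objects[name]
        return True

store = ImmutableBackupStore(retention_seconds=30 * 86400)  # 30-day lock
store.put("nightly-2022-07-01.bak", now=0)
print(store.delete("nightly-2022-07-01.bak", now=86400))       # False: day 1, refused
print(store.delete("nightly-2022-07-01.bak", now=31 * 86400))  # True: retention expired
```

An air gap takes the same idea further: the copy is not reachable from the management plane at all, so there is nothing for the attacker's stolen credential to delete.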
You guys pointed that out and discussed a little bit of it. We also talked about the threat vectors and the threat landscape in the survey, the common ones; ransomware was one of them. The part I liked, which was interesting, was the section on how organizations are investing in security. Can you share your thoughts on how you see the market, your customers, and the industry investing? What are they investing in? What stage are they at when it comes to industrial IoT and OT security? Do they do audits? Are they too busy? What's the state of their investment thesis and the progress of how they're investing in industrial IoT? >>Yeah. Our view is, we have a next-generation product line, our CloudGen firewalls, and we have a form factor that supports industrial use cases that we call secure connectors. What we've learned from that business is that there's a tremendous amount of bespoke effort at this point, which is indicative of a still-nascent market. That relates to another piece of information from the survey I thought was really interesting: I think 93% of the participating enterprises had had a failed OT initiative; people tried to do these things and didn't get off the ground. And then once it does get built, we see strong momentum. We have a large luxury car manufacturer that uses our secure connectors on the robots on the floor. >>So these are well-established manufacturing environments building very sophisticated control frameworks and security controls, but again, a very bespoke effort; they have a very specific set of controls and a specific set of use cases around it.
It kind of reminds me of the late nineties and early two-thousands, when people were trying to figure out networking and blast radii in networking, and customers. And now a lot of SIs are invested in this, building fast-growing practices around helping their customers build more robust controls and helping them manage those environments. So yeah, I think the market is still fairly nascent. >>From what we're seeing, right. But there is some encouraging data that shows at least half of the organizations are actively pursuing this; they have an initiative in place for OT and industrial IoT security projects, right? They're dedicating time, resources, and budget to this. And in regard to industries, verticals, and geographies: oil and gas is ahead of the curve; more than 50% responded that they have the project completed. I guess Colonial Pipeline was the call to arms, the big industrial incident that triggered a lot of these projects to accelerate and come to the finish line. As far as geographies go, there's DACH, which is Germany, Austria, Switzerland, and of course North America, which happen to be the industrial powerhouses of the world. APAC is also included, but they're a bit behind the curve, which is a bit concerning. But encouragingly, Western Europe and North America are ahead on these projects; a lot of them are near completion, or they're in the middle of some sort of industrial IoT security project right now. >>I'm glad you brought up the Colonial Pipeline one, and oil and gas was the catalyst. Again, a lot of "hey, glad that was them and not me" attitude: better invest. So I've got to ask you, that supports Tim's point about the management plane.
And I believe in that hack, or ransomware, it wasn't actually control of the pipeline; it was control of the management and billing, and then they shut down the pipeline because they were afraid it was going to spread over. So it wasn't actually the critical infrastructure itself, to your point, Tim. >>Yeah, it's rarely the critical infrastructure itself. You always go through the management plane, right? It's such an easier effort to compromise, because it runs on a standard endpoint. All this control software is easier to get to than the industrial hardware itself. >>Yeah, it's interesting. Just don't put the control software on an exposed endpoint; put it behind zero trust. Sinan, that was a great point. Okay, guys, I really appreciate the time and the insight. The white paper on industrial security in 2022 is on the barracuda.com website, Barracuda Networks. So let's talk about the re:Inforce event. It hasn't been around for a while because of the pandemic; we're back in person. What's changed since 2019? A ton; it's like security years aren't dog years anymore, it's probably dog time. So a lot's gone on. Where are we right now as an industry relative to cybersecurity? Could you summarize the high-order bit on where we are today in 2022 versus 2019? >>Yeah, I think if you look at the awareness of how to secure infrastructure and applications built in public cloud on AWS, it's exponentially better than it was. I remember when you and I met in 2018 at one of these conferences; there were still a lot of concerns about whether IaaS was safe. The amount of innovation that's gone on, and the amount of education and awareness around how to consume public cloud resources, is amazing.
And I think that's facilitated a lot of the fast growth we've seen, the consistent fast growth across all these platforms. >>Sinan, what's your reaction to that? >>I think the shared responsibility model is well understood now, and we can see a lot more implementation around CSPM; continuously auditing the configurations in these cloud environments has become a standard, table-stakes investment at every stage of any business, from early-stage startups all the way to public companies. So I think it's very well understood, and the investment has been steady and robust when it comes to cloud security. We've been busy helping our customers in AWS and Azure environments and others. So I think it's well understood, and on an optimistic note, we're actually in a good place when it comes to public cloud. >>Yeah, a lot of great momentum, a lot of scale and data out there, people sharing data, shared responsibility. Tim and Sinan, thank you for sharing your insights in this CUBE segment of our re:Inforce coverage here in Boston. Appreciate it. >>All right, thanks for having us. >>Thank you. >>Okay, everyone, thanks for watching. We're here at the re:Inforce conference, AWS, Amazon Web Services re:Inforce; it's a security-focused conference. I'm John Furrier, host of theCUBE. We'll be right back with more coverage after the short break.

Published Date : Jul 27 2022


Dante Orsini, Justin Giardina, and Brett Diamond | VeeamON 2022


 

We're back at VeeamON 2022. We're here at the Aria Hotel in Las Vegas; this is theCUBE's continuous coverage, and we're on day two. Welcome to the CXO session. We have the CEO, CTO, and CSO: Brett Diamond is the CEO, Justin Giardina is the CTO, and Dante Orsini is the chief strategy officer of 11:11 Systems, recently named, I guess today, the impact cloud service provider of the year. Congratulations, guys. Welcome. >>Thank you. >>Welcome back to theCUBE. Great to see you again. >>Thank you. Likewise. >>So, okay, Brett, let's start with you. Give us the overview of 11:11, your focus area. Talk about the iland acquisition, what that's all about. Give us the setup. >>Yeah, so we started 11:11 really with a focus on taking the three core pillars of our business, which are cloud, connectivity, and security, and bringing them together into one platform, allowing a much easier way for our customers and our partners to procure those three solution sets through a single company, and really focusing on the three main drivers of the business, which have a litany of other services associated with them under each platform. >>Okay. So, Justin, cloud, connectivity, and security all dramatically changed in March of 2020. Everybody had to go to the cloud, or rather rethink the network, and had to secure the remote worker. What did you see from a CTO's perspective? What changed, and how did 11:11 respond? >>Sure. So early on, when we built our cloud, even back in 2008, we really focused on enterprise-grade features, one of which was being very flexible in the networking. What we found early on was that we could architect solutions for customers that were dipping their toe in the cloud, and set ourselves apart from some of the vendors at the time. So if you fast-forward from 2008 until today, we still see that as a main component for IaaS and DRaaS, and the ability to take on some of the things Brett talked about, where customers may need a point-to-point circuit to offload data
connectivity to us, or develop SD-WAN and multi-cloud solutions to connect to their resources in the cloud. In my opinion, it's just the natural progression of what we set out to do in 2008. And couple that with the security: if you think about what that opens up from a security landscape, now you have multiple clouds, you have different ingress and egress points, and you have different people accessing workloads in each one of these clouds. So the idea, our idea, is that we can layer a comprehensive security solution over this new multi-cloud networking world and then provide visibility and manageability to our customer base. >>So what does that mean specifically for your customers? Because we saw, obviously, a rapid move toward endpoint, cloud security, identity access; people really started rethinking that, as opposed to just trying to build a moat around the castle. What does that mean for your customer? You take care of all that, you partner with whomever you need to partner with in the ecosystem, and then you provide the managed service. How does that work? >>Right, it does, and that's a great analogy. We have a picture of a hamburger in our office, exploded, with all the components, and we say a good security policy is all the pieces. It's really synonymous with what you said. So to answer your question, yes, we have all that baked into the platform. We can offer managed services around it, but we also give the consumer the ability to access that data, whether it's through a UI or an API. >>So, Dante, I know you talk to a lot of customers. All you do is watch the stock market go like this and like that and say okay, the pandemic drove all this. But when you talk to CISOs and customers, a lot of things are changing permanently. First of all, they were forced to march to digital when previously they were like, we'll get there. I mean, let's face it, some customers were serious about it, but many weren't. Now, if you're not a digital
business, you're out of business. What have you seen when you talk to customers in terms of the permanence of some of these changes? What are they telling you? >>Well, I think we go through this ourselves, right? The business continues to grow. You've got tons of people that are working remotely and that are going to continue to work remotely. As much as we'd like to offer up hybrid workspaces and things like that, some folks are like, hey, I've worked it out, I'm doing great from home, right? And also, as Justin was saying, as time has gone on, that operating environment has gotten much more complex. You've got stuff in the data center, stuff on somebody's endpoint, various public clouds, different SaaS services, right? That's why it's been phenomenal to work with Veeam, because we can protect that data regardless of where it exists. But when you start to look at some of the managed security services we're talking about, we're helping those CISOs get better visibility and better control, and take proactive action against the infrastructure when we look at threat mitigation, and how to actually respond when something does happen, right? And I think that's the key, because there's no shortage of great security vendors, right? But how do you tie it all together into a single solution, with a vendor you can actually partner with to help secure the environment while you go focus on the things that are more strategic to the business? >>I was talking to Jim Mercer at Red Hat Summit last week. He's an IDC analyst, and he said: we did a survey, I think it was last summer, and we asked customers, to your point about there being no shortage of security tools, how do you want to buy your security? Do you want best-of-breed bespoke tools that you sort of put together yourself, or do you want your platform provider to do it? Surprisingly, they said platform provider. The problem is that's aspirational for a lot
of platform providers, so they've got to look to a managed service provider. So, Brett, talk about the iland acquisition, what Green Cloud is, and how that all fits together. >>So we acquired iland and Green Cloud last year, and the reality is that the people at both of those companies, and the technology, are what drove us to make those acquisitions. They were the foundational pieces of 11:11. Obviously, the things Justin has been able to create from an automation and innovation perspective at the company are transforming this business in a litany of different ways as well. So those two acquisitions allow us at this point to take a cloud environment with a geographic footprint, not only throughout the US but globally, have a security product that came to us from the Green Cloud acquisition of Cascade, and add on connectivity, allowing us to have all three platforms in one, all three pillars. >>So, I like 11:11. 11:11 is near and dear to my heart. Where did the name come from? >>Everybody asks me this question, I think, five times a day. Growing up as a kid, everyone in my family would always say "11:11, make a wish" whenever you'd see it on the clock. And during COVID, we were coming up with a new name for the business. My daughter looked at the microwave and said, dad, it's 11:11, make a wish. The reality was, though, I had no idea why I'd been doing it all that time, and when you look up the background, origination, and derivation of it, it means the time of day when everything's in line. And when things are complex, especially with running all the different businesses that we have, aligning them so they're working together, it seemed like a perfect name. >>When I had the big corner office at IDC, I had my staff meetings at 11:11,
because the universe was aligned. And then the other thing was, nobody could forget the time. >> So they gave them 11 minutes to be there. >> Now you'll see it all the time, even when you don't want to. So Justin, we've been talking a lot about ransomware, and not just backup but recovery. My friend Fred Moore, who you know, coined the phrase "backup is one thing, recovery is everything." And recovery time, network speeds, and the like are critical, especially when you're thinking cloud. How are you architecting recovery for your clients? Maybe you could dig into that a little bit. >> Sure. So it's really a multitude of things. You know, you mentioned ransomware. Seeing the ransomware landscape evolve over time, especially in our business with backup and DR, it used to be very singular, you know, people protecting against host nodes. Now we're seeing ransomware be able to get into an environment, land and expand, actually delete backups, target backup vendors. So on the ransomware point, I guess, trying to battle that is a multi-step process, right? You need to think about how data flows into the organization from a security perspective, from a networking perspective. You need to think about how your workloads are protected. And then, when you think about backups, and I know we're at VeeamON now, talking about Veeam, there's a multitude of ways to protect that data, whether it's retention, whether it's immutability, air gapping data. So while I know we focus a lot sometimes on protecting data, it's really that hamburger analogy, where the sum of the parts makes up the protection. >> So how do you provide services? I mean, you say, okay, you want immutability, there's a line item for that. Um, you want faster, or, you know, low RPO, fast RTO. How does that all work? As a customer, what am I buying from you? Is it just a managed service, we'll take care of everything, platinum, gold, silver? Or is it, if you don't mind... >> So I'm glad you asked that question, because this is something that's very unique about us. Years ago, his team actually
built the IP, because we were scaling at such an incredible rate globally, through all our joint partners with Veeam, that the question became: how do we take all the intelligence that we have in his team and all of our solution architects, and scale it? So they actually developed a tool called Catalyst, and it's a pre-sales tool. It's an application. You download it, you install it, and it basically takes a snapshot of your environment, and you start to manipulate the data. What are you trying to do, Dave? Are you trying to protect that data? Are you backing up to us? Are you trying to replicate for DR purposes? Um, you know, what are you doing for production, or maybe it's a migration? It analyzes the network, it analyzes all your infrastructure. It helps the SEs know immediately if we're a feasible solution based on what you are trying to do. So nobody in the space is doing this, and that's been a huge key to our growth, because the channel community, as well as the customer, are working with real data. So we can get past all the garbage and get right to what's important for them, for the outcome. >> Yeah, that's huge. Who do you guys sell to? Is it more mid-sized businesses that maybe don't have the large teams? Is it larger enterprises who want a complement to their business? Is it both? >> Well, I would say, with the two acquisitions that we made, the go-to-market sales strategies and the clientele were very different. When you look at Green Cloud, they're selling predominantly wholesale through MSPs, and those MSPs are mostly selling to SMBs, right? So we covered that SMB market for the most part through our acquisition of Green Cloud. iland, on the other hand, was more focused on selling direct, inbound, through VARs, through the channel: mid-enterprise, big enterprise. So really, those two acquisitions, outside of the IP that we got from the systems, give us every single go-to-market sales strategy, and we're aligned from SMB all the way up to the Fortune 500.
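The kind of sizing analysis a pre-sales tool like the Catalyst application described above performs can be sketched with back-of-the-envelope math: can a customer's uplink seed their dataset to the provider in reasonable time, and can it keep up with the daily change rate inside the desired RPO window? The formulas, units, and thresholds below are illustrative assumptions, not details of the actual tool.

```python
# Toy feasibility math in the spirit of a backup/DR pre-sales sizing tool.
# All numbers and formulas here are illustrative assumptions.

def seed_days(data_tb: float, uplink_mbps: float) -> float:
    """Days needed to copy the initial full dataset over the uplink.
    1 TB = 1e12 bytes = 8e6 megabits (decimal units)."""
    seconds = (data_tb * 8e6) / uplink_mbps
    return seconds / 86_400

def rpo_feasible(daily_change_gb: float, uplink_mbps: float,
                 rpo_hours: float) -> bool:
    """True if one RPO window's worth of changed data can be shipped
    within that same window at the given uplink speed."""
    change_per_window_mb = daily_change_gb * 8e3 * (rpo_hours / 24)
    window_seconds = rpo_hours * 3600
    return change_per_window_mb / uplink_mbps <= window_seconds

# 10 TB over a 500 Mb/s link takes roughly two days to seed:
print(round(seed_days(10, 500), 1))
# 200 GB/day of change against a 4-hour RPO is easily sustainable:
print(rpo_feasible(200, 500, 4))
```

A real tool would, as the interview says, also analyze the network and infrastructure directly rather than rely on hand-entered figures; this sketch only shows the shape of the feasibility question.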
>> I heard a stat a couple months ago that less than 50% of enterprises have a SOC. It blew me away. And, you know, even small businesses need one. They may not be able to afford it, but certainly a medium-sized or larger business should have some kind of SOC. Does that stat jibe with what you're seeing in the marketplace? >> A hundred percent. >> If that's true, the need for a managed service like this is just going to explode. >> It is exploding. Yeah, I mean, a hundred percent, right? There is zero unemployment in the cyberspace, right? In North America alone there's about a million or so folks in that space, and right now you've got about 600,000 open reqs just in North America, right? So earlier we talked about no shortage of tools, right? But the shortage of headcount is a significant challenge, big time, right? Most importantly, the people that you do have on staff, they've got alert fatigue from the tools that they do have. That's why you're seeing this massive insurgence in the managed security services provider space. >> Lack of talent is the number one challenge for CISOs. That's what they'll tell you, and there's no end in sight to that. And it's, you know, another tool. And it's amazing, because you see security companies popping up all the time, billion-dollar valuations. I mean, Lacework did a billion-dollar raise. And so there's no shortage of funding. Now, maybe that'll change, you know, with the market. But I wanted to turn our attention to the keynotes this morning. You guys got some serious love up on stage. Um, there was a demo, a pretty, pretty cool demo: fast recovery, very, very tight RPO. As I recall, it was, I think, four minutes of data loss. Is that right? Was that the right stat? I was happy it wasn't zero data loss, because there's really, you know, no such thing. Uh, but so you've got to feel good about that. Tell us about, um, how that all came about, your relationship with Veeam. Who wants to take it? >> Sure, I can take a stab at it. So one of the, or two of the things that I'm, um, most excited about, at least with
this VeeamON, is that our team was able to work with Veeam on that demo, and what that demo was showing was some CDP-based features for cloud providers. So we're really happy to see that, and the reason why we're happy to see that is that with the Veeam platform, it's now given the customers the ability to do things like snapshot replication, CDP replication, on-prem backup, cloud backup, immutability, air gap; the list goes on and on. And in our opinion, having a singular software vendor that can provide all that, you know, with a cloud provider, on-prem or not, is really like the icing on the cake. So for us it's very exciting to see that, and then also coupled with a lot of the innovation that Veeam's doing in the SaaS space, right? So again, having that umbrella product that can cover all those use cases. >> I'll tell you, if you guys can get, that was a very cool demo. If we can get a YouTube of that demo, I'll make sure we put it in the show notes of this video, or maybe pop it into one of the blogs that we write about it. Um, so how do you guys feel? I mean, this is a new chapter for you, very cool, with a couple of acquisitions that are now the mainspring of your strategy. The first VeeamON in a couple years. So what's the vibe been like for you? What's the nighttime activity, the customer interaction? I know you guys are running a lot of the back-end demos, so you're everywhere. What's the vibe like at VeeamON, and how does it feel to be back? >> Look at Dante. >> Yeah, you've got a lot of experience here. >> Yeah, let me loose on this one, Dave. I'm, like, so excited about this, right? It's been, it's been far too long to get face to face again, and, um, Veeam always does it right. And I think that, uh, for years we've been back-ending, like, all the hands-on lab infrastructure here, but forget about that. I think the part that's really exciting is getting face to face with such a great team, right? We have phenomenal architects that we work with at Veeam, day in and day
out. They put up with us pushing them, pushing them and pushing them, and together we've been able to create a lot of magic, right? But I think, you can't replace the human interaction that we've all been starving for for the last two years. But the vibe's always fantastic at Veeam. If you're going to be around tonight, I'll be looking forward to enjoying some of that Veeam love with you at the after party. >> Yeah, the famous after parties. We'll see if that culture continues. I have a feeling it will. Um, Brett, where do you want to take 11:11? A new, new phase in all of your careers. You've got a great crew out here, it looks like. I, I love that you're all out, and, uh, make some noise here, people, let's hear it! All right, this is the biggest audience we've had all week. Where do you want to take 11:11? >> I think, you know, if, uh, if you look at what we've done so far in the short six months since the acquisitions of Green Cloud and iland, obviously the integration is a key piece. We're going to be laser focused on growing organically across those three pillars. We've got to put more capital and resources into the incredible IP, like I said earlier, that Justin and his team have created, on those front ends, the user experience. But, you know, we made two large acquisitions; obviously M&A is a key piece for us. We're going to be diligent, and we're probably going to be very aggressive on that front as well, to be able to grow this business into the global leader of cloud, connectivity, and security. And I think we've really hit a void in the industry that's been looking for this for a very long time, and we want to be the first ones to be able to collaborate and combine those three into one. >> When the cloud started to hit the steep part of the S-curve, kind of the early part of the last decade, people thought, oh wow, these managed service providers are toast. The exact opposite happened. It created such a tailwind and need for consistent services and integration and managed services.
We've seen it all across the stack. So guys, we wish you the best of luck. Congratulations on the acquisitions. >> Thank you. >> Hope to have you back soon. >> Yeah, thank you. We're right around the block. >> All right, keep it right there, everybody. Dave Vellante for theCUBE's coverage of VeeamON 2022. We'll be right back after this short break.

Published Date : May 24 2022


Greg Muscarella, SUSE | Kubecon + Cloudnativecon Europe 2022


 

>> theCUBE presents KubeCon and CloudNativeCon Europe 2022, brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. >> Welcome to Valencia, Spain, and KubeCon + CloudNativeCon Europe 2022. I'm your host, Keith Townsend, alongside a new host, Enrico Signoretti, senior editor. I'm sorry, senior IT analyst at GigaOm. Enrico, welcome to the program. >> Thank you very much. And thank you for having me. It's exciting. >> So, thoughts, high-level thoughts of KubeCon, first time in person again in a couple years? >> Well, this is amazing for several reasons. And one of the reasons is that, yeah, I had the chance to meet, uh, with, uh, you know, people like you again. I mean, we, we met several times over the internet, over zoom calls. I, I started to hate these zoom calls. <laugh> Because they're really impersonal in the end. And like last night, we, we were together, a group of friends, industry folks. It's just amazing. And apart from that, I mean, the event is, uh, is really cool. It's really cool. There are a lot of people, interviews, and, you know, real people doing real stuff, not just, uh, you know, again, impersonal calls, where you don't even know if they're telling the truth. But when you can, you know, look in their eyes at what they're doing, I, I think that makes a difference. >> So, speaking about real people, meeting people for the first time, new jobs, new roles: Greg Muscarella, enterprise container management and general manager at SUSE. Welcome to the show. Welcome back, CUBE alum. >> Thank you very much. It's awesome to be here. It's awesome to be back in person. And I completely agree with you. Like, there's a certain fidelity to the conversation and a certain, uh, ability to get to know people a lot more. So it's absolutely fantastic to be here. >> So Greg, tell us about your new role and what SUSE has going on at KubeCon. >> Sure. So I joined SUSE about three months ago to lead the Rancher business unit, right?
So, our container management pieces. And, you know, it's a, it's a fantastic time, 'cause if you look at the transition from virtual machines to containers and to moving to microservices, right alongside that transition from on-prem to cloud, like, this is a very exciting time to be in this industry. And Rancher has been setting the stage. And again, I'll go back to being here: Rancher's all about the community, right? So this is a very open, independent, uh, community-driven product and project. And so this, this is kind of like being back with our people, right, and being able to reconnect here. And so, you know, doing it digital is great, but, but being here, it changes the game for us. So we, we feed off that community. We feed off the energy. So, uh, and again, going back to the space and what's happening in it, great time to be in this space. And you guys have seen the transitions, you've seen, I mean, we've seen just massive adoption, uh, of containers and Kubernetes overall, and Rancher's been, been right there with some amazing companies doing really interesting things that I'd never thought of before. Uh, so I'm, I'm still learning on this, but, um, but it's been great so far. >> Yeah. And you know, when we talk about strategy, about Kubernetes, today we are talking about very broad strategies. I mean, not just the data center or the cloud, with, you know, maybe smaller organizations adopting Kubernetes in the cloud, but actually large organizations thinking hybrid, and more and more the edge. So what's your opinion on this, you know, expansion of Kubernetes towards the edge? >> So I think you're, I think you're exactly right. And that's actually a lot of the meetings I've been having here right now: these are some of these interesting use cases. So people who, uh, whether it be, you know, ones that are easy to understand in the telco space, right?
Especially with the adoption of 5G: you have all these base stations, new towers, and they have not only the core radio functions or network functions that they're trying to do there, but they have other applications that wanna run on that same environment. Uh, I spoke recently with some of our, our good friends at a major automotive manufacturer, doing things in their factories, right, that can't take the latency of being somewhere else. Right? So they have robots on the factory floor, and the latency that they would experience if they tried to run things in the cloud meant that robot would've moved 10 centimeters by the time, you know, the signal got back. It may not seem like a lot to you, but if, if, if you're an employee, you know, there, you know, uh, a big 2,000-pound robot being 10 centimeters closer to you may not be what you, you really want. Um, there's, there's just a tremendous amount of activity happening out there on the retail side as well. So it's, it's amazing how people are deploying containers in retail outlets, you know, whether it be fast food and predicting, what, what, how many French fries you need to have going at this time of day with this sort of weather, right, so you can make sure those queues are actually moving through. It's, it's, it's really exciting and interesting to look at all the different applications that are happening. So yes, on the edge, for sure; in the public cloud, for sure; in the data center. And what we're finding is people want a common platform across those as well, right? So for the management piece, too, but also for security and for policies around these things. So, uh, it really is going everywhere. >> So talk to me: how do, how are we managing that? As we think about pushing stuff out of the data center, out of the cloud, closer to the edge, security and lifecycle management become, like, top-of-mind thoughts as, as challenges. How are Rancher and SUSE addressing that? >> Yeah. So I, I think you're, again, spot on.
So it's, it starts off with, think of it as simple, but it's, it's not simple. It's the provisioning piece: how do we just get it installed and running? Right? Then, to what you just asked, the management piece of it: everything from your firmware to your operating system, to the, the cluster, uh, the Kubernetes cluster that's running on that, and then the workloads on top of that. So with Rancher, uh, and with the rest of SUSE, we're actually tackling all those parts of the problem, from bare metal on up. Uh, and so we have lots of ways of deploying that operating system. We have operating systems that are, uh, optimized for the edge: very secure, and ephemeral container images that you can build on top of. And then we have Rancher itself, which is not only managing your Kubernetes cluster, but can actually start to manage the operating system components, uh, as well as the workload components. So, all from your single interface. Um, we mentioned policy and security. So we, yeah, we'll probably talk about it more, um, uh, in a little bit, but, but NeuVector, right? So we acquired a company called NeuVector, which we just open sourced, uh, here in January. That ability to run that level of, of security software everywhere, again, is really important. Right? So again, whether I'm running it on whatever my favorite public cloud provider's, uh, managed Kubernetes is, or out at the edge, you still have to have security, you know, in there. And, and you want some consistency across that. If you have to have a different platform for each of your environments, that's just upping the complexity and the opportunity for error. So we really like to eliminate that and simplify our operators' and developers' lives as much as possible.
From this point of view, are you implying that even you, you are matching, you know, self, uh, let's say managed clusters at the, at the very edge now with, with, you know, added security, because these are the two big problems lately, you know, so having something that is autonomous somehow easier to manage, especially if you are deploying hundreds of these that's micro clusters. And on the other hand, you need to know a policy based security that is strong enough to be sure again, if you have these huge robots moving too close to you, because somebody act the, the, the class that is managing them, that is, could be a huge problem. So are you, you know, approaching this kind of problems? I mean, is it, uh, the technology that you are acquired, you know, ready to, to do this? >>Yeah. I, I mean, it, it really is. I mean, there's still a lot of innovation happening. Don't, don't get me wrong. We're gonna see a lot of, a lot more, not just from, from SA and ranch here, but from the community, right. There's a lot happening there, but we've come a long way and we solved a lot of problems. Uh, if I think about, you know, how do you have this distributed environment? Uh, well, some of it comes down to not just, you know, all the different environments, but it's also the applications, you know, with microservices, you have very dynamic environment now just with your application space as well. So when we think about security, we really have to evolve from a fairly static policy where like, you might even be able to set an IP address and a port and some configuration on that. >>It's like, well, your workload's now dynamically moving. So not only do you have to have that security capability, like the ability to like, look at a process or look at a network connection and stop it, you have to have that, uh, manageability, right? You can't expect an operator or someone to like go in and manually configure a YAML file, right? Because things are changing too fast. 
It needs to be that combination of convenient, easy to manage with full function and ability to protect your, your, uh, your resources. And I think that's really one of the key things that new vector really brings is because we have so much intelligence about what's going on there. Like the configuration is pretty high level, and then it just runs, right? So it's used to this dynamic environment. It can actually protect your workloads wherever it's going from pod to pod. Uh, and it's that, that combination, again, that manageability with that high functionality, um, that, that is what's making it so popular. And what brings that security to those edge locations or cloud locations or your data center. >>So one of the challenges you're kind of, uh, touching on is this abstraction on, upon abstraction. When I, I ran my data center, I could put, uh, say this IP address, can't talk to this IP address on this port. Then I got next generation firewalls where I could actually do, uh, some analysis. Where are you seeing the ball moving to when it comes to customers, thinking about all these layers of abstraction IP address doesn't mean anything anymore in cloud native it's yes, I need one, but I'm not, I'm not protecting based on IP address. How are customers approaching security from the name space perspective? >>Well, so it's, you're absolutely right. In fact, even when you go to IPV six, like, I don't even recognize IP addresses anymore. <laugh> yeah. >>That doesn't mean anything like, oh, just a bunch of, yeah. Those are numbers, alpha Ric >>And colons. Right. You know, it's like, I don't even know anymore. Right. So, um, yeah, so it's, it comes back to that, moving from a static, you know, it's the pets versus cattle thing. Right? So this static thing that I can sort of know and, and love and touch and kind of protect to this almost living, breathing thing, which is moving all around, it's a swarm of, you know, pods moving all over the place. 
And so, uh, it, it is, I mean, that's what Kubernetes has done for the workload side of it is like, how do you get away from, from that, that pet to a declarative approach to, you know, identifying your workload and the components of that workload and what it should be doing. And so if we go on the security side some more like, yeah, it's actually not even namespace namespace. >>Isn't good enough if we wanna get, if we wanna get to zero trust, it's like, just cuz you're running in my namespace doesn't mean I trust you. Right. So, and that's one of the really cool things about new vectors because of the, you know, we're looking at protocol level stuff within the network. So it's pod to pod, every single connection we can look at and it's at the protocol layer. So if you say you're on my SQL database and I have a mye request going into it, I can confirm that that's actually a mye protocol being spoken and it's well formed. Right. And I know that this endpoint, you know, which is a, uh, container image or a pod name or some, or a label, even if it's in the same name, space is allowed to talk to and use this protocol to this other pod that's running in my same name space. >>Right. So I can either allow or deny. And if I can, I can look into the content that request and make sure it's well formed. So I'll give you an example is, um, do you guys remember the log four J challenges from not too long ago, right. It was a huge deal. So if I'm doing something that's IP and port based and name space based, so what are my protections? What are my options for something that's got logged four J embedded in like, I either run the risk of it running or I shut it down. Those are my options. Like those neither one of those are very good. So we can do, because again, we're at the protocol layer. It's like, ah, I can identify any log for J protocol. I can look at whether it's well formed, you know, or if it's malicious and it's malicious, I can block it. 
If it's well formed, I can let it go through. So I can actually look at those, those, um, those vulnerabilities. I don't have to take my service down; I can run and still be protected. And so that, that extra level, that ability to kind of peek into things, and also go pod to pod, you know, not just namespace level, is one of the key differences. So, I talk about the evolution, or how we're evolving, with, um, with the security: like, we've grown a lot, and we've got a lot more coming. >> So let's talk about that "a lot more coming." What's in the pipeline for SUSE? >> Well, probably before I get to that, we just announced NeuVector 5, so maybe I can catch us up on what was released last week, uh, and then we can talk a little bit about going, going forward. So NeuVector 5 introduced something called, um, well, several things, but one of the things I can talk about in more detail is something called zero drift. So I've been talking about the network security, but we also have runtime security, right? So any, any container that's running within your environment has processes that are running in that container. What we can do, and it actually comes back to that manageability and configuration, is we can look at the root level of trust of any process that's running, and as long as it has an inheritance, we can let that process run without any extra configuration. If it doesn't have a root level of trust, like, it didn't spawn from whatever the, uh, init, um, function was in that container, we're not gonna let it run. Uh, so the, the configuration that you have to put in there is, is a lot simpler. Um, so that's something that's in, in NeuVector 5. Um, the web application firewall, so this layer 7 security inspection, has gotten a lot more granular now. So it's that pod-to-pod security, um, both for ingress, egress, and internal on the cluster, right? >> So before we get to what's in the pipeline, one question around NeuVector: how is that consumed and deployed?
>> How is NeuVector consumed and deployed? >> Yeah. >> Yeah, yeah. So, uh, again, with NeuVector 5, and, and also Rancher 2.6.5, which were just released, there's actually some nice integration between them. So if I'm a Rancher customer and I'm using 2.6.5, I can actually deploy NeuVector with a couple clicks of a button in our, uh, in our marketplace. And we're actually tied into our role-based access control. So an administrator who has, who has the rights can just click; they're now in a NeuVector interface, and they can start setting those policies and deploying those things out very easily. Of course, if you aren't using, uh, Rancher, you're using some other, uh, container management platform, NeuVector still works awesome. You can deploy it there, still in a few clicks. Um, you're just gonna, you have to log into your NeuVector, uh, interface and, and use it from there. So that's how it's deployed. It's, it's very, it's very simple to use. Um, I think what's actually really exciting about that, too, is we've open sourced it. Um, so it's available for anyone to go download and try, and I would encourage people to give it a go. Uh, and I think there are some compelling reasons to do that now, right? So we have pod security policies, you know, deprecated and going away, um, pretty soon in, in Kubernetes. And so there's a few things you might look at to make sure you're still able to run a secure environment within Kubernetes. So I think it's a great time to look at what's coming next, uh, for your security within your Kubernetes. >> So Greg, we appreciate you stopping by. From Valencia, Spain, I'm Keith Townsend, along with Enrico Signoretti. Thank you, and you're watching theCUBE, the leader in high tech coverage.
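One concrete note on the pod security policy deprecation mentioned at the close of the interview: PodSecurityPolicy was deprecated in Kubernetes 1.21 and slated for removal in 1.25, and the in-tree successor is Pod Security Admission, which enforces the Pod Security Standards per namespace through labels. A minimal example of the successor mechanism (the namespace name is illustrative):

```yaml
# Pod Security Admission, the built-in successor to the deprecated
# PodSecurityPolicy: enforce the "restricted" profile on one namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: payments            # illustrative namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
    pod-security.kubernetes.io/warn: restricted
```

Pod Security Admission covers pod-spec hardening only; network and runtime enforcement of the kind discussed in this interview still comes from separate tooling.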

Published Date : May 19 2022


Greg Muscarella, SUSE | Kubecon + Cloudnativecon Europe 2022


 

>>theCUBE presents KubeCon and CloudNativeCon Europe 22, brought to you by the Cloud Native Computing Foundation. >>Welcome to Valencia, Spain, and KubeCon and CloudNativeCon Europe 2022. I'm your host, Keith Townsend, alongside a new host, Enrico Signoretti, senior editor... I'm sorry, senior IT analyst at GigaOm. Enrico, welcome to the program. >>Thank you very much. And thank you for having me. It's exciting. >>So thoughts, high-level thoughts of KubeCon, first time in person again in a couple of years? >>Well, this is amazing for several reasons. And one of the reasons is that, yeah, I had the chance to meet, uh, with, uh, you know, people like you again. I mean, we met several times over the internet, over Zoom calls. I started to hate these Zoom calls <laugh> because they're very impersonal in the end. And like last night, we were together, a group of friends, industry folks. It's just amazing. And apart from that, I mean, the event is really cool. There are a lot of in-person interviews and, you know, real people doing real stuff, not just, uh, you know, again, impersonal calls where you don't even know if they're telling the truth. But when you can, you know, look in their eyes and see what they're doing, I think that makes a difference. >>So speaking about real people, meeting people for the first time, new jobs, new roles. Greg Muscarella, general manager of enterprise container management at SUSE, welcome to the show, welcome back, theCUBE alum. >>Thank you very much. It's awesome to be here. It's awesome to be back in person. And I completely agree with you. Like, there's a certain fidelity to the conversation and a certain, uh, ability to get to know people a lot more. So it's absolutely fantastic to be here. >>So Greg, tell us about your new role and what SUSE has going on at KubeCon. >>Sure. So I joined SUSE about three months ago to lead the Rancher business unit, right?
So our container management pieces, and, you know, it's a fantastic time. 'Cause if you look at the transition from virtual machines to containers, and to moving to microservices, right alongside that transition from on-prem to cloud, like, this is a very exciting time to be in this industry, and Rancher's been setting the stage. And again, I go back to being here: Rancher's all about the community, right? So this is a very open, independent, uh, community-driven product and project. And so this is kind of like being back with our people, right, and being able to reconnect here. And so, you know, doing it digital is great, but being here changes the game for us. We feed off that community. We feed off the energy. So, uh, and again, going back to the space and what's happening in it, great time to be in this space. And you guys have seen the transitions. I mean, we've seen just massive adoption, uh, of containers and Kubernetes overall, and Rancher has been right there with some amazing companies doing really interesting things that I'd never thought of before. Uh, so I'm still learning on this, but, um, it's been great so far. >>Yeah. And you know, when we talk about strategy, about Kubernetes today, we are talking about very broad strategies. I mean, not just the data center or the cloud with, you know, maybe smaller organizations adopting Kubernetes in the cloud, but actually large organizations thinking beyond that, and more and more, the edge. So what's your opinion on this, you know, expansion of Kubernetes towards the edge? >>So I think you're exactly right. And that's actually a lot of the meetings I've been having here right now: these are some of these interesting use cases. So whether it be, you know, ones that are easy to understand in the telco space, right?
Especially with the adoption of 5G, you have all these base stations, new towers, and they have not only the core radio functions or network functions that they're trying to run there, but they have other applications that want to run in that same environment. Uh, I spoke recently with some of our good friends at a major automotive manufacturer, doing things in their factories, right, that can't take the latency of being somewhere else. Right? So they have robots on the factory floor, and the latency that they would experience if they tried to run things in the cloud meant that robot would've moved 10 centimeters by the time, you know, the signal got back. It may not seem like a lot to you, but if you're an employee, you know, there, uh, a big 2000 pound robot being 10 centimeters closer to you may not be what you really want. Um, there's just a tremendous amount of activity happening out there on the retail side as well. So it's amazing how people are deploying containers in retail outlets, you know, whether it be fast food and predicting how many French fries you need to have going at this time of day with this sort of weather, right, so you can make sure those queues are actually moving through. It's really exciting and interesting to look at all the different applications that are happening. So yes, on the edge for sure, in the public cloud for sure, in the data center, and what we're finding is people want a common platform across those as well. Right? So for the management piece, but also for security and for policies around these things. So, uh, it really is going everywhere. >>So talk to me, how are we managing that? As we think about pushing stuff out of the data center, out of the cloud, closer to the edge, security and life cycle management become top-of-mind challenges. How is Rancher and SUSE addressing that? >>Yeah. So I think you're, again, spot on.
So it starts off with, think of it as simple, but it's not simple: the provisioning piece. How do we just get it installed and running? Right, then, to what you just asked, the management piece of it: everything from your firmware to your operating system, to the cluster, uh, the Kubernetes cluster that's running on that, and then the workloads on top of that. So with Rancher, uh, and with the rest of SUSE, we're actually tackling all those parts of the problem, from bare metal on up. Uh, and so we have lots of ways of deploying that operating system. We have operating systems that are, uh, optimized for the edge, very secure, and ephemeral container images that you can build on top of. And then we have Rancher itself, which is not only managing your Kubernetes cluster but can actually start to manage the operating system components, uh, as well as the workload components. So, all from your single interface. Um, we mentioned policy and security. So yeah, we'll probably talk about it more, um, uh, in a little bit, but NeuVector, right? So we acquired a company called NeuVector and just open sourced that here in January. That ability to run that level of security software everywhere, again, is really important. Right? So again, whether I'm running it on whatever my favorite public cloud provider's managed Kubernetes is, or out at the edge, you still have to have security, you know, in there. And you want some consistency across that. If you have to have a different platform for each of your environments, that's just upping the complexity and the opportunity for error. So we really like to eliminate that and simplify our operators' and developers' lives as much as possible. >>Yeah.
From this point of view, are you implying that you are now managing, you know, let's say self-managed clusters at the very edge, with, you know, added security? Because these are the two big problems lately: having something that is autonomous, somehow easier to manage, especially if you are deploying hundreds of these micro clusters; and on the other hand, you need policy-based security that is strong enough to be sure, again, if you have these huge robots moving too close to you because somebody hacked the cluster that is managing them, that could be a huge problem. So are you, you know, approaching these kinds of problems? I mean, is the technology that you acquired, you know, ready to do this? >>Yeah, it really is. I mean, there's still a lot of innovation happening, don't get me wrong. We're going to see a lot more, not just from SUSE and Rancher, but from the community, right? There's a lot happening there, but we've come a long way and we've solved a lot of problems. Uh, if I think about, you know, how do you have this distributed environment? Uh, well, some of it comes down to not just, you know, all the different environments, but it's also the applications. You know, with microservices, you have a very dynamic environment now just within your application space as well. So when we think about security, we really have to evolve from a fairly static policy, where you might even be able to set an IP address and a port and some configuration on that, to: well, your workload's now dynamically moving.
It needs to be that combination of convenient, easy to manage with full function and ability to protect your, your, uh, your resources. And I think that's really one of the key things that new vector really brings is because we have so much intelligence about what's going on there. Like the configuration is pretty high level, and then it just runs, right? So it's used to this dynamic environment. It can actually protect your workloads wherever it's going from pod to pod. Uh, and it's that, that combination, again, that manageability with that high functionality, um, that, that is what's making it so popular. And what brings that security to those edge locations or cloud locations or your data center >>Mm-hmm <affirmative> so one of the challenges you're kind of, uh, touching on is this abstraction on upon abstraction. When I, I ran my data center, I could put, uh, say this IP address, can't talk to this IP address on this port. Then I got next generation firewalls where I could actually do, uh, some analysis. Where are you seeing the ball moving to when it comes to customers, thinking about all these layers of abstraction I IP address doesn't mean anything anymore in cloud native it's yes, I need one, but I'm not, I'm not protecting based on IP address. How are customers approaching security from the name space perspective? >>Well, so it's, you're absolutely right. In fact, even when you go to I P six, like, I don't even recognize IP addresses anymore. <laugh> >>Yeah. Doesn't mean anything like, oh, just a bunch of, yes, those are numbers, ER, >>And colons. Right. You know, it's like, I don't even know anymore. Right. So, um, yeah, so it's, it comes back to that, moving from a static, you know, it's the pets versus cattle thing. Right? So this static thing that I can sort of know and, and love and touch and kind of protect to this almost living, breathing thing, which is moving all around, it's a swarm of, you know, pods moving all over the place. 
And so, uh, it is, I mean, that's what Kubernetes has done for the workload side of it. It's like, how do you get away from that pet to a declarative approach to, you know, identifying your workload and the components of that workload and what it should be doing? And so if we go on the security side some more, like, yeah, actually even namespace isn't good enough. If we want to get to zero trust, it's like, just because you're running in my namespace doesn't mean I trust you. Right? And that's one of the really cool things about NeuVector, because, you know, we're looking at protocol-level stuff within the network. So it's pod to pod; every single connection we can look at, and it's at the protocol layer. So if you say you're my database and I have a MySQL request going into it, I can confirm that that's actually the MySQL protocol being spoken and that it's well formed. Right? And I know that this endpoint, you know, which is a, uh, container image or a pod name or a label, even if it's in the same namespace, is allowed to talk to and use this protocol to this other pod that's running in my same namespace. Right? So I can either allow or deny. And if I allow, I can look into the content of that request and make sure it's well formed. So I'll give you an example: um, do you guys remember the Log4j challenges from not too long ago? Right, it was a huge deal. So if I'm doing something that's IP- and port-based and namespace-based, what are my protections? What are my options for something that's got Log4j embedded in it? I either run the risk of it running, or I shut it down. Those are my options, and neither one of those is very good. So what we can do, because again we're at the protocol layer, is like, ah, I can identify any Log4j traffic. I can look at whether it's well formed, you know, or if it's malicious. If it's malicious, I can block it.
If it's well formed, I can let it go through. So I can actually look at those, um, those vulnerabilities. I don't have to take my service down; I can run and still be protected. And so that extra level, that ability to kind of peek into things and also go pod to pod, you know, not just namespace level, is one of the key differences. So I talk about the evolution, or how we're evolving, with, um, with the security. Like, we've grown a lot, and we've got a lot more coming. >>So let's talk about that. A lot more coming: what's in the pipeline for SUSE? >>Well, before I get to that, we just announced NeuVector 5, so maybe I can catch us up on what was released last week, and then we can talk a little bit about going forward. So NeuVector 5 introduced something called, um, well, several things, but one of the things I can talk about in more detail is something called zero drift. So I've been talking about the network security, but we also have runtime security, right? So any container that's running within your environment has processes that are running in that container. What we can do, and it comes back to that manageability and configuration, is look at the root level of trust of any process that's running. And as long as it has an inheritance, we can let that process run without any extra configuration. If it doesn't have a root level of trust, like it didn't spawn from whatever the, uh, init function was in that container, we're not going to let it run. Uh, so the configuration that you have to put in there is a lot simpler. Um, so that's something that's in NeuVector 5. Um, also the web application firewall: this layer seven security inspection has gotten a lot more granular now. So it's that pod-to-pod security, um, both for ingress, egress, and internal traffic on the cluster. Right. >>So before we get to what's in the pipeline, one question around NeuVector: how is that consumed and deployed?
>>How is NeuVector consumed... >>Deployed? And yeah... >>Yeah, yeah. So, uh, again, with NeuVector 5, and also Rancher 2.6.5, which was just released, there's actually some nice integration between them. So if I'm a Rancher customer and I'm using 2.6.5, I can actually just deploy NeuVector with a couple clicks of a button in our, uh, in our marketplace. And we're actually tied into our role-based access control. So an administrator who has the rights can just click; they're now in the NeuVector interface, and they can start setting those policies and deploying those things out very easily. Of course, if you aren't using, uh, Rancher, you're using some other, uh, container management platform, NeuVector still works awesome. You can deploy it there, still in a few clicks. Um, you're just going to have to log into your NeuVector, uh, interface and use it from there. So that's how it's deployed. It's very simple to use. Um, I think what's actually really exciting about that too is we've open sourced it. Um, so it's available for anyone to go download and try, and I would encourage people to give it a go. Uh, and I think there are some compelling reasons to do that now. Right? So we have pod security policies, you know, deprecated and going away, um, pretty soon in Kubernetes. And so there are a few things you might look at to make sure you're still able to run a secure environment within Kubernetes. So I think it's a great time to look at what's coming next, uh, for your security within your Kubernetes. >>So, Greg, we appreciate you stopping by. From Valencia, Spain, I'm Keith Townsend, along with Enrico Signoretti. Thank you, and you're watching theCUBE, the leader in high tech coverage.
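The protocol-level, zero-trust filtering Greg describes in this interview, pod-to-pod rules checked at the protocol layer, with well-formed traffic allowed through and malformed or undeclared traffic blocked, can be sketched roughly like this. The rule format and the well-formedness check are invented for illustration; this is not NeuVector's actual engine:

```python
# Illustrative sketch of protocol-aware pod-to-pod filtering (invented for
# illustration; not NeuVector's engine). Rules name workloads by label, not
# IP, and a declared connection must also carry a well-formed payload.

RULES = {
    # (source workload label, destination workload label): allowed protocol
    ("web", "db"): "mysql",
}

def well_formed_mysql(payload: bytes) -> bool:
    # Toy stand-in for real protocol parsing: pretend a valid request
    # begins with a known command byte.
    return payload.startswith(b"\x03")

def allow(src: str, dst: str, protocol: str, payload: bytes) -> bool:
    if RULES.get((src, dst)) != protocol:
        return False                      # undeclared pair or protocol: deny
    return well_formed_mysql(payload)     # declared, but payload must parse

assert allow("web", "db", "mysql", b"\x03SELECT 1")       # well-formed: allow
assert not allow("web", "db", "mysql", b"${jndi:ldap:")   # malformed: block
assert not allow("web", "cache", "redis", b"PING")        # undeclared: block
```

The point of the Log4j example above is the middle case: a connection can be declared and still be denied when its payload does not parse as the declared protocol.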

Published Date : May 18 2022


Akanksha Mehrotra, Dell Technologies | Dell Technologies World 2021


 

(upbeat music) >> Welcome back to DTW 2021, theCUBE's continuous coverage of Dell Technologies World, the virtual version. My name is Dave Vellante, and for years we've been looking forward to the day that the on-premises experience was substantially similar to that offered in the public cloud. And one of the biggest gaps has been subscription-based experiences: pricing and simplicity and transparency with agility and scalability; not buying and installing a box, but rather consuming an outcome-based service to support my IT infrastructure needs. And with me to talk about how Dell is delivering on this vision is Akanksha Mehrotra, Vice President of Marketing for APEX at Dell Technologies. Welcome Akanksha, great to see you. >> Thank you, thanks for having me. >> It's our pleasure. So we're going to dig into APEX. We know that Dell has been delivering cloud-based solutions for a long time now, but it seems like there's a convergence happening in all these areas, and it's generating a lot of talk in the industry. What are your customers asking you to deliver, and how is Dell responding? >> Yeah, there's a few trends that we're seeing, and they've been in place for a while, but they have accelerated certainly over the past year. The first one is organizations all over the world want to become more digital in order to modernize their operations and foster innovation on behalf of their customers, and they've been striving for years to use digital transformation to do so. That in and of itself isn't necessarily new, but the relative complexity of driving digital transformation, for example when you're bringing on a predominantly or entirely remote workforce, as well as the relative pace of change, for example the remarkable spike in the consumption of digital content we've validated over the past year; because of that, the need for agility has gone up. The other trend that we see is that there's a clear preference for a hybrid cloud approach.
Customers tell us that they need on-prem cloud resources to help mitigate risk for applications that need dedicated, fast performance, as well as, you know, in order to contain costs. But then they also tell us that public cloud is here to stay for the increased agility that it provides, the simplified operations, as well as the faster access to innovation. And so what's really clear is that both private cloud and public cloud have their strengths, and picking one, you're inevitably trading off the benefits of the other. And so organizations want the flexibility to be able to choose the right path that best meets their business objectives, and IT as a service, delivered at the location of your choice, is one way to do that. As you know, we talk a lot to analysts like yourself, and they tend to agree with us. IDC predicts that by 2024 much of data center infrastructure is going to be consumed as a service. At Dell Technologies, we're beginning to see this shift happen already. As you said, we've been providing flexible consumption and as-a-service solutions for well over a decade. However, what's different now is that we're radically simplifying that entire technology experience to deliver this at scale to our entire install base, and that's what APEX is all about. >> Great, thank you. So I know Dell is very proud of the, I think I got this ratio right, do-to-say ratio, right? The numerator's bigger than the denominator. And you've got a good track record in this regard. You announced Project APEX in October, and you provided a preview then of what was coming, and today you're fully unveiling APEX: no more "project," just APEX. What's APEX all about, and what customer benefits specifically does APEX deliver? >> Yeah, so you're right. We announced this as a vision back in October, and now it's generally available, so we're taking away the "project" and you can refer to it as APEX going forward.
APEX represents our portfolio of as-a-service offerings. These help simplify digital transformation for our customers by increasing their IT agility and control. We believe it's a solution that helps bridge the divide between public and private cloud by delivering as a service, wherever it's needed, to help organizations meet the needs of their digital transformation agenda. In terms of customer benefits, we've centered around three areas, and they are simplicity, agility, and control: the key benefits that APEX is going to provide to our customers. So let me unpack these one by one and demonstrate how we're going to deliver on these promises. Let's start with simplicity. APEX represents a fundamental shift in the way that we deliver our technology portfolio, and obviously we do this to simplify IT for our customers. Our goal is to remove complexity from every stage of the customer journey. So for example, with the APEX offers that I'll get into in a bit, we take away that complexity, the pain, and frankly the undifferentiated work of managing infrastructure, so that organizations can focus on what they do best, right? Adding value to their organizations. Another way in which we simplify is streamlining the procurement process. So we allow customers to just specify a simple set of outcomes that they're looking for and subscribe to a service using an easy web-based console, and then we'll take it from there. We will pick the technology and services that best deliver on that set of outcomes, and then we'll deliver it for them. So as a result, organizations can take advantage of the technology that best meets their needs, but without all the complexity of life cycle management, whether it's at the beginning or at the end, you know, the decommissioning part of the life cycle. Next, let's talk about agility.
This is an area that's been top of mind for our customers, as I said, certainly over the past year, and frankly it's been one of the main driving factors of the as-a-service revolution. Again, with APEX we aim to deliver agility at every stage of the customer journey. So for example, with APEX our goal is to get customers started on projects faster than they ever have before within their data center. We target a 14-day time to value, from order to activation, or from subscription to activation, within the location of their choice. Another driver for agility is having access to technology when you need it, without costly overprovisioning. So with APEX, you can dynamically scale your resources up and down based on changing business requirements. And then the third driver of agility, and this is a serious one, is forecasting costs and containing them. And with APEX, our promise is that you're paying for technology only as it's used, using a clear, consistent, and transparent rate. So you're never guessing what you're going to pay, there are no overage charges, and you're not paying to access your own data. And then finally, from a control standpoint: often business and IT leaders are forced to make difficult trade-offs between the simplicity and flexibility they want and the control, performance, and data locality that perhaps they need. APEX will help bridge this divide, so we're not going to make them make this kind of false trade-off. It'll enable organizations to take control of their operations, from where resources are located, to how they are run, to who can access them.
So for example, by dictating where they want to run their resources, in a colo, or at the edge, or within their data center, you know, IT teams can take charge of their compliance obligations and simplify them. By using role-based permissions to limit access, IT organizations can choose who can access certain functionality for configuring APEX services, and thereby reduce risk and simplify those security obligations. So those are some examples of, you know, how we deliver simplicity, agility, and control to our customers with APEX. >> You know, I'll give you a little aside here if I may. You know, you said the trade-offs, and I've been working on this scenario of how we're going to come back from the pandemic. And you're seeing this hybrid approach where organizations are having to fund their digital transformation, they're having to support a hybrid workforce, and their headquarters investments, their traditional data center investments, have been neglected. And the other thing is there's very clearly a skills gap, a shortage of talent. So to the extent that you have something like APEX, where I don't have to be provisioning LUNs and spending all that time both waiting and provisioning and tuning, that allows me to free up talent and really deliver on some of those problematic areas that are forcing me today to do a trade-off. So I think that really resonates with me, Akanksha. >> You're exactly right. Rather than refactoring applications, learning new skill sets, hiring new people, if the part that resonates with you is that agility and simplicity, you know, why not have it where it makes sense with the skill sets you have? >> So APEX is a new way of thinking, certainly for Dell, in terms of how you deliver and the way customers consume. Can you be specific on some of the offerings that we can expect from DTW this year? >> Yes, we've got a variety of announcements, so let me talk about those. Let's start with the APEX Console.
This is a unified experience for the entire APEX journey. It provides self-service access to our catalog of APEX services. As I mentioned, customers simply select the outcomes that they're looking for and subscribe to the technology services that best meet their needs, and then we'll take it from there. From a day-two operations standpoint, the console will also give customers insight and oversight into other aspects of the APEX experience. For example, they can limit access to functionality by role. They can view their subscriptions and then modify them. They can engage in provisioning-type tasks. They can see costs transparently, review billing and payment information each month, and use it for things like showback or chargeback to, you know, various business units within their organization. Over time, we will also be integrating the console with common procurement and provisioning systems so that they can further streamline approval workflows, as well as publishing APIs for further integration by developers at the customer site. So net-net, the console will be the single place to procure, operate, and monitor APEX services, and we think it's going to become an important way for our customers, as well as our partners, to interact with Dell Technologies going forward. >> Yes, please, no, carry on, thanks. >> The next announcement is APEX Data Storage Services. This one is the first in a series of outcome-based turnkey services in the APEX portfolio. In the end, this essentially delivers storage resources to customers at the location that they prefer. When subscribing to this, there are four parameters that customers need to think about: what type of data service they're looking for, file, block, and soon it'll be object.
What performance tier the applications that the customer is going to run on these resources need, which can be at three levels; what base capacity they want, where they can start at 50 terabytes; and the time length that they're looking for, the subscription length. We also announced a partnership with Equinix. So if a customer wants, they can deploy these resources at Equinix's data centers all around the world and still get a unified bill from us, and that's it. Once they make those four selections, they subscribe to the service and we take it from there. There's no selecting what product you want, what configuration on that product, etc, etc. You know, we take care of all of that, include the right services, and then kind of deliver it to them. So it's really an outcome-based way of procuring technology, as easily as you would provision resources in a public cloud. >> Awesome, so again console, data storage, cloud services, which are key... >> Next up, the cloud services. >> And then the partner piece with Equinix for latency and proximity, speed of light type stuff, okay, cool. >> Exactly. Cloud services, very quickly, are integrated solutions to help simplify cloud adoption, and they support both cloud native as well as traditional workloads. Customers can subscribe either to a private cloud offer or a hybrid cloud offer, depending on the level of control that they're looking for and the operational consistency that they need. And again, similar to storage services, they pick from kind of four simple steps and we'll deliver it to them within 14 days. And then finally, we've got something called custom solutions. These are for customers who are looking for a more flexible as-a-service environment. They're available right now in over 30 countries, also available to our partner network.
They come in two flavors. APEX Flex On Demand takes anything within our broad infrastructure portfolio, servers, storage, data protection, you name it, and we can turn that into a pay-per-use environment. You can also select what services you'd like to include. So if a customer wants it managed, we can manage it for them. If they don't want it managed, again, you know, we include it without those services. And essentially they can configure their own as-a-service experience. And the Data Center Utility takes it to the next level and offers even more customization in terms of customer commitment options, etc, etc. So that's kind of a quick summary of the announcements in the APEX portfolio. >> Okay, I think I got it. Five buckets: the console, which gives you that full life cycle, that self-service; the storage piece; the cloud services; the Equinix partnership and the partners, that's a whole nother conversation; and then the custom piece if you really want to customize it for your... >> And storage services. >> All right, good, okay, you guys have been busy. So you announced Project APEX last fall, and so I presume you've been out talking to customers about this, prototyping it, testing it out. Maybe you could share some examples of customers who've tried it out, and what the feedback has been, and the use cases. >> Yeah, let me give you a couple of examples. We'll start with APEX Data Storage Services. As I said, this one's going generally available now. At Dell we believe in drinking our own champagne, so our own IT team has been engaged in a private beta of this service for the past several months, and their feedback has helped shape the offer. The feedback that they've given us is that they really like the simple life cycle management. You know, they tell us that it frees up their folks to do a lot of other things, kind of higher-order tasks if you will, versus managing the infrastructure.
They're seeing greater efficiencies in capacity and performance management; they like not having to worry about building a capacity pipeline. And they like being able to build on a chargeback process that will allow them to bill internal users based on what's being used. And so they think it's going to be a game changer for them. And, you know, that's the feedback that they've given us, and of course they've given us lots of feedback that we've also put into building the product itself. In short, they really liked the flexibility of it. Let me give you maybe a customer example and then a partner example as well. APEX Cloud Services. This is one where more and more customers are realizing that for compliance, regulatory or performance reasons, maybe public cloud doesn't really work for them, and so they've been looking for ways to get that experience within their data center. APEX Hybrid Cloud enables this; using it as a foundation, customers are quickly able to extend workloads like VDI into these different environments. A global technology consulting firm wanted to focus on their business of providing consulting services versus, you know, managing their infrastructure. What they also really liked was the pay-per-use model and the ability to scale up without having to engage in kind of renegotiating terms. They also appreciated and liked the cost transparency that we provided; their feedback to us was that it was sort of unmatched with other solutions that they'd seen. And they liked the cost-containment benefits, because it gives them much more control over their budget. And then from a partner standpoint, APEX custom solutions, as I said, are available in over 30 countries today through our vast partner network. We've got a series of lucrative partner options for them. A recent win that we saw in this space was with a healthcare provider. This particular healthcare provider was constantly challenging their IT team to improve service delivery.
They wanted to onboard customers faster and drive services deployment while ensuring the compliance of their healthcare data; as I'm sure you know, there are, you know, some strict requirements in this space. With Flex On Demand they were able to dramatically cut that onboarding time from months to days, and they were able to be just as agile while simplifying their compliance with industry regulations for data privacy and sovereignty. And so their feedback was that they were able to be just as agile and just as cost-effective as a cloud solution, but without the concerns over data residency. So those are a few use cases and real customer examples of customers that have tried out these services. >> Awesome, thanks for that. And the real transformation for the partners as well. I think actually if partners lean in they can make a lot of money doing this. >> It means so much more in profitability. >> Yeah, well, hey, that's what the channel cares about. I mean, it's different from the past of selling boxes. That was, okay, I know you've got my margin there, but this I think actually has huge opportunities to get deeper into the customer, add value in so many other different ways; the channel is undergoing tremendous transformation. I have to ask you, so you have flexible consumption, you've had that for a number of years. I think the first time I saw these types of models emerge was like the late '90s or early 2000s. So can you explain how APEX differs from your past as-a-service offerings? And I've got another sort of second part of the question after that. >> Yeah, you're right. We've offered these solutions for a while, and very successfully so, I should add; certainly over the past year our business has seen tremendous momentum, and if you listen to our earnings you've probably heard that. What's different here is that, think of it this way: APEX is a superset of that. So we've been doing that.
We're going to continue doing that; what I talked about in APEX custom solutions is what we've been delivering for a while, and of course we continue to improve it as we get customer feedback on it. What we're doing here on the turnkey side is that we're taking not a product-based, not even a service-based, but really an outcome-based approach. What's different there, and what I mean by that, is we're truly looking to bypass complexity throughout the entire technology life cycle. We're truly looking to figure out where we can remove a significant amount of time and effort from IT teams by delivering them an offer that's simple from the get-go. Each of these offers has been designed from the ground up to provide not just the innovative technology that our customers have known us for forever, but to do so with greater simplicity, to deliver greater agility, while still retaining the control that we know our customers want. That is what is different. And by doing that, by making this consistently available in a very simple way, we believe we can scale that experience. That, along with our services, our scale, our supply chain leadership that we've had for a while, built on our industry-leading portfolio, the broadest in the industry, then delivering that with unmatched time to value at whatever location the customer is looking for; by doing these three things we believe we're combining the agility that our customers want with the control that they need, putting it all together in the simplest way possible, and delivering it with our partners. So I think that's what's different with what we're doing now, and frankly that's also our commitment going forward.
So you can imagine, today I talked to you about our cloud solutions, our infrastructure solutions, but imagine going forward all of our solutions, server, storage, data protection, workload, end user devices, telecom solutions, Edge solutions, gaming devices, all of them delivered in this way. And, you know, only in the way that Dell Technologies and our partner community can. >> When I hear you say outcome-based, a lot of people may say, well, what's that? I'll tell you what I think it is. The outcome I want is I want my IT to be fast, I want it to be reliable, I want it to be at a fair price. I don't want to run out of storage, for example, and if I need more, I want it fast and I want it simple. I mean, that's the outcome that I want. Is that what you mean by outcome-based? >> Absolutely, those are exactly the types of, you know, it's a combination, like you've said, of business as well as technology outcomes that we're targeting. Availability, uptime, performance, you know, time to value; those are exactly the types of outcomes that we're targeting with these offers, and that's what our services are designed from the ground up to deliver. >> Okay, last question, the second part of my other question. I mean, essentially you've got the cloud model, you're bringing that to on-prem, and you've got other on-prem competitors. What's different with Dell from the competition? >> Yeah, so I would say from a competitive standpoint, as you've said, we certainly have a series of competitors in the on-prem space, and then we've got another set of competitors in the cloud space. And what we are truly trying to do is, you know, bring the best of that experience to wherever our customers want to deploy these resources. From an on-prem standpoint, I think our differentiation always has been and will continue to be the breadth of our portfolio.
You know, the technology that we provide, and bringing this APEX experience in a very simple and consistent way across that entire breadth of products. The other differentiation that I believe we have is frankly our pricing model, right? You mentioned it a few times; I talked a little bit about it earlier as well. If I use storage as an example, we are not going to charge you a penalty if you need to scale up and down. We understand and realize that businesses, you know, need to have that flexibility to be able to go up and down, and having a simple, clear, consistent rate that they understand very clearly upfront, that they have visibility into, that, you know, charges them in a fair way, is another point of differentiation. So not having that kind of surge pricing, if you will. And then finally, the third difference is our services, our scale, our supply chain leadership, and then just our say-do ratio, right? When we say something, we're going to do it, and we're going to deliver it. From a cloud player standpoint, it's really interesting. You know, I talk about this trade-off that our customers often have to make: you have to give up control to get this simplicity and agility, and we're not going to make you do that, right? As an IT team, you've got full control of that infrastructure while still getting the benefits of the agility and the simplicity that today you often have to go to public cloud for. Again, from a pricing standpoint, the other differentiation that we have is you're not going to be paying to access your own data. You pay a clear rate and it stays consistent; there are no egress or ingress charges. There's no retraining of your sales force. There's no refactoring of the application to move it there. There are all these kind of unspoken costs that go into moving an application into public cloud that you're not going to see with us.
And then finally, from a performance standpoint, we do believe that the performance of the APEX solutions is significantly better. You know, just the fact that you've got dedicated infrastructure, so you're not running into issues with noisy neighbors, for example, as well as just the underlying quality of the technology that we deliver. I mean, there's the experience that we've had, not just in this space, but in delivering to, you know, hundreds of thousands of customers in hundreds of thousands of locations. Others are very good at optimizing a few locations for hundreds of thousands of customers, but we've been delivering this experience for years, across the world, across hundreds of thousands of data centers, and the expertise that our services, our supply chain, and in fact our product teams have built out I think will serve us well. >> Great, a lot of depth there Akanksha, thanks so much. And congratulations for giving birth, formally, to APEX, and best of luck. Really appreciate you coming on theCUBE and sharing. >> Thanks Dave, thank you for having me. >> It was really our pleasure. And thank you for watching, everybody. This is theCUBE's ongoing coverage of Dell Tech World 2021, we'll be right back. (upbeat music)

Published Date : May 5 2021



Mike Bilodeau, Kong Inc. | AWS Startup Showcase: Innovation with CloudData & CloudOps


 

>> Well, good day and welcome back to theCUBE as we continue our segment featuring the AWS Startup Showcase. We're here now with Mike Bilodeau, who's in corporate development and operations at Kong. Mike, thank you for joining us here on theCUBE, and particularly on the Startup Showcase. Nice to have you and Kong represented here today. Thanks for having me, John. Great to be here. You bet. All right, first off, just tell us about Kong a little bit, and Kong Konnect, which I know is your featured program, or, um, service. I love the name, by the way. But tell us a little bit about Kong and then what Konnect is all about too. Sure. So Kong as a company really came about in the past five years. Our two co-founders came over from Italy in the late aughts, early 20-teens, and had a company called Mashape. >> And so what they were looking at, and what they were betting on at that time, was that APIs were going to be the future of how software was built and how developers interacted with software. And what came from that was, well, they were running Mashape as a marketplace at the time, connecting developers with APIs so they could consume them and use them to build new software. And what they found was that actually the most valuable piece of technology that they had created was the backbone for running that marketplace. And that backbone is what Kong is. And so they created it to be able to handle a massive amount of traffic, a massive amount of APIs, all simultaneously. This is a problem that a lot of enterprises have, especially now that we've started to get some microservices, started to have more distributed technologies.
And now that we've started to have Coobernetti's, uh, the, sort of the birth and the, the nascent space of service mesh con connect allows all of those connections to be managed and to be secured and made reliable, uh, through a single platform. So what's driving this right. I mean, um, you, you mentioned micro services, um, and Coobernetti's, and that environment, which is kind of facilitating, you know, this, uh, I guess transformation you might say. Um, but what's the big driver in your opinion, in terms of, of what's pushing this microservices phenomenon, if you will, or this revolution. Sure. And when I think it starts out at, at the simple active of technology acceleration in general, um, so when you look at just the, the real shifts that have come in enterprise, uh, especially looking, you know, start with that at the cloud, but you could even go back to VMware and virtualization is it's really about allowing people to build software more rapidly. >>Um, all of these different innovations that have happened, you know, with cloud, with virtualization now with containers, Kubernetes, microservices, they're really focused on making it, uh, so that developers can build software a lot more quickly, uh, develop the, the latest and greatest in a more rapid way. >>A huge driver out of this is just making it easier for developers, for organizations to bring new technologies to market. Uh, and we see that as a kind of a key driver in a lot of these decisions that are being made. I think another piece of it that's really coming about is looking at, uh, security, uh, as a really big component, you know, do you have a huge monolithic app? Uh, it can become very challenging to actually secure that if somebody gets into kind of that initial, uh, into the, the initial ops space, they're really past the point of no return and can get access to some things that you might not want them to similar for compliance and governance reasons that becomes challenging. 
So I think you're seeing this combination where people are looking at breaking things into smaller pieces, even though it does come with its own challenges around security that you need to manage. It's making it so that there's less ability to just get in and cause a lot of damage all at once from malicious attackers. >> Yeah, you bring up security. And so, yeah, to me, in some cases it's almost counterintuitive. I think about, if I've got this monolith, I've got a big perimeter around it, right? And I know that I can confine this thing, I can contain this, this is good. Now with microservices, I've got a lot of, it's almost like a lot of villages, right? They're all around, and I don't have the castle anymore. I've got all these villages, so I have to build walls around all these villages, right? But you're saying that that's actually easier to do, or at least you're more capable of doing that now as opposed to, say,
Again, this all depends on, uh, that you're, you're managing that security well, which can get really time-consuming more than anything else and challenging from a pure management standpoint, but from an actual security posture, it is a way of where you can strengthen it, uh, because you're, you're creating more, um, more difficult ways of accessing information for attackers, as well as just more layers potentially of security. >>But what do you do to lift that burden then from, from the customer? Because like you said, that that that's a concern they really don't want to have. Right. They want, they want you to do that. They want somebody to do that for them. So what can, what do you do to alleviate those kinds of stress >>On their systems? Yeah, it's a great question. And this is really where the idea of API management and, um, in it's in its infancy came from, was thinking about, uh, how do we extract a way these different tasks that people don't really want to do when they're managing, uh, how API, how people can interact with their API APIs, whether that be a device or another human, um, and part of that is just taking away. So what we do and what API gate management tools have always done is abstract that into a, a new piece of software. So instead of having to kind of individually develop and write code for security, for logging, for, you know, routing logic, all these different pieces of how those different APIs will communicate with each other, we're putting that into a single piece of software and we're allowing that to be done in a really easy way. >>And so what we've done now with con connect and where we've extended that to you, is making it even easier to do that at a microservices level of scale. So if you're thinking about hundreds or thousands of different microservices that you understand and be able to manage, that's what we're really building to allow people to do. 
And so that comes with, you know, being able to, to make it extremely easy, uh, to, to actually add policies like authentication, you know, rate-limiting, whatever it may be, as well as giving people the choice to use what they want to use. Uh, we have great partners, you know, looking at the Datadog's, the Okta's of the world who provide a pretty, pretty incredible product. We don't necessarily want to reinvent the wheel on some of these things that are already out there, and that are widely loved and accepted by, uh, technology, practitioners and developers. We just want to make it really easy to actually use those, uh, those different technologies. And so that's, that's a lot of what we're doing is providing a, a way to make it easy to add this, you know, these policies and this logic into each one of these different services. >>So w if you're providing these kinds of services, right. And, and, and, and they're, they're, they're new, right. Um, and you're merging them sometimes with kind of legacy, uh, components, um, that transition or that interaction I would assume, could be a little complex. And, and you've, you've got your work cut out for you in some regards to kind of retrofit in some respects to make this seamless, to make this smooth. So maybe shine a little light on that process in terms of not throwing all the, you know, the bath out, you know, with, with the baby, all the water here, but just making sure it all works right. And that it makes it simple and, and, um, takes away that kind of complexity that people might be facing. >>Yeah, that's really the name of the game. Uh, we, we do not believe that there is a one size fits all approach in general, to how people should build software. Uh, there are going to need instances aware of building a monolithic app. It makes the most sense. There are going to be instances where building on Kubernetes makes the most sense. 
Um, the key thing that we want to solve is making sure that it works and that you're able to, to make the best technical decision for your products and for your organization. And so in looking at, uh, sort of how we help to solve that problem, I think the first is that we have first class support for everything. So we support, you know, everything down to, to kind of the oldest bare metal servers to NAMS, to containers across the board. Uh, and, and we had that mindset with every product that we brought to market. >>So thinking about our service mesh offering, for instance, um, Kula is the open source project that under tens now are even, but looking at Kumo, one of the first things that we did when we brought it out, because we saw this gap in space was to make sure that that adds first-class support for and chance at the time that wasn't something that was commonly done at all. Uh, now, you know, there there's more people are moving in that direction because they do see it as a need, which is great for the space. Um, but that's something where we, we understand that the important thing is making sure your point, you said it kind of the exact way that we like to, which is it needs to be reliable. It needs work. So I have a huge estate of, you know, older applications, older, uh, you know, potentially environments, even. I might have data centers that might've cloud being, trying to do everything all at once. Isn't really a pragmatic approach. Always. It needs to be able to support the journey as you move to, to a more modern way of building. So in terms of going from on-premise to the cloud, running in a hybrid approach, whatever it may be, all of those things shouldn't be an all or nothing proposition. It should be a phase approach and moving to, to really where it makes sense for your business and for the specific problem >>Talking about cloud deployments, obviously AWS comes into play there in a major way for you guys. 
Um, tell me a little bit about that, about how you're leveraging that relationship and how you're partnering with them, and then bringing the, the value then to your customer base and kind of how long that's been going on and the kinds of work that you guys are doing together, uh, ultimately to provide this kind of, uh, exemplary product or at least options to your customers. >>Yeah, of course. I think the way that we're doing it first and foremost is that, um, we, we know exactly who AWS is and the space and, and, you know, a great number of our customers are running on AWS. So again, I think that first class support in general for AWS environments services, uh, both from the container service, their, their Kubernetes services, everything that they can have and that they offer to their customers, we want to be able to support, uh, one of the first areas of really that comes to mind in terms of first-class integration and support is thinking about Lambda and serverless. Um, so at the time when we first came out, was that, again, it was early for us, uh, or early in our journey as product and as company, uh, but really early for the space. And so how we were able to support that and how we were able to see, uh, that it could support our vision and, and what we wanted to bring as a value proposition to the market has been, you know, really powerful. So I think in looking at, you know, how we work with AWS, certainly on a partnership level of where we share a lot of the same customers, we share a very similar ethos and wanting to help people do things in the most cost-effective rapid manner possible, and to build the best software. Uh, and, you know, I mean, for us, we have a little bit of a backstory with AWS because Jeffrey's us was a, an early investor in, in common. >>Yeah, exactly. 
I mean, the, the whole memo that he wrote about, uh, you know, build an API or you're fired was, was certainly an inspiration to, to us and it catalyzed, uh, so much change in, in the technology landscape in general, about how everyone viewed API APIs about building a software that could be reused and, and was composable. And so that's something that, you know, we, we look at, uh, kind of carrying forward and we've been building on that momentum ever since. So, >>Well, I mean, it's just kind of take a, again, a high level, look at this in terms of microservices. And now that it's changing in terms of cloud connectivity. Thank you. Actually, I have a graphic to that. Maybe we can pull up and take a look at this and let's talk about this evolution. You know, what's occurring here a little bit, and, and as we take a look at this, um, tell us what you think those, these impacts are at the end of the day for your customers and how they're better able to provide their services and satisfy their customer needs. >>Absolutely. So this is really the heart of the connect platform and of our vision in general. Um, we'd spoken just a minute ago about thinking how we can support the entire journey or, uh, the, the enterprise reality that is managing a, a relatively complex environment of modelists different services, microservices, you know, circle assumptions, whatever it may be, uh, as well as lots of different deployment methods and underlying tech platforms. You know, if you have, uh, virtual machines and Kubernetes, whatever, again, whatever it may be. But what we look at is just the different sort of, uh, design patterns that can occur in thinking about a monolithic application. And, um, okay. Mainly that's an edge concern of thinking about how you're going to handle connectivity coming in from the edge and looking at a Kubernetes environment of where you're going to have, you know, many Kubernetes clusters that need to be able to communicate with each other. 
>>That's where we start to think about, uh, our ingress products and Kubernetes ingress that allows for that cross applic, uh, across application communication. And then within the application itself, and looking at service mesh, which we talked a little bit about of just how do I make sure that I can instrument and secure every transaction that's happening in a, a truly microservices, uh, deployment within Kubernetes or outside of it? How do I make sure that that's reliable and secure? And so what we look at is this is just a, uh, part of it is evolution. And part of it is going to be figuring out what works best when it, um, certainly if you're, if you're building something from scratch, it doesn't always make sense to build it, your MDP, as, you know, microservices running on Kubernetes. It probably makes sense to go with the shortest path, uh, at the same time, if you're trying to run it at massive scale and big applications and make sure they're as reliable as possible, it very well does make sense to spend the time and the effort to, to make humanize work well for you. >>And I think that's, that's the, the beauty of, of how the space is shifting is that, uh, it's, it's going towards a way of the most practical solution to get towards business value, to, to move software quicker, to give customers the value that they want to delight them to use. Amazon's, uh, you know, phrase ology, if that's, uh, if that's a word, uh, it's, it's something of where, you know, that is becoming more and more standard practice versus just trying to make sure that you're doing the, the latest and greatest for the sake of, of, uh, of doing it. >>So we've been talking about customers in, in rather generic terms in terms of what you're providing them. We talked about new surfaces that are certainly, uh, providing added value and providing them solutions to their problems. 
Can you give us maybe just a couple of examples of some real life success stories, where, where you've had some success in terms of, of providing services that, um, I assume, um, people needed, or at least maybe they didn't know they needed until, uh, you, you provided that kind of development that, but give us an idea of maybe just, uh, shine, a little light on some success that you've had so that people at home watching this can perhaps relate to that experience and maybe give them a reason to think a little more about calm. >>Yeah, absolutely. Uh, there, there's a number that come to mind, but certainly one of the customers that I spent a lot of time with, uh, you know, become almost friends would be with, uh, with the different, with a couple of the practitioners who work there is company called Cargill. Uh, it's a shared one with us and AWS, you know, it's one we've written about in the past, but this is one of the largest companies in the world. Um, and, uh, the, the way that they describe it is, is that if you've ever eaten a Vic muffin or eaten from McDonald's and had breakfast there, you you've used a Cargill service because they provide so much of the, the food supply chain business and the logistics for it. They had a, uh, it's a, it's an old, you know, it's a century and a half old company. >>It has a really story kind of legacy, and it's grown to be an extremely large company that's so private. Uh, but you know, they have some of the most unique challenges. I think that I've, I've seen in the space in terms of needing to be able to ensure, uh, that they're able to, to kind of move quickly and build a lot of new services and software that touch so many different spaces. So they were, uh, the challenge that was put in front of them was looking at really modernizing, you know, again, a century and a half old company modernizing their entire tech stack. 
And, you know, we're certainly not all of that in any way, shape or form, but we are something that can help that process quite a bit. And so, as they were migrating to AWS, as they were looking at, you know, creating a CICB process for, for really being able to ship and deploy new software as quickly as possible as they were looking at how they could distribute the, the new API APIs and services that they were building, we were helping them with every piece of that journey, um, by being able to, to make sure that the services that they deployed, uh, performed in the way that they expected them to, we're able to give them a lot of competence and being able to move, uh, more rapidly and move a lot of software over from these tried and true, uh, you know, older or more legacy of doing things to a much more cloud native built as they were looking at using Kubernetes in AWS and, and being able to support that handle scale. >>Again, we are something that was able to, to kind of bridge that gap and make sure that there weren't going to be disruptions. So there, there are a lot of kind of great reasons of why they're their numbers really speak for themselves in terms of how, uh, how much velocity they were able to get. You know, they saying them saying them out loud on the sense fake in some cases, um, because they were able to, you know, I think like something, something around the order of 20 X, the amount of new API APIs and services that they were building over a six month period, really kind of crazy crazy numbers. Um, but it is something where, you know, the, for us, we, we got a lot out of them because they were open source users. So calling is first and foremost, an open source company. 
>>And so they were helping us before they even became paying customers, uh, just by testing the software and providing feedback, really putting it through its paces and using it at a scale that's really hard to replicate, you know, the scale of a, uh, a couple of hundred thousand person company, right? Yeah. Talking about a win-win yeah. That worked out well. It's certainly the proof is in the pudding and I'm sure that's just one of many examples of success that you've had. Uh, we appreciate the time here and certainly the insights and wish you well on down the road. Thanks for joining us, Mike. Thanks, Sean. Thanks for having me. I've been speaking with Mike Villa from Kong. He is in corporate development and operations there on John Walls, and you're watching on the cube, the AWS startup showcase.

Published Date: Mar 24 2021



Mike Bilodeau, Kong Inc. | AWS Startup Showcase


 

(upbeat music) >> Well, good day and welcome back to the Cube as we continue our segment featuring the AWS Startup Showcase, and we're now with Mike Bilodeau, who's in corporate development and operations at Kong. Mike, thank you for joining us here on the Cube and particularly on the Startup Showcase. Nice to have you and Kong represented here today. >> Thanks for having me, John. Great to be here. >> You bet. First off, let's have you tell us about Kong a little bit, and Kong Konnect, which I know is your featured program or service. I love the name, by the way. But tell us a little bit about Kong and what Kong is all about, too. >> Sure, so Kong as a company really came about in the past five years. Our two co-founders came over from Italy in the late aughts to early 20-teens and had a company called Mashape. And what they were looking at, and what they were betting on at that time, was that APIs were going to be the future of how software was built and how developers interacted with software. They were running Mashape as a marketplace at the time, connecting developers to different APIs so they could consume them and use them to build new software. And what they found was that actually the most valuable piece of technology they had created was the backbone for running that marketplace. And that backbone is what Kong is. So they created it to be able to handle a massive amount of traffic and a massive amount of APIs, all simultaneously. This is a problem that a lot of enterprises have, especially now that we've started to get some microservices, started to have more distributed technologies. And so what Kong really is, is a way to manage all of those different APIs, all of the connections between different microservices, through a single platform, which is Kong Konnect. And now that we've started to have Kubernetes and the birth of the nascent space of service mesh,
Kong Konnect allows all of those connections to be managed, and to be secured and made reliable, through a single platform. >> So what's driving this, right? I mean, you mentioned microservices and Kubernetes, and that environment which is kind of facilitating this, I guess, transformation, you might say. But what's the big driver, in your opinion, in terms of what's pushing this microservices phenomenon, if you will, or this revolution? >> Sure, I think it starts with the simple act of technology acceleration in general. When you look at the real shifts that have come in enterprise tech, starting with the cloud, but you could even go back to VMware and virtualization, it's really about allowing people to build software more rapidly. All of these different innovations that have happened with cloud, with virtualization, and now with containers, Kubernetes, and microservices are really focused on making it so that developers can build software a lot more quickly and develop the latest and greatest in a more rapid way. I think a huge driver of this is just making it easier for developers and organizations to bring new technologies to market. And we see that as a key driver in a lot of these decisions that are being made. I think another piece of it that's really coming about is looking at security as a really big component. If you have a huge monolithic app, it can become very challenging to actually secure it. If somebody gets inside, they're really past the point of no return and can get access to some things that you might not want them to. Similarly, for compliance and governance reasons, that becomes challenging. So I think you're seeing this combination where people are looking at breaking things into smaller pieces, even though it does come with its own challenges around security that you need to manage.
It makes it so that there's less ability for malicious attackers to just get in and cause a lot of damage all at once. >> Yeah, you bring up security, and to me it's almost, in some cases, counterintuitive. I think about it this way: if I've got this monolithic app, I've got a big perimeter around it, right? And I know that I can confine this thing. I can contain this; this is good. Now with microservices, I've got a lot of... it's almost like a lot of villages, right? They're all around, and I don't have the castle anymore. I've got all these villages, so I have to build walls around all these villages. But you're saying that's actually easier to do, or at least you're more capable of doing that now, as opposed to maybe where we were two, three years ago. >> Well, you can almost think of it as, if you have those villages, right, and if you have one castle and somebody gets inside, they're going to be able to find whatever treasure you may have, to extend the analogy here a bit. But now, if you have 50 different villages that an attacker needs to look in, it starts to become really time-consuming and really difficult. And when you're looking at this idea of cybersecurity especially, the ability to secure a monolithic app is typically not all that different from what you can do with a microservice once you get past that initial point. Instead of thinking of it as "I have my one wall around everything," you now think of it almost as a series of walls, where it gets more and more difficult. Again, this all depends on you managing that security well, which can get really time-consuming more than anything else, and challenging from a pure management standpoint. But from an actual security posture, it is a way you can strengthen things, because you're creating more difficult paths to information for attackers, as well as just more layers of security that they need to get through.
>> But what do you do to lift that burden from the customers? Because, like you said, that's a concern they really don't want to have. They want you to handle that; they want somebody to do that for them. So what do you do to alleviate those kinds of stresses on their systems? >> Yeah, it's a great question. And this is really where the idea of API management, in its infancy, came from. It was thinking about: how do we abstract away these different tasks that people don't really want to do when they're managing how people can interact with their APIs, whether that be a device or another human? So what we do, and what API management tools have always done, is abstract that into a new piece of software. Instead of having to individually develop and write code for security, for logging, for routing logic, all these different pieces of how those different APIs will communicate with each other, we're putting that into a single piece of software, and we're allowing that to be done in a really easy way. And what we've done now with Kong Konnect, and where we've extended that to, is making it even easier to do that at a microservices level of scale. So if you're thinking about hundreds or thousands of different microservices that you need to understand and be able to manage, that's what we're really building to allow people to do. And that comes with being able to make it extremely easy to actually add policies like authentication, rate limiting, whatever it may be, as well as giving people the choice to use what they want to use. We have great partners, looking at the Datadogs and the Oktas of the world, who provide pretty incredible products. We don't necessarily want to reinvent the wheel on some of these things that are already out there and that are widely loved and accepted by technology practitioners and developers. We just want to make it really easy to actually use those different technologies.
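The pattern described here, moving cross-cutting policies like authentication and rate limiting out of each service and into one shared gateway layer, can be sketched minimally in Python. Everything below (the policy names, the request shape, the upstream service) is a hypothetical illustration of the idea, not Kong's actual implementation:

```python
import time

class KeyAuthPolicy:
    """Authentication policy: only requests with a known API key pass."""
    def __init__(self, valid_keys):
        self.valid_keys = set(valid_keys)

    def check(self, request):
        if request.get("api_key") in self.valid_keys:
            return True, None
        return False, "401 Unauthorized"

class RateLimitPolicy:
    """Rate-limiting policy: at most N requests per rolling minute."""
    def __init__(self, max_per_minute):
        self.max_per_minute = max_per_minute
        self.timestamps = []  # times of recently admitted requests

    def check(self, request):
        now = time.time()
        self.timestamps = [t for t in self.timestamps if now - t < 60]
        if len(self.timestamps) >= self.max_per_minute:
            return False, "429 Too Many Requests"
        self.timestamps.append(now)
        return True, None

class Gateway:
    """Runs every request through a shared policy chain, then the upstream.
    Services behind the gateway need no auth or rate-limit code of their own."""
    def __init__(self, policies, upstream):
        self.policies = policies
        self.upstream = upstream

    def handle(self, request):
        for policy in self.policies:
            ok, error = policy.check(request)
            if not ok:
                return error
        return self.upstream(request)

gateway = Gateway(
    policies=[KeyAuthPolicy({"secret-key"}), RateLimitPolicy(60)],
    upstream=lambda request: "200 OK from orders service",
)
print(gateway.handle({"api_key": "secret-key"}))  # 200 OK from orders service
print(gateway.handle({"api_key": "wrong-key"}))   # 401 Unauthorized
```

The point of the sketch is the shape, not the code: adding a new policy to every service means editing one list here, rather than re-implementing the logic in each microservice.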
And so a lot of what we're doing is providing a way to make it easy to add these policies and this logic into each one of these different services. >> So what if you're providing these kinds of services, they're new, and you're merging them sometimes with legacy components? That transition, or that interaction, I would assume, could be a little complex, and you've got your work cut out for you in some regards to retrofit, right? In some respects, to make this seamless, to make this smooth. So maybe shine a little light on that process, in terms of not throwing the baby out with the bathwater, but just making sure it all works, that it's simple, and that it takes away the kind of complexity that people might be facing. >> Yeah, that's really the name of the game. We do not believe that there is a one-size-fits-all approach, in general, to how people should build software. There are going to be instances where building a monolithic app makes the most sense. There are going to be instances where building on Kubernetes makes the most sense. The key thing we want to solve is making sure that it works, and that you're able to make the best technical decision for your products and for your organization. In looking at how we help to solve that problem, I think the first thing is that we have first-class support for everything. So we support everything, from the oldest bare-metal servers, to VMs, to containers, across the board. And we've had that mindset with every product that we've brought to market. So thinking about our service mesh, for instance: Kuma is the open-source project, which an enterprise offering now builds on. Looking at Kuma, one of the first things that we did when we brought it out, because we saw this gap in the space, was to make sure that it had first-class support for virtual machines. At the time, that wasn't something that was commonly done at all.
Now more people are moving in that direction, because they do see it as a need, which is great for the space. But that's something where we understand that the important thing is, and you said it the exact way that we like to: it needs to be reliable. It needs to work. If I have a huge estate of older applications, maybe older environments even, data centers as well as cloud, then trying to do everything all at once isn't really a pragmatic approach. It needs to be able to support the journey as you move to a more modern way of building. So in terms of going from on-premise to the cloud, running in a hybrid approach, whatever it may be, all of those things shouldn't be an all-or-nothing proposition. It should be a phased approach, moving to where it makes sense for your business and for the specific product. >> You've been talking about cloud deployments, obviously. AWS comes into play there in a major way for you guys. Tell me a little bit about that: how you're leveraging that relationship, how you're partnering with them and bringing the value to your customer base, how long that's been going on, and the kinds of work that you guys are doing together, ultimately, to provide this kind of exemplary product, or at least options, to your customers. >> Yeah, of course. I think the way that we're doing it, first and foremost, is that we know exactly who AWS is in the space, and a great number of our customers are running on AWS. So again, that first-class support in general for AWS environments and services, from their container service to their Kubernetes services, everything that they have and that they offer to their customers, we want to be able to support. One of the first areas that comes to mind in terms of first-class integration and support is thinking about Lambda and serverless.
At the time when we first came out with that, again, it was early for us, early in our journey as a product and as a company, but really early for the space. And so how we were able to support that, and how we were able to see that it could support our vision and what we wanted to bring as a value proposition to the market, has been really powerful. So in looking at how we work with AWS, certainly on a partnership level, we share a lot of the same customers, and we share a very similar ethos in wanting to help people do things in the most cost-effective, rapid manner possible and to build the best software. And I mean, for us, we have a little bit of a backstory with AWS, 'cause Jeff Bezos was an early investor in Kong. >> That didn't hurt, really. >> Yeah, exactly. I mean, the whole memo that he wrote about "build an API or you're fired" was certainly an inspiration to us, and it catalyzed so much change in the technology landscape in general, in how everyone viewed APIs and in building software that could be reused and was composable. And so that's something that we look at and carry forward, and we've been building on that momentum ever since. >> So I'm going to, again, just take a high-level look at this in terms of microservices and how that's changing in terms of cloud connectivity. I think you actually have a graphic, too, that maybe we can pull up and take a look at, and let's talk about this evolution, what's occurring here a little bit. And as we take a look at this, tell us what you think these impacts are, at the end of the day, for your customers, and how they're better able to provide their services and satisfy their customers' needs. >> Absolutely, so this is really the heart of the Konnect platform and of our vision in general.
We spoke just a minute ago about how we can support the entire journey, or the enterprise reality that is managing a relatively complex environment of monoliths, different services, microservices, serverless functions, whatever it may be, as well as lots of different deployment methods and underlying tech platforms, whether you have virtual machines or Kubernetes, whatever it may be. What we look at is just the different design patterns that can occur. In thinking about a monolithic application, mainly that's an edge concern: thinking about how you're going to handle connectivity coming in from the edge. In looking at a Kubernetes environment, you're going to have many Kubernetes clusters that need to be able to communicate with each other. That's where we start to think about our ingress products, and Kubernetes ingress, which allow for that cross-application communication. And then within the application itself, looking at service mesh, which we talked a little bit about: how do I make sure that I can instrument and secure every transaction that's happening in a truly microservices deployment, within Kubernetes or outside of it? How do I make sure that that's reliable and secure? So what we look at is, part of it is evolution, and part of it is going to be figuring out what works best when. Certainly, if you're building something from scratch, it doesn't always make sense to build your MVP as microservices running on Kubernetes; it probably makes sense to go with the shortest path. At the same time, if you're trying to run at massive scale, with big applications, and make sure they're as reliable as possible, it very well does make sense to spend the time and the effort to make Kubernetes work well for you.
And I think that's the beauty of how the space is shifting: it's going towards the most practical solution for getting to business value, moving software quicker, and giving customers the value that they want, to "delight" them, to use Amazon's phraseology, if that's a word. It's something that is becoming more and more standard practice, versus just trying to make sure that you're doing the latest and greatest for the sake of doing it. >> So we've been talking about customers in rather generic terms, in terms of what you're providing them. We've talked about new services that are certainly providing added value and providing them with solutions to their problems. Can you give us maybe just a couple of examples of some real-life success stories, where you've had some success in terms of providing services that, I assume, people needed, or at least maybe they didn't know they needed until you provided that kind of development? Give us an idea, maybe just shine a little light on some success that you've had, so that people at home who are watching this can perhaps relate to that experience, and maybe give them a reason to think a little more about Kong and Kong Konnect. >> Yeah, absolutely, there are a number that come to mind, but certainly one of the customers that I have spent a lot of time with, become almost friends with a couple of the practitioners who work there, is a company called Cargill. It's a shared one with us and AWS; it's one we've written about in the past. But this is one of the largest companies in the world. And the way that they describe it is that if you've ever eaten a McMuffin, or eaten from McDonald's and had breakfast there, you've used a Cargill service, because they provide so much of the food supply chain business and the logistics for it. You know, it's a century-and-a-half-old company. It has a really storied legacy, and it's grown to be an extremely large company that's still private.
But they have some of the most unique challenges I think I've seen in the space, in terms of needing to be able to ensure that they're able to move quickly and build a lot of new services and software that touch so many different spaces. So the challenge that was put in front of them was really modernizing, again, a century-and-a-half-old company, modernizing their entire tech stack. And we're certainly not all of that in any way, shape, or form, but we are something that can help that process quite a bit. So as they were migrating to AWS, as they were looking at creating a CI/CD process for being able to ship and deploy new software as quickly as possible, and as they were looking at how they could distribute the new APIs and services that they were building, we were helping them with every piece of that journey. By being able to make sure that the services they deployed performed in the way they expected them to, we were able to give them a lot of confidence in moving more rapidly and moving a lot of software over from tried-and-true, older, or more legacy ways of doing things to a much more cloud-native build. As they were looking at using Kubernetes in AWS and being able to support that at scale, again, we were something that was able to bridge that gap and make sure that there weren't going to be disruptions. So there are a lot of great reasons why their numbers really speak for themselves in terms of how much velocity they were able to get. Saying them out loud will sound fake in some cases: I think it was something on the order of 20x the amount of new APIs and services that they were building over a six-month period. Really kind of crazy, crazy numbers. But it is something where, for us, we got a lot out of them, because they were open-source users. So Kong is first and foremost an open-source company.
And so they were helping us before they even became paying customers, just by testing the software, providing feedback, really putting it through its paces, and using it at a scale that's really hard to replicate, you know, the scale of a couple-hundred-thousand-person company. >> Talk about a win-win. That worked out well. Certainly the proof is in the pudding, and I'm sure that's just one of many examples of success that you've had. We appreciate the time here, and certainly the insights, and I wish you well on down the road. Thanks for joining us, Mike. >> Thanks, John. Thanks for having me. >> I've been speaking with Mike Bilodeau from Kong; he is in corporate development and operations there. I'm John Walls, and you're watching "On the Cube," the AWS Startup Showcase. (soft music)

Published Date: Mar 18 2021



Tim Hinrichs, Styra | CUBE Conversation, February 2021


 

>> From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> Hi, and welcome to another CUBE Conversation. I'm Stu Miniman, coming to you from our Boston area office. We've been in the cloud native ecosystem for many years; we know many open source projects are really helping to drive innovation and helping companies modernize what they're doing. And one of the companies that leads one of those initiatives, happy to welcome to the program: we're going to be talking to the co-founder and CTO of Styra, that is Tim Hinrichs. First time on theCUBE, and of course, Styra is the company behind OPA. Tim, thank you for joining us. Welcome to the program. >> Hi Stu, thanks for having me. >> All right, so we've had the CEO of Styra, Bill Mann, on the program before; he's a many-time CUBE alum. It's your first time, and I always love when I get a founder on the program. Of course the question is: give us the "why," Tim. There's no shortage of tools out there in the industry, but as we've seen in the ecosystem, there are always companies saying, "I wish something could happen; I wish we had something there." Often they've built it for themselves and then created a project. So bring us back a little bit to that origin story. What was the inspiration for you and the team? >> So the first thing to know is that really, at Styra, what we're focused on is helping enterprises that are embracing cloud native technology enforce and control the authorization policies across all their different cloud native software. Remember, authorization is the problem of which people and which machines can perform which actions on software. And the way this all got started was that we were at VMware, before we founded Styra, and we were talking to a number of our customers from finance and tech, and what they did was, they had built one of these things.
They had built a unified policy solution to manage their authorization needs across many different pieces of software. So at that point we knew that the problem was very real, because people had to solve it themselves. And so when- >> I'm sorry Tim, just one thing to make sure I understand this. So in the policy management you talk about there, help me understand how that fits into, say, identity management, which is one of the top things we think about when I'm managing my IT, when I go to the cloud. It seems related but different, yes? >> Absolutely, yeah. So identity management is really this problem of who are you? It's often solved, from a user's point of view, by providing a username and a password, or a thumbprint, or multi-factor authentication. That's an important problem that needs to be solved. That's authentication, or identity. And it's really about proving who you are. But authorization is the next step: it's about what actions you can perform once you've convinced the machine who you are. And so really that's the piece that we focus on. >> All right, yeah, once we get people in... it's usually you want to give them the least amount of access possible. We understand that from a security standpoint, we need to do this. So you've said what the problem was, and that it's there, so why open source? I mean, we know there are many reasons why projects end up open source. So give us the journey here. >> So it started... we've really got two pieces of software. One of which, as you say, is completely open source; it's become the Open Policy Agent project. We decided to open source it and then eventually donate it to the CNCF, because its mission in life is to make authorization decisions, to decide whether an action that a user or machine is trying to take is safe or not. And that project is really designed to be a decision maker across all the different kinds of software in the cloud native ecosystem.
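The authentication/authorization split Tim draws here can be made concrete with a small sketch. The user table, passwords, and permission sets below are invented for illustration; they are not Styra or OPA APIs. Authentication proves who the caller is; authorization then decides what that caller may do.

```python
# Hypothetical illustration of authentication vs. authorization.
# Authentication: prove who you are (e.g., check credentials).
# Authorization: decide what actions you may perform once identified.

USERS = {"alice": "s3cret"}                 # username -> password (demo only)
PERMISSIONS = {"alice": {"read", "write"}}  # username -> allowed actions

def authenticate(username: str, password: str) -> bool:
    """Authentication: is this really the user they claim to be?"""
    return USERS.get(username) == password

def authorize(username: str, action: str) -> bool:
    """Authorization: may this (already authenticated) user do this action?"""
    return action in PERMISSIONS.get(username, set())
```

Note that passing authentication says nothing about any particular action: a user can prove who they are and still be denied an action they have no permission for, which is exactly the second step OPA focuses on.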
And so naturally, there's a need for a lot of expertise about a whole bunch of different areas, about a whole bunch of different pieces of software, and the best way to leverage all of the world's knowledge about all those different pieces of software is to put that project out into the open. And so for us, it was just a very easy thing to do. Every single line of code that goes into OPA has been done in the open. >> Well, absolutely, it's a project I know, I've seen the stickers, I've seen people talking about it in the breakouts at the KubeCon CloudNativeCon shows. Let's not leave everybody waiting for the news though, Tim. It had been an incubating project; I believe you've got some news for us. >> Yeah, absolutely, so OPA has now officially graduated; it's now moved from incubation into the graduation tier of the CNCF. And for us, it's really exciting, because it really is a reflection of the maturity of the project. Right? There are so many people using OPA and using it to solve all kinds of different use cases. We're even seeing vendors pick it up and offer native integrations with their homegrown software. So it's really exciting to see the progress the project has made. >> Just for the audience that might not be familiar, what does this mean now that it's graduated, as a maturity level? Is it production ready? What are those criteria that allowed it to go from that incubating stage to graduation? >> Yeah, so there are a bunch of criteria, but I think the biggest one really is users in production, right? It has been proven at scale for many different users all over the world. CNCF just did a survey recently; there are a couple hundred different organizations all across the world who are using OPA in some way, shape or form. We see it all the time at KubeCon and CloudNativeCon talks; you can hear all about all the folks who are using it.
>> Yeah, so maybe it would help if you've got a customer example or use case that you can walk us through, as to how exactly that fits. >> For sure, yeah. So the nice thing about OPA, and more generally Styra, is that you can apply it to all different kinds of use cases. There are a couple of very popular ones: using it for Kubernetes admission control, or microservice authorization; those are the two most popular right now. And they both work roughly the same way, but I'll give you a concrete example. For Kubernetes, any time an end user is trying to spin up any resource, whether a pod or an Ingress or anything on the Kube cluster, you can integrate OPA with that Kube API server and allow OPA to make a decision: is this new resource safe to deploy on the cluster, or is it not? Microservice authorization works almost exactly the same way: every time one of those microservices receives an API call, it can ask OPA, is this API call safe for me to execute or not? And so both of those work in basically the same way, and that's true for all the other applications and use cases for OPA. >> Okay, and give us some of the stats if you would. How many companies and people contribute to it? What does the customer base look like? >> So there are a bunch of interesting metrics. I think the one that's most interesting to me is the number of downloads a week. Right now, we're at roughly a million downloads a week, which is super exciting. I remember those days when we hit that one million mark total and we were very excited. And so now we're at a point where, every week, we're hitting a million downloads. All kinds of contributors as well, and I think another good metric to think about is talks: I think we had nearly 50 talks, organic talks from end users on OPA, that we ran across last year.
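The two flows described above can be sketched in plain Python. Real OPA policies are written in Rego and evaluated by an OPA instance sitting beside the API server or service; the rule and the registry name below are hypothetical, for illustration only.

```python
# Plain-Python analogue of the yes/no decision OPA makes during
# Kubernetes admission control. (Real policies are written in Rego
# and evaluated by OPA; this rule and registry are invented.)

ALLOWED_REGISTRIES = ("registry.example.com/",)  # hypothetical trusted registry

def admission_decision(resource: dict) -> bool:
    """Is this new resource safe to deploy on the cluster?
    Example rule: every container image must come from a trusted registry."""
    if resource.get("kind") != "Pod":
        return True  # this sketch only constrains Pods
    containers = resource.get("spec", {}).get("containers", [])
    return all(c.get("image", "").startswith(ALLOWED_REGISTRIES)
               for c in containers)

good_pod = {"kind": "Pod",
            "spec": {"containers": [{"image": "registry.example.com/app:1.0"}]}}
bad_pod = {"kind": "Pod",
           "spec": {"containers": [{"image": "docker.io/random/app:latest"}]}}
```

Microservice authorization follows the same shape: before executing an incoming API call, the service would ask its local decision point (in practice, OPA) the analogous yes/no question about that call.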
>> Well, what's wonderful, the thing we love in this ecosystem, is it's not just using it, it's contributing to the code, sharing with the community. Tim, what are the challenges in this ecosystem? If you go to the CNCF website and you look at the landscape, it's a little bit scary and daunting just because there are so many different pieces. What I understand from OPA is, are there any dependencies there when you think about the other services that it interacts with? Or does it just kind of do its own thing and enable customers? >> Yeah, so OPA was designed to be a standalone project, right? It doesn't depend on really any other CNCF project, or really any other project. It was designed to make these authorization decisions, but at the same time, it's also designed to make it very easy to integrate with a wide range of software systems. And so, I think on the OPA website we've got over 25 different integrations that we and the community have built around OPA, to go ahead and deliver on that vision of unified authorization. >> You mentioned that Styra has kind of two pieces; help us understand, what does graduating mean for customers in general, and for Styra? Help us understand a little bit more of the business that goes along with it. >> So like I said, the first piece of software we built was the Open Policy Agent project, open source. The second piece of software that we built is a control plane for OPA. The idea architecturally behind OPA is that you don't have one copy of OPA running; typically, you might have 10, or 100, or a thousand copies of OPA running. And you do that for availability and performance of decision making. And so Styra's second piece of software is what we call the Declarative Authorization Service. It is a control plane and management plane, a single pane of glass, that allows you to operationalize OPA at scale for the enterprise.
So it really is designed to give you that ability to control and manage distributed policy: write policy, and log all the policy decisions for all those OPAs. And so that's the second piece of software that we're putting a lot of effort and energy into. >> All right, now that the graduation is there, what does this mean? Give us a little bit of the roadmap. You're the CTO; we know there's always feedback and other updates coming. So what should we be expecting to see going forward? >> So there are a couple of things I'll mention here, one of which is that with OPA we did a survey recently, just trying to get a sense as to what the community needs and how they're using OPA. And one of the things we found was that the fastest growing use case for OPA looks to be application authorization, right? So if you're building a custom application, maybe it's a banking application, that application needs to decide, every time a user performs an action, is this authorized or not? So if I'm trying to withdraw money from an account, is it safe or not? That's the fastest growing use case for OPA that we saw in that survey, and so what I expect to see is more and more people talking about using OPA for that application-level authorization. On the Styra side, I think what we're looking forward to is just continuing to chat with the community and understand what they need around operationalizing OPA, and making that control plane, that management plane, do all the things that enterprises need to operationalize OPA at scale. >> Tim, you've reached the graduation, which is a phenomenal milestone in the project there. There are so many other projects out there; I wonder what advice you would give to other people starting a business, starting a project, engaging with the open source community. What have you learned along the way? Any lessons learned? And what feedback would you give others?
>> Absolutely, so if I'm talking to somebody else who's interested in starting an open source project, I'll give them a little bit of advice. The first of which is that certainly the code matters a lot: the code's got to be technically sound, it's got to be solving real problems. Everybody understands that. I think what a lot of people understand less is that when you start a project, you need to put a lot of energy into growing that community, that communication. You need to focus a lot, you need to reach out to end users, and actively engage with them. Help them understand what the project's good for. Help them be successful with it. And so I think that piece is what a lot of people don't really understand, and it's something that, if more people did it, we'd see a lot more successful open source projects. >> Alright, Tim, I'll let you have the final word, and any final things you want to feed back to the community or potential customers for Styra. >> Sure, so first of all, I'd like to say thank you to all of our community members, all the users who've worked with us, all the vendors who are doing integrations with OPA. We'd love to see it, we'd love to see more of it. And at the end of the day, I've got to say I'm super excited to be working both with OPA and our commercial Declarative Authorization Service to really deliver on that vision of unified authorization and deliver that to the world at large. >> Tim, congratulations to you and the OPA team and Styra; definitely looking forward to seeing you at the next gathering of the community. And we'll hear more updates in the future. >> Thanks so much for having me, Stu. This is great. >> All right, and be sure to check out thecube.net for all the back catalog of interviews that we've done, including with the CEO of Styra, as well as upcoming events that we will be at, including, of course, KubeCon CloudNativeCon North America, happening later this year virtually.
I'm Stu Miniman, and thank you for watching theCUBE.

Published Date : Feb 9 2021



Russ Currie, NetScout Systems | AWS re:Invent 2020


 

>> Narrator: From around the globe, it's theCUBE. With digital coverage of AWS re:Invent 2020. Sponsored by Intel, AWS, and our community partners. >> Okay, welcome back. You're ready? Jeff Frick here with theCUBE. We are coming to you from our Palo Alto studio with our continuing coverage of AWS re:Invent 2020 digital this year, like everything in 2020, but we're excited to welcome back to theCUBE, he's been on a number of times, Russ Currie, the vice president of enterprise strategy for NetScout Systems. Russ, great to see you. >> Great to see you, Jeff. Thank you. >> Absolutely. So before we jump in, there are so(laughs), so many things going on in 2020. What I do want to do is reflect back a little bit. You were first on theCUBE at AWS re:Invent 2017, so it's been about three years. And I remember one of the lines you had said; I believe that was your guys' first AWS show as well. So I wonder if you could reflect on kind of how the world has changed in terms of your business, and the importance of AWS and public cloud within the infrastructure systems of your clients. >> Yeah, well, it was interesting, right? We were just getting our feet wet at that point, and had just introduced some of our technology for use in AWS, and it was kind of an interesting little adventure. So we were looking at it and saying, okay, where's this going to lead us? And ultimately now we're just really waist deep in it, really having a great partnership with AWS, and delivering new technologies, new capabilities. And our customer base also is becoming so much more reliant on public cloud, in particular AWS and the services that they can provide. So as we've gone and they've gone, it's been a journey that we've taken together, and it's been quite fruitful and exciting. >> Right, right. And it really reinforces this concept of, I think you'd mentioned it before, a blended, you know, kind of a blended infrastructure approach.
So there are a lot of conversations about public cloud, hybrid cloud, multicloud, et cetera, et cetera. But at the end of the day, from a customer perspective, as you've mentioned, it's really kind of a blended network, right? And it's really application centric: you put the applications where those applications need to be to be the most appropriate, and that might even change over time, from test/dev to rollout to scale. So you're seeing that consistency? >> Absolutely, yeah. The blended environment is so incredibly complex for our customers. As they take a look at the way that the world has changed, right? When we take a look at what has happened with people working remotely, working from home, and having to come in to access services in such a completely blended and hybrid environment, as you say, not only the move to the cloud, but the move to colo, and bringing all of this together for interconnect, it's definitely a complex environment that they have to have their fingers on the pulse of, right? >> Yep, yep. And then of course there was this little thing that happened this year with COVID. Right in the March, April timeframe, a light-switch moment: everybody worked from home, whether you were ready or not. And that was a very different kind of situation, because we had to get people secure and safe, and get them up and operating. So I'm sure you(laugh) saw a lot of interesting stuff at your business there, but I'm even more interested in how that's evolved over time. Here we are at the end of 2020; there's going to be, you know, some version of this for the foreseeable future. And a lot of companies are saying that there will be a lot of kind of work-from-anywhere pieces that continue forward.
So again, with your customers, looking at the change between what happened in the spring and what's happening now, as they really put in the systems that'll enable them to continue to support people working from anywhere, not even really working from home, but working from anywhere. >> Right, exactly. I mean, as our customers had to bring up more connectivity, new connectivity, and start to add licenses for virtual desktop or for their VPN connectivity, ultimately how they got it done, most of our customers said, you know, we're running hot, but stable. And I think that was great for most folks. But now they're leaning into it and saying, okay, how do we continue to make this happen? And how do we provide the visibility that we need to ensure that the services that we're delivering are making it possible for their users to be productive and successful? A user doesn't want to feel that they're not contributing as much as someone else that may be able to make it into the office. And it's a challenging time, but with that being said, technology has really stepped up, and in particular, the way that they're able to stand up services in the cloud, and the automation and potential cost savings that they get from standing up in the cloud, has really been a boon for most of our users. And some of the users, you know, the high-end enterprise that were a little bit slow to adopt, now are just turning it on as fast as they possibly can. >> Yeah, it's pretty wild. And then, we had another representative from NetScout on earlier this year. One of the kind of recurring themes that we've seen is, you know, changes in the threat landscape. So clearly the increased attack surfaces as more and more people are working from home; they're not working from the secure environment at the office.
But you guys have noticed some interesting things about what's happening, and we've seen a little bit too, in terms of ransomware, and the increase in ransomware as a particular type of attack that seems to be growing in popularity. And these people are a little bit more thorough in the badness that they cause before they throw in the ransom request; they're looking for a little bit more fundamental disruption to enable them to extract that ransom, which is what they hope to do. >> Yeah. I mean, the amount of DDoS attacks that we've seen has just grown incredibly over the past several months. And these extortion attacks, they come in and they often hit the customer quickly and hard, then back it off for a bit and say, pay us, or we're going to shut you down. And they're really coming in more towards the back-office aspects of things. So going in and attacking that part of the business is kind of a new environment for a lot of folks. But one of the other interesting(laughs) challenges here is that oftentimes those extortion notes don't make it through to the people that really need to act on them, because they get caught in spam filters or the like. So they're seeing these DDoS attacks, and don't necessarily understand that they're under an extortion attack. It's a real challenge for folks. And we've seen a good uptake with our on-prem capabilities to provide that kind of protection, right at the top of the security stack, with our Arbor Edge Defense products. So it's been something that we're trying to get out there and help our customers with as much as we can, even net-new folks. >> Yeah, it's an interesting environment. And we found out from somebody too that sometimes, if you actually pay the bad guys, you can be breaking other rules for doing business with countries >> Yeah. >> Or people that we're not supposed to be doing business with.
Like, that's the last thing you need to think about when you're trying to get all your data and your company back online. >> Right, yeah. I mean, you're trying to make sure that you're keeping yourself stood up, right? And it's tough. You know, kind of rule one is never pay the extortion, right? But you've got to take a look at it and say, hey, what do I do? >> Right, right. So, you guys have been around for a while. I wonder if we could dive in a little bit; we're at re:Invent. Some of the things you guys are doing specifically on the product side to basically increase your AWS capabilities. >> Sure, thanks, yeah. We've been working really closely with AWS as they start to roll out new technologies. Last year, we were fundamental in the VPC ingress routing announcement that they had. We've been working with them on their traffic mirroring capabilities. So technology-wise, we keep in close touch with them in terms of everything that they're delivering. But also on the business side of it, we have our networking competency, and just last week got our migration competency. So what we're really doing is trying to work both the technical and the business relationship as much as we can, to try and expand our overall capabilities and footprint with AWS. And having that visibility, and being able to provide that same level of control and capability that you had on-prem in your enterprise network as you move into the public cloud, is a great benefit to a lot of our customers. They really have the ability now to deliver services the way they have been delivering them for years and years. >> Now, what do you mean specifically when you say migration competency or networking competency? >> So, they have these different competency programs for their technology partners.
And the networking competency is that you've demonstrated capabilities in your ability to provide network monitoring, network management capabilities, or network connectivity. On the migration side, you've really provided the ability to show that you have the tools and solution set to drive and help people make successful migrations into AWS. As you can imagine, right now a lot of folks are just lifting and shifting, putting stuff into AWS as quickly as they can, to try and take advantage of the automation and the operational efficiencies that you get when you move into public cloud settings. As you make those migrations, you want to ensure that you're not either leaving something behind that needed to move with it, or building a dependency on something that's in the background that's going to have an adverse effect on user experience. And ultimately, it really all comes down to the user experience you are delivering to your customers and/or your user base, right? >> Right, right. So one of the things you talked about in a prior interview was kind of the shifting dynamic in terms of network traffic. As there are more and more kind of SaaS-based applications, and there's more of an application-centric, kind of API interface between all the applications, the north-south is still significant, but there's growth in the east-west traffic, meaning kind of inside, if you will. And there are some unique challenges that come from that, from a network monitoring perspective. I wonder if you can share a little bit more color on that: are you continuing to see this increase in east-west relative to north-south, and what kind of special opportunities and challenges does that present? >> Yeah, absolutely. There is absolute growth in terms of the east-west connectivity and traffic that exists out there.
In particular, when we take a look at the way that people are implementing software-defined networks, NSX, for example: NSX-T has now provided the ability to blend your environment, whether you're going to any cloud, any vendor. As you move between these environments, having that ability to deliver network services under the same framework is really beneficial to our customer base. And we've also been partnering very closely with VMware, and a lot of our customers are implementing VMware Cloud on AWS. So they have that ability to stand up services in a consistent manner, whether it be in their legacy environments or in the public cloud environments, and have that same ability to provide visibility down into the east-west traffic so that you can see that. So when you're part of the NSX framework, what you're able to do is really leverage the service framework that they have, the service insertion, and be part of the clusters and host groups that are exchanging traffic east-west. And our ability to see into that really exposes, not challenges, but(laughs) potential issues that our customers might be having in delivering high-quality services. So that visibility is really what we've been keying on. >> Right. I'm just curious to get your take. As people, as you said, make this move to public cloud, and you talked about wholesale migrations and wholesale lifts and shifts, there are kind of a couple trains of thought. One is using cloud for just pure economics, trying to save money, and the flexibility. The second one is to add this automation as things grow, these great opportunities to automate and try to reduce error. But the third one, right, the big one, is to drive innovation, to unlock and enable better innovation, and speed of delivery, and, you know, moving at the speed of business, pick your favorite buzzword.
I'm curious whether your customers... have you seen them all jumping in? How much of it is still, you know, to save money, to kind of use the basic cost-saving economics, versus people really embracing the opportunity to use this as a method to drive innovation and change within their own business? >> So I think the realities of 2020 have been forcing people to look at it primarily from operational and cost efficiency perspectives, however with an eye towards innovation. And as they start to get themselves into a zone where they're comfortable, they look to see how they can leverage the cloud to provide new services, and new ways in which they provide their services, and avail themselves of the underlying technologies that are there, to build something that's new and exciting in their overall portfolio. So I think that 2021 is probably going to be a little bit more of, where can I innovate, as opposed to, how do I get there? (Jeff laughs) >> It's probably an unfair question here in 2020, because priorities certainly got turned upside down in the middle of the year. So maybe innovation got pushed down a little bit from, you know, let's get people up, let's get people safe, and let's make sure they can access all the systems and all this crazy stuff that we've got available to them, from wherever they are. >> Yeah, yeah. >> Not just within the home office. >> I was listening to a panel from the federal government a couple of weeks ago, and it was really about the way they've adopted kind of commercial capabilities to meet some of these challenges, things that they wouldn't normally look at. But now there's a set of innovation that they're looking at, to try and make sure that they can avail themselves of the services that are out there and available in the public cloud. >> Yeah. Well, that's great, Russ. It's great to catch up.
I'm sure you must be as amazed as anybody at the rapid acceleration of this, you know, since the short time ago you went to your first re:Invent. >> Yeah. >> And clearly AWS, and Amazon generally, is executing, as we're seeing. So I think they'll keep doing it, and you're probably sitting in a good spot. >> I think so. (Jeff laughs) Thank you. (Russ laughs) >> All right. Thank you, Russ, for stopping by and sharing your insight. Look forward to catching up next time. >> Thanks a lot, Jeff. Really appreciate it. >> Alrighty. He's Russ, I'm Jeff. You're watching theCUBE's continuous coverage of AWS re:Invent 2020, the virtual event. Thanks for watching, and we'll see you next time. (bright music)

Published Date : Dec 2 2020



Akanksha Mehrotra & Caitlin Gordon, Dell Technologies | Dell Technologies World Digital Experience


 

>> Announcer: From around the globe, it's theCUBE with digital coverage of Dell Technologies World, digital experience, brought to you by Dell Technologies. >> Hi, I'm Stu Miniman, and this is theCUBE's coverage of the Dell Technologies World digital experience. Happy to welcome to the program, first, a first-time guest, Akanksha Mehrotra; she's the Vice President of Marketing with Dell Technologies. Joining us, one of our CUBE alumni, Caitlin Gordon; she's the Vice President of Product Marketing, also with Dell Technologies. Caitlin, welcome back; Akanksha, welcome to the program. >> Thank you Stu, happy to be here.
Another factor has also been, companies have been wanting to preserve capital, right, and avoid large cash outlays and having this type of flexibility and being able to pay for infrastructure, as you're using it, it gives them a way to do that. So I mean, those are some of the customer drivers that we've seen. Last year at Dell Tech Summit, around the this time last year, actually, in November timeframe, we introduced Dell Technologies on demand as our umbrella program for a flexible consumption and as a service solutions. And really what it what it seeks to do is make it easier for customers to get the simplicity and flexibility of cloud, along with the performance and security of on-premises infrastructure. So it's giving them a range of consumption models that include both payment option as well as services that they can apply on any one of the products in our portfolio from end user devices to core data center infrastructure to hybrid cloud solutions. And we've announced that last year, one of the things that you heard about today, and that we're announcing over this event is that we're continually looking to make it easier and simpler for our customers with various turnkey offerings and simpler offerings for them, given the interest that we've seen. >> Yeah, I want to key off of, you mentioned the impact of COVID-19. And for your customers, it's something we've definitely seen that the promise of cloud always has been to be highly flexible, we can scale up, we can scale down. We know that some services out there aren't always as flexible as we might hope. There's certain SaaS solutions, where you're signing up for a multi year offering and even for the cloud, I might lock in some savings by buying something in bulk. So help us understand, what are the benefits that your customer sees, the savings that they get and is this truly cloud flexible, which means I can burst up and scale as I need. 
And I can it reached the point, oh, hey, I need half the capacity for the next six months. Can I do that? >> Yeah, absolutely. So, Stu we actually commissioned IBC to talk to a few of our customers. So let me maybe share some of the benefits that they saw in broad terms, and then I can maybe share a specific example of what a particular customer saw. So we had IDC talk to several of the customers using Dell Technologies on demand models, various GIOS, and various sort of sizes. And what they found was that on average, they saw about a 23%, lower cost of storage operations per year, which is great, right? Lower cost of operations is always great. IT is always looking for those efficiencies, especially, in the current environment, but that's not all. I think that's just sort of part of the story. What they also shared with us is that, these types of models were able to help them become much more agile in how they work and change how they work. And what they found was that they saw 54% fewer incidents of downtime and they were 92% faster in their ability to deploy storage capacity, because they had that capacity in their data center available ready for that spike when their business saw it. ` So those are just some of the broad examples of what our customers have seen. Another specific example that I would would share with you is a large multinational institution, financial services company, we've been working with them for years to service their, enterprise scale, private cloud. And then more recently, they had us also, manage their storage as a service managed utility. And they've seen phenomenal results, they've been able to get 50% more compute power at 8%, lower cost, and 90% faster or reduce time and provisioning data. It's all about the yes, it's about the cost savings but really, it's about the agility that the business gets, right. 
And as you started out, right, with COVID, they really needed that agility and that flexibility and having these models available, ready to spike, ready to go down, right, have been able to provide that. >> Yeah, I think another thing we've seen is, people rush to cloud because it promised that agility, and we've had those conversations before is, there's a reality of what that means, which it might not be the resiliency you're looking for, it also might not actually be as simple as he thought it might be. And we're seeing some of that come back on-prem, whether you need resiliency or performance or security, or you don't want to be really locked into a specific public cloud but you still want to have that agility in the benefits of really running your data center in a service oriented model. And that trend has been picking up over the past couple years. And as we've already said a couple times today, we've seen that accelerate, but also, we starting to see more customers ask for it. It's not just the big and more strategic and the aggressive customers that are looking for this more and more customers are kind of seeing that this is the end game and that's kind of leads into where we're going, which is, how do we make this more accessible to others? >> Well, Caitlin, you're using one of one of my punch lines that I've used for a number of years now if remember, when we thought that cloud was inexpensive and easy to use, it's not. And if we look at what customers are doing, it's a hybrid model. They're deploying in multiple environments, we're seeing the public cloud look more like the enterprise, the enterprise look more like, the public cloud. So these offerings have, OPEX flexibility and the like, make a whole lot of sense. 
So you've said that, you've seen a lot of growth, especially this year, any metrics you can give us on, adoption, love the one customer example, in the financial space, anything else to kind of paint the picture as to, how prevalent this is becoming. >> Yeah, maybe I'll get started. So, we've seen nearly 50%, year over year growth in the customer base or our most recent quarter, and it's growing, we've seen over 500% increase, year on year in signed contracts, customer demand in these types of models has caused us to expand our offerings to into countries like Brazil, Chile, Colombia, India, and China. I mean, we already offered about 50 plus countries and along with our partner, network and even more, so, I mean, those are just some of the data points around business traction. In the models that we have another proof point that I could point you to is that, in April, we include, we announced a payment flexibility program, which gave our customers a number of promotions and options to extend this flexibility into, across our portfolio and into other parts of our businesses. And just recently, about a month ago, we extended that, and we've seen really good traction in that as well. So I think overall, like you said there's aspects about public cloud that customers really like, and they tell us, hey, I want to be able to pay as I go, I want to be able to extend and contract the infrastructure as I'm using it. I want a simple management experience. But then as Caitlin said, they realize that Oh, but I don't want to, pay for the refactoring and then the egress and the ingress charges and some of my workloads are better off on premises for performance, locality, security, compliance reasons, right. And therein lies the promise of as a service for on-prem infrastructure, 'cause really, I keep looking for the best of both worlds. 
And this gives you that right you can use the consumption models to grow and shrink as you needed, you can us the payment models to only pay for what you're using and along with our partner network, you can have in the location that you want so you can sort of have your cake and eat it too. >> Yeah, and I would just add on to that is that more and more of the conversation is both about how can I consume that more as a service and pay for just what I'm using? But also, how can I spend less time maybe zero time and energy actually managing that infrastructure? And how can I then allocate the time energy resources into running my business and investing in more strategic things? So becomes both an important financial conversation but even more so a conversation about how IT can empower the business, which really just changes what we're able to do for customers. So it's an exciting kind of transition to see this really evolve into really not talking about products anymore, and helping our customers have all their business. >> Well, Caitlin, that's a really interesting point, I want you to talk to us a little bit about the Dell Tech storage as a service, how does that fit, we were just talking about don't want to talk about products, we want to talk about really moving to that full OPEX model so help connect the dots for us. >> Yeah, so we're really excited about this, this will be coming in the first half of next year, as you probably heard earlier today. And what we're doing here is we've really taken what we already have had in market. And we've really upped that to the next level, we've accelerated the simplicity of what we offer here. And think of the experience is all starting in a single console, where you just pick up four things, what's the type of storage you want, what's the performance you want, how much and for how long, that's it. 
And then now we're counting the time from then to when it's in your data center in days, not months, not weeks, but in days and we're able to get you up and going. And it's your data center of your choice, whether that's on-prem in your own data center, or at a colo facility, we bring that equipment in, we get that deployed, we manage it for you, you operate it, and you simply pay for what you use. So you're really in a quick time to value you're in a very simple model and you're not really responsible for managing infrastructure that's really on us. And that moves you into being in a true OPEX model and it also enables you to accelerate what you're able to leverage that whether it's Blob Storage, file storage, you can get up and running quickly and let us worry about how to manage the infrastructure and we give you the ability to operate what you need to. >> Caitlin, maybe if you could give us a little bit of color as to what happens behind the scenes to make that work. As it sounds wonderful, you've had the program around for a year, these aren't trivial things that you're talking about all the logistics, the management the the gear, and making sure that the physical and the power and everything is all set. So help us understand the engineering, the development, and what this means from kind of a services and go to market that make a solution like this work. >> Yeah, and a lot of ways we're having to change our entire business to help our customers change there's, it goes from top to bottom, and you'll get to hear a lot more about it when we're actually available next year. 
But when you think about it, we have a lot of the DNA, we have a lot of the experience, we have the technology, but we almost have to completely flip the script on ourselves of how we deliver it, who our customer is, what our then end user customer needs from us, and what the role of things like our global services organization is what the role of our global sales organization is and how do we accelerate providing outcomes to our customers and get the rest out of their way. And the fact that I haven't mentioned a product name, but by the way, we actually have industry leading products and pretty much every category. So of course, on the back end, all of this is going to be powered by our industry leading storage solutions, like power store will be in your data center but at the same time, we will actually have worked to really masked that you don't even need to know that nor do you need to really operate much beyond what you need to really run your business. And that's really it's been an interesting work for us to just flip how we think about everything and you'll hear a whole lot more about it next year as we really bring this out into market but it's been really fun and a big learning for everyone. >> Excellent well yeah, something something power is underneath there well Caitlin. All right why don't you both give us the final takeaway for the Dell Tech on demand account. Start with you in just give us the final takeaway. >> Yeah, so I think look, I back to kind of what we were talking about, we've actually been offering these types of solutions to our customers for a really long time. Through Dell financial services, we've been offering payment flexibility for over 23 years, over 15 years and manage utility. 
So the customer example that I gave you is a customer who's running storage as a service and has been for many years, I think, building on that experience, listening to our customers feedback over that time period and over, of course, this past year, we're looking to apply all of that, to make it even more simpler for them to consume our infrastructure in the near future. And so, storage as a service is going to be a really exciting proof point of that, the momentum stats and some of the other things that I shared with you today and that you're going to hear about over the next couple of days or another proof point of it. But we're excited about this, and looking forward to continuing the dialogue with our customers with our partners and (mumbles) >> Then I would I'll kind of play off of one of your words there which is is all about simplicity for us is how do we take what we've been able to do for a lot of our customers accelerate that and simplify it to a point where we can offer that for all of our customers. And we're really looking to accelerate this first with storage and then get all of our offerings really into this model, because it's really about getting our customers out of managing infrastructure and give them the time, energy, resources to manage their business and simplicity is paramount to making sure that happens. >> Caitlin and Akanksha, thank you so much for giving us the updates. Congratulations to all the progress and definitely looking forward to hearing more beginning of next year. Thanks for joining. >> Thank you Stu. >> Thank you Stu >> All right, I'm Stu Miniman this is Dell Technology world digital experience. I'm Stu Miniman. And thank you as always for watching theCUBE (upbeat music)

Published Date : Oct 21 2020



Manoj Nair, Metallic and Ranga Rajagopalan, Commvault | CUBE Conversation, October 2020


 

(royalty free music) >> Woman's voice: From the Cube Studios in Palo Alto, in Boston, connecting with thought leaders all around the world, this is a cube conversation. >> Hi, I'm Stu Miniman coming to you from our Boston area studio and this is a special cube conversation. I have a special announcement from our friends at Commvault. So welcome back to the program. We have two of our cube alumni. First, we have Manoj Nair, he's actually the general manager of Metallic, which is a Commvault venture. First time Manoj on the program in your role with, with Commvault, welcome back. And also welcoming back Ranga Rajagopalan who's the vice president of products at Commvault. Ranga, caught up with you recently at the FutureReady event that we had over the summer. Thanks so much for joining us again. >> Sure. >> Alright. So Manoj, let's start. Metallic obviously was, you know, the standout you know, thing that everybody talked about last year at Commvault GO. Really helping to, you know, put Commvault clearly into the SaaS marketplace out there. Talking about how, you know, all the wonderful features for managing my data in a cloud environment. So there is an expansion to the portfolio that we're announcing today. Why don't you share the news? >> Yeah, absolutely Stu, you know, it's great to be back here with all of you and Metallic has come a long way from the launch. Just less than a year ago, we announced the creation of Metallic multiple different offerings whether it's protecting SaaS workloads like O365, remote endpoints and a hybrid cloud workloads. You know, the context that we're getting from our customers, especially in the last six months, increased cloud adoption and, you know, remote working collaboration suites being adopted. All of that has been a great accelerator for adoption of SaaS data protection, which is really what the Metallic is offering. 
We have gone to global countries and expanded to our Commvault customer base who was, you know, using both Commvault software and Metallic now. One of the key things that we're not, you know, today's announcement is focused on a Metallic cloud storage service that as a new service available for Commvault customers are looking to get a, you know, fully managed secure cloud-based SaaS target for protecting all of the data as an air gap copy and this is, you know, is more relevant than ever. >> So Manoj, using the cloud for data protection, for backup isn't new? Ranga, help us understand. I heard in there air gap, I heard, you know, leveraging the cloud. Absolutely, we've seen a huge tailwind for cloud adoption but there's that gap for making sure customers, you know, protect their data, secure their data. Do they have the skillset to be able to leverage that, so help help us drill in and understand what's different about this new service >> You're right Stu. Cloud is absolutely not new but what is really unique about today's announcement with metallic cloud storage service is that we are bringing cloud even closer to our Commvault customers. So thinking from a data management perspective, our customers want to more easily and securely get the benefits of cloud storage. What we are doing today is integrating Metallic cloud storage service as a cloud storage target into our Commvault software as well as our HyperScale X plans. And that lets our customers to seamlessly use cloud storage for their data protection, backup and archival use cases without needing to understand a lot about the cloud, without needing to get through any of the complexities. Think of it as the easy button that is now introduced into the Commvault software and HyperScale X. >> All right, so, if I heard you right, this is a managed service that Commvault is offering. Did I get that right? >> That's fast. >> Yeah >> So, you know, it's a managed service. It's public cloud storage. 
It's, as Ranga said, the easy button to be able to create your air gap copies in the cloud. And, you know, with everything that we keep hearing about ransomware, and we believe this is one of the, the, the most important steps in ransomware readiness, a lot of our customers are already doing it by bringing their own cloud storage on all the clouds we protect, but it's still not easy. And this is a skills gap, you know, the procurement process and all of that, you know, the management of the credentials, the setting up of the networking, all of that is encapsulated. So now, it's just, you know, it's like a built-in feature, just, you know plug it in and now you've got an on-ramp to the cloud. Make sure you have your air gap copy. >> Yeah, maybe it would help if you'd, if you'd talk about the easy button, give us a little compare contrast 'cause, right, I could go, I could spin up instance of the cloud, but, you know, who has access? What are the security settings? There's a whole litany of things that I need to make sure I've got the right identity management. It's kind of easy, but not necessarily simple to, to be able to do that. So from what you're describing I don't even need to really think, you know, yes, it's in the cloud, I'm leveraging all the wonderful things of the cloud, but I don't have to have that, that ramp up of skillset if I don't already have that in house as... Ranga, sounds like I'm understanding that. >> Yeah >> You know. >> Yeah, you're perfectly understanding and that's all there is to it. And let me expand on the PC part there, right? For us, simplicity is into end-customer experience. So I'm going to break this down from a customer life cycle perspective. Think of a Commvault customer who's backing up pretty much all the workloads in the data center. The first question they have is, you know, "For security reasons "for easy, or because I'm in a transformation project "I need to make, I need to start using cloud storage." 
So the first complexity they would face is understanding which cloud provider to use, what kind of cloud profile to use? or who their cloud or chasing model, which is very different from how they normally procure their hardware and software. So that's really the first dimension of simplicity that this Metallic cloud storage offer. Our customers can procure their cloud storage along with any other Commvault software and hardware just like they would do any other Commvault software. So that's the first level of simplicity. The second one is "How do I bring "that into my data management life cycle." And again, as I mentioned before, MCSS is fully integrated into Commvault software. So through the simplicity of command center, which is the one UI that brings all our products together, customers can just click to the cloud storage target and start backing up, moving copies, archiving, doing all the data management use cases, the second dimension of simplicity. And the third one really is the predictability. You know, cloud is beautiful, It brings a lot of flexibility, but it also brings in a lot of new terms. What are the egress charges? What does ingress mean? What does egress mean? What happens when I have the V store? What happens when I have the Ricola? So all of that complexity is taken away. We handle all of that in the backend. From the customer's perspective, just like they use CAP, just like they use the Desk, now, they can use cloud. We handled all the egress and all those kind of stuff in the backend. From the customer's perspective, they get a simple, predictable price point. So from the time of choosing, procuring it, using it and continuously getting the best benefits out of it, the easy button extends across that entire dimension. And the beauty in all of this is customers getting all the benefits of cloud without having to really understand much about cloud. So that's really the benefit we bring to the table with MCSS. >> Yeah. 
Manoj, Commvault has a long history of being able to live on, you know, various infrastructures that customers have. Are you able to share who the, I'm assuming there's a cloud partner for part of this, so who is the, the underlying IS? >> Yeah, so still, you know, end of June doing, we announced the next phase of our strategic partnership with Microsoft. So this is a, you know, one of the first big, new things that is coming out of the giant partnership between Commvault and Microsoft around Metallic and Microsoft Azure. There's a lot of things that, you know, we're jointly doing that are unique that make all of the simplicity Ranga, you know, just mentioned, come to life and, you know, that's, you know, power of the end as I call it. It's Commvault and Metallic and Microsoft, you know, coming together to make this really easy for our customers to start getting the value out of leveraging cloud for the data protection. Yeah. >> Well, Manoj, it seems natural extension of what you've already talked about for what Metallic can protect. Of course, you've got the, you know, the business suite from Microsoft, can you help frame it for us, you know, where this new, the MCSS fits in the Metallic portfolio today? >> Yeah absolutely. So if you look at, you know, what... I'll give you a customer journey and what's been happening. If you are not a Commvault customer today and you're looking at "What's my best 0365 data protection option," if you go to microsoft.com, you'll actually find Metallic in there as the recommended offer. And they, they might start the journey there or you're an existing Commvault customer and you start rapidly adopting teams and O365, you know, post COVID. The, the, you know, Metallic is the default option. So it doesn't matter how you enter in, you're now getting a full, you know, SaaS actual backup as a service, no storage costs, no egress costs. 
And so our Commvault customers have been asking, "We love that part of it, why not make that available "for all of the other data that is being protected "by Commvault, either appliance or software on-prem?" and, you know, in a very simple way, it's, you know, the best things are driven by customers. And in this case, our customers came to us and said, "We love the simple button "not just what's included in the Metallic service, "we would like that that to be available, even for, "you know, the existing software you're protecting on-prem "for the air gap copy use case is kind of the biggest one." And you know, all of the things that Ranga said in terms of simplicity now comes to bear. And it's something that we were including inside the Metallic SaaS offerings. Now, it's available for software and appliance customers. >> Yeah. I definitely, I've heard of the industry now. Microsoft seems a little bit more amenable to, you know, not charging for egress, with some of their partners, when they put together these solutions. Ranga, Manoj has mentioned air gap a couple of times, can you help us frame, you know, what that means today? You know, I even think back, you know, ape that most people are familiar with. Even, I think about, you know, Google, you know, use ape for many years even in the public cloud to give that air gap. Of course, we've talked to your customers lots about how to protect against ransomware. So how does, how does this fit in the new solution? >> You know, unfortunately, Stu today. It's, it's important reality for us to discuss the ransomware readiness. Number of attacks are going up depending on, you know, which your source you are listening to. So security is a very important concern in top of our customers' minds. Now, MCSS is cloud storage, so it is off site storage. So it comes with all the natural layered security that it's built into cloud storage. 
Additionally, Commvault brings a complete ransomware protection, protection and recovery framework, which becomes inherently available with the MCSS. And let me explain that in a few very simple quotes. Now, the entire journey from on-prem to the cloud storage is completely encrypted. So that's, you know, a very important part of the order on security mechanism, but here is where it really becomes cool Commvault software is managing the cloud credentials, the cloud keys. So the entire access to MCSS as a cloud storage target is managed to Commvault. So there isn't an independent cloud admin accessing that storage, which opens it up for any kind of an intentional or unintentional access. Anything can happen when you allow that access. So Commvault completely manages that access the keys are owned by the customer, but managed by a Commvault. So it's a really air gap security, layered security mechanism that you get in combination with the entire framework of air gap isolation, anomaly protection, the authentication, everything that is built into the Commvault framework. So when you, when you bring in the simplicity that we talked about earlier, you can apply that to the security angle as well here. Instead of making the customer manage yet another piece in the jigsaw, we are managing it for them. So from their perspective, it is a seamless extension to their data management strategy while it also adds an extra layer of security and a readiness to recover from ransomware attacks. >> While it's being launched today, we already have customers that have, you know, we have accelerated into adoption of MCSS and it's coming exactly for the scenarios Ranga just said. You know, they, they have a requirement for a cloud copy. If you have seen that on the Metallic SaaS side that some of the customers might be in pilot mode. And because they were in pilot mode, they were quickly able to recover from attacks that happened. Unfortunately, those, those things are reality. 
And we have had customers who after the attack go and say "I want to make sure it's much easier to recover from that." And so we already have our first customers who are starting to adopt the service even as we launch it today. >> Well. I'm so glad you brought up the customer examples. Manoj, give us a little bit just the high level view, you talked about the growth and adoption of Metallic overall, and you just talked about kind of the, the single management. You got any SaaS for us, you know, how much data do you have in the cloud now and, you know, what's the growth looking like? And talk a little bit about, you know, what we can expect going forward from this portfolio. >> Yeah, I, you know, I don't know how many people disclose this or not, but we have disclosed it in the past, we have over an exabyte of data today in the cloud that, you know, our customers are, you know, either using a Metallic or bringing their own cloud with Commvault and writing to the cloud. So, you know, that's probably, you know, best in class out there. What we are also seeing is the acceleration of that, you know, so we look at it's, you know, it's exponential growth over a hundred percent, you know, we're, we're seeing that, that rise in leverage yet it's something that when you look at the overall industry percentages, it depends on whose stats you use, it's probably only 5%, maybe 10% that are leveraging the cloud for anything, whether it's, you know, in this case, it's data, cloud data as a secondary target. So there's a lot of untapped potential. And the things that Ranga said I think really are the ones our customers are telling us as we tested this out. And those are the biggest reasons. Right cost, you know, I'm concerned about it. I've heard that it's unpredictable. It goes up, people start spinning up other things that they shouldn't be. 
And so I want predictable costs, you know, security and the whole model around it, the governance of the keys, and finally skills. Everyone's busy; no one's trying to not be, you know, upping their cloud skills, yet it's not something that is very, you know, very easy for most people to, you know, become an expert in. And if you're not an expert while you're protecting your data, that's not, you know, that's not something you want to do, so you kind of hold back. And I think this is really the biggest thing that customers are looking at: our cloud expertise packaged in an offering, solving all those things. >> And Stu, we discussed this at FutureReady, how the Commvault portfolio continues to come closer and closer together in order to deliver that increased value to our customers. In July, when we were having a similar conversation, we saw how Hedvig came in as the scale-out storage in our HyperScale X integrated data protection appliance. And we can see that we have Metallic Cloud Storage Service coming in as a cloud extension to our software, as well as HyperScale X. So it's kind of bringing the best of both worlds: for customers who want to continue to stay on-prem and protect their on-prem workloads with an on-prem footprint, you have HyperScale X as a very nice scale-out integrated appliance. And as the capacity needs increase, as the security needs increase, you have MCSS now as a managed storage extension, bringing together those pieces of the portfolio. Now, the thing that is already available as of September 15 is our ability to manage Metallic as part of Command Center. So while you want that SaaS flexibility and you're using Metallic to protect the SaaS workloads, let's also realize that there are a bunch of other workloads that you might be protecting using Commvault software or through HyperScale. We can now bring all of them together into the simplicity of Command Center. So it, again, takes away another point of complexity for the customer.
Just one UI: go ahead, protect the workloads the way you want, with the form factor you want, SaaS, software, or appliances, and we bring it all together into a single management framework for you. So you're going to continue seeing the portfolio coming closer together, because our prime concern is to provide flexibility of choice to customers. Flexibility of choice in so many different ways, you know: you can use software, appliances, or SaaS. You can bring your own on-prem storage, cloud storage, or if you want to hit the simple button, use Metallic Cloud Storage for it. So you're going to see that happen as we move forward. >> Well, Manoj, Ranga, thank you so much for the updates. Congratulations on the launch. Love the little tagline leading it: we're making the cloud just a little bit closer to us. >> It is. >> It is a lot closer. >> Thank you. Thank you, Stu, for your time. >> Thank you. >> I'm Stu Miniman. Thank you so much for watching theCUBE. (royalty free music)

Published Date : Oct 6 2020


(upbeat music) >> Hi, my name is Andy Clemenko. I'm a Senior Solutions Engineer at StackRox. Thanks for joining us today for my talk on labels, labels, labels. Obviously, you can reach me at all the socials. Before we get started, I'd like to point you to my GitHub repo. You can go to andyc.info/dc20, and it'll take you to my GitHub page where I've got all of this documentation: I've got the Keynote file there, YAMLs, I've got Dockerfiles, Compose files, all that good stuff. If you want to follow along, great; if not, go back and review later, kind of fun. So let me tell you a little bit about myself. I am a former DOD contractor. This is my seventh DockerCon. I've spoken, I had the pleasure to speak at a few of them, one even in Europe. I was even a Docker employee for quite a number of years, providing solutions to the federal government and customers around containers and all things Docker. So I've been doing this a little while. One of the things that I always found interesting was the lack of understanding around labels. So why labels, right? Well, as a former DOD contractor, I had built out a large registry. And the question I constantly got was, where did this image come from? How did you get it? What's in it? Where did it come from? How did it get here? And one of the things we did to kind of alleviate some of those questions was we established a baseline set of labels. Labels really are designed to provide as much metadata around the image as possible.
I ask everyone in attendance: when was the last time you pulled an image and had 100% confidence you knew what was inside it, where it was built, how it was built, when it was built? You probably didn't, right? The last thing we obviously want is a container fire, like our image on the screen. And one kind of interesting way we can prevent that is through the use of labels. We can use labels to address security, and to address some of the simplicity of how to run these images. So think of it kind of like self-documenting. Think of it also as an audit trail, image provenance, things like that. These are some interesting concepts that we can definitely mandate as we move forward. What is a label, right? Specifically, what is the schema? It's just a key-value. All right? It's any key and pretty much any value. What if we could dump in all kinds of information? What if we could encode things and store them in there? And I've got a fun little demo to show you about that. Let's start off with some of the simple keys, right? Author, date, description, version. Some of the basic information around the image. That would be pretty useful, right? What about specific labels for CI? What about, where's the version control? Where's the source, right? Whether it's Git, whether it's GitLab, whether it's GitHub, whether it's Gitosis, right? Even SVN, who cares? Where are the source files that built it? Where's the Dockerfile that built this image? What's the commit number? That might be interesting in terms of tracking the resulting image to a person or to a commit, hopefully then to a person. How is it built? What if you wanted to play with it and do a git clone of the repo and then build the Dockerfile on your own? Having a label specifically dedicated to how to build this image might be interesting for development work. Where it was built, and obviously what build number, right?
These all not only talk about continuous integration, CI, but also start to talk about security. Specifically, what server built it. The version control number, the version number, the commit number, again, how it was built. What's the specific build number? What was that job number in, say, Jenkins or GitLab? What if we could take it a step further? What if we could actually apply policy enforcement in the build pipeline, looking specifically for some of these specific labels? I've got a good example in my demo of policy enforcement. So let's look at some sample labels. Now originally, this idea came out of label-schema.org. And then it was modified to opencontainers: org.opencontainers.image. There is a link on my GitHub page that points to the full reference. But these are some of the labels that I like to use, just as kind of a standardization. So obviously, authors is an email address, so now the image is attributable to a person; that's always kind of good for security and reliability. Where's the source? Where's the version control that has the source, the Dockerfile, and all the assets? How it was built, build number, build server, the commit, we talked about, when it was created, a simple description. A fun one I like adding in is the healthz endpoint. Now obviously, the health check directive should be in the Dockerfile. But if you've got other systems that want to ping your applications, why not declare it and make it queryable? Image version, obviously, that's a simple declarative. And then a title. And then I've got the two fun ones. Remember, I talked about what if we could encode some fun things? Hypothetically, what if we could encode the Compose file of how to build the stack in the first image itself? And conversely the Kubernetes YAML? Well, actually, you can, and I have a demo to show you how to kind of take advantage of that. So how do we create labels? Really, creating labels is a function of build time, okay?
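The label set just described can be sketched as a Dockerfile; this is a minimal, hedged sketch (the image name, email, repo URL, and the org.example.* namespace are all illustrative assumptions, not the demo's real repo), shown here as a shell heredoc:

```shell
# Write out an example Dockerfile carrying OCI-style labels.
# All values here are illustrative, not the speaker's exact file.
cat > Dockerfile.example <<'EOF'
FROM alpine:3.12
ARG BUILD_DATE
ARG BUILD_NUMBER
ARG GIT_COMMIT
LABEL org.opencontainers.image.authors="andy@example.com" \
      org.opencontainers.image.source="https://github.com/example/flask-demo" \
      org.opencontainers.image.created="${BUILD_DATE}" \
      org.opencontainers.image.revision="${GIT_COMMIT}" \
      org.example.build_number="${BUILD_NUMBER}"
EOF

# A CI job would then inject the dynamic values at build time, e.g.:
#   docker build \
#     --build-arg BUILD_DATE="$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
#     --build-arg BUILD_NUMBER="$BUILD_ID" \
#     --build-arg GIT_COMMIT="$(git rev-parse HEAD)" \
#     -t example/flask-demo .

grep -c 'org\.' Dockerfile.example   # count the label lines declared above
```

The `org.opencontainers.image.*` keys are the standardized ones from the OCI image spec the talk references; `org.example.build_number` stands in for whatever custom namespace you standardize on.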
You can't really add labels to an image after the fact. The way you do add labels is either through the Dockerfile, which I'm a big fan of, because it's declarative. It's in version control. It's kind of irrefutable, especially if you're tracking that commit number in a label. You can extend it from being a static kind of declaration to something more dynamic with build arguments. And I'll show you in a little while how you can use a build argument at build time to pass in that variable. And then obviously, if you did it by hand, you could do a docker build --label key=value. I'm not a big fan of the third one; I love the first one and obviously the second one. Being dynamic, we can take advantage of some of the variables coming out of version control. Or I should say, some of the variables coming out of our CI system. And that way, it self-documents effectively at build time, which is kind of cool. How do we view labels? Well, there's two major ways to view labels. The first one is obviously a docker pull and docker inspect. You can pull the image locally, you can inspect it, and obviously it's going to output as JSON. So you're going to use something like jq to crack it open and look at the individual labels. Another one which I found recently was Skopeo from Red Hat. This allows you to actually query the registry server. So you don't even have to pull the image initially. This can be really useful if you're on a really small development workstation, and you're trying to talk to a Kubernetes cluster and wanting to deploy apps in a very simple manner. Okay? And this was that use case, right? Using Kubernetes, the Kubernetes demo. One of the interesting things about this is that you can base64 encode almost anything, push it in as text into a label, and then base64 decode it, and then use it.
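The encode/decode trick can be simulated locally without a registry. In this hedged sketch a shell variable stands in for the image label (in the real flow the value would be injected with a --build-arg and read back with skopeo); the manifest contents are illustrative:

```shell
# A manifest standing in for the demo's Kubernetes YAML (contents are made up).
cat > k8s.yml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-demo
EOF

# Encode it to a single line of text, as you would before stuffing it into a label.
LABEL_VALUE="$(base64 < k8s.yml | tr -d '\n')"

# Decode it back: this is exactly what `... | base64 -d | kubectl apply -f -` relies on.
printf '%s' "$LABEL_VALUE" | base64 -d > roundtrip.yml

diff k8s.yml roundtrip.yml && echo "label roundtrip ok"
```

Because base64 output is plain text, it survives being stored as an ordinary label value and comes back byte-for-byte identical.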
So in this case, in my demo, I'll show you how we can actually use a kubectl apply piped from the base64 decode of the label itself, from skopeo talking to the registry. And what's interesting about this kind of technique is you don't need to store Helm charts. You don't need to learn another language for your declarative automation, right? You don't need all these extra levels of abstraction; inherently, if you use it as a label with a kubectl apply, it's just built in. It's kind of like the KISS approach, to a certain extent. It does require some encoding when you actually build the image, but to me, it doesn't seem that hard. Okay, let's take a look at a demo. And what I'm going to do for my demo, before we actually get started: here's my repo. Let me actually go to the actual full repo. So here's the repo, right? And I've got my Jenkins pipeline, 'cause I'm using Jenkins for this demo. And in my demo flask directory, I've got the Dockerfile. I've got my Compose and my Kubernetes YAML. So let's take a look at the Dockerfile, right? So it's a simple Alpine image. The ARG statements are the build-time arguments that are passed in. Label, so again, I'm using org.opencontainers.image.blank for most of them. There's a typo there. Let's see if you can find it; I'll show you it later. My source, build date, build number, commit. Build number and git commit are derived from Jenkins itself, which is nice. I can just take advantage of existing URLs. I don't have to create anything crazy. And again, I've got my actual docker build command. Now this is just a label on how to build it. And then here's my simple Python, APK upgrade, remove the package manager, kind of some security stuff, health check hitting Python, okay? Let's take a look at the Jenkins pipeline real quick. So here is my Jenkins pipeline, and I have four major stages. The first is build, and here in build, what I do is I actually do the git clone. And then I do my docker build.
From there, I actually tell the Jenkins StackRox plugin (that's what I'm using for my security scanning) to go ahead and scan; basically, I'm staging it to scan the image. I'm pushing it to Hub, okay? Basically I'm pushing the image up to Hub such that my StackRox security scanner can go ahead and scan the image. I'm kicking off the scan itself. And then if everything's successful, I'm pushing it to prod. Now what I'm doing is I'm just using the same image with two tags, pre-prod and prod. This is not exactly ideal; in your environment, you probably want to use separate registries, a non-prod and a production registry, but for demonstration purposes, I think this is okay. So let's go over to my Jenkins, and I've got a deliberate failure. And I'll show you why there's a reason for that. And let's go down. Let's look at my, so I have a StackRox report. Let's look at my report. And it says required image label alert, right? Request that the maintainer add the required label to the image, so we're missing a label, okay? One of the things we can do is let's flip over, and let's look at Skopeo. Right? I'm going to do this just the easy way. So instead of looking at org.zdocker, opencontainers.image.authors. Okay, see here it says build signature? That was the typo; we didn't actually pass it in. So if we go back to our repo, we didn't pass in the build-time argument, we just passed in the word. So let's fix that real quick. That's the Dockerfile. Let's go ahead and put our dollar sign in there. First day with the fingers, you're going to love it. And let's go ahead and commit that. Okay? So now that that's committed, we can go back to Jenkins, and we can actually do another build. And there's number 12. And as you can see, I've been playing with this for a little bit today. And while that's running, come on, we can go ahead and look at the console output. Okay, so there's our image.
And again, look at all the build arguments that we're passing into the build statement. So we're passing in the date, and the date gets derived on the command line. With the build arguments, there's the base64 encoding of the Compose file. Here's the base64 encoding of the Kubernetes YAML. We do the build. And then let's go down to the bottom: the label exists and it's successful. So here's where we can see no system policy violations were found, marking the StackRox security plugin build step as successful, okay? So we're actually able to do policy enforcement that that image, that that label, sorry, exists in the image. And again, we can look at the security report and there's no policy violations and no vulnerabilities. So that's pretty good for security, right? We can now enforce and mandate use of certain labels within our images. And let's flip back over to Skopeo, and let's go ahead and look at it. So we're looking at the prod version again. And there it is, my email address. And that validated that that was valid for that policy. So that's kind of cool. Now, let's take it a step further. Let's go ahead and take a look at all of the image labels for a second; let me remove the dash org, make it pretty. Okay? So we have all of our image labels. Again, authors, build, commit number, look at the commit number. It was built today, build number 12. We saw that, right? Delete, build 12. So that's kind of cool, dynamic labels. Name, healthz, right? But what we're looking for is the org.zdocker Kubernetes label. So let's go look at the label real quick. Okay, well that doesn't really help us because it's encoded, but let's base64 -d, let's decode it. And I need to put the -r in there 'cause it doesn't like it otherwise; there we go. So there's my Kubernetes YAML. So why can't we simply kubectl apply -f? Let's just apply it from standard in. So now we've actually used that label.
From the image that we've queried with skopeo, from a remote registry, to deploy locally to our Kubernetes cluster. So let's go ahead and look: everything's up and running, perfect. So what does that look like, right? So luckily, I'm using Traefik for Ingress 'cause I love it. And I've got an object in my Kubernetes YAML called flask.docker.life. That's my Ingress object for Traefik. I can go to flask.docker.life. And I can hit refresh. Obviously, I'm not a very good web designer, 'cause of the background image and the text. We can go ahead and refresh it a couple times; we've got Redis storing a hit counter. We can see that our server name is round-robining. Okay? That's kind of cool. So let's recap a little bit about my demo environment. So my demo environment: I'm using DigitalOcean, Ubuntu 19.10 VMs. I'm using K3s instead of full Kubernetes, either full Rancher, full OpenShift, or Docker Enterprise. I think K3s has some really interesting advantages on the development side; it's kind of intended for IoT, but it works really well and it deploys super easy. I'm using Traefik for Ingress. I love Traefik. I may or may not be a Traefik ambassador. I'm using Jenkins for CI. And I'm using StackRox for image scanning and policy enforcement. One of the things to think about though, especially in terms of labels, is none of this demo stack is required. You can be in any cloud, you can be on CentOS, you can be in any Kubernetes. You can even be in Swarm, if you wanted to, or Docker Compose. Any Ingress, any CI system: Jenkins, CircleCI, GitLab, it doesn't matter. And pretty much any scanning. One of the things that I think is kind of nice about at least StackRox is that we do a lot more than just image scanning, right? With the policy enforcement, things like that. I guess that's kind of a shameless plug. But again, any of this stack is completely replaceable with any comparative product in that category.
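Put together, the deploy path walked through above reduces to one pipeline. This is a sketch under stated assumptions: skopeo, jq, and kubectl are on the PATH, the manifests live base64-encoded under a label key, and the image reference and key name here are made up for illustration:

```shell
# Deploy a Kubernetes manifest stored base64-encoded in an image label.
# Defining the function executes nothing; the tools are only invoked on call.
deploy_from_label() {
  image_ref="$1"    # e.g. registry.example.com/flask-demo:prod
  label_key="$2"    # e.g. org.example.k8s (hypothetical key name)
  skopeo inspect "docker://${image_ref}" \
    | jq -r --arg k "$label_key" '.Labels[$k]' \
    | base64 -d \
    | kubectl apply -f -
}

# usage (against a real registry and cluster):
#   deploy_from_label registry.example.com/flask-demo:prod org.example.k8s
```

The design point is that the registry itself carries the deployment description: no Helm chart, no extra templating layer, just a query, a decode, and an apply.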
So I'd like to, again, point you guys to andyc.info/dc20; that'll take you right to the GitHub repo. You can reach out to me at any of the socials, @clemenko or andy@stackrox.com. And thank you for attending. I hope you learned something fun about labels. And hopefully you guys can standardize labels in your organization and really take your images and the image provenance to a new level. Thanks for watching. (upbeat music) >> Narrator: Live from Las Vegas, it's theCUBE, covering AWS re:Invent 2019. Brought to you by Amazon Web Services and Intel along with its ecosystem partners. >> Okay, welcome back everyone, theCUBE's live coverage of AWS re:Invent 2019. This is theCUBE's 7th year covering Amazon re:Invent. It's their 8th year of the conference. I want to just shout out to Intel for their sponsorship of these two amazing sets. Without their support we wouldn't be able to bring our mission of great content to you. I'm John Furrier. Stu Miniman. We're here with the chief of AWS, the chief executive officer Andy Jassy, a tech athlete in and of himself with three-hour Keynotes. Welcome to theCUBE again, great to see you. >> Great to be here, thanks for having me guys. >> Congratulations on a great show, a lot of great buzz. >> Andy: Thank you. >> A lot of good stuff. Your Keynote was phenomenal. You get right into it, you giddy up right into it as you say, three hours, thirty announcements. You guys do a lot, but what I liked, the new addition, the last year and this year, is the band; house band. They're pretty good. >> Andy: They're good right? >> They hit the Queen notes, so that keeps it balanced. So we're going to work on getting a band for theCUBE. >> Awesome. >> So if I have to ask you, what's your walk up song, what would it be? >> There's so many choices, it depends on what kind of mood I'm in. But, uh, maybe Times Like These by the Foo Fighters. >> John: Alright. >> These are unusual times right now. >> Foo Fighters playing at the Amazon Intersect Show.
>> Yes they are. >> Good plug Andy. >> Headlining. >> Very clever. >> Always getting a good plug in there. >> My very favorite band. Well, congratulations on Intersect, you've got a lot going on. Intersect is a music festival, I'll get to that in a second. But I think the big news for me is two things. Obviously we had a one-on-one exclusive interview and you laid out essentially what looks like was going to be your Keynote, and it was. Transformation- >> Andy: Thank you for the practice. (Laughter) >> John: I'm glad to practice, use me anytime. >> Yeah. >> And I appreciated the comments on Jedi on the record, that was great. But I think the transformation story's a very real one, but the NFL news you guys just announced, to me, was so much fun and relevant. You had the Commissioner of the NFL on stage with you talking about a strategic partnership. That is as top down, aggressive a goal as you could get: to have Roger Goodell fly to a tech conference to sit with you and then bring his team to talk about the deal. >> Well, ya know, we've been partners with the NFL for a while with the Next Gen Stats that they use on all their telecasts, and one of the things I really like about Roger is that he's very curious and very interested in technology, and the first couple times I spoke with him he asked me so many questions about ways the NFL might be able to use the Cloud and digital transformation to transform their various experiences, and he's always said, if you have a creative idea or something you think that could change the world for us, just call me, he said, or text me or email me and I'll call you back within 24 hours.
And so, we've spent the better part of the last year talking about a lot of really interesting, strategic ways that they can evolve their experience, both for fans as well as their players, and the Player Health and Safety Initiative. It's so important in sports, and particularly important with the NFL given the nature of the sport, and they've always had a focus on it. But what you can do with computer vision and machine learning algorithms, and then building a digital athlete, which is really like a digital twin of each athlete, so you understand what it looks like when they're healthy and compare that to when it looks like they may not be healthy, and be able to simulate all kinds of different combinations of player hits and angles and different plays so that you could try to predict injuries and predict the right equipment you need before there's a problem, can be really transformational. So we're super excited about it. >> Did you guys come up with the idea or was it a collaboration between them? >> It was really a collaboration. I mean they, look, they are very focused on player safety and health, and it's a big deal for their- you know, they have two main constituents, the players and fans, and they care deeply about the players, and it's a-it's a hard problem in a sport like Football, I mean, you watch it. >> Yeah, and I got to say it does point out the use cases of what you guys are promoting heavily at the show here, of SageMaker Studio, which was a big part of your Keynote, where they have all this data. >> Andy: Right. >> And they're data hoarders, they hoard data, but the manual process of going through the data was a killer problem. This is consistent with a lot of the enterprises that are out there: they have more data than they even know. So this seems to be a big part of the strategy. How do you get the customers to actually wake up to the fact that they've got all this data, and how do you tie that together?
>> I think in almost every company they know they have a lot of data. And there are always pockets of people who want to do something with it. But when you're going to make these really big leaps forward, these transformations, the things like Volkswagen is doing, where they're reinventing their factories and their manufacturing process, or the NFL, where they're going to radically transform how they do player, uh, health and safety, it starts top down. And if the senior leader isn't convicted about wanting to take that leap forward and trying something different, and organizing the data differently, and organizing the team differently, and using machine learning, and getting help from us, and building algorithms, and building some muscle inside the company, it just doesn't happen, because it's not in the normal machinery of what most companies do. And so it always, almost always, starts top down. Sometimes it can be the Commissioner or CEO, sometimes it can be the CIO, but it has to be senior level conviction or it doesn't get off the ground. >> And the business model impact has to be real. For the NFL, they know concussions are hurting their youth pipeline; this is a huge issue for them. This is their business model. >> They lose even more players to lower extremity injuries. And so just the notion of trying to be able to predict injuries and, you know, the impact it can have on rules, and the impact it can have on the equipment they use, it's a huge game changer when they look at the next 10 to 20 years. >> Alright, love geeking out on the NFL but Andy, you know- >> No more NFL talk? >> Off camera how about we talk? >> Nobody talks about the Giants being 2 and 10. >> Stu: We're both Patriots fans here. >> People bring up the undefeated season. >> So Andy- >> Everybody's a Patriots fan now.
(Laughter) >> It's fascinating to watch, uh, you in your three hour, uh, Keynote, uh, Werner in his, you know, architectural discussion; it really showed how AWS is really extending its reach. You know, it's not just a place. For a few years people have been talking about, you know, Cloud is an operational model, it's not a destination or a location, but I felt it really was laid out as you talked about Breadth and Depth, and Werner really talked about, you know, architectural differentiation. People talk about Cloud, but there are a lot of differences between the visions for where things are going. Help us understand why, I mean, Amazon's vision is still a bit different from what other people talk about, where this whole Cloud expansion, journey, put whatever tag or label you want on it, but, you know, the control plane and the technology that you're building and where you see that going. >> Well I think that, we've talked about this a couple times, we have two macro types of customers. We have those that really want to get at the low level building blocks and stitch them together creatively however they see fit to create whatever's in their heads. And then we have the second segment of customers that say, look, I'm willing to give up some of that flexibility in exchange for getting 80% of the way there much faster, in an abstraction that's different from those low level building blocks. And both segments of builders we want to serve and serve well, and so we've built very significant offerings in both areas.
I think when you look at microservices, um, you know, some of it has to do with the fact that we have this very strongly held belief, born out of several years of Amazon, where, you know, for the first 7 or 8 years of Amazon's consumer business we basically jumbled together all of the parts of our technology in moving really quickly, and when we wanted to move quickly where you had to impact multiple internal development teams, it took so long because it was this big ball, this big monolithic piece. And we got religion about that in trying to move faster in the consumer business and having to tease those pieces apart. And it really was a lot of the impetus behind conceiving AWS, where it was these low level, very flexible building blocks that don't try and make all the decisions for customers; they get to make them themselves. And some of the microservices that you saw Werner talking about, just, you know, for instance, what we did with Nitro or even what we did with Firecracker, those are very much about us relentlessly working to continue to, uh, tease apart the different components. And even things that look like low level building blocks, over time you build more and more features, and all of a sudden you realize they have a lot of things that are combined together that you wished weren't, that slow you down. And so Nitro was a complete reimagining of our hypervisor and virtualization layer to allow us both to let customers have better performance but also to let us move faster and have a better security story for our customers. >> I got to ask you the question around transformation, because I think all the data points, you got all the references: Goldman Sachs on stage at the Keynote, Cerner. I mean, healthcare just is an amazing example because, I mean, that's demonstrating real value there; there's no excuse.
I talked to someone who wouldn't be named last night, in and around the area, who said the CIA has a cost bar like this, a budget like this, but the demand for mission based apps is going up exponentially, so there's need for the Cloud. And so, you see more and more of that. What is your top down, aggressive goal to fill that solution base, because you're also a very transformational thinker; what is your aggressive top down goal for your organization, because you're serving a market with trillions of dollars of spend that's shifting, that's on the table. >> Yeah. >> A lot of competition now sees it too, they're going to go after it. But at the end of the day you have customers that have a demand for things, apps. >> Andy: Yeah. >> And not a lot of budget increase at the same time. This is a huge dynamic. >> Yeah. >> John: What are your goals? >> You know, I think that at a high level our top down aggressive goals are that we want every single customer who uses our platform to have an outstanding customer experience. And we want that outstanding customer experience, in part, to be that their operational performance and their security are outstanding, but also that it allows them to build, uh, build projects and initiatives that change their customer experience and allow them to be a sustainable, successful business over a long period of time. And then, we also really want to be the technology infrastructure platform under all the applications that people build. And we're realistic: we know that, you know, the market segments we address with infrastructure, software, hardware, and data center services globally are trillions of dollars in the long term, and it won't only be us, but we have that goal of wanting to serve every application, and that requires not just the security operational premise but also a lot of functionality and a lot of capability.
We have by far the most capability out there, and yet I would tell you we have 3 to 5 years of items on our roadmap that customers want us to add. And that's just what we know today. >> And Andy, underneath the covers you've been going through some transformation. When we talked a couple of years ago about how serverless is impacting things, I've heard that that's actually, in many ways, the glue behind the two-pizza teams to work between organizations. Talk about how the internal transformations are happening. How does that impact your discussions with customers that are going through that transformation? >> Well, I mean, a lot of the technology we build comes from things that we're doing ourselves, you know? And that we're learning ourselves. It's kind of how we started thinking about microservices. Serverless, too: we saw the need, you know, we would build all these functions where, when some kind of object came into an object store, we would spin up compute, all those tasks would take like 3 or 4 hundred milliseconds, then we'd spin it back down, and yet we'd have to keep a cluster up in multiple availability zones because we needed that fault tolerance. And we just said, this is wasteful, and that's part of how we came up with Lambda. And you know, when we were thinking about Lambda, people understandably said, well, if we build Lambda and we build this serverless adventure in computing, a lot of people who were keeping clusters of instances aren't going to use them anymore; it's going to lead to less absolute revenue for us. But we have learned this lesson over the last 20 years at Amazon, which is, if it's something that's good for customers, you're much better off cannibalizing yourself and doing the right thing for customers and being part of shaping something. 
And I think if you look at the history of technology, you always build things and people say, well, that's going to cannibalize this and people are going to spend less money. What really ends up happening is they spend less money per unit of compute, but it allows them to do so much more that they ultimately, long term, end up being more significant customers. >> I mean, you are like beating the drum all the time: customers, what they say, we encompass the roadmap. I get that you guys have that playbook down; that's been really successful for you. >> Andy: Yeah. >> Two years ago you told me machine learning was really important to you because your customers told you. What's the next tranche of importance for customers? What's top of mind now as you look at... >> Andy: Yeah. >> This re:Invent kind of coming to a close, Replay's tonight, you've had conversations, you're a tech athlete, you're running around, doing speeches, talking to customers. What's that next hill, if it's machine learning today... >> There's so much. I mean, (weird background noise) it's not a soup question. (Laughter) And I think we're still in the very early days of machine learning; it's not like most companies have mastered it yet, even though they're using it much more than they did in the past. But, you know, I think machine learning for sure, I think the Edge for sure, and I think we're optimistic about quantum computing, even though I think it'll be a few years before it's really broadly useful. We're very enthusiastic about robotics. I think the amount of functions that are going to be done by these... >> Yeah. >> robotic applications are much more expansive than people realize. It doesn't mean humans won't have jobs; they're just going to work on things that are more value-added. We're believers in augmented and virtual reality; we're big believers in what's going to happen with Voice. 
And I'm also, I think sometimes people get bored, you know. I think you're even bored with machine learning already. >> Not yet. >> People get bored with the things you've heard about, but I think just what we've done with the chips, you know, in terms of giving people 40% better price performance in the latest generation of x86 processors, it's pretty unbelievable, the difference in what people are going to be able to do. Or just look at big data. I mean, big data, we haven't gotten through big data to where people have totally solved it. The amount of data that companies want to store, process, and analyze is exponentially larger than it was a few years ago, and it will, I think, exponentially increase again in the next few years. You need different tools and services. >> Well, I think we're not bored with machine learning; we're excited to get started because we have all this data from the video and you guys have got SageMaker. >> Andy: Yeah. >> We call it the stairway to machine learning heaven. >> Andy: Yeah. >> You start with the data, move up, knock... >> You guys are very sophisticated with what you do with technology and machine learning, and there's so much. I mean, we're just kind of, again, in such early innings. And it was so... before SageMaker, it was so hard for everyday developers and data scientists to build models, but the combination of SageMaker and what's happened with thousands of companies standardizing on it the last two years, plus now SageMaker Studio, is a giant leap forward. >> Well, we hope to use the data to transform our experience with our audience. And we're on Amazon Cloud, so we really appreciate that. >> Andy: Yeah. >> And appreciate your support... >> Andy: Yeah, of course. >> John: With Amazon, and get that machine learning going a little faster for us; that would be better. >> If you have requests, I'm interested, yeah. >> So Andy, you talked about the customers that are builders and the customers that need simplification. 
Traditionally, when you get into, you know, the heart of the majority of adoption of something, you really need to simplify that environment. But when I think about the successful enterprise of the future, they need to be builders. Normally I would've said enterprises want to pay for solutions because they don't have the skill set, but if they're going to succeed in this new economy they need to go through that transformation. >> Andy: Yeah. >> That you talked to. So, I mean, are we in just a totally new era? When we look back, will this be different than some of these previous waves? >> It's a really good question, Stu, and I don't think there's a simple answer to it. I think that a lot of enterprises in some ways, I think, wish that they could just skip the low-level building blocks and only operate at that higher level of abstraction. That's why people were so excited by things like SageMaker, or CodeGuru, or Kendra, or Contact Lens; these are all services that allow them to just send us data and then run it on our models and get back the answers. But I think one of the big trends that we see with enterprises is that they are taking more and more of their development in-house and they are wanting to operate more and more like startups. I think that they admire what companies like Airbnb and Pinterest and Slack and Robinhood and a whole bunch of those companies, Stripe, have done, and so, you know, I think you go through these phases and eras where there are waves of success at different companies and then others want to follow that success and replicate it. And so we see more and more enterprises saying, we need to take back a lot of that development in-house. And as they do that, and as they add more developers, those developers in most cases like to deal with the building blocks. And they have a lot of ideas on how they can creatively stitch them together. 
>> Yeah, on that point, I want to just quickly ask you about Amazon versus other clouds, because you made a comment to me in our interview about how hard it is to provide a service to other people. And it's hard to have a service that you're using yourself and turn that around, and the most quoted line of my story was, the compression algorithm... there's no compression algorithm for experience. Which, to me, is the diseconomies of scale for taking shortcuts. >> Andy: Yeah. >> And so I think this is a really interesting point. Just add some color commentary, because I think this is a fundamental difference between AWS and others, because you guys have a trajectory over the years of serving, at scale, customers wherever they are, whatever they want to do. Now you've got microservices. >> Yeah. >> John: It's even more complex. That's hard. >> Yeah. >> John: Talk about that. >> I think there are a few elements to that notion of there's no compression algorithm for experience, and I think the first thing to know about AWS, which is different, is we just come from a different heritage and a different background. We ran a business for a long time that was our sole business, a consumer retail business that was very low margin. And so we had to operate at very large scale given how many people were using us, but also we had to run infrastructure services deep in the stack, compute, storage, and database, and reliable, scalable data centers at very low cost and margins. And so when you look at our business, it actually, today, I mean, it's a higher-margin business than our retail business, a lower-margin business than software companies, but at real scale it's a high-volume, relatively low-margin business. And the way that you have to operate to be successful with those businesses, and the things you have to think about, and that DNA, come from the type of operators we have to be in our consumer retail business. And there's nobody else in our space that does that. 
So, you know, the way that we think about costs, the way we think about innovation in the data center, and I also think the way that we operate services, and how long we've been operating services as a company, it's a very different mindset than operating packaged software. Then when you think about some of the issues in very large-scale cloud, you can't learn some of those lessons until you get to different elbows of the curve and scale. And so what I was telling you is, it's really different to run your own platform for your own users, where you get to tell them exactly how it's going to be done. But that's not the way the real world works. I mean, we have millions of external customers who use us from every imaginable country and location, whenever they want, without any warning, for lots of different use cases, and they have lots of design patterns, and we don't get to tell them what to do. And so operating a cloud like that, at a scale that's several times larger than the next few providers combined, is a very different endeavor and a very different operating rigor. >> Well, you've got to keep raising the bar. You guys do a great job; really impressed again. Another tsunami of announcements. In fact, you had to spill the beans earlier with Quantum the day before the event. Tight schedule. I've got to ask you about the music festival because I think this is a very cool innovation. It's the inaugural Intersect conference. >> Yes. >> John: Which is not part of Replay. >> Yes. >> John: Which is the concert tonight. It's a whole new thing: big music acts, you're a big music buff, your daughter's an artist. Why did you do this? What's the purpose? What's your goal? >> Yeah, it's an experiment. I think what's happened is that re:Invent has gotten so big, we have 65 thousand people here, that to do the party, which we do every year, it's like a 35-40 thousand person concert now. 
Which means you have to have a location that has multiple stages, and, you know, we thought about it last year when we were watching it and we said, we're kind of throwing, like, a 4-hour music festival right now. There are multiple stages, and it's quite expensive to set up that set for a party, and we said, well, maybe we don't have to spend all that money for 4 hours and then rip it apart, because actually the rent to keep those locations for another two days is much smaller than the cost of actually building multiple stages. And so we thought we would try it this year. We're very passionate about music as a business, and I think our customers feel like we've thrown a pretty good music party the last few years, and we thought we would try it at a larger scale as an experiment. And if you look at the economics... >> The headliners, real quick. >> The Foo Fighters are headlining on Saturday night; Anderson .Paak and the Free Nationals, Brandi Carlile, Shawn Mullins, Willy Porter; it's a good set. Friday night it's Beck and Kacey Musgraves. So it's a really great set of about thirty artists, and we're hopeful that if we can build a great experience that people will want to attend, we can do it at scale, and it might be something that both pays for itself and maybe helps pay for re:Invent too over time. And, you know, I think we're also thinking about it as not just a music concert and festival. The reason we named it Intersect is that we want an intersection of music genres and people and ethnicities and age groups and art and technology, all there together. And this will be the first year we try it; it's an experiment and we're really excited about it. >> Well, I'm gone. Congratulations on all your success, and I want to thank you. We've been 7 years here at re:Invent; we've been documenting the history. You've got two sets now, one set upstairs. So appreciate you. 
>> theCUBE is part of re:Invent, you know; you guys really are a part of the event, and we really appreciate your coming here, and I know people appreciate the content you create as well. >> And we just launched CUBE365 on Amazon Marketplace, built on AWS, so thanks for letting us... >> Very cool. >> John: Build on the platform. Appreciate it. >> Thanks for having me, guys, I appreciate it. >> Andy Jassy, the CEO of AWS, here inside theCUBE. It's our 7th year covering and documenting the thunderous innovation that Amazon's doing; they're really doing amazing work building out the new technologies here in the cloud computing world. I'm John Furrier with Stu Miniman; be right back with more after this short break. (Outro music)

Published Date : Sep 29 2020


>> Hi, my name is Andy Clemenko. I'm a Senior Solutions Engineer at StackRox. Thanks for joining us today for my talk on labels, labels, labels. Obviously, you can reach me at all the socials. Before we get started, I'd like to point you to my GitHub repo: you can go to andyc.info/dc20, and it'll take you to my GitHub page, where I've got all of this documentation. I've got the Keynote file there, YAMLs, Dockerfiles, Compose files, all that good stuff. If you want to follow along, great; if not, go back and review later, kind of fun. So let me tell you a little bit about myself. I am a former DOD contractor. This is my seventh DockerCon. I've had the pleasure to speak at a few of them, one even in Europe. I was even a Docker employee for quite a number of years, providing solutions to the federal government and customers around containers and all things Docker. So I've been doing this a little while. One of the things that I always found interesting was the lack of understanding around labels. So why labels, right? Well, as a former DOD contractor, I had built out a large registry. And the question I constantly got was, where did this image come from? How did you get it? What's in it? Where did it come from? How did it get here? And one of the things we did to kind of alleviate some of those questions was we established a baseline set of labels. Labels really are designed to provide as much metadata around the image as possible. I ask everyone in attendance: when was the last time you pulled an image and had 100% confidence you knew what was inside it, where it was built, how it was built, when it was built? You probably didn't, right? The last thing we obviously want is a container fire, like our image on the screen. And one kind of interesting way we can prevent that is through the use of labels. We can use labels to address security, and to address some of the simplicity of how to run these images. 
So think of it kind of like self-documenting. Think of it also as an audit trail: image provenance, things like that. These are some interesting concepts that we can definitely mandate as we move forward. What is a label, right? Specifically, what is the schema? It's just a key-value pair. All right? It's any key and pretty much any value. What if we could dump in all kinds of information? What if we could encode things and store them in there? And I've got a fun little demo to show you about that. Let's start off with some of the simple keys, right? Author, date, description, version. Some of the basic information around the image. That would be pretty useful, right? What about specific labels for CI? What about, where's the version control? Where's the source, right? Whether it's Git, whether it's GitLab, whether it's GitHub, whether it's Gitosis, right? Even SVN, who cares? Where are the source files, where's the Dockerfile that built this image? What's the commit number? That might be interesting in terms of tracking the resulting image to a commit, and hopefully then to a person. How is it built? What if you wanted to play with it and do a git clone of the repo and then build from the Dockerfile on your own? Having a label specifically dedicated to how to build this image might be interesting for development work. Where it was built, and obviously what build number, right? These not only talk about continuous integration, CI, but also start to talk about security. Specifically, what server built it. The version control number, the version number, the commit number, again, how it was built. What's the specific build number? What was that job number in, say, Jenkins or GitLab? What if we could take it a step further? What if we could actually apply policy enforcement in the build pipeline, looking specifically for some of these labels? I've got a good example of policy enforcement in my demo. 
So let's look at some sample labels. Now originally, this idea came out of label-schema.org. And then it was modified into the opencontainers spec, org.opencontainers.image. There is a link in my GitHub page that points to the full reference. But these are some of the labels that I like to use, just as kind of a standardization. So obviously, authors is an email address, so now the image is attributable to a person; that's always kind of good for security and reliability. Where's the source? Where's the version control that has the source, the Dockerfile, and all the assets? How it was built, build number, build server, the commit (we talked about that), when it was created, a simple description. A fun one I like adding in is the healthz endpoint. Now obviously, the health check directive should be in the Dockerfile. But if you've got other systems that want to ping your applications, why not declare it and make it queryable? Image version, obviously; that's a simple declarative. And then a title. And then I've got the two fun ones. Remember, I talked about what if we could encode some fun things? Hypothetically, what if we could encode the Compose file of how to build the stack in the image itself? And conversely, the Kubernetes YAML? Well, actually, you can, and I have a demo to show you how to take advantage of that. So how do we create labels? Really, creating labels is a function of build time, okay? You can't really add labels to an image after the fact. The way you do add labels is either through the Dockerfile, which I'm a big fan of, because it's declarative, it's in version control, and it's kind of irrefutable, especially if you're tracking that commit number in a label. You can extend it from being a static declaration to something more dynamic with build arguments. And I'll show you in a little while how you can use a build argument at build time to pass in that variable. 
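As a rough sketch of the approach described above, here is what a Dockerfile using the org.opencontainers.image keys and build arguments could look like. The repo URL, email address, and the org.zdocker key are made-up stand-ins, not the talk's actual values; the build command in the comment is one plausible way to feed in the dynamic values.

```shell
# Write a minimal Dockerfile that declares build args and maps them into
# OCI-style labels (quoted heredoc so ${...} stays literal in the file).
cat > Dockerfile.labels <<'EOF'
FROM alpine:3.12
ARG BUILD_DATE
ARG BUILD_NUMBER
ARG GIT_COMMIT
LABEL org.opencontainers.image.authors="andy@example.com" \
      org.opencontainers.image.source="https://github.com/example/flask-demo" \
      org.opencontainers.image.created="${BUILD_DATE}" \
      org.opencontainers.image.revision="${GIT_COMMIT}" \
      org.zdocker.build-number="${BUILD_NUMBER}"
EOF

# A matching build command would pass the dynamic values in, e.g.:
#   docker build \
#     --build-arg BUILD_DATE="$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
#     --build-arg BUILD_NUMBER="${BUILD_NUMBER:-0}" \
#     --build-arg GIT_COMMIT="$(git rev-parse --short HEAD)" \
#     -f Dockerfile.labels -t labels-demo .

# Count the OCI-namespaced label lines we just wrote:
grep -c 'org.opencontainers.image' Dockerfile.labels   # prints: 4
```

In a CI system, the `--build-arg` values typically come from the job's own environment variables, which is what makes the labels self-documenting at build time.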
And then obviously, if you did it by hand, you could do a docker build --label key=value. I'm not a big fan of the third one; I love the first one and obviously the second one. Being dynamic, we can take advantage of some of the variables coming out of version control, or I should say, some of the variables coming out of our CI system. And that way it self-documents effectively at build time, which is kind of cool. How do we view labels? Well, there are two major ways to view labels. The first one is obviously docker pull and docker inspect. You can pull the image locally and inspect it; it's going to output JSON, so you're going to use something like jq to crack it open and look at the individual labels. Another one, which I found recently, is Skopeo from Red Hat. This allows you to actually query the registry server, so you don't even have to pull the image initially. This can be really useful if you're on a really small development workstation and you're trying to talk to a Kubernetes cluster, wanting to deploy apps in a very simple manner. Okay? And this was that use case, right? Using Kubernetes, the Kubernetes demo. One of the interesting things about this is that you can base64 encode almost anything, push it in as text into a label, and then base64 decode it and use it. So in this case, in my demo, I'll show you how we can actually use a kubectl apply piped from the base64 decode of the label itself, from Skopeo talking to the registry. And what's interesting about this technique is you don't need to store Helm charts. You don't need to learn another language for your declarative automation, right? You don't need all these extra levels of abstraction; inherently, if you use it as a label with a kubectl apply, it's just built in. It's kind of like the KISS approach, to a certain extent. It does require some encoding when you actually build the image, but to me, it doesn't seem that hard. 
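The two inspection paths just described could look roughly like the commented commands below; the image name example/flask-demo is made up, and the live commands need docker, skopeo, and jq available. Since those need a registry, the runnable part extracts a label value from a hand-written JSON snippet of the same shape with standard tools only.

```shell
# Sketch of the two ways to view labels (real invocations, not run here):
#
#   docker pull example/flask-demo:prod
#   docker inspect example/flask-demo:prod | jq '.[0].Config.Labels'
#
#   skopeo inspect docker://docker.io/example/flask-demo:prod | jq '.Labels'
#
# Both return JSON; a labels object looks something like this:
cat > labels.json <<'EOF'
{
  "org.opencontainers.image.authors": "andy@example.com",
  "org.opencontainers.image.revision": "ab12cd3"
}
EOF

# Even without jq, a single label can be pulled out with grep and cut:
grep '"org.opencontainers.image.revision"' labels.json | cut -d'"' -f4   # prints: ab12cd3
```

The Skopeo path is the interesting one for thin workstations: the registry answers the metadata query, so no image layers are ever downloaded.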
Okay, let's take a look at a demo. And what I'm going to do for my demo, before we actually get started: here's my repo. Let me actually go to the full repo. So here's the repo, right? And I've got my Jenkins pipeline, 'cause I'm using Jenkins for this demo. And in my demo flask directory, I've got the Dockerfile, I've got my Compose and my Kubernetes YAML. So let's take a look at the Dockerfile, right? So it's a simple Alpine image. The ARG statements are the build time arguments that are passed in. LABEL: so again, I'm using org.opencontainers.image.something for most of them. There's a typo there. Let's see if you can find it; I'll show you later. My source, build date, build number, commit. Build number and git commit are derived from Jenkins itself, which is nice; I can just take advantage of existing URLs, I don't have to create anything crazy. And again, I've got my actual docker build command. Now this is just a label on how to build it. And then here's my simple Python app: APK upgrade, remove the package manager, kind of some security stuff, a health check hitting Python, okay? Let's take a look at the Jenkins pipeline real quick. So here is my Jenkins pipeline, and I have four major stages. First I have build, and here in build, what I do is I actually do the git clone, and then I do my docker build. From there, I actually tell the Jenkins StackRox plugin (so that's what I'm using for my security scanning) to go ahead and scan; basically, I'm staging it to scan the image. I'm pushing it to Hub, okay? Basically, I'm pushing the image up to Hub such that my StackRox security scanner can go ahead and scan the image. I'm kicking off the scan itself. And then, if everything's successful, I'm pushing it to prod. Now what I'm doing is just using the same image with two tags, pre-prod and prod. 
This is not exactly ideal; in your environment, you probably want to use separate registries, a non-prod and a production registry, but for demonstration purposes, I think this is okay. So let's go over to my Jenkins, and I've got a deliberate failure, and I'll show you why there's a reason for that. And let's go down. Let's look at my StackRox report. And it says: required image label alert, right? Requesting that the maintainer add the required label to the image. So we're missing a label, okay? One of the things we can do is, let's flip over and look at Skopeo. Right? I'm going to do this just the easy way. So instead of looking at org.zdocker, let's look at org.opencontainers.image.authors. Okay, see here it says "build signature"? That was the typo: we didn't actually pass in the build time argument, we just passed in the word. So let's fix that real quick. That's the Dockerfile. Let's go ahead and put our dollar sign in there. First day with the fingers, you're going to love it. And let's go ahead and commit that. Okay? So now that that's committed, we can go back to Jenkins, and we can actually do another build. And there's number 12. And as you can see, I've been playing with this for a little bit today. And while that's running, we can go ahead and look at the console output. Okay, so there's our image. And again, look at all the build arguments that we're passing into the build statement. So we're passing in the date, and the date gets derived on the command line. With the build arguments, there's the base64 encoding of the Compose file; here's the base64 encoding of the Kubernetes YAML. We do the build. And then let's go down to the bottom: layer exists, and successful. So here's where we can see "no system policy violations found," marking the StackRox security plugin build step as successful, okay? 
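The missing-dollar-sign typo described above can be seen in miniature with plain shell expansion, which behaves the same way Dockerfile LABEL values do with ARGs. The variable name BUILD_SIGNATURE is a made-up stand-in for the demo's label.

```shell
# Without the dollar sign, the label gets the literal word (the bug);
# with it, the label gets the build-arg's value (the fix). In a Dockerfile
# this is LABEL x="BUILD_SIGNATURE" versus LABEL x="${BUILD_SIGNATURE}".
BUILD_SIGNATURE="jenkins-build-12"
echo 'BUILD_SIGNATURE'        # prints: BUILD_SIGNATURE
echo "${BUILD_SIGNATURE}"     # prints: jenkins-build-12
```

Enforcing a policy that the label value is a well-formed email address (rather than merely present) is what catches this class of typo in the pipeline.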
So we're actually able to do policy enforcement: that that label, sorry, exists in the image. And again, we can look at the security report, and there are no policy violations and no vulnerabilities. So that's pretty good for security, right? We can now enforce and mandate the use of certain labels within our images. And let's flip back over to Skopeo and go ahead and look at it. So we're looking at the prod version again, and there it is, my email address. And that validated that that was valid for that policy. So that's kind of cool. Now, let's take it a step further. Let's go ahead and take a look at all the labels for a second; let me remove the dash org, make it pretty. Okay? So we have all of our image labels. Again, authors, build, commit number; look at the commit number. It was built today, build number 12. We saw that, right? Build 12. So that's kind of cool: dynamic labels. Name, healthz, right? But what we're looking for is the org.zdocker.kubernetes label. So let's go look at that label real quick. Okay, well, that doesn't really help us because it's encoded, but let's base64 -d, let's decode it. And I need to put the -r in there 'cause it doesn't like... there we go. So there's my Kubernetes YAML. So why can't we simply kubectl apply -f? Let's just apply it from standard in. So now we've actually used that label, from the image that we've queried with Skopeo from a remote registry, to deploy locally to our Kubernetes cluster. So let's go ahead and look: everything's up and running, perfect. So what does that look like, right? So luckily, I'm using Traefik for Ingress, 'cause I love it, and I've got an object in my Kubernetes YAML called flask.docker.life. That's my Ingress object for Traefik. I can go to flask.docker.life and I can hit refresh. Obviously, I'm not a very good web designer, 'cause of the background image and the text. 
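The decode-and-apply trick above can be sketched end to end as below. The label name org.zdocker.kubernetes and the image name are stand-ins for the demo's own conventions, and the skopeo/jq/kubectl pipeline is left as a comment since it needs a live registry and cluster; the base64 round trip itself runs locally.

```shell
# A tiny manifest standing in for the demo's Kubernetes YAML:
manifest='apiVersion: v1
kind: Service
metadata:
  name: flask-demo'

# At build time, the manifest is encoded and passed in as a build argument
# (tr strips the line wrapping some base64 implementations add):
encoded=$(printf '%s' "$manifest" | base64 | tr -d '\n')

# At deploy time, pull the label straight off the registry and apply it:
#   skopeo inspect docker://docker.io/example/flask-demo:prod \
#     | jq -r '.Labels["org.zdocker.kubernetes"]' \
#     | base64 -d \
#     | kubectl apply -f -

# The local round trip confirms the decode recovers the original YAML:
printf '%s' "$encoded" | base64 -d
```

Note jq's `-r` flag, mentioned in the demo: without it, jq emits the label as a quoted JSON string, which would corrupt the base64 input to the decoder.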
We can go ahead and refresh it a couple of times; we've got Redis storing a hit counter. We can see that our server name is round-robining. Okay? That's kind of cool. So let's recap a little bit about my demo environment. For my demo environment, I'm using DigitalOcean, Ubuntu 19.10 VMs. I'm using K3s instead of full Kubernetes, full Rancher, full OpenShift, or Docker Enterprise. I think K3s has some really interesting advantages on the development side; it's kind of intended for IoT, but it works really well and it deploys super easily. I'm using Traefik for Ingress. I love Traefik. I may or may not be a Traefik ambassador. I'm using Jenkins for CI. And I'm using StackRox for image scanning and policy enforcement. One of the things to think about, though, especially in terms of labels, is that none of this demo stack is required. You can be in any cloud, you can be on CentOS, you can be in any Kubernetes. You can even be in Swarm if you wanted to, or Docker Compose. Any Ingress, any CI system (Jenkins, Circle, GitLab), it doesn't matter. And pretty much any scanning. One of the things that I think is kind of nice about at least StackRox is that we do a lot more than just image scanning, right? With the policy enforcement, things like that. I guess that's kind of a shameless plug. But again, any of this stack is completely replaceable with any comparable product in that category. So I'd like to, again, point you guys to andyc.info/dc20; that'll take you right to the GitHub repo. You can reach out to me at any of the socials, @clemenko or andy@stackrox.com. And thank you for attending. I hope you learned something fun about labels. And hopefully you guys can standardize labels in your organization and really take your images and the image provenance to a new level. Thanks for watching. (upbeat music)

Published Date : Sep 28 2020


ON DEMAND SPEED K8S DEV OPS SECURE SUPPLY CHAIN


 

>> In this session, we will be reviewing the power and benefits of implementing a secure software supply chain and how we can gain a cloud-like experience with the flexibility, speed and security of modern software delivery. Hi, I'm Matt Bentley, and I run our technical pre-sales team here at Mirantis. I've spent the last six years working with customers on their containerization journey. One thing almost every one of my customers has focused on is how they can leverage the speed and agility benefits of containerizing their applications while continuing to apply the same security controls. One of the most important things to remember is that we are all doing this for one reason, and that is for our applications. So now let's take a look at how we can provide flexibility to all layers of the stack, from the infrastructure on up to the application layer. When building a secure supply chain for container-focused platforms, I generally see two different mindsets in terms of where responsibilities lie between the developers of the applications and the operations teams who run the middleware platforms. Most organizations are looking to build a secure, yet robust service that fits their organization's goals around how modern applications are built and delivered. First, let's take a look at the developer or application team approach. This approach follows more of the DevOps philosophy, where developer and application teams are the owners of their applications from development through their life cycle, all the way to production. I would refer to this as more of a self-service model of application delivery and promotion when deployed to a container platform. This is fairly common in organizations where full stack responsibilities have been delegated to the application teams. Even in organizations where full stack ownership doesn't exist, I see the self-service application deployment model work very well in lab, development or non-production environments.
This allows teams to experiment with newer technologies, which is one of the most effective benefits of utilizing containers. In other organizations, there is a strong separation between responsibilities for developers and IT operations. This is often due to the complex nature of controlled processes related to compliance and regulatory needs. Developers are responsible for their application development. This can either include Docker at the development layer, or be the more traditional throw-it-over-the-wall approach to application development. There's also quite a common experience around building a center of excellence with this approach, where container platforms can be delivered as a service to other consumers inside of the IT organization. This is fairly prescriptive in the manner in which application teams would consume it. When examining the two approaches, there are pros and cons to each. Process, controls and compliance are often seen as inhibitors to speed. Self-service creation, starting with the infrastructure layer, leads to inconsistency, security and control concerns, which leads to compliance issues. While self-service is great, without visibility into the utilization and optimization of those environments, it continues the cycle of inefficient resource utilization. And a true infrastructure-as-code experience requires DevOps-related coding skills that teams often have in pockets, but that maybe aren't ingrained in the company culture. Luckily for us, there is a middle ground for all of this. Docker Enterprise Container Cloud provides the foundation for the cloud-like experience on any infrastructure, with all of the out-of-the-box security and controls that our professional services team and your operations teams would otherwise spend their time designing and implementing. This removes much of the additional work and worry around ensuring that your clusters and experiences are consistent, while maintaining the ideal self-service model.
No matter if it is full stack ownership or easing the needs of IT operations. We're also bringing the most natural Kubernetes experience today with Lens, to allow for multi-cluster visibility that is both developer and operator friendly. Lens provides immediate feedback for the health of your applications, observability for your clusters, fast context switching between environments, and the freedom to choose the best tool for the task at hand, whether that is the graphical user interface or the command line interface. Combining the cloud-like experience with the efficiencies of a secure supply chain that meets your needs brings you the best of both worlds. You get DevOps speed with all the security and controls to meet the regulations your business lives by. We're talking about more frequent deployments, faster time to recover from application issues and better code quality. As you can see from the customers we have worked with, we're able to tie these processes back to real cost savings, real efficiency and faster adoption. This all adds up to delivering business value to end users and to the overall perceived value. Now let's see how we're able to actually build a secure supply chain to help deliver these sorts of initiatives. In our example secure supply chain, we're utilizing Docker Desktop to help with consistency of developer experience, GitHub for our source control, Jenkins for our CI/CD tooling, Docker Trusted Registry for our secure container registry, and Universal Control Plane to provide us with our secure container runtime with Kubernetes and Swarm, providing a consistent experience no matter where our clusters are deployed. We work with your teams of developers and operators to design a system that provides a fast, consistent and secure experience for developers, one that works for any application: Brownfield or Greenfield, monolith or microservice.
Onboarding teams can be simplified with integrations into enterprise authentication services; access to GitHub repositories; Jenkins access and jobs; Universal Control Plane and Docker Trusted Registry teams and organizations; Kubernetes namespaces with access control; and Docker Trusted Registry namespaces with access control, image scanning and promotion policies. So now let's take a look and see what the CI/CD process looks like, including Jenkins. Let's start with Docker Desktop. From the Docker Desktop standpoint, we'll actually be utilizing Visual Studio Code and Docker Desktop to provide a consistent developer experience. So no matter if we have one developer or a hundred, we're going to be able to walk through a consistent process of Docker container utilization at the development layer. Once we've made our changes to our code, we'll be able to check those into our source code repository. In this case, we'll be using GitHub. Then, when Jenkins picks up the change, it will check out that code from our source code repository, build our Docker container, test the application, build the image, and then take the image and push it to our Docker Trusted Registry. From there, we can scan the image to make sure it doesn't have any vulnerabilities. Then we can sign it. So once we've signed our images and deployed our application to dev, we can actually test our application deployed in our real environment. Jenkins will then test the deployed application, and if all tests come back good, we'll promote our Docker image to production. So now, let's look at the process, beginning from the developer interaction. First of all, let's take a look at our application as it's deployed today. Here, we can see that we have a change that we want to make on our application. Our marketing team says we need to change "containerized NGINX" to something more Mirantis-branded.
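The flow just described (check out, build, test, push, scan, sign, deploy to dev, integration-test, promote) is essentially an ordered series of gates: each stage must succeed before the next runs, so an image that fails its scan is never signed or promoted. Here is an illustrative model of that ordering; this is a sketch of the idea, not Mirantis tooling, and the stage names are assumptions.

```python
# Illustrative model of the secure-supply-chain pipeline order.
# Stage names are hypothetical; each stage's result is True/False.
STAGES = ["checkout", "build", "unit_test", "push", "scan", "sign",
          "deploy_dev", "integration_test", "promote_prod"]

def run_pipeline(results: dict) -> list:
    """Run stages in order; return the stages that actually executed."""
    executed = []
    for stage in STAGES:
        executed.append(stage)
        if not results.get(stage, True):
            break  # fail fast: later stages (signing, promotion) never run
    return executed

# An image with a failing scan stops before it can be signed or promoted:
print(run_pipeline({"scan": False}))
```

The fail-fast design choice is the whole point of the supply chain: a vulnerability or test failure anywhere in the sequence means the production promotion at the end is unreachable.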
So let's take a look at Visual Studio Code, which we'll be using as our IDE to change our application. So here's our application. We have our code loaded, and we're going to be able to use Docker Desktop on our local environment, with the Docker Desktop plugin for Visual Studio Code, to build our application inside of Docker without needing to run any command-line-specific tools. Here with our code, we'll be able to interact with Docker, make our changes, see them live, and quickly see if our changes actually made the impact that we're expecting in our application. So let's find our updated title for the application, and let's go ahead and change that to our Mirantis-ized NGINX instead of containerized NGINX. So we'll change it in the title and on the front page of the application. Now that we've saved that change to our application, we can actually take a look at our code here in VS Code. And as simple as this, we can right-click on the Dockerfile and build our application. We give it a name for our Docker image, and VS Code will take care of the automatic building of our application. So now we have a Docker image that has everything we need for our application inside of that image. Here, we can actually just right-click on the image tag that we just created and do run. This will interactively run the container for us. And then once our container's running, we can just right-click and open it up in a browser. So here we can see the change to our application as it exists live. Once we can actually verify that our application's working as expected, we can stop our container. And then from here, we can actually make that change live by pushing it to our source code repository. So here, we're going to go ahead and make a commit message to say that we updated to our Mirantis branding. We will commit that change and then we'll push it to our source code repository. Again, in this case, we're using GitHub as our source code repository.
So here in VS Code, we'll have that pushed to our source code repository. And then we'll move on to our next environment, which is Jenkins. Jenkins is going to pick up those changes for our application and check them out from our source code repository. So GitHub notifies Jenkins that there's a change. Jenkins checks out the code and builds our Docker image using the Dockerfile. So we're getting a consistent experience between the local development environment on our desktop and Jenkins, where we're actually building our application, doing our tests, pushing it into our Docker Trusted Registry, scanning it and signing our image in our Docker Trusted Registry, and then deploying to our development environment. So let's actually take a look at that development environment as it's been deployed. Here, we can see that our title has been updated on our application, so we can verify that it looks good in development. If we jump back here to Jenkins, we'll see that Jenkins goes ahead and runs our integration tests for our development environment. Everything worked as expected, so it promoted that image to our production repository in our Docker Trusted Registry. Then we're also going to sign that image. So we're signing off that yes, this image has made it through our integration tests and it's deployed to production. So here in Jenkins, we can take a look at our deployed production environment, where our application is live in production. We've made a change in an automated and very secure manner. So now, let's take a look at our Docker Trusted Registry, where we can see our namespace for our application and our simple NGINX repository. From here, we'll be able to see information about our application image that we've pushed into the registry, such as the image signature, and when it was pushed and by whom, and then we'll also be able to see the scan results of our image.
In this case, we can actually see that there are vulnerabilities in our image, and we'll actually take a look at that. Docker Trusted Registry does binary-level scanning, so we get detailed information about our individual image layers. These image layers give us details about where the vulnerabilities were located and what those vulnerabilities actually are. So if we click on a vulnerability, we can see specific information about it, giving us details around the severity and more information about what exactly is vulnerable inside of our container. One of the challenges that you often face around vulnerabilities is how exactly to remediate them in a secure supply chain. So let's take a look at that. In the example that we were looking at, the vulnerability is actually in the base layer of our image. In order to pull in a new base layer for our image, we need to actually find the source of that layer and update it. One of the ways that we can help secure that, as part of the supply chain, is to actually take a look at where we get the base layers of our images. Docker Hub provides a great source of content to start from, but opening up Docker Hub within your organization opens up all sorts of security concerns around the origins of that content. Not all images are made equal when it comes to their security. The official images on Docker Hub are curated by Docker, open source projects and other vendors. One of the most important use cases is around how you get base images into your environment. It is much easier to consume the base operating system layer images than to build your own and also try to maintain them. Instead of just blindly trusting the content from Docker Hub, we can take a set of content that we find useful, such as those base image layers or content from vendors, and pull that into our own Docker Trusted Registry using our mirroring feature.
Once the images have been mirrored into a staging area of our Docker Trusted Registry, we can then scan them to ensure that the images meet our security requirements. And then, based off of the scan result, we promote the image to a public repository, where we can actually sign the images and make them available to our internal consumers to meet their needs. This allows us to provide a set of curated content that we know is secure and controlled within our environment. So from here, we can find our updated Docker image in our Docker Trusted Registry, where we can see that the vulnerabilities have been resolved. From a developer's point of view, that's about as smooth as the process gets. Now, let's take a look at how we can provide that secure content for our developers in our own Docker Trusted Registry. In this case, we're taking a look at our Alpine image that we've mirrored into our Docker Trusted Registry. Here, we're looking at the staging area where the images get temporarily pulled, because we have to pull them in order to actually be able to scan them. So here we set up mirroring, and we can quickly turn it on by making it active. Then we can see that our image mirroring will pull our content from Docker Hub and make it available in our Docker Trusted Registry in an automatic fashion. From here, we can actually take a look at the promotions to see how exactly we promote our images. In this case, we created a promotion policy within Docker Trusted Registry so that content gets promoted to a public repository for internal users to consume, based off of the vulnerabilities that are found, or not found, inside of the Docker image. So how our actual users would consume this content is by taking a look at the official images that we've made public to them. Here again, looking at our Alpine image, we can take a look at the tags that exist, and we can see that we have our content that has been made available.
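The promotion policy described here, promoting a mirrored image out of staging only when its scan comes back acceptable, boils down to a predicate over the scan results. A minimal sketch of that decision follows; the severity buckets and thresholds are assumptions, stand-ins for whatever criteria you configure in your registry's promotion policy.

```python
# Sketch of a vulnerability-based promotion decision, in the spirit of
# the DTR promotion policy described above. Thresholds are hypothetical.
def should_promote(scan: dict, max_high: int = 0, max_critical: int = 0) -> bool:
    """Promote only if the scan stays within the allowed vulnerability budget."""
    return (scan.get("critical", 0) <= max_critical
            and scan.get("high", 0) <= max_high)

print(should_promote({"critical": 0, "high": 0, "medium": 3}))  # True: within budget
print(should_promote({"critical": 1, "high": 0}))               # False: blocked
```

Keeping the thresholds as explicit parameters mirrors how a registry policy works: the budget is a configuration decision made by operators, not something baked into the pipeline code.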
So we've pulled in all sorts of content from Docker Hub. In this case, we've even pulled in the multi-architecture images, which we can scan due to the binary-level nature of our scanning solution. Now let's take a look at Lens. Lens provides capabilities that give developers a quick, opinionated view focused on how they would want to view, manage and inspect applications deployed to a Kubernetes cluster. Lens integrates natively, out of the box, with Universal Control Plane client bundles, so your automatically generated TLS certificates from UCP just work. Inside our organization, we want to give our developers the ability to see their applications in a very easy-to-view manner. So in this case, let's filter down to the application that we just deployed to our development environment. Here, we can see the pod for our application, and when we click on it, we get instant, detailed feedback about the components and information that this pod is utilizing. We can also see here in Lens that it gives us the ability to quickly switch contexts between the different clusters that we have access to. With that, we also have capabilities to quickly deploy other types of components. One of those is Helm charts. Helm charts are a great way to package up applications, especially those that may be more complex, to make them much simpler to consume, and to version our applications. In this case, let's take a look at the application that we just built and deployed. Our simple NGINX application has been bundled up as a Helm chart and is made available through Lens. Here, we can just click on the description of our application to see more information about the Helm chart. So we can publish whatever information may be relevant about our application. And with one click, we can install our Helm chart. Here, it will show us the actual details of the Helm chart.
So before we install it, we can actually look at those individual components. In this case, we can see it created an ingress rule, and this tells Kubernetes how to create the specific components of our application. We just have to pick a namespace to deploy it to. And in this case, we're actually going to do a quick test, because here we're trying to deploy the application from Docker Hub. In our Universal Control Plane, we've turned on Docker Content Trust policy enforcement, so this is actually going to fail to deploy. Because we're trying to deploy our application from Docker Hub, the image hasn't been properly signed in our environment, so the Docker Content Trust policy enforcement prevents us from deploying our Docker image from Docker Hub. In this case, we have to go through our approved process, through our secure supply chain, to ensure that we know where our image came from and that it meets our quality standards. So if we comment out the Docker Hub repository, comment in our Docker Trusted Registry repository and click install, it will then install the Helm chart with our Docker image being pulled from our DTR, which has a proper signature. We can see that our application has been successfully deployed through our Helm chart releases view. From here, we can see the simple NGINX application, and in this case, we'll get details around the actual deployed Helm chart. The nice thing is that Lens provides us this capability with Helm to see all of the components that make up our application. From this view, it's giving us that single pane of glass into that specific application, so that we know all of the components that it created inside of Kubernetes.
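The content trust enforcement demonstrated here reduces to a deploy-time admission check: an image runs only if it comes from the trusted registry and carries a signature from a trusted signer. A minimal sketch of that gate follows; the registry hostname and signer name are hypothetical placeholders, and this models the policy's logic rather than reproducing UCP's implementation.

```python
# Sketch of a content-trust admission check like the UCP policy described
# above: only signed images from our own registry are allowed to deploy.
TRUSTED_REGISTRY = "dtr.example.com"   # hypothetical DTR hostname
TRUSTED_SIGNERS = {"jenkins"}          # hypothetical CI signing identity

def admit(image: str, signers: set) -> bool:
    """Admit an image only if it comes from our DTR and is properly signed."""
    registry = image.split("/", 1)[0]
    return registry == TRUSTED_REGISTRY and bool(signers & TRUSTED_SIGNERS)

print(admit("docker.io/library/nginx:latest", set()))               # False: unsigned Hub image
print(admit("dtr.example.com/prod/simple-nginx:1.0", {"jenkins"}))  # True: signed DTR image
```

This is why the Helm install from Docker Hub fails in the demo: even a perfectly good upstream image is rejected until it has passed through the mirrored, scanned, signed path.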
There are specific details that can help us access the application, such as that ingress rule we just talked about, and it also shows us the resources, such as the service, the deployment and the ingress, that have been created within Kubernetes for the application to actually exist. So to recap, we've covered how we can offer all the benefits of a cloud-like experience and offer flexibility around DevOps and operations control processes through the use of a secure supply chain, allowing our developers to spend more time developing, and our operators more time designing systems that meet our security and compliance concerns.

Published Date : Sep 14 2020


Speed K8S Dev Ops Secure Supply Chain


 

>>this session will be reviewing the power benefits of implementing a secure software supply chain and how we can gain a cloud like experience with flexibility, speed and security off modern software delivery. Hi, I'm Matt Bentley, and I run our technical pre sales team here. Um Iran. Tous I spent the last six years working with customers on their container ization journey. One thing almost every one of my customers is focused on how they can leverage the speed and agility benefits of contain arising their applications while continuing to apply the same security controls. One of the most important things to remember is that we are all doing this for one reason, and that is for our applications. So now let's take a look at how we could provide flexibility all layers of the stack from the infrastructure on up to the application layer. When building a secure supply chain for container focus platforms, I generally see two different mindsets in terms of where the responsibilities lie between the developers of the applications and the operations teams who run the middleware platforms. Most organizations are looking to build a secure yet robust service that fits the organization's goals around how modern applications are built and delivered. Yeah. First, let's take a look at the developer or application team approach. This approach follows Mawr of the Dev ops philosophy, where a developer and application teams are the owners of their applications. From the development through their life cycle, all the way to production. I would refer this more of a self service model of application, delivery and promotion when deployed to a container platform. This is fairly common organizations where full stack responsibilities have been delegated to the application teams, even in organizations were full stack ownership doesn't exist. I see the self service application deployment model work very well in lab development or non production environments. 
This allows teams to experiment with newer technologies, which is one of the most effective benefits of utilizing containers and other organizations. There's a strong separation between responsibilities for developers and I T operations. This is often do the complex nature of controlled processes related to the compliance and regulatory needs. Developers are responsible for their application development. This can either include doctorate the development layer or b'more traditional throw it over the wall approach to application development. There's also quite a common experience around building a center of excellence with this approach, where we can take container platforms and be delivered as a service to other consumers inside of the I T organization. This is fairly prescriptive, in the manner of which application teams would consume it. When examining the two approaches, there are pros and cons to each process. Controls and appliance are often seen as inhibitors to speak. Self service creation, starting with the infrastructure layer, leads to inconsistency, security and control concerns, which leads to compliance issues. While self service is great without visibility into the utilization and optimization of those environments, it continues the cycles of inefficient resource utilization and the true infrastructure is a code. Experience requires Dev ops related coding skills that teams often have in pockets but maybe aren't ingrained in the company culture. Luckily for us, there is a middle ground for all of this Doc Enterprise Container Cloud provides the foundation for the cloud like experience on any infrastructure without all of the out of the box security and controls that are professional services Team and your operations team spend their time designing and implementing. 
This removes much of the additional work and worry Run, ensuring that your clusters and experiences are consistent while maintaining the ideal self service model, no matter if it is a full stack ownership or easing the needs of I T operations. We're also bringing the most natural kubernetes experience today with winds to allow for multi cluster visibility that is both developer and operator friendly. Let's provides immediate feedback for the health of your applications. Observe ability for your clusters. Fast context, switching between environments and allowing you to choose the best in tool for the task at hand. Whether is three graphical user interface or command line interface driven. Combining the cloud like experience with the efficiencies of a secure supply chain that meet your needs brings you the best of both worlds. You get Dave off speed with all the security controls to meet the regulations your business lives by. We're talking about more frequent deployments. Faster time to recover from application issues and better code quality, as you can see from our clusters we have worked with were able to tie these processes back to real cost savings, riel efficiency and faster adoption. This all adds up to delivering business value to end users in the overall perceived value. Now let's look at see how we're able to actually build a secure supply chain. Help deliver these sorts of initiatives in our example. Secure Supply chain. We're utilizing doctor desktop to help with consistency of developer experience. Get hub for our source Control Jenkins for a C A C D. Tooling the doctor trusted registry for our secure container registry in the universal control playing to provide us with our secure container run time with kubernetes and swarm. Providing a consistent experience no matter where are clusters are deployed. 
You work with our teams of developers and operators to design a system that provides a fast, consistent and secure experience for my developers that works for any application. Brownfield or Greenfield monolith or micro service on boarding teams could be simplified with integrations into enterprise authentication services. Calls to get help repositories. Jenkins Access and Jobs, Universal Control Plan and Dr Trusted registry teams and organizations. Cooper down his name space with access control, creating doctor trusted registry named spaces with access control, image scanning and promotion policies. So now let's take a look and see what it looks like from the C I c D process, including Jenkins. So let's start with Dr Desktop from the doctor desktop standpoint, what should be utilizing visual studio code and Dr Desktop to provide a consistent developer experience. So no matter if we have one developer or 100 we're gonna be able to walk through the consistent process through docker container utilization at the development layer. Once we've made our changes to our code will be able to check those into our source code repository in this case, abusing Get up. Then, when Jenkins picks up, it will check out that code from our source code repository, build our doctor containers, test the application that will build the image, and then it will take the image and push it toward doctor trusted registry. From there, we can scan the image and then make sure it doesn't have any vulnerabilities. Then we consign them. So once we signed our images, we've deployed our application to Dev. We can actually test their application deployed in our real environment. Jenkins will then test the deployed application, and if all tests show that is good, will promote the r R Dr and Mr Production. So now let's look at the process, beginning from the developer interaction. First of all, let's take a look at our application as is deployed today. 
Here, we can see that we have a change that we want to make on our application. So marketing Team says we need to change containerized injure next to something more Miranda's branded. So let's take a look at visual studio coat, which will be using for I D to change our application. So here's our application. We have our code loaded, and we're gonna be able to use Dr Desktop on our local environment with our doctor desktop plug in for visual studio code to be able to build our application inside of doctor without needing to run any command line. Specific tools here is our code will be able to interact with docker, make our changes, see it >>live and be able to quickly see if our changes actually made the impact that we're expecting our application. Let's find our updated tiles for application and let's go and change that to our Miranda sized into next. Instead of containerized in genetics, so will change in the title and on the front page of the application, so that we save. That changed our application. We can actually take a look at our code here in V s code. >>And as simple as this, we can right click on the docker file and build our application. We give it a name for our Docker image and V s code will take care of the automatic building of our application. So now we have a docker image that has everything we need in our application inside of that image. So here we can actually just right click on the image tag that we just created and do run this winter, actively run the container for us and then what's our containers running? We could just right click and open it up in a browser. So here we can see the change to our application as it exists live. So once we can actually verify that our applications working as expected, weaken, stop our container. And then from here, we can actually make that change live by pushing it to our source code repository. So here we're going to go ahead and make a commit message to say that we updated to our Mantis branding. 
We will commit that change and then we'll push it to our source code repository again. In this case we're using get Hub to be able to use our source code repository. So here in V s code will have that pushed here to our source code repository. And then we'll move on to our next environment, which is Jenkins. Jenkins is gonna be picking up those changes for our application, and it checked it out from our source code repository. So get Hub Notifies Jenkins. That there is a change checks out. The code builds our doctor image using the doctor file. So we're getting a consistent experience between the local development environment on our desktop and then and Jenkins or actually building our application, doing our tests, pushing in toward doctor trusted registry, scanning it and signing our image. And our doctor trusted registry, then 2.4 development environment. >>So let's actually take a look at that development environment as it's been deployed. So here we can see that our title has been updated on our application so we can verify that looks good and development. If we jump back here to Jenkins, will see that Jenkins go >>ahead and runs our integration tests for a development environment. Everything worked as expected, so it promoted that image for production repository and our doctor trusted registry. Where then we're going to also sign that image. So we're signing that. Yes, we have signed off that has made it through our integration tests, and it's deployed to production. So here in Jenkins, we could take a look at our deployed production environment where our application is live in production. We've made a change automated and very secure manner. >>So now let's take a look at our doctor trusted registry where we can see our game Space for application are simple in genetics repository. 
From here we will be able to see information about our application image that we've pushed into the registry, such as the image signature, when it was pushed, and by whom, and then we'll also be able to see the scan results for our image. In this case, we can actually see that there are vulnerabilities in our image, so we'll take a look at that. Docker Trusted Registry does binary-level scanning, so we get detailed information about our individual image layers. From here, these image layers give us details about where the vulnerabilities are located and what those vulnerabilities actually are. So if we click on a vulnerability, we can see specific information about it, giving us details around the severity and more information about what, exactly, is vulnerable inside of our container. One of the challenges that you often face around vulnerabilities is how, exactly, you would remediate them in a secure supply chain. So let's take a look at that. In the example that we were looking at, the vulnerability is actually in the base layer of our image. In order to pull in a new base layer for our image, we need to actually find the source of it and update it. One of the ways that we can help secure that, as part of the supply chain, is to actually take a look at where we get the base layers of our images. Docker Hub really provides a great source of content to start from, but opening up Docker Hub within your organization opens up all sorts of security concerns around the origins of that content. Not all images are created equal when it comes to security. The official images on Docker Hub, however, are curated by Docker, open source projects, and other vendors. One of the most important use cases is around how you get base images into your environment: it is much easier to consume the base operating system layer images than to build your own and also try to maintain them. But we shouldn't just blindly trust the content from Docker Hub.
Instead, we can take a set of content that we find useful, such as those base image layers or content from vendors, and pull it into our own Docker Trusted Registry using our mirroring feature. Once the images have been mirrored into a staging area of our Docker Trusted Registry, we can then scan them to ensure that the images meet our security requirements and then, based off the scan results, promote the images to a public repository, where we can actually sign them and make them available to our internal consumers to meet their needs. This allows us to provide a set of curated content that we know is secure and controlled within our environment. So from here we can find our updated Docker image in our Docker Trusted Registry, where we can see that the vulnerabilities have been resolved. From a developer's point of view, that's about as smooth as the process gets. Now let's take a look at how we can provide that secure content for developers in our own Docker Trusted Registry. So in this case, we're taking a look at our Alpine image that we've mirrored into our Docker Trusted Registry. Here we're looking at the staging area where the images get temporarily pulled, because we have to pull them in order to actually be able to scan them. So here we set up mirroring, and we can quickly turn it on by making it active. Then we can see that our image mirroring will pull our content from Docker Hub and make it available in our Docker Trusted Registry in an automatic fashion. So from here, we can actually take a look at the promotions to be able to see how exactly we promote our images. In this case, we created a promotion policy within Docker Trusted Registry so that content gets promoted to a public repository for internal users to consume, based off of the vulnerabilities that are found, or not found, inside the Docker image. So, for our actual users:
How our actual users would consume this content is by taking a look at the public, to them "official", images that we've made available. Here again, looking at our Alpine image, we can take a look at the tags that exist, and we can see the content that has been made available. So we've pulled in all sorts of content from Docker Hub; in this case, we have even pulled in the multi-architecture images, which we can scan due to the binary-level nature of our scanning solution. Now let's take a look at Lens. Lens provides capabilities to give developers a quick, opinionated view that focuses on how they would want to view, manage, and inspect applications deployed to a Kubernetes cluster. Lens integrates natively, out of the box, with Universal Control Plane client bundles, so your automatically generated TLS certificates from UCP just work. Inside our organization, we want to give our developers the ability to see their applications in a very easy-to-view manner. So in this case, let's actually filter down to the application that we just deployed to our development environment. Here we can see the pod for our application, and when we click on that, we get instant, detailed feedback about the components and information that this pod is utilizing. We can also see here in Lens that it gives us the ability to quickly switch context between the different clusters that we have access to. With that, we also have capabilities to quickly deploy other types of components. One of those is Helm charts. Helm charts are a great way to package up applications, especially those that may be more complex, to make it much simpler to consume and version our applications. In this case, let's take a look at the application that we just built and deployed. Our simple NGINX application has been bundled up as a Helm chart and made available through Lens.
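What Lens does with a click here corresponds to plain kubectl against the same kubeconfig. A rough equivalent, with hypothetical context, namespace, and label names:

```shell
# List the clusters (contexts) available in the current kubeconfig.
kubectl config get-contexts

# Switch to the development cluster.
kubectl config use-context dev-cluster

# Find the pod for the application we just deployed and inspect it.
kubectl get pods -n demo -l app=nginx-site
kubectl describe pod -n demo -l app=nginx-site
```

Lens is essentially a graphical front end over these operations, which is why the UCP client bundle credentials work unchanged.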
We can just click on the description of our application to see more information about the Helm chart, so we can publish whatever information may be relevant about our application, and with one click we can install our Helm chart. Here it will show us the actual details of the Helm chart, so before we install it, we can actually look at those individual components. In this case, we can see that it creates an ingress rule and then tells Kubernetes how to create the specific components of our application. We just have to pick a namespace to deploy it to. And in this case, we're actually going to do a quick test, because here we're trying to deploy the application from Docker Hub, and in our Universal Control Plane we've turned on Docker Content Trust policy enforcement. So this is actually going to fail to deploy, because we're trying to deploy an application from Docker Hub and the image hasn't been properly signed in our environment. The Docker Content Trust policy enforcement prevents us from deploying our Docker image from Docker Hub. In this case, we have to go through our approved process, through our secure supply chain, to be able to ensure that we know where our image came from and that it meets our quality standards. So if we comment out the Docker Hub repository, comment in our Docker Trusted Registry repository, and click install, it will then install the Helm chart with our Docker image being pulled from our DTR, which has a proper signature. We can see that our application has been successfully deployed through our Helm chart releases view. From here, we can see our simple NGINX application, and in this case we'll get details around the actual deployment and Helm chart. The nice thing is that Lens provides us this capability here with Helm.
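Editing the chart to comment one repository in and another out works, but the same swap is usually done with a values override at install time. A sketch, with hypothetical chart path and values keys:

```shell
# Install the chart, pointing the image at the internal DTR (signed)
# instead of Docker Hub (unsigned, blocked by content trust enforcement).
helm install nginx-site ./charts/nginx-site \
  --namespace demo \
  --set image.repository=dtr.example.com/prod/nginx-site \
  --set image.tag=1.0.0
```

The override keeps the chart source unchanged, so the Docker Hub default can stay in values.yaml while every controlled environment installs from the trusted registry.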
We're able to see all the components that make up our application. This view gives us a single pane of glass into that specific application, so that we know all the components it has created inside of Kubernetes. There are specific details that can help us access the application, such as that ingress rule that we just talked about, but it also gives us the resources, such as the service, the deployment, and the ingress, that have been created within Kubernetes for the application to actually exist. So to recap, we've covered how we can offer all the benefits of a cloud-like experience and offer flexibility around DevOps and operations-controlled processes through the use of a secure supply chain, allowing our developers to spend more time developing and our operators more time designing systems that meet our security and compliance concerns.

Published Date : Sep 12 2020



Simon Kofkin-Hansen, IBM | VeeamON 2020


 

>> From around the globe, it's theCUBE with digital coverage of VeeamON 2020 brought to you by Veeam. >> Welcome back, I'm Stu Miniman, and this is theCUBE's coverage of VeeamON 2020 online. Of course, instead of all gathering together in Las Vegas, we're getting to talk to participants of the community where they are around the globe. Happy to welcome to the program, first-time guest on the program, he's part of the opening keynote I'm sure most of you saw, Simon Kofkin-Hansen, chief technology officer for VMware Solutions inside of IBM. Simon, thanks so much for joining us. >> Thank you Stu, it's a pleasure to be here. >> All right, so you know, obviously we know IBM quite well. We at theCUBE were at, you know, the virtual events, both Red Hat Summit and IBM Think, not too long in the past there, talking a lot about, you know, the open hybrid cloud. Many of the messages that I hear from Veeam remind me of what I heard at those events: you know, it's a multicloud environment, we need flexibility in what we're doing, and of course, you know, data is such an important piece of what's going on. Maybe before we get into it too much, give us a little bit about, you know, your role there, where you fit into that whole discussion of what IBM is doing with Cloud. >> So Stu, yeah, I'm the chief technology officer for VMware Solutions on the IBM Cloud. I was primarily involved in, and helped create, the partnership that exists between IBM and VMware today. Basically, I'm providing automated, secure solutions for our clients around the VMware and IBM Cloud infrastructure space.
>> Yeah, well, Simon, it's interesting stuff, and you've got some good history there. Maybe you might remind our audience: you know, I remember at VMworld, before there was a big partnership that VMware made with a certain public cloud provider that gets talked about a lot, IBM was the first. And if I saw correctly, I'd love for you to provide the data behind it: there are more VMware customers on the IBM Cloud than on any other cloud, is what I believe the data I saw said. So bring us in a little bit more, explain that relationship. >> So yes, we as IBM were at the beginning of all of this. I mean, VMware and IBM have had a long relationship. In fact, IBM manages over 850,000 predominantly VMware workloads on-prem, and has done for the last 10+ years. But in the latest iteration of this partnership, we brought together our automation and our codified experience from dealing with these client accounts around the world, and brought that expertise, along with VMware's product side, to align this automated SDDC stack on cloud platforms. We were first to market with that automated SDDC stack, called VMware Cloud Foundation, and we've had a great ongoing relationship since then. It's really resonated with many of our clients, and our enterprise clients, out there. >> All right, well Simon, one of the most important pieces of that, you know, VMware SDDC message is: I have VMware, I know how to manage that environment, and it's got a really robust ecosystem. So of course Veeam started exclusively in the VMware environment and now lives across many environments, but you know, the comment I've made in some of these interviews for VeeamON is, wherever the VMware solution and VMware Cloud go, Veeam can just go along for the ride, as it were.
There's obviously some integration work and testing, but help dig into a little bit what that means for, you know, solutions like Veeam tying into what VMware is doing, and what VMware is doing in the IBM Cloud. >> Well, particularly at the beginning of this relationship, part of this partnership with VMware was its rich partner ecosystem. And I was given the remit, and had the luxury, to choose the best-of-the-best products out there, which weren't necessarily IBM's products in this particular space. Obviously we chose Veeam for backup. I mean, Veeam's reputation is out there; it's known as the market leader for the backup of these actual workloads. So it was very important for us to embrace that ecosystem. And it's been a great partnership from the very, very beginning: getting the backup products out onto our platform, and as we've done more recently, bringing in new enhancements like Veeam Cloud Connect to deal with data replication and more use cases around migration and the movement of data in a hybrid cloud sense. And Veeam has been right there with us every step of the way. >> Yeah, so Simon, you're a CTO, so bring us in a little bit architecturally, because when I think about hybrid cloud, or even, you know, having to move my data between, you know, different data centers, there are, you know, the physics challenges, and you know, sometimes I can, you know, get closer, I can (microphone cuts out) through there, and then there's the financial considerations. So give us a sense of how we have to think about that: what is data movement in 2020, you know, what considerations do we have to have here, and how does IBM maybe differentiate a little bit from some others? >> So I'll answer your questions, and I'll answer some of the last questions first. What does data movement in 2020 look like?
Well, to be perfectly honest, Stu, we never imagined what would happen this year, but data mobility, and the movement of data in a hybrid scenario, has never been more acute or prevalent, because of the state that the world is currently in and the conditions that we're living in today. Being able to use familiar tooling, matching what is used in an on-premises estate, over in the cloud, enabling people who have existing investments in Veeam to use that tooling for multiple different use cases, not just backup: that data replication functionality has become ever more prevalent in these cases. I was saying similar messages back in 2019 and 2018, and as far back as 2010. I look at that and it's been almost a decade now, talking about the need for, or the capabilities of, hybrid cloud and this movement of data. But I've absolutely seen an increase in it over the last few years, and particularly in 2020, in this current situation. The major differences from an IBM perspective, I would say, are our openness, how we're dealing with openness in the community, and our commitment to open source; our flexibility, our security, and the way we actually deal with the enterprise. And one of the major differentiators is security to the core: actually building up the security, looking at the secure elements, making sure the data is safe from tampering, and that it's encrypted both in transit and at rest. These are many of the factors that our enterprise clients actually demand of us, particularly when we look at the regulated industries, with their heavy focus on the financial services sector. And Veeam, with its capabilities and its ability to do both the backup and the migration functionality: clients are expecting a two-for-one deal in these days when they're trying to cut costs and get out of their own data centers. >> Excellent.
Well, Simon, you know, you laid out really the imperative for enterprises today and how they're dealing with that. Bring us in as to what differentiates the IBM-Veeam relationship: IBM is open and flexible, so there are a lot of options. What in particular is there about Veeam that makes that relationship special? >> Well, I think it all comes down to the partnership and the deep willingness to work together: the research that we're doing on the products, yeah? Looking at ways that we can take Veeam beyond the VMware space and into bare metal and containers, while maintaining that level of security and flexibility that clients demand. I mean, many clients have invested in a particular technology to do their backup and DR, because of the heavy data requirements; it's still one of the most important, if not the most important, use cases that many cloud users, or many of our clients, actually go for. So having that partnership with Veeam, not only in dealing with the traditional base, which is the VMware backups, but really pushing the boundaries and looking at how we can extend that into migrations, into containers, and into bare metal, while still keeping that level of security and flexibility: it's a difficult balance. Sometimes to make things more secure, you have to make them less flexible, and vice versa: making things more flexible makes them less secure. So being willing to work with us to actually find that difficult balance, and still provide the level of user experience and the level of functionality that our clients demand, keeping both client sets happy, both IBM's and Veeam's; it's challenging at times, but I guess that's what makes the job interesting and exciting. >> Yeah Simon, I'm actually glad you mentioned containers as one of the, you know, modernization efforts going on there. Of course, from Veeam's standpoint, when vSphere 7 rolls out, they are, you know, among the first working to support it.
I'd love to hear your viewpoint, what you're hearing from customers, and how you expect, as a VMware partner for cloud, that movement of VMs and containers to go together. What should we be looking for as that kind of matures and progresses? >> So I would absolutely watch this space, particularly as we move into this: containers and VMs living very much side by side. With VMware's announcements around Project Pacific and Tanzu, it's very interesting; it's certainly caused a stir in the market. And we as IBM are working very closely with them, with our acquisition last year of Red Hat and its containerization platform, all while maintaining our ability in the OpenShift community around Kubernetes. So Stu, obviously I'm privy to a lot more information which I can't really dig into in too much detail around this particular angle, but suffice it to say: watch this space. There's a lot going to happen. You're going to see a lot of announcements in the back half of 2020 and in the first half of 2021, particularly around the interplay between containers and VMs, and seeing how the different offerings from the different companies shape -- (mic cuts out) interesting times ahead. >> Yeah, absolutely. Simon, you're right, I don't want to get you in trouble looking too much into the future, but maybe bring us into, I'm sure you're having lots of conversations with customers, what's their mindset? You talked about, you know, bare metal, virtualization, containers, application modernization. I've always said the long pole in the tent of any transformation and modernization (mic stutters) doing, so, you know, what are some of the challenges and opportunities that you're hearing from customers, that you and your partner are helping to solve? >> So some of the challenges around containerization: containerization (mic stutters) is taking a lot longer, and taking a lot more time, than we originally anticipated or expected.
So the realization is hitting that VMware is going to be around for a while. I mean, the idea that people are just going to transform their applications, or all their VMs, over a six- or 12-month period is just not reality. So we're living in this hybrid platform world, where you have VMware, you have virtual machines, and you have containers coexisting. Take the three-tier web app as an example, consisting of an HTTP server, an application server, and a database. When you containerize, or modernize, that, it's very easy to modernize the HTTP server, which turns into the ingress/egress services on the container side. It's very easy to modernize the application server, which is fairly static, and you can just put it in a container. But as we know, Stu, data is sticky. So for many enterprises, the data migration, or the way that the database is transformed, is the thing that takes the longest. So we're seeing enterprises out there running their apps with the ingress/egress service and the application server containerized, but the database still living on a virtual machine for an extended period of time. And until they've made that final jump and moved their data service, they haven't fully made the move. I personally, honestly, don't believe VMs will disappear in my lifetime, because we're seeing that in some cases it's too costly for organizations to transform their applications, or there's no real business case: it works perfectly well with the existing process, and there's no need to modernize. But they're looking at what parts of the architecture can be modernized, and containers are definitely the future, for all the attributes that we know and love. But there is going to be this hybrid world.
So having tools and partners like Veeam, who are willing to cross the ecosphere of the different platforms, is critical for our clients today and critical for partnerships like the one we have with Veeam. >> All right, well Simon, it goes back to one of those IT maxims, you know: IT is always additive. We almost never really get rid of anything; we just keep adding to it and changing it. And as you said, data is that critical component, and I think you highlighted nicely how, you know, Veeam fits in very much to that story. So Simon, thank you so much for joining us; pleasure having you on the program, and glad to have you in theCUBE alumni ranks at this point. >> Thank you Stu, it was a pleasure. Take care. >> All right, stay tuned for lots more coverage from VeeamON 2020 online. I'm Stu Miniman, and thanks for watching theCUBE. (calm music)

Published Date : Jun 17 2020



Migrating Your Vertica Cluster to the Cloud


 

>> Jeff: Hello everybody, and thank you for joining us today for the virtual Vertica BDC 2020. Today's break-out session is titled "Migrating Your Vertica Cluster to the Cloud." I'm Jeff Healey, and I'm in Vertica marketing. I'll be your host for this break-out session. Joining me here are Sumeet Keswani and Chris Daly, Vertica product technology engineers and key members of our customer success team. Before we begin, I encourage you to submit questions and comments during the virtual session. You don't have to wait; just type your question or comment in the question box below the slides and click Submit. As always, there will be a Q&A session at the end of the presentation. We'll answer as many questions as we're able to during that time. Any questions that we don't address, we'll do our best to answer offline. Alternatively, you can visit the Vertica forums at forum.vertica.com to post your questions there after the session. Our engineering team is planning to join the forums to keep the conversation going. Also, as a reminder, you can maximize your screen by clicking the double arrow button in the lower right corner of the slides. And yes, this virtual session is being recorded and will be available to view on demand this week. We'll send you a notification as soon as it's ready. Now let's get started. Over to you, Sumeet.
After that, you know, there will be some operational considerations like VM and cluster sizing, what Vertica mode you want to deploy, Eon or Enterprise. It depends on your use keys. What are the DevOps skills available, you know, what elasticity, separation you need, you know, what is your backup and DR strategy, what do you want in terms of high availability. And you will have to think about, you know, how much data you have and where it's going to live. And in order to understand the cost, or the cost and the benefit of deployment and you will have to understand the access patterns, and how you are moving data from and to the Cloud. So things to consider before you move a deployment, a Vertica deployment to the Cloud, right, is one thing to keep in mind is, virtual CPUs, or CPUs in the Cloud, are not the same as the usual CPUs that you've been familiar with in your data center. A vCPU is half of a CPU because of hyperthreading. There is definitely the noisy neighbor effect. There is, depending on what other things are hosted in the Cloud environment, you may see performance, you may occasionally see performance issues. There are I/O limitations on the instance that you provision, so that what that really means is you can't always scale up. You might have to scale up, basically, you have to add more instances rather than getting bigger or the right size instances. Finally, there is an important distinction here. Virtualization is not free. There can be significant overhead to virtualization. It could be as much as 30%, so when you size and scale your clusters, you must keep that in mind. Now the other important aspect is, you know, where you put Vertica cluster is important. The choice of the region, how far it is from your various office locations. Where will the data live with respect to the cluster. And remember, popular locations can fill up. So if you want to scale out, additional capacity may or may not be available. 
So these are things you have to keep in mind when picking or choosing your Cloud platform and your deployment. So at this point, I want to make a plug for Eon mode. Eon mode is the latest mode, a Cloud mode, from Vertica. It has been designed with Cloud economics in mind. It uses shared storage, which is durable, available, and very cheap, like S3 storage or Google Cloud Storage. It has been designed for quick scaling, like scale-out, and highly elastic deployments. It has also been designed for high workload isolation, where each application or user group can be isolated from the other ones, so that they can be paid for and monitored separately, without affecting each other. But there are some disadvantages, or perhaps, you know, a cost, to using Eon mode. Accessing storage in S3 is neither fast nor free: there is high I/O latency when accessing data from S3, and there are API and data access costs associated with accessing your data in S3. Vertica in Eon mode has a pay-as-you-go model, which, you know, works for some people and does not work for others, and so it is important to keep that in mind. And performance can be a little bit variable here, because it depends on the cache, the local depot, which is a cache, and it is not as predictable as EE mode, so that's another trade-off. So let's spend about a minute and see what a Vertica cluster in Eon mode looks like. A Vertica cluster in Eon mode has S3 as the durability layer, where all the data sits. There are subclusters, which are essentially just aggregation groups of separated compute, which service different workloads. So in this example, you may have two subclusters, one servicing the ETL workload and the other one servicing (mic interference obscures speaking). These subclusters are isolated, and they do not affect each other's performance. This allows you to scale them independently and isolate workloads.
So this is the new Vertica Eon mode, which has been specifically designed by us for use in the Cloud. But beyond this, you can use EE mode or Eon mode in the Cloud; it really depends on what your use case is. Both of these are possible, and we highly recommend Eon mode wherever possible. Okay, let's talk a little bit about what we mean by Vertica support in the Cloud. Now as you know, a Cloud is a shared data center, right? Performance in the Cloud can vary. It can vary between regions, availability zones, the time of day, the choice of instance type, what concurrency you use, and of course the noisy neighbor effect. You know, we at Vertica performance, load, and stress test our product before every release. We have a bunch of use cases; we go through all of them, make sure that we haven't, you know, regressed any performance, and make sure that it works up to standards and gives you the high performance that you've come to expect. However, your solution or your workload is unique to you, and it is still your responsibility to make sure that it is tuned appropriately. To do this, one of the easiest things you can do is, you know, pick a tested operating system and allocate the virtual machine with enough resources. It's something that we recommend, because we have tested it thoroughly, and it goes a long way in giving you predictability. So after this, I would like to go into the various Cloud platforms that Vertica has worked on. I'll start with AWS, and my colleague Chris will speak about Azure and GCP, and our thoughts going forward. So without further ado, let's start with the Amazon Web Services platform. So this is Vertica running on the Amazon Web Services platform. As you probably are all aware, Amazon Web Services is the market leader in this space, and indeed really our biggest provider by far, and they have been here for a very long time. And Vertica has a deep integration in the Amazon Web Services space.
We provide a marketplace offering with both a pay as you go and a bring your own license model. We have many knowledge base articles, best practices, scripts, and resources that help you configure and use a Vertica database in the Cloud. We have had customers in the Cloud for many, many years now, and we have managed and console-based point and click deployments for ease of use in the Cloud. So Vertica has a deep integration in the Amazon space and has been there for quite a bit now, so we have accumulated a lot of experience here. So let's talk about sizing on AWS. Sizing on any platform comes down to four or five different things: picking the right instance type, picking the right disk volume and type, tuning and optimizing your networking, and finally some operational concerns like security, maintainability, and backup. So let's go into each one of these in the AWS ecosystem. The choice of instance type is one of the important choices that you will make. In Eon mode, you don't really need persistent disk. You should probably choose ephemeral disk because it gives you extra speed with the instance type. We highly recommend the i3.4xlarge instance type, which is very economical and has a big, 4 terabyte depot or cache per node. The i3.metal is similar to the i3.4xlarge but has significantly better performance, for those subclusters that need the extra oomph. The i3.2xlarge is good for scale out of small ad hoc clusters: it has a smaller cache and lower performance, but it's cheap enough to use very indiscriminately. If you are in EE mode, we don't use S3 as the layer of durability. Your local volumes are where we persist the data, hence you do need an EBS volume in EE mode. To make sure that the instance or the deployment is manageable, you might have to use some sort of a software RAID array over the EBS volumes.
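The Eon-mode instance guidance above can be encoded as a tiny chooser: i3.4xlarge as the economical default, i3.metal for subclusters that need extra performance, i3.2xlarge for small ad hoc scale-out. This is just a sketch of the talk's recommendations, with made-up workload labels, not an official sizing tool.

```python
# Toy encoding of the talk's Eon-mode instance recommendations on AWS.
# Workload labels are illustrative; the instance names are the ones
# recommended in the talk.

def eon_instance_type(workload: str) -> str:
    """Map a workload class to the instance type suggested in the talk."""
    if workload == "high_performance":
        return "i3.metal"       # extra oomph for demanding subclusters
    if workload == "adhoc_scaleout":
        return "i3.2xlarge"     # smaller cache, cheap enough to scale out
    return "i3.4xlarge"         # economical default, ~4 TB depot per node
```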
The most common instance types you see in EE mode are the r4.4xlarge and the c4 or m4 instance types. And then of course for temp space and depot we always recommend instance volumes; they're just much faster. Okay, so let's talk about optimizing or tuning your network. The best thing you can do to tune your network, especially in Eon mode but in other modes too, is to create a VPC endpoint for S3. This is essentially a route table entry that makes sure that all traffic between your cluster and S3 goes over an internal fabric. This makes it much faster and you don't pay egress cost, which matters especially if you're using external tables or communal storage. But you do need to create it; many times people forget to. And best of all, it's free, it doesn't cost you anything extra. You just have to create it at cluster creation time, and there's a significant performance difference when using it. The next thing about tuning your network is sizing it correctly. Pick the geographical region closest to where you'll consume the data, and pick the right availability zone. We highly recommend using cluster placement groups; in fact, they are required for the stability of the cluster. A cluster placement group essentially operates like the notion of a rack: nodes in a cluster placement group are physically closer to each other than they would otherwise be. This allows a 10 Gbps, bidirectional TCP/IP flow between the nodes and makes sure that you get high throughput. As you probably are all aware, the Cloud does not support UDP broadcast, hence you must use point-to-point UDP for spread in AWS. Beyond that, point-to-point UDP does not scale very well beyond 20 nodes, so as your cluster sizes increase, you must switch over to large cluster mode.
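The spread rule above — point-to-point UDP in the Cloud, switching to large cluster mode past roughly 20 nodes — is a simple threshold decision. The helper below encodes that rule of thumb; the 20-node cutoff is the talk's guidance, not a hard Vertica limit.

```python
# The Cloud has no UDP broadcast, so spread must run point-to-point,
# and per the talk point-to-point doesn't scale well beyond ~20 nodes,
# at which point large cluster mode is needed. The threshold is the
# talk's rule of thumb.

POINT_TO_POINT_LIMIT = 20

def spread_mode(node_count: int) -> str:
    """Pick a spread configuration for a Cloud cluster of a given size."""
    if node_count > POINT_TO_POINT_LIMIT:
        return "large_cluster"
    return "point_to_point"
```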
And finally, use instances with enhanced networking or SR-IOV support. Again, it's free, it comes with the choice of instance type and operating system. We highly recommend it; it makes a big difference in how your workload will perform. So let's talk a little bit about security, configuration, and orchestration. As I said, we provide CloudFormation scripts to ease deployment, and you can use the MC's point and click. With regard to security, Vertica does support instance profiles out of the box in Amazon, and we recommend you use them. This is highly desirable so that you're not passing access keys and secret keys around. If you use our marketplace image, we have picked the latest operating systems, we have patched them, and Amazon validates everything on the marketplace and scans it for security vulnerabilities, so you get that for free. We do some basic configuration: we disable root ssh access, we disallow any password access, we turn on encryption. And we run a basic set of security checks to make sure that the image is secure. Of course, it could be made more secure, but we try to balance security, performance, and convenience. And finally, let's talk about backups. Especially in Eon mode I get the question, "Do we really need to back up our system, since the data is in S3?" And the answer is yes, you do. S3 is not going to protect you against an accidental drop table. S3 has a finite amount of reliability, durability, and availability, and you may want to be able to restore data differently. Backups are also important if you're doing DR or have an additional cluster in a different region; the other cluster can be considered a backup. And finally, why not create a backup or a disaster recovery cluster? Storage is cheap in the Cloud, so we highly recommend you use it.
So with this, I would like to hand it over to my colleague Christopher Daly, who will talk about the other two platforms that we support, that is Google and Azure. Over to you, Chris, thank you. >> Chris: Thanks, Sumeet, and hi everyone. So while there's no argument that we here at Vertica have a long history of running within the Amazon Web Services space, there are other alternative Cloud service providers where we do have a presence, such as Google Cloud Platform, or GCP. For those of you who are unfamiliar with GCP, it's considered the third-largest Cloud service provider in the marketspace, and it's priced very competitively with its peers. It has a lot of similarities to AWS in the products and services that it offers, but it tends to be the go-to place for newer businesses or startups. We officially started supporting GCP a little over a year ago with our first entry into their GCP marketplace: a solution that deployed a fully-functional and ready-to-use Enterprise mode cluster. We followed up on that with the release and support of Google storage buckets, and now I'm extremely pleased to announce that with the launch of Vertica 10, we're officially supporting Eon mode architecture in GCP as well. But that's not all, as we're adding additional offerings into the GCP marketplace. With the launch of version 10 we'll be introducing a second listing in the marketplace that allows for the deployment of an Eon mode cluster, all driven by our own Management Console. This will allow customers to quickly spin up Eon-based clusters within the GCP space. And if that wasn't enough, I'm also pleased to tell you that very soon after the launch we're going to be offering Vertica by the hour in GCP as well. And while we've done a lot to automate the solutions coming out of the marketplace, we recognize the simple fact that for a lot of you, building your cluster manually is really the only option.
So with that in mind, let's talk about the things you need to understand in GCP to get that done. So stop me if you think this slide looks familiar. Well, nope, it's not an erroneous duplicate slide from Sumeet's AWS section; it's merely an acknowledgement of all the things you need to consider for running Vertica in the Cloud. In Vertica, the choice of operational mode will dictate some of the choices you'll need to make in the infrastructure, particularly around storage. Just like with on-prem solutions, you'll need to understand the disk and networking capacities to get the most out of your cluster. And one of the most attractive things in GCP is the pricing, as it tends to run a little less than the others, but that does translate into fewer choices and options within the environment. If nothing else, I want you to take one thing away from this slide, and Sumeet said this about AWS earlier: VMs running in the GCP space run on top of hardware that has hyperthreading enabled, and a vCPU doesn't equate to a core, but rather a processing thread. This becomes particularly important if you're moving from an on-prem environment into the Cloud, because a physical Vertica node with 32 cores is not the same thing as a VM with 32 vCPUs. In fact, with 32 vCPUs, you're only getting about 16 cores' worth of performance. GCP does offer a handful of VM types, which they categorize by letter, but for us, most of these don't make great choices for Vertica nodes. The M series, however, does offer a good core to memory ratio, especially when you're looking at the high-mem variants. Also keep in mind, I/O performance, such as network and disk, is partially dependent on the VM size, so customers in the GCP space should be focusing on 16 vCPU VMs and above for their Vertica nodes. Disk options in GCP can be broken down into two basic types: persistent disks and local disks, which are ephemeral.
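The vCPU-versus-core point above is just arithmetic: with hyperthreading, a vCPU is a hardware thread, and roughly two threads deliver one core's worth of performance. The helpers below encode that rule of thumb for sizing a Cloud VM against an on-prem node; it's an approximation from the talk, not a benchmark.

```python
# With hyperthreading enabled, a Cloud vCPU is a processing thread,
# not a core; ~2 vCPUs deliver roughly one core's worth of performance.
# Rough sizing math per the talk's rule of thumb.

def effective_cores(vcpus: int, threads_per_core: int = 2) -> int:
    """Approximate core-equivalents delivered by a hyperthreaded VM."""
    return vcpus // threads_per_core

def vcpus_for_cores(cores: int, threads_per_core: int = 2) -> int:
    """vCPUs needed to roughly match an on-prem node with this many cores."""
    return cores * threads_per_core
```

So a 32-vCPU VM gives about 16 cores of performance, and matching a 32-core on-prem node takes roughly a 64-vCPU VM.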
Persistent disks come in two forms, standard or SSD. For Vertica in Eon mode, we recommend that customers use persistent SSD disks for the catalog, and either local SSD disks or persistent SSD disks for the depot and the temp space. A couple of things to think about here, though. Persistent disks are provisioned as a single device with a settable size. Local disks are provisioned as multiple disk devices with a fixed size, requiring you to use some kind of software RAID to create a single storage device. So while local SSD disks provide much more throughput, you're using CPU resources to maintain that RAID set, so it's a little bit of a trade-off. Persistent disks offer redundancy, either within the zone they exist in or within the region, and if you select regional redundancy, the disks are replicated across multiple zones in the region. This does have an effect on the performance of the VM, so we don't recommend it. What we do recommend is zonal redundancy when you're using persistent disks, as it gives you that redundancy level without actually affecting the performance. Remember also, in the Cloud space, all I/O is network I/O, as disks are basically block storage devices. This means that disk actions can and will slow down network traffic. And finally, storage bucket access in GCP is based on GCP interoperability mode, which means that it's basically compliant with the AWS S3 API. In interoperability mode, access to the bucket is granted by a key pair that GCP refers to as HMAC keys. HMAC keys can be generated for individual users or for service accounts. We recommend that when you're creating HMAC keys, you choose a service account to ensure that the keys are not tied to a single employee. When thinking about storage for Enterprise mode, things change a little bit. We still recommend persistent SSD disks over standard ones; however, the use of local SSD disks for anything other than temp space is highly discouraged.
I said it before: local SSD disks are ephemeral, meaning that the data is lost if the machine is turned off or goes down, so not really a place you want to store your data. In GCP, multiple persistent disks placed into a software RAID set do not create more throughput like you can find in other Clouds; the I/O saturation usually hits the VM limit long before it hits the disk limit. In fact, performance of a persistent disk is determined not just by the size of the disk but also by the size of the VM. So a good rule of thumb in GCP for maximizing persistent disk I/O throughput: throughput scales with disk size but tends to max out at two terabytes for SSDs and 10 terabytes for standard disks. Network performance in GCP can be thought of in two distinct ways: there's node-to-node traffic, and then there's egress traffic. Node-to-node performance in GCP is really good within the zone, with typical traffic between nodes falling in the 10-15 gigabits per second range. This might vary a little from zone to zone and region to region, but usually it's only limited by the existing traffic where the VMs exist, so kind of a noisy neighbor effect. Egress traffic from a VM, however, is subject to throughput caps, and these are based on the size of the VM. The speed is set by the number of vCPUs in the VM, at two gigabits per second per vCPU, and tops out at 32 gigabits per second. So the larger the VM, the more vCPUs you get, and the larger the cap. So some things to consider in the networking space for your Vertica cluster: pick a region that's physically close to you, even if you're connecting to the GCP network from a corporate LAN as opposed to the internet. The further the packets have to travel, the longer it's going to take. Also, GCP, like most Clouds, doesn't support UDP broadcast traffic on their virtual networking, so you do have to use the point-to-point flag for spread when you're creating your cluster.
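The egress cap described above is a simple formula: two gigabits per second per vCPU, topping out at 32 gigabits per second per VM. A quick calculator for that cap, using the talk's figures (check current GCP documentation before relying on the exact numbers):

```python
# GCP egress throughput cap per the talk: 2 Gbit/s per vCPU,
# topping out at 32 Gbit/s per VM. Figures are the talk's numbers.

PER_VCPU_GBPS = 2
MAX_EGRESS_GBPS = 32

def gcp_egress_cap_gbps(vcpus: int) -> int:
    """Egress throughput cap (Gbit/s) for a VM with the given vCPU count."""
    return min(PER_VCPU_GBPS * vcpus, MAX_EGRESS_GBPS)
```

This is why the talk recommends at least 16 vCPUs per Vertica node: 16 vCPUs is exactly where the per-vCPU formula reaches the 32 Gbit/s ceiling.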
And since the network cap on VMs is set at 32 gigabits per second per VM, to maximize your network egress throughput, don't use VMs that are smaller than 16 vCPUs for your Vertica nodes. And that gets us to the one question I get asked the most often: how do I get my data into and out of the Cloud? Well, GCP offers many different methods to support different speeds and different price points for data ingress and egress. There's the obvious one, right, across the internet, either directly to the VMs or into the storage bucket, or you can light up a VPN tunnel to encrypt all that traffic. But additionally, GCP offers direct network interconnects from your corporate network. These are provided either by Google or by a partner, and they vary in speed. They also offer things called direct or carrier peering, which is connecting the edges of the networks between your network and GCP. And you can use a CDN interconnect, which creates, I believe, an on-demand connection from your network to the GCP network, provided by a large host of CDN service providers. So GCP offers a lot of ways to move your data around, in and out of the GCP Cloud. It's really a matter of what price point works for you and what technology your corporation is looking to use. So we've talked about AWS, we've talked about GCP, and that really only leaves one more Cloud. So last, and by far not the least, there's the Microsoft Azure environment. Holding on strong to the number two place among the major Cloud providers, Azure offers a very robust Cloud offering that's attractive to customers that already consume services from Microsoft. But what you need to keep in mind is that the underlying foundation of their Cloud is based on the Microsoft Windows products, and this makes their Cloud offering a little bit different in the services and offerings that they have.
The good news here, though, is that Microsoft has done a very good job of getting their virtualization drivers baked into the modern kernels of most Linux operating systems, making running Linux-based VMs in Azure fairly seamless. So here's the slide again, but now you're going to notice some slight differences. First off, in Azure we only support Enterprise mode. This is because the Azure storage product is very different from Google Cloud storage and S3 on AWS. So while we're working on getting this supported, and we're starting to focus on this, we're just not there yet. This means that since we're only supporting Enterprise mode in Azure, getting the local disk performance right is one of the keys to success of running Vertica here, with the other major key being making sure that you're getting the appropriate networking speeds. Overall, Azure is a really good platform for Vertica, and its performance and pricing are very much on par with AWS. But keep in mind that the newer versions of Linux operating systems like RHEL and CentOS run much better here than the older versions. Okay, so first things first again: just like GCP, in Azure VMs are running on top of hardware that has hyperthreading enabled. And because of the way Hyper-V, Azure's virtualization engine, works, you can actually see this: if you look at the CPU information of the VM, you'll actually see how it groups the vCPUs by core and by thread. Azure offers a lot of VM types and is adding new ones all the time, but for us, we see three VM types that make the most sense for Vertica. For customers that are looking to run production workloads in Azure, the Es_v3 and the Ls_v2 series are the two main recommendations. While they differ slightly in the CPU to memory ratio and the I/O throughput, the Es_v3 series is probably the best recommendation for a generalized Vertica node, with the Ls_v2 series being recommended for workloads with higher I/O requirements.
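The Azure VM-series guidance here reduces to another small chooser: Es_v3 as the general production choice, Ls_v2 for I/O-heavy workloads, and (as the next part of the talk adds) Ds_v3 for sandboxes. This is a toy encoding of the talk's recommendations with made-up purpose labels, not official Azure sizing advice.

```python
# Toy encoding of the talk's Azure VM-series recommendations for
# Vertica nodes. Purpose labels are illustrative.

AZURE_SERIES_FOR_PURPOSE = {
    "production": "Es_v3",  # good general CPU-to-memory ratio
    "high_io": "Ls_v2",     # for workloads with higher I/O requirements
    "sandbox": "Ds_v3",     # cheaper, suitable for non-production use
}

def azure_series(purpose: str) -> str:
    """Map a deployment purpose to the VM series suggested in the talk."""
    return AZURE_SERIES_FOR_PURPOSE.get(purpose, "Es_v3")
```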
If you're just looking to deploy a sandbox environment, the Ds_v3 series is a very suitable choice that can really reduce your overall Cloud spend. VM storage in Azure is provided by a grouping of four different types of disks, all offering different levels of performance. Introduced at the end of last year, the Ultra Disk option is the highest-performing disk type for VMs in Azure. It was designed for database workloads where high throughput and low latency are very desirable. However, the Ultra Disk option is not available in all regions yet, although that's been changing slowly since its launch. The Premium SSD option, which has been around for a while and is widely available, can also offer really nice performance, especially at higher capacities. And just like with other Cloud providers, the I/O throughput you get on VMs is dictated not only by the size of the disk, but also by the size and type of the VM. So a good rule of thumb here: VM types with an S will have a much better throughput rate than ones that don't, and the larger VMs will have higher I/O throughput than the smaller ones. You can expand VM disk throughput by using multiple disks in Azure with a software RAID. This overcomes the limitations of single disk performance, but keep in mind, you're now using CPU cycles to maintain that RAID, so it is a bit of a trade-off. The other nice thing in Azure is that all their managed disks are encrypted by default on the server side, so there's really nothing you need to do here to enable that. And of course, as I mentioned earlier, there is no native access to Azure storage yet, but it is something we're working on. We have seen folks using third-party applications like MinIO to access Azure storage as an S3 bucket, so it might be something you want to keep in mind and maybe even test out for yourself. Networking in Azure comes in two different flavors, standard and accelerated.
In standard networking, the entire network stack is abstracted and virtualized. This works really well; however, there are performance limitations. Standard networking tends to top out around four gigabits per second. Accelerated networking in Azure is based on single root I/O virtualization (SR-IOV) of the Mellanox adapter. This is basically the VM talking directly to the physical network card in the host hardware, and it can produce network speeds up to 20 gigabits per second, so much, much faster. Keep in mind, though, that not all VM types and operating systems actually support accelerated networking, and just like disk throughput, network throughput is based on VM type and size. So what do you need to think about for networking in the Azure space? Again, stay close to home: pick regions that are geographically close to your location. Yes, the backbones between the regions are very, very fast, but the more hops your packets have to make, the longer it takes. Azure offers two types of groupings of their VMs, availability sets and availability zones. Availability zones offer good redundancy across multiple zones, but this actually increases the node-to-node latency, so we recommend you avoid this. Availability sets, on the other hand, keep all your VMs grouped together within a single zone, but make sure that no two VMs are running on the same host hardware, for redundancy. And just like in the other Clouds, UDP broadcast is not supported, so you have to use the point-to-point flag when you're creating your database to ensure that spread works properly. Spread timeout, okay, this is a good one. So recently, Microsoft has started monthly rolling updates of their environment. What this looks like is that VMs running on top of hardware that's receiving an update can be paused. And this becomes problematic when the pausing of the VM exceeds eight seconds, as the unpaused members of the cluster now think the paused VM is down.
So consider adjusting the spread timeout for your clusters in Azure to 30 seconds; this will help avoid a little of that. If you're deploying a large cluster in Azure, more than 20 nodes, use large cluster mode, as point-to-point spread doesn't really scale well with a lot of Vertica nodes. And finally, pick VM types and operating systems that support accelerated networking; the difference in node-to-node speeds can be very dramatic. So how do we move data around in Azure? Microsoft views data egress a little differently than other Clouds, as it classifies any data being transmitted by a VM as egress. However, it only bills for data egress that actually leaves the Azure environment. Egress speed limits in Azure are based entirely on the VM type and size, and then they're limited by your connection to them. While not offering as many pathways to access their Cloud as GCP, Azure does offer a direct network-to-network connection called ExpressRoute. Offered by a large group of third-party partners, ExpressRoute offers multiple tiers of performance, based on a flat charge for inbound data and a metered charge for outbound data. And of course you can still access Azure via the internet, and securely through a VPN gateway. So on behalf of Jeff, Sumeet, and myself, I'd like to thank you for listening to our presentation today, and we're now ready for Q&A.

Published Date : Mar 30 2020
