
Naina Singh & Roland Huß, Red Hat | KubeCon + CloudNativeCon Europe 2022


 

>> Announcer: "theCUBE" presents KubeCon and CloudNativeCon Europe 2022, brought to you by Red Hat, the Cloud Native Computing Foundation and its ecosystem partners. >> Welcome to Valencia, Spain and KubeCon and CloudNativeCon Europe 2022. I'm Keith Townsend, with my co-host, Paul Gillin, Senior Editor Enterprise Architecture for SiliconANGLE. We're going to talk, or continue to talk, to amazing people. The coverage has been amazing, but also the city of Valencia is beautiful. I have to eat a little crow, I landed and I saw the convention center. Paul, have you gotten out and explored the city at all? >> Absolutely, my first reaction to Valencia when we were out in this industrial section was, "This looks like Cincinnati." >> Yes. >> But then I got on the bus the second day here, 10 minutes to downtown, another world, it's almost a middle-ages flavor down there with these little winding streets, and just an absolutely gorgeous city. >> Beautiful city. I compared it to Charlotte, no disrespect to Charlotte, but this is an amazing city. Naina Singh, Principal Product Manager at Red Hat, and Roland Huß, also Principal Product Manager at Red Hat. We're going to talk a little serverless. I'm going to get this right off the bat. People get kind of feisty when we call things like Knative serverless. What's the difference between something like a Lambda and Knative? >> Okay, so I'll start. Lambda is, like, a function as a service, right? Which is one of the definitions of serverless. Serverless is a deployment platform now. When we introduced serverless to containers through Knative, that's when serverless got revolutionized, it democratized serverless. Lambda was proprietary-based, you write small snippets of code, run for a short duration of time on demand, and done. And then Knative brought serverless to containers, where all those benefits of easy, practical, event-driven, running on demand, going up and down, all those came to containers.
So that's where Knative comes into the picture. >> Yeah, I would also say that Knative is based on containers from the very beginning, and so it really allows you to run arbitrary workloads in your container, whereas with Lambda you have only a limited set of languages that you can use, and you have a runtime contract there, so it is much easier with Knative to run your applications, for example, if one is written in a language that is not supported by Lambda. And of course the most important benefit of Knative is that it runs on top of Kubernetes, which allows you- >> Yes. >> To run your serverless platform on any other Kubernetes installation, so I think this is one of the biggest things. >> I think we saw about three years ago there was a burst of interest around serverless computing and really some very compelling cost arguments for using it, and then it seemed to die down, we haven't heard a lot about serverless, and maybe I'm just not listening to the right people, but what is it going to take for serverless to kind of break out and achieve its potential? >> Yeah, I would say that really the big advantage of Knative in that case is that you can scale down to zero. I think this is one of the big things that will really bring more people on board, because you really save a lot of money with that if your applications are not running when they're not used. Yeah, I think also that, because you don't have this vendor lock-in thing, when people realize that you can run really on every Kubernetes platform, then I think that the journey of serverless will continue. >> And I will add that the event-driven applications, there hasn't been enough buzz around them yet. There is, but serverless is going to bring a new lease on life to them, right? The other thing is the ease of use for developers. With Knative, we are introducing a new programming model, the functions, where you don't even have to create containers, it will create the containers for you.
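The scale-to-zero behavior described here is configured declaratively on the workload. As a rough sketch, a Knative Service that is allowed to scale all the way down to zero pods might look like the following; the service name, image, and scale bounds are illustrative, and the annotation keys should be checked against your Knative version:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                      # hypothetical service name
spec:
  template:
    metadata:
      annotations:
        # Allow the revision to scale down to zero pods when idle
        autoscaling.knative.dev/min-scale: "0"
        # Cap the scale-out during traffic spikes
        autoscaling.knative.dev/max-scale: "10"
    spec:
      containers:
        - image: ghcr.io/example/hello:latest   # placeholder image
          ports:
            - containerPort: 8080
```

With min-scale at "0", Knative removes all pods for the revision when no requests arrive, which is the cost-saving behavior discussed above.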
>> So you create the services, but not the containers? >> Right now, you create the containers and then you deploy them in a serverless fashion using Knative. But the container creation was on the developers, and functions is going to be the third component of Knative that we are developing upstream, a project that Red Hat donated, and it is going to provide code-to-cloud capability. So you bring your code and everything else will be taken care of. >> So, I'd call a function, or, it's funny, we're kind of circular with this. What used to be, I'd write a function and put it into a container; this service will provide that function, not just call that function, as if I'm developing kind of a low-code, no-code, not no-code, but a low-code effort. So if there's a repetitive thing that the community wants to do, you'll provide that as a predefined function or as a service. >> Yeah, exactly. So functions really help the developer to bring their code into the container, so it's really kind of a new (indistinct) on top of Knative- >> On top of. >> And of course, it's also a more opinionated approach. It's really coming closer to Lambda now, because it also comes with a programming model, which means that you have a certain signature that you have to implement, and other stuff. But you can also create your own templates, because at the end what matters is that you have a container that you can run on Knative. >> What kind of applications is serverless really the ideal platform for? >> Yeah, of course the ideal application is an HTTP-based web application that has no state and that has a very non-uniform traffic shape, which means that, for example, if you have a business where you only have spikes at certain times, like maybe for the Super Bowl or Christmas, when selling some merchandise like that, then you can scale up from zero very quickly, to an arbitrary high, depending on the load.
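The "bring your code" model starts from something as small as an HTTP handler. Below is a minimal, stdlib-only Python sketch of the kind of stateless service that functions tooling would package into a container for Knative; the port and response text are arbitrary choices, not anything Knative prescribes:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    """Tiny stateless handler; Knative would route HTTP traffic to it."""

    def do_GET(self):
        body = b"Hello from a serverless container\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        # Keep the example quiet; a real service would log properly.
        pass

def serve(port: int = 8080) -> None:
    # In a real deployment the platform tells the container which port
    # to listen on; 8080 is just a common default.
    HTTPServer(("", port), HelloHandler).serve_forever()
```

Because the handler holds no state between requests, the platform can freely spin instances up from zero and tear them down again.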
And this is, I think, the big benefit over, for example, Kubernetes Horizontal Pod Autoscaling, where it's more like indirect measures of scaling based on CPU or memory, but here, it directly relates one-to-one to the traffic that is coming in, to concurrent requests. Yeah, so this helps a lot for non-uniform traffic shapes, and I think this has become one of the ideal use cases. >> Yeah. But I think that is one of the most used or defined ones, but I do believe that you can write almost all applications. There are some, of course, that would not be the right workload, but as long as you are handling state through an external mechanism. Let's say, for example, you're using a database to save the state, or you're using a persistent volume mount to save the state, it increases the density of your cluster, because when they're running, the containers would pop up, and when your application is not running, the container would go down and the resources can be used to run any other application that you want, right? >> So, when I'm thinking about Lambda, I kind of get the event-driven nature of Lambda. I have an S3 bucket, and if an S3 event is fired, then my function as a service will start, and that's kind of the listening service. How does that work with Knative or a Kubernetes-based thing? 'Cause I don't have an event-driven thing that I can think of that kicks off, like, how can I do that in Kubernetes? >> So I'll start. So it is exactly the same thing. In the Knative world, it's the container that's going to come up, and your service in the container will do the processing of that same event that you are talking about. So let's say the notification came from the S3 server when the object got dropped; that would trigger an application. And in the world of Kubernetes and Knative, it's the container that's going to come up with the service in it, do the processing, either call another service or whatever it needs to do.
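In Knative Eventing terms, the S3-style flow described here is a source feeding events into a broker, with a Trigger subscribing a service to the event types it cares about. A rough sketch of such a Trigger follows; the names and the event type are invented for illustration:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: on-object-created          # hypothetical trigger name
spec:
  broker: default
  filter:
    attributes:
      # Only wake the subscriber for this CloudEvents type
      type: com.example.bucket.object.created
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: image-processor        # hypothetical Knative Service
```

When a matching event reaches the broker, the subscriber service is scaled up from zero to handle it, mirroring the Lambda-style flow in the conversation.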
>> So Knative is listening for the event, and when the event happens, then Knative executes the container. >> Exactly. >> Basically. >> So there's the concept of a Knative source, which is kind of an adapter to the external world, for example, for the S3 bucket. And as soon as there is an event coming in, Knative will wake up that service, will transmit this event as a CloudEvent, which is another standard from the CNCF, and then when the service is done, the service spins down again to zero, so that the service is only running when there are events, which is very cost-effective, and people really like to have this kind of dynamic scaling up from zero to one and even higher like that. >> Lambda has been sort of synonymous with serverless in the early going here; is Knative a competitor to Lambda, is it complementary? Would you use the two together? >> Yeah, I would say that Lambda is an offering from AWS, so it's a cloud service there. Knative itself is a platform, so you can run it in the cloud, and there are other cloud offerings, like from IBM, but you can also run it on-premise, for example, that's the alternative. So you can also have hybrid scenarios where you really can put one part into the cloud and the other part on-prem, and I think there's a big difference in that you have much more flexibility and you can avoid this kind of vendor lock-in compared to AWS Lambda. >> Because Knative provides specifications and conformance tests, you can move from one provider to another. If you are on an IBM offering that's using Knative, and if you go to a Google offering- >> A Google offering. >> That's on Knative, or a Red Hat offering on Knative, it should be seamless, because they're both conforming to the same specifications of Knative. Whereas if you are in Lambda, there are custom deployments, so you are only going to be able to run those workloads on AWS.
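The CloudEvents specification mentioned above defines a small set of required context attributes (specversion, id, source, type); in the HTTP "binary" binding they travel as ce-prefixed headers. A stdlib-only Python sketch of that mapping, with invented attribute values:

```python
import uuid

REQUIRED = ("specversion", "id", "source", "type")

def to_binary_headers(event: dict) -> dict:
    """Map CloudEvents context attributes onto ce-* HTTP headers."""
    missing = [attr for attr in REQUIRED if attr not in event]
    if missing:
        raise ValueError(f"missing required attributes: {missing}")
    return {f"ce-{key}": str(value) for key, value in event.items()}

def from_binary_headers(headers: dict) -> dict:
    """Recover context attributes from ce-* headers (binary mode)."""
    return {key[3:].lower(): value for key, value in headers.items()
            if key.lower().startswith("ce-")}

# Illustrative event; the source URI and type are made up.
event = {
    "specversion": "1.0",
    "id": str(uuid.uuid4()),
    "source": "/demo/bucket",
    "type": "com.example.object.created",
}
headers = to_binary_headers(event)
```

In practice the official CloudEvents SDKs handle this mapping; the sketch just makes visible why any conforming producer and consumer can interoperate.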
>> So KnativeCon, a co-located event as part of KubeCon. I'm curious as to the level of effort and the user interaction for deploying Knative. 'Cause when I think about Lambda or Cloud Run or one of the other functions as a service, there is no backend that I have to worry about. And I think this is where some of the debate becomes over serverless versus some other definition. What's the level of lifting that needs to be done to deploy Knative in my Kubernetes environment? >> So if you like... >> Is this something that comes as a base part of the OpenShift install or do I have to, like, you know, I have to... >> Go ahead, you answer first. >> Okay, so actually for OpenShift, it's a layered product. So you have this catalog of operators that you can choose from, and OpenShift Serverless is one part of that. So it's really kind of a one-click install where you also get a default configuration, and you can flexibly configure it as you like. Yeah, we think that's a good user experience, and of course you can go to these cloud offerings like Google Cloud Run or IBM Code Engine, they just have everything set up for you. And there are other different alternatives, you have (indistinct) charts, you can install Knative in different ways, and you also have options for the backend systems. For example, we mentioned that when an event comes in, there's a broker in the middle, or something which dispatches all the events to the services, and there you can have a different backend system like Kafka or AMQ. So you can have a very production-grade messaging system which really is responsible for delivering your events to your services. >> Now, Knative has recently, I'm sorry, did I interrupt you? >> No, I was just going to say that with Knative, when we talk about it, we generally just talk about the serverless deployment model, right? And the Eventing gets eclipsed. That Eventing, which provides this infrastructure for producing and consuming events, is an inherent part of Knative, right?
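Swapping the broker's backing system, as described, is a configuration choice rather than a code change. As a sketch following the Knative Kafka broker's documented conventions (verify the class name and ConfigMap against your installation), a Kafka-backed Broker might look like:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default
  annotations:
    # Select the Kafka-backed broker implementation
    eventing.knative.dev/broker.class: Kafka
spec:
  config:
    apiVersion: v1
    kind: ConfigMap
    name: kafka-broker-config      # holds bootstrap servers, replication, etc.
    namespace: knative-eventing
```

Triggers and services subscribed to this broker are unchanged; only the delivery backend behind them becomes the production-grade messaging system mentioned above.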
So you install Knative, you install Eventing, and then you are ready to connect all your disparate systems through events. With CloudEvents, that's the specification we use for consistent and portable events. >> So Knative was recently admitted to, or accepted by, the Cloud Native Computing Foundation, incubating there. Congratulations, it's a big step. >> Thank you. >> Thanks. >> How does that change the outlook for Knative adoption? >> So we get a lot of support now from the CNCF, which is really great, so we could be part of this conference, for example, which was not so easy before that. And we see really a lot of interest, and we also heard that before the move, many contributors had not started looking into Knative because it was not part of a neutral foundation, so they were kind of afraid that the project would go away at any time. And we see the adoption really increases, but slowly at the moment. So we are still ramping up there and we really hope for more contributors. Yeah, that's where we are. >> CNCF is almost synonymous with open source and trust. So, being in CNCF and then having this first KnativeCon event as part of KubeCon, and it's a recent addition to CNCF as well, right? So we are hoping that these events and these interviews will catapult more interest into serverless. So I'm really, really hopeful, and I only see positives from here on out for Knative. >> Well, I can sense the excitement. KnativeCon sold out, congratulations on that. >> Thank you. >> I can talk about serverless all day, it's a topic that I really love, it's a fascinating way to build applications and manage applications, but we have a lot more coverage to do today on "theCUBE" from Spain. From Valencia, Spain, I'm Keith Townsend along with Paul Gillin, and you're watching "theCUBE," the leader in high-tech coverage. (gentle upbeat music)

Published Date : May 19 2022


Nick Barcet, Red Hat | Red Hat Summit 2020


 

>> Announcer: From around the globe, it's theCUBE, with digital coverage of Red Hat Summit 2020, brought to you by Red Hat. >> Welcome back. This is theCUBE's coverage of Red Hat Summit 2020. Of course this year, instead of all gathering together in San Francisco, we're getting to talk to Red Hat executives, their partners and their customers where they are around the globe. I'm your host Stu Miniman, and happy to welcome to the program Nick Barcet, who is the Senior Director of Technology Strategy at Red Hat. He happens to be on a boat in the Bahamas. So Nick, thanks so much for joining us. >> Hey, thank you for inviting me. It's a great pleasure to be here, and it's a great pleasure to work for a company that has always dealt with remote people. So it's really easy for us to do this kind of thing. >> Yeah, Nick. You know it's interesting, I've been saying probably for the last 10 years that the challenge of our time is really distributed systems. You know, from a software standpoint that's what we talk about, and even more so today, number one of course the current situation with the global pandemic, but number two, the topic we're going to talk to you about is edge and 5G. It's obviously gotten a lot of hype. So before we get into that, my understanding, Nick, you know, you came into Red Hat through an acquisition. So give us a little bit about your background and what you work on for Red Hat. >> About five years ago, the company I was working for, eNovance, got acquired by Red Hat, and I've been very lucky in that acquisition, where I found a perfect home to express my talent. I've been a free software advocate for the past 20-some years, always been working in free software, and Red Hat is really wonderful for that. >> Yeah, okay, yeah. I remember back in the early days we used to talk about free software. Now we don't talk free, open-source is what we talk about, you know.
Free is a piece of what we're doing, but let's talk about, you know... eNovance, I absolutely remember, they were a partner of Red Hat. I talked to them a lot at some of the OpenStack shows. So I'm guessing when we're talking about edge, these are kind of the pieces coming together of what Red Hat has done for years with OpenStack and with NFV. So what, what's the solution set you're talking about? Bring us inside how you're helping your customers with these types of solutions. >> Well, clearly the solution we are trying to put together is to combine what people already have with where they want to go. Our vision for the future is a vision where OpenShift is delivering a common service on any platform, including hardware at the far edge, on a model where both VMs and containers can be hosted on the same machine. However, there is a long road to get there, and until we can fulfill all the needs, we are going to be using a combination of OpenShift, OpenStack and many other products that we have in our portfolio to fulfill the needs of our customers. We've seen, for example, Verizon starting with OpenStack quite a few years ago, now going with us with OpenShift that they're going to place on top of OpenStack or directly on bare metal. We've seen other big telcos use that very successfully to deploy their 5G networks. There are great capabilities in the existing portfolio. We are just expanding that, simplifying it, because when we are talking about the edge, we are talking about managing thousands if not millions of devices, and simplicity is key if you do not want to have your management costs increase. >> Excellent. So you talked a lot about the service providers. Obviously 5G is a big wave coming, with a lot of promise as to what it will enable both for the service providers as well as the end users. Help us understand where that is today and what we should expect to see in the coming years. >> So in respect of 5G, there are two reasons why 5G is important.
One, it is important in terms of edge strategy, because any person deploying 5G will need to deploy compute resources much closer to the antenna if they want to be able to deliver the promise of 5G and the promise of very low latency. The second reason it is important is because it allows you to build a network of things which do not need to be interconnected other than through a 5G connection. And this simplifies a lot of the edge applications that we are going to see, where sensors need to provide data in a way where you're not necessarily always connected to a physical network, and maintaining a WiFi connection is really complex and costly. >> Yeah, Nick, a lot of pieces that sometimes get confused or conflated. I want you to help us connect the dots between what you're talking about for edge and what's happening in the telcos, and the broader conversation about hybrid cloud, or as Red Hat calls it, the open hybrid cloud, because, you know, there were some articles that were like, you know, edge is going to kill the cloud. I think we all know in IT nothing ever dies, everything is all additive. So how do these pieces all go together? >> So for us at Red Hat, it's very important to build edge as an extension of our open hybrid cloud strategy. Clearly what we are trying to build is an environment where developers can develop workloads once, and then the administrator that needs to deploy a workload, or the business that needs to deploy a workload, can do it on any footprint. And the edge is just one of these footprints, as is the cloud, as is a private environment. So really having a single way to administer all these footprints, having a single way to define the workloads running on them, is really what we are achieving today and making better and better in the years to come. The reality of... to process the data as close as possible to where the data is being consumed or generated.
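The idea of processing data as close as possible to where it is generated reduces, in code, to summarizing locally and shipping only the summary upstream. A toy, stdlib-only Python sketch; the window contents and record fields are invented for illustration:

```python
from statistics import mean

def summarize_window(readings, sensor_id):
    """Collapse a window of raw sensor readings into one compact record.

    At the edge, raw samples stay local; only this summary travels to
    the central site, cutting upstream traffic dramatically.
    """
    if not readings:
        raise ValueError("empty window")
    return {
        "sensor": sensor_id,
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": round(mean(readings), 3),
    }

# A thousand raw samples become a single record on the wire.
window = [20.0 + (i % 7) * 0.1 for i in range(1000)]
summary = summarize_window(window, sensor_id="line-3/temp-12")
```

The design choice is the one Nick describes: intermediary points of processing keep the bulk of the data local, so central capacity only has to absorb the essentials.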
So you have new footprints to, let's say, summarize or simplify or analyze the data where it is being used. And then you can limit the traffic to a more central site to only the essential of it. It is clear that with the current growth of data, there won't be enough capacity to have all the data going directly to the central point. And this is what the edge is about, making sure we have intermediary points of processing. >> Yeah, absolutely. So Nick, you talked about OpenStack and OpenShift. Of course there's an open source project with OpenStack, and with OpenShift the big piece of that is Kubernetes. When it comes to edge, are there other open source projects, parts of the foundations out there, that we should highlight when looking at these edge solutions? >> Oh, there is a tremendous amount of projects that are pertaining to the edge. Red Hat carries many of these projects in its portfolio. The middleware components, for example Quarkus or the AMQ mechanism, Carlcare, are very important components. We've got storage solutions that are super important also when you're talking about storing or handling data. You've got in our management portfolio two very key tools, one called Ansible that allows you to configure remote endpoints, which is super handy when you need to reconfigure firewalls en masse. You've got another tool that is the central piece of our strategy which is called ACM, Red Hat's, I forgot the name of the product now, we are using the acronym all the time, which is our central management mechanism just delivered to us through IBM. So this is a portfolio-wide effort we are making, and I forgot the important one, which is Red Hat Enterprise Linux, which is delivering very soon a new version that is going to enable easier management at the edge. >> Yeah. Well of course we know that RHEL, you know, is the core foundational piece that fits with most of the solutions in the portfolio. It's really interesting how you laid that out though.
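The Ansible use case Nick mentions, reconfiguring firewalls en masse, might look roughly like the playbook below; the inventory group, port, and zone are illustrative, and the firewalld module ships in the ansible.posix collection:

```yaml
---
- name: Open the telemetry port on every edge node
  hosts: edge_nodes                # hypothetical inventory group
  become: true
  tasks:
    - name: Allow inbound traffic on 9090/tcp
      ansible.posix.firewalld:
        port: 9090/tcp
        zone: public
        permanent: true
        immediate: true
        state: enabled
```

One playbook run applies the same change across the whole fleet, which is what makes this approach practical at the thousands-of-devices scale discussed earlier.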
As you know, some people on the outside look and say, "Okay, Red Hat's got a really big portfolio. How does it all fit together?" You just discussed that all of these pieces become really important when they come together for the edge. So maybe, you know, one of the things when we get together at Summit, of course, we get to hear a lot from your customers. So any customers you can talk about that might be a good proof point for these solutions that you're talking about today? >> So right now most of the proof points are in the telco industry, because these are the first ones that have made the investment in depth. And when we are talking about Verizon, we are talking about very large investments that reinforce their strategy. We've got customers in telco all over the world that are starting to use our products to deploy their 5G networks, and we've got lots of customers starting to work with us on creating their strategy in other verticals, particularly in the industrial and manufacturing sector, which is our next endeavour after telco. >> Yeah, well absolutely. Verizon is a customer I'm well familiar with when it comes to what they've done with Red Hat. I'd interviewed them at OpenStack a few years back when they talked about those NFV solutions. You brought up manufacturing, so that brings up one of the concerns when you talk about edge, or specifically about IoT environments. When we did some original research looking at the industrial internet, the boundaries between the IT group and the OT group, which heavily lives in manufacturing, they don't necessarily talk or work together. So how's Red Hat helping to make sure that customers, you know, go through these transitions, break through those silos and can take advantage of these sorts of new technologies? >> Well, obviously you have to look at the problem in its entirety.
You've got to look at the change management aspect, and for this, you need to understand how people interact together if you intend on modifying the way they work together. You also need to ensure that the requirements of one are not impeding the other. In the environment of a manufacturer, that is really important, especially when we are talking about dealing with IoT sensors which have very limited security capability. So you need to add in the appropriate security layers to make what is not secure, secure, and if you don't do that you're going to introduce friction. And you also need to ensure that you can delegate administration of the components to the right people. You cannot say, "Oh, from now on all of what you used to be controlling on a manufacturing floor is now controlled centrally, and you have to go through this form in order to have anything modified." So having the flexibility in our tooling to enable respect of the existing organization and handle change management the appropriate way, these are ways to answer this... >> Right, Nick, last thing for you. Obviously this is a maturing space, lots of change happening. So give us a little bit of a look forward as to what users should be expecting, and, you know, what pieces will the industry and Red Hat be working on that bring full value out of the edge and 5G solutions? >> So as always, any such changes are driven by the applications. And what we are seeing is, in terms of applications, a very large predominance of requirements for AI, ML and data processing capability. So reinforcing all the components around this environment is one of the key additions that we are making as we speak. You can see Chris' keynote, which is going to demonstrate how we are enabling a manufacturer to process the signals sent from multiple sensors through an AI, doing early failure detection. You can also expect us to enable more and more complex use cases in terms of footprint.
Right now, we can do a very small data center residing on three machines. Tomorrow we'll be able to handle remote worker nodes that are on a single machine. Further along, we'll be able to deal with disconnected nodes, a single machine acting as a cluster. All these are elements that are going to allow us to go further and further in the complication of the use cases. It's not the same thing when you have to connect a manufacturer that is on solid ground with fiber access, or when you have to connect a node, for example, on a boat, and talk about that too. >> Well, Nick, thank you so much for all the updates. I know there are some really good breakouts. I'm sure there's lots on the Red Hat website to find out more about edge and 5G. Nick Barcet, thanks so much for joining us. >> Thank you very much for having me. >> All right. Back with lots more coverage from Red Hat Summit 2020. I'm Stuart Miniman, and thanks for watching theCUBE. (bright upbeat music)

Published Date : Apr 28 2020


Rob Szumski, Red Hat OpenShift | KubeCon + CloudNativeCon EU 2019


 

>> Live from Barcelona, Spain, it's theCUBE! Covering KubeCon and CloudNativeCon Europe 2019. Brought to you by Red Hat, the Cloud Native Computing Foundation, and ecosystem partners. >> Hi, and welcome back. This is KubeCon, CloudNativeCon 2019 here in Barcelona, 7,700 in attendance according to the CNCF. I'm Stu Miniman, and my co-host for this week is Corey Quinn. And happy to welcome back to the program a CUBE alum, Rob Szumski, who's the Product Manager for Red Hat OpenShift. Rob, thanks so much for joining us. >> Happy to be here. >> All right, so a couple of weeks ago, we had theCUBE in Boston. You know, short drive for me, didn't have to take a flight as opposed to... I'm doing okay with the jet lag here, but Red Hat Summit was there. And it was a big crowd there, and the topic we're going to talk about with you is operators. And it was something we talked about a lot, something about the ecosystem. But let's start there. For our audience that doesn't know, what is an operator? How does it fit into this whole cloud-native space and this ecosystem? >> (Corey) And where can you hire one? >> (laughs) So they're software programs, first of all. And the idea of an operator is everything it takes to orchestrate one of these complex distributed applications: databases, messaging queues, machine learning services. They all are distinct components that all need to be lifecycled. And so there's operational expertise around that, and this is something that might have been in a bash script before, you have a wiki page, it's just in your head. And so it's putting that into software so that you can stamp out many copies of that. So the operational expertise from the experts, so you want to go to the folks that make MongoDB, for Mongo, for Redis, for Couchbase, for TensorFlow, whatever it is. Those organizations can embed that expertise, and then take your user configuration and turn that into Kubernetes.
When I hear the description, it reminds me a little bit of robotic process automation, or RPA — which, you talk about, how can I hire them? (laughs) RPA is, well, there are certain jobs that are rather repetitive and we can allow software to do that, so maybe that's not where it is. But help me to put it into the... >> No, I think it is. >> Okay, awesome. >> When you think about it, there's a certain amount of toil involved in operating anything, and then there's just mistakes that are made by humans when you're doing this. And so you would rather just automate away that toil so you can spend your human capital on higher-level tasks. So that's what operators are all about. >> (Stu) All right. Great. >> Do you find that operators are a decent approach to taking things that historically would not have been well-suited for autoscaling, for example, because there's manual work that has to happen whenever a node joins or leaves a swarm? Is that something operators tend to address more effectively? Or am I thinking about this slightly in the wrong direction? >> Yeah, so you can hook into kind of any Kubernetes event, so if your application cares about nodes coming and leaving, for example — this is helpful for operators that are operating the infrastructure itself, which OpenShift has under the hood. But you might care about when new namespaces are created, or this pod goes away, or whatever it is. You can kind of hook into everything there. >> So, effectively it becomes a story around running stateful things in what was originally designed for stateless containers. >> Yeah, that can help you because you care about nodes going away because your storage was on it, for example. Or, now I need to re-balance that. Whatever that type of thing is, it's really critical for running stateful workloads. >> Okay, maybe give us a little bit of context as to the scope of operators and any customer examples you have that could help us add a little bit of concreteness to it.
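The "hook into everything" idea above — an operator registering interest in cluster events like a node leaving or a namespace appearing — can be illustrated with a toy event-hook registry. The event names and handlers here are made up for illustration; this is not the real Kubernetes watch API.

```python
# Toy sketch of an operator subscribing to cluster events it cares
# about. Event names and handler behavior are illustrative only.

handlers = {}

def on(event):
    """Register a handler for a cluster event the operator cares about."""
    def register(fn):
        handlers.setdefault(event, []).append(fn)
        return fn
    return register

@on("node_removed")
def rebalance_storage(obj):
    # A storage operator reacts to a node leaving by re-balancing.
    return "rebalance replicas away from " + obj["node"]

@on("namespace_created")
def apply_defaults(obj):
    # Another operator reacts to each newly created namespace.
    return "apply default quota to " + obj["namespace"]

def dispatch(event, obj):
    """Deliver an observed event to every registered handler."""
    return [fn(obj) for fn in handlers.get(event, [])]

print(dispatch("node_removed", {"node": "worker-3"}))
```

In a real controller the dispatch side is driven by watches on the Kubernetes API server rather than a local dictionary, but the reaction pattern is the same.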
>> Yeah, they're designed to run almost anything — every common workload that you can think about on an OpenShift cluster. You've got your messaging queues; we have a product that uses an operator, AMQ Streams. It's Kafka. And we've got folks that heavily use the Prometheus operator. I think there's a quote that's been shared around from one of our customers, Ticketmaster. Everybody needed some container-native monitoring, and everybody could figure out Prometheus on their own, or they could use the operator. So, they were running, I think, 300-some instances of Prometheus in dev and staging — this team, that team, this person just screwing around with something over here. So, instead of being experts in Prometheus, they just use the operator and they can scale out very quickly. >> That's great, because one of the challenges in this ecosystem is there's so many pieces of it. We always ask: how many companies need to be expert on not just Kubernetes, but any of these pieces? How does this tie into the CNCF, all the various projects that are available? >> I think you nailed it. You have to integrate all this stuff together, and that's where the value of something like OpenShift comes in at the infrastructure layer. You've got to pick all your networking and storage and your DNS that you're going to use, and wire all that together, and upgrade that, lifecycle it. The same thing happens at a higher level, too. You've got all these components, from getting your Fluentd pods down to operating things like Istio and service meshes, serverless workloads. All this stuff needs to be configured, and it's all pretty complex. It's moving so fast, nobody can be an expert. The operator's actually the expert, embedded from those teams, which is really awesome. >> You said something before we got started, a little bit about a certification program for operators. What is that about? >> We think of it as the superset of our community operators.
We've got the TensorFlow community, for example, that curates an operator. But for companies that want to go to market jointly with Red Hat, we have a certification program that takes any of their community content, or some of their enterprise distributions, and makes sure that it's well-tested on OpenShift and can be jointly supported by OpenShift and that partner. If you come to Red Hat with a problem with a MongoDB operator, for example, we can jointly solve that problem with MongoDB and ultimately keep your workload up and keep it running. We've got that times a bunch of databases and all kinds of services like that. You can access those directly from OpenShift, which is really exciting: one-click install of a production-ready Mongo cluster. You don't need to dig through a bunch of documentation for how that works. >> All right, so Rob, are all of these specific only to OpenShift, or will they work with flavors of Kubernetes? >> Most of the operators work just against a generic Kubernetes cluster. Some of them also do hook into OpenShift to use some of our specialized security primitives and things like that. That's where you get a little bit more value on OpenShift, but you're just targeting Kubernetes at the end of the day. >> What are you seeing customers doing with this specifically? I guess, what user stories are you seeing that are validating that this is the right direction to go in? >> It's a number of different buckets. The first one is seeing folks running services internally. You traditionally have a DBA team that maybe runs the shared database tier, and folks are bringing that to the container-native world from the VMs that they're used to, using operators to help with that, and so now it's self-service. You have a dedicated cluster infrastructure team that runs clusters and gives out quota. Then, you're just eating into that quota to run whatever workloads you want in an operator format. That's kind of one bucket of it.
Then, you see folks that are building operators for internal operations. They've got deep expertise on one team, but if you're running any enterprise today, especially like a large-scale e-commerce shop, there's a number of different services. You've got caching tiers and load balancing tiers. You've got front-ends, you've got back-ends, you've got queues. You can build operators around each one of those, so that when those teams are sharing internally — you know, hey, where's the latest version of your stack? Here's the operator, go to town. Run it in staging, QA, all that type of stuff. Then, lastly, you see these open source communities building operators, which is really cool. Something like TensorFlow — that community curates an operator to get you one consistent install, so everyone's not innovating on 30 different ways to install it and you're actually using it. You're using high-level stuff with TensorFlow. >> It's interesting to lay it out. Some of these, okay, well, a company is doing that because it's behind something. Others, you're saying, it's a community. Reminds me of Red Hat's long history of helping to give, if you will, adult supervision for all of these changes that are happening in the world out there. >> It's a fast-moving landscape, and some tools that we have, like our Operator SDK, are helping to tame some of that. So you can get quickly up and running building an operator, whether you are one of those communities, you are a commercial vendor, you're one of our partners, you're one of our customers. We've got tools for everybody. >> Anything specific in the database world? Is that something we're seeing, that Cambrian explosion in the database world? >> Yeah, I think that folks are finally wrapping their heads around that Kubernetes is for all workloads.
And to make people feel really good about that, you need something like an operator that's got this extremely well-tested code path for what happens when these databases do fail: how do I fail it over? It wasn't just some person that went in and made this. It's the experts, the folks that are committing to MongoDB, to Couchbase, to MySQL, to Postgres. That's the really exciting thing. You're getting that expertise kind of as an extension of your operations team. >> For people here at the show, are there sessions about operators? What's the general discussion here at the show for your team? >> There's a ton, even too many to mention. There's a bunch of different partners and communities that are curating operators, talking about best practices for managing upgrades of them, users, all that kind of stuff. I'm going to be giving a keynote, kind of an update about some of the stuff we've been talking about, here later on this evening. It's all over the place. >> What do you think right now in the ecosystem is being most misunderstood about operators, if anything? >> I think that nothing is quite misunderstood, it's just wrapping your head around what it means to operate applications in this manner. Just like the Kubernetes components, there's this desired state loop that's in there, and you need to wrap your head around exactly what needs to be in that. Your declarative state is just the Kubernetes API, so you can look at desired and actual and make that happen, just like all the Kube components. So it's just looking at a different way of thinking. We had a panel yesterday at the OpenShift Commons about operators, and one of the questions that had some really interesting answers was: what did you understand about your software by building an operator? 'Cause sometimes you need to tease apart some of these things.
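The desired-state loop described above — look at desired, look at actual, make them match — reduces to a tiny sketch. This is illustrative Python only, not real controller code: a real controller reads and writes the Kubernetes API rather than comparing local dictionaries.

```python
# Minimal sketch of a desired-state loop: compare declared state
# against observed state and emit the actions that converge them.

def converge(desired, actual):
    actions = []
    for name in desired.keys() - actual.keys():
        actions.append(("create", name))      # declared but missing
    for name in actual.keys() - desired.keys():
        actions.append(("delete", name))      # running but no longer declared
    for name in desired.keys() & actual.keys():
        if desired[name] != actual[name]:
            actions.append(("update", name))  # drifted from the spec
    return sorted(actions)

desired = {"db-0": {"image": "mongo:4.0"}, "db-1": {"image": "mongo:4.0"}}
actual  = {"db-0": {"image": "mongo:3.6"}, "db-2": {"image": "mongo:4.0"}}
print(converge(desired, actual))
```

Run repeatedly, a loop like this is idempotent: once actual matches desired, it emits no actions, which is exactly the property the built-in Kube components rely on.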
Oh, I had hard-coded configuration here. One group shared that their leader election was not actually working correctly in every single instance, and their operator forced them to dig into that and figure out why. So, I think it's a give and take that's pretty interesting when you're building one of these things. >> Do you find that customers are starting to rely on operators to effectively run their own — for example, MongoDB — inside of their Kubernetes clusters, rather than depending upon a managed service offering provided by their public cloud vendor? Are you starting to see people effectively reducing public cloud to baseline primitives, as a place to run containers, rather than the higher-level services that are starting to move up the stack? >> A number of different reasons for that, too. You see this with hosted services: if you find a bug in that service, for example, you're just out of luck. You can't go introspect the versions, you can't see how those components are interacting. With an operator you have an open source stack; it's running on your cluster, in your infrastructure. You can go introspect exactly what's going on. The operator has that expertise built in, so it's not like you can screw around with everything, but you have much more insight into what's going on. Another thing you can't get with a cloud service is you can't run it locally. So, if you've got developers that are doing development on an airplane, or just want to have something local so it's running fast, you can put your whole operator stack right on your laptop. That's not something you can do with a hosted service, which is really cool. Most of these are open source too, so you can go see exactly how the operator's built. It's very transparent, and especially if you're going to trust this for a core part of the infrastructure, you really want to know what's going on under the hood. >> Just to double check, all this can run on OpenShift?
It is agnostic to where it lives, whether public cloud or data center? >> Exactly. These are truly hybrid services, so if you're migrating your database over, for example, now you have a truly hybrid environment just targeting Kubernetes. You can move that to any infrastructure that you like. This is one of the things that we see OpenShift customers do. Some of them want to be cloud-to-cloud, cloud-to-on-prem, different environments, on-prem only — because you've got database workloads that might not be leaving, or a mainframe you need to tie into, like a lot of our FSI customers. Operators can help you there, where you can't move some of those workloads. >> Cloud-to-on-prem makes a fair bit of sense to me. One thing I'm not seeing as much of in the ecosystem is cloud-to-cloud. What are you seeing that's driving that? >> I think everybody has their own cloud that they prefer for whatever reasons. I think it's typically not even cost; it's tooling and cultural change. And so you kind of invest in one of those. I think people are investing in technologies that might allow them to leave in the future, operators and Kubernetes being among those important things. But that doesn't mean that they're not perfectly happy running on one cloud versus the other, running Kubernetes on top of that. >> Rob, really appreciate all the updates on operators. Thanks so much for joining us again. >> Absolutely. It's been fun. >> Good luck on the keynote. >> Thank you. >> For Corey Quinn, I'm Stu Miniman, back with more coverage, two days live, wall to wall, here at KubeCon CloudNativeCon 2019 in Barcelona, Spain. Thanks for watching.

Published Date : May 21 2019



Khurshid Sohail & Nish Jani, UPS | Red Hat Summit 2019


 

(electronic music) >> Presenter: Live from Boston, Massachusetts. It's theCUBE! Covering Red Hat Summit 2019, brought to you by Red Hat. (electronic music) >> Welcome back here on theCUBE, continuing our live coverage at Red Hat Summit 2019 as we come to a near conclusion of our three days of wall-to-wall coverage for you here. All the keynotes, the guests we've had — it's been a lot of fun and certainly an educational opportunity for Stu Miniman and myself, and we're looking forward to our next couple of guests here. We have Khurshid Sohail, an application developer at UPS, and Nish Jani, a senior application development manager at UPS. Gentlemen, thank you for joining us. We appreciate the time. [Developers] Thanks for having us. >> Presenter: Thank you. And so you had representation on the keynote stage this morning. UPS did, talking about some of the changes underway there and your Red Hat relationship, for those at home who weren't privy to that. Just to set the stage in terms of what you're doing with Red Hat and what you're gonna be doing with them as they came up with a couple of releases this week. Nish, if you would? >> Sure. So as you know, UPS is delivering products and services to over 200 countries, and from a scalability perspective, we deliver over 21,000,000 packages per day, and during our peak season it grows to over 30,000,000 packages a day. Last year we averaged 200,000,000 tracks a day on our tracking system, and last peak we went to 335,000,000 tracks in a single day, and that was all built on a new OpenShift platform that we developed. >> Just a little bit of data. >> Yeah. >> All right. >> Yeah, you know, I love when you talk to, you know, so many customers today who scale, and it's like, "Oh, okay. How many transactions do we have?" It's like, "Oh, you talk logistics," and you're like, oh, okay. You talked a lot of numbers there, but when you talk about the drivers and how many of those they have, and the amount of data that goes in.
It's like, okay, how many supercomputers do you have? And, you know, hundreds of PhDs solving this. Maybe we're just a little bit blind to, you know, the logistical pieces that go into there, and how, you know — I mean, this is not, you know, just "Okay, go do your route" as we had in the past. >> Yeah, so from a driver perspective, we've offered services to the drivers that take out the human guesswork in needing to deliver packages. We have the ORION system, which tells the driver exactly where to go and where to deliver the packages, and optimizes the routes for them. From a visibility perspective, which is the products and services Khurshid and I support, the driver is able to do their job and deliver their status and deliver packages on time, so for our customers, they see an updated status in real time. From a VI perspective — which is our Visibility Information Business Engine, the new platform that we built last year — it was a long journey into the process, part of our digital transformation. We got into the transformation as a need for customers who wanted more out of the products and services that we offered, and to be able to go faster to market and provide visibility in a real-time sense. >> Presenter: Yeah. >> We always love when we hear some of these digital transformations. Like, okay, you know, I think if UPS does logistics, those were pretty complicated before. >> Absolutely. >> So, like, you needed a digital transformation. Maybe we could start with, you know, what were some of the objectives? You know, what was holding you back or limiting you before? And, you know, let's go to the after when you get through there. >> Sure.
So before, we were on a monolithic system, a legacy system, and the costs per track were very expensive, and to drive new needs we needed to redevelop ourselves and redesign ourselves. The way we did that was we transformed by moving away from our traditional waterfall models, which typically took six months to deploy new services, and we went to within weeks. The way we did that was to adopt agile methodologies, and using OpenShift we were able to develop and deploy applications more quickly, and Khurshid can talk a little bit more about the VI application and how it works. >> So pretty much our goal was to get the old track system off the legacy model and into a containerized, on-premise, cloud-based platform. So we successfully accomplished that — essentially 20 years' worth of data we did in a year, so we're pretty proud of that, not to toot our own horn, but yeah. We got everything going with OpenShift and a couple of other Red Hat products like AMQ, JBoss, and Fuse, and we also worked, like Nish mentioned, with the agile methodologies and principles, so we were successfully able to create a type of environment that other applications at UPS see as, you know, kind of something to look up to, so other applications can see what we've done and they can move themselves over in the same direction.
>> You were talking about customer choice, we were speaking earlier before just about competition, so you have to be extremely responsive to customer needs and my choice is something that comes to my mind that you offer that gives great flexibility to a customer but tremendous complexity I would think to you because you have kind of like an X and a Y, you have a package, you've got a delivery point and now you throw the Z in with a time of day change or location change and to coordinate that so your efficiencies, your fuel efficiencies and route efficiencies are still maintained. And how do you do that in your environment? And whether that's something that Red Hat, is that something that is enabled by the technology that you're deploying of theirs? >> Sure, from a visibility perspective my choice product? have been very successful. We're able to deliver B to C packages to our individual customers or our consignees which help them choose where and when they want their package and also be able to see a delivery time. From a complexity perspective, sure. It adds a ton of complexity because we need to know what addresses to go to and what changes are done to the packages prior to them being delivered. From an open-shift perspective, that's partly going to be our digital transformation to transform that visibility and provide that information and bringing more products and services to those customers and lower latency of time. >> Okay, so containerization is something that's relatively prevalent for the audience here, but it's still relatively young in maturity. Just wondering as you rolled out the solutions, any learnings you had or any, you know, I don't want to say stumbling blocks, but you know, things that you learned along the way that maybe your peers should, could learn for. 
>> Yeah, I mean, I think you should say stumbling blocks 'cus as anybody knows, whenever you go through anything new there's opportunities to learn and there's monumental opportunities of failure and I think UPS knows and we've pride ourselves in failing fast, learning from our mistakes and getting to the next level. So like you mentioned with containerization and open-shift, the ability for us when we used to deploy every six months, now we get to deploy in two weeks to production and before that we could deploy in a matter of minutes so we could test all these tools and everything that open-shift offers gives us the ability to serve our business and give the most information to our customers. So open-shift and Red Hat have done a great job in helping us reach our maximum potential and we look to continue that partnership. >> Yeah, so was there anything, you know, in that speed to delivery and being more agile that, you know, "Oh jeez, security, "we should have pulled them in sooner." Or you know, so and so should have, but we forgot to include them in the original discussions. >> No, when we went through the transformation of moving the tracking application, we went through all the options and open-shift was just a natural partner, a natural fit. At the time we were going through a proof of concept with the product with another team and as the VI project came along it was just a natural fit to use containerization and use the speed of deployment, automated testing and pipelines in order to deploy this new application. (coughing) >> You used an interesting phrase there, for shit about failing fast and we've heard that a couple of times this week in different flavors. What about the lack of fear and failure and almost like that failure is not always a bad thing because it leads to improvement. But you have to have a certain amount of confidence underpinning that. 
So talk to that — I'm just curious, from a company culture standpoint, what kind of confidence is there about that failing fast, and how technology allows you to make up the ground that you might have lost by failure? Especially in today's world, there's so much more capability and so much more at your disposal. >> Yeah, so what I'd say benefits us and allows us to fail fast is management, like Nish and our upper-level management. They give us the opportunity to make these mistakes because they know we're going to learn from them, and just talking about OpenShift, like you said, when we fail, we have to make up that ground. When we make those mistakes, the platform that we're on allows us to pivot from that and make it a success story right away. So we noticed that we were able to learn from mistakes quickly, and with the help and support of management we were able to implement real-time solutions and deploy them right away. >> Yeah, in addition to that, we're able to deploy in a short period of time, so we know we're at a minimum two weeks away from the next deployment. So we could quickly restore functionality within minutes or within days if necessary. So, you know, previously we weren't able to do that, so fail fast didn't quite work in the waterfall method. >> So Nish, you know, the VI project has rolled out. What does that mean for your relationship to the business? And also, ultimately, how has it impacted your ultimate customers? >> Sure, so from an external customer perspective, obviously we're able to bring products and services to market faster for our customers and provide better visibility to the customers.
Internally in the organization, we've significantly reduced our cost to serve, and as we continue to transform on the VI platform, using OpenShift and partnering with Red Hat, we'll be able to transform other visibility products in the future. And going forward, we're able to take folks like Khurshid and develop them further, use the skill sets that we've learned, and develop our people faster. >> So where do you want to jump in next? I mean, in your world, Khurshid, I would think that's probably one of the more exciting questions: you know, what now? What next? Where are we going with this? In terms of your core business, you know, where's the efficiency gain that you'd like to see? Where's the customer service you'd like to improve? >> Yeah, I mean, from a business perspective we're always looking to serve our business and bring products to the platform that are gonna be useful to the customers. So beyond what we currently have in VI today, we're looking to create more visibility products for our customers, and from a technical standpoint — we're at Red Hat Summit 2019, and they've announced some crazy cool things. >> What's the craziest cool thing you've heard this week? >> We're looking forward to OpenShift 4, we're looking forward to Kafka Streams, and Quarkus, which is really cool, and just operators — the list goes on and on. I could talk to you about it for days and days. We were here for three days; you got three more days ready? >> Sure. (laughing) Tape is cheap. (laughing) >> Yeah, we're looking forward to a lot of cool things that Red Hat's going to provide, and we're gonna run with it. >> Yeah. We're looking forward to a continued relationship with Red Hat and offering new products and services that can make our businesses run better. >> Like, for example, if you could — if I were to say, immediately build a rocket ship right now — you know, what's it gonna look like?
What area of your business would you like to literally dabble in and say, "Okay, I think this will work"? It might, right now, look to be a little bit futuristic or down the road — what scenario could you paint, possibly, to give us an idea about what you're thinking? >> So our next focus for the business is to serve the small and medium business, right? So we've been talking about the modulus product and serving residential addresses and serving residential folks, but we want to start focusing on the small and medium businesses and offering the same services and capabilities. So our next plateau, our next capability, is to provide those services to the small and medium businesses so they can grow and partner with UPS. >> And I think, as Nish mentioned, with the utilization of Quarkus, actually bringing some of these technologies into our containers, bringing more security layers — there's a lot of great vendors here, and partnering with them and bringing them into our services will open the doors for us a lot. And like Nish mentioned, with My Choice and small business, I think it will allow them a better customer experience, partnering up with some of these new people. >> Presenter: You bet. Well, thank you both. Thanks for being here and sharing your time; good to see you. Good keynote this morning as well, so please be sure to pass that along, and we look forward to seeing you down the road. >> Developers: Thank you. >> Thank you both. Back with more coverage from Red Hat Summit 2019. You are watching theCUBE, live from Boston. (electronic music)

Published Date : May 9 2019



Brent Compton, Red Hat | theCUBE NYC 2018


 

>> Live from New York, it's theCUBE, covering theCUBE New York City 2018. Brought to you by SiliconANGLE Media and its ecosystem partners. >> Hello, everyone, welcome back. This is theCUBE live in New York City for theCUBE NYC, #CUBENYC. This is our ninth year covering the big data ecosystem, which has now merged into cloud. All things coming together. It's really about AI, it's about developers, it's about operations, it's about data scientists. I'm John Furrier, my co-host Dave Vellante. Our next guest is Brent Compton, Technical Marketing Director for Storage Business at Red Hat. As you know, we cover Red Hat Summit and it's great to have the conversation. Open source, DevOps is the theme here. Brent, thanks for joining us, thanks for coming on. >> My pleasure, thank you. >> We've been talking about the role of AI, and AI needs data and data needs storage, which is what you do, but if you look at what's going on in the marketplace, there's kind of an architectural shift. It's harder to find a cloud architect than it is to find diamonds these days. You can't find a good cloud architect. Cloud is driving a lot of the action. Data is a big part of that. What's Red Hat doing in this area and what's emerging for you guys in this data landscape? >> Really, the days of specialists are over. You mentioned it's more difficult to find a cloud architect than find diamonds. What we see is that infrastructure has become less about compute versus storage and networking in isolation. It's the architect that can bring the confluence of those specialties together. One of the things that we see is people bringing their analytics workloads onto the common platforms where they've been running the rest of their enterprise applications. For instance, if they're running a lot of their enterprise applications on AWS, of course, they want to run their analytics workloads in AWS, and that's EMR, long since in the history books.
Likewise, if they're running a lot of their enterprise applications on OpenStack, it's natural that they want to run a lot of their analytics workloads on the same type of dynamically provisioned infrastructure. Emerging, of course, we just announced on Monday this week with Hortonworks and IBM, if they're running a lot of their enterprise applications on a Kubernetes substrate like OpenShift, they want to run their analytics workloads on that same kind of agile infrastructure. >> Talk about the private cloud impact and hybrid cloud, because obviously we just talked to the CEO of Hortonworks. Normally it's about early days, about Hadoop, data lakes and then data planes. They had a good vision. They're years into it, but I like what Hortonworks is doing. But he said Kubernetes, on a data show, Kubernetes. Kubernetes is a multi-cloud, hybrid cloud concept, containers. This is really enabling a lot of value, and you guys have OpenShift, which became very successful over the past few years, the growth has been phenomenal. So congratulations, but it's pointing to a bigger trend, and that is that the infrastructure software, the platform as a service, is becoming the middleware, the glue, if you will, and Kubernetes and containers are facilitating a new architecture for developers and operators. How important is that for you guys, and what's the impact on the customer when they think, okay, I'm going to have an agile DevOps environment, workload portability, but do I have to build that out? You mentioned people don't have to necessarily do that anymore. The trend has become on-premise. What's the impact on the customer as they hear Kubernetes and containers and the data conversation?
I want to have an abstraction layer, my Kubernetes layer, that sits on top of those infrastructure platforms. As I bring my workloads one-by-one, from custom DevOps apps to a lift and shift of legacy apps, onto that substrate, I want to have it be independent, private cloud or public cloud and, time permitting, we'll go into more details about what we've seen happening in the private cloud with analytics as well, which is effectively what brought us here today. The pattern that we've discovered with a lot of our large customers who are saying, hey, we're running OpenStack, they're large institutions that for lots of reasons store a lot of their data on-premises, saying, we want to use the utility compute model that OpenStack gives us as well as the shared data context that Ceph gives us. We want to use that same thing for our analytics workloads. So effectively some of our large customers taught us this pattern. >> So they're building infrastructure for analytics essentially. >> That's what it is. >> One of the challenges with that is the data is everywhere. It's all in silos, it's locked in some server somewhere. First of all, am I overstating that problem, and how are you seeing customers deal with that? What are some of the challenges that they're having and how are you guys helping? >> Perfect lead in, in fact, one of our large government customers, they recently sent us an unsolicited email after they deployed the first 10 petabytes in a deca-petabyte solution. It's OpenStack based as well as Ceph based. Three taglines in their email. The first was releasing the lock on data. The second was releasing the lock on compute. And the third was releasing the lock on innovation. Now, that sounds a bit buzzword-y, but when it comes from a customer to you. >> That came from a customer? Sounds like a marketing department wrote that.
>> In the details, as you know, traditional HDFS clusters, traditional Hadoop clusters, Spark clusters or whatever, HDFS is not shared between clusters. One of our large customers has 50 plus analytics clusters. Their data platforms team employs a maze of scripts to copy data from one cluster to the other. And if you are a scientist or an engineer, you'd say, I'm trying to obtain these types of answers, but I need access to data sets A, B, C, and D, but data sets A and B are only on this cluster. I've got to go contact the data platforms team and have them copy it over and ensure that it's up-to-date and in sync, so it's messy. >> It's a nightmare. >> Messy. So that's why the one customer said releasing the lock on data, because now it's in a shared context. Similar paradigm as AWS with EMR. The data's in a shared context, in S3. You spin up your analytics workloads on EC2. Same paradigm discussion as with OpenStack. You're spinning up your analytics workloads via OpenStack virtualization and they're sourcing a shared data context inside of Ceph, S3-compatible Ceph, so same architecture. I love his last bit, the one that sounds the most buzzword-y, which was releasing the lock on innovation. And this individual, English was not this person's first language, so love the wording. He said, our developers no longer fear experimentation because it's so easy. In minutes they can spin up an analytics cluster with a shared data context; if they get the wrong mix of things they shut it down and spin it up again. >> In the previous example you used HDFS clusters. There's so many trip wires, right. You can break something. >> It's fragile. >> It's like scripts. You don't want to tinker with that. Developers don't want to get their hand slapped. >> The other thing is also the recognition that innovation comes from data. That's what my takeaway is.
The customer saying, okay, now we can innovate because we have access to the data; we can apply intelligence to that data, whether it's machine intelligence or analytics, et cetera. >> This is the trend in infrastructure. You mentioned the shared context. What other observations and learnings have you guys come to as Red Hat starts to get more customer interactions around analytical infrastructure? Is it an IT problem? You mentioned abstracting away the different infrastructures, and that means multi-cloud's probably set up for you guys in a big way. But what does that mean for a customer? If you had to explain infrastructure analytics, what needs to get done, what does the customer need to do? How do you describe that? >> I love the term that the industry uses of multi-tenant workload isolation with shared data context. That's such a concise term to describe what we talk to our customers about. And most of them, that's what they're looking for. They've got their data scientist teams that don't want their workloads mixed in with the long-running batch workloads. They say, listen, I'm on deadline here. I've got an hour to get these answers. They're working with Impala. They're working with Presto. They iterate, they don't know exactly the pattern they're looking for. So they can't have their jobs take a long time because they're mixed in with these long MapReduce jobs. They need to be able to spin up infrastructure, workload isolation meaning they have their own space, shared context meaning they don't want to be placing calls over to the platform team saying, I need data sets C, D, and E. Could you please send them over? I'm on deadline here. That phrase, I think, captures so nicely what customers are really looking to do with their analytics infrastructure. Analytics tools, they'll still do their thing, but the infrastructure underneath analytics delivering this new type of agility is giving that multi-tenant workload isolation with shared data context.
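The shared data context pattern described above usually boils down, in practice, to pointing each ephemeral analytics cluster at the same S3-compatible endpoint. As a hedged illustration (the endpoint, port, and credential values here are hypothetical placeholders, not anything from the interview), a Spark cluster can source data from Ceph's S3-compatible gateway through the Hadoop s3a connector:

```properties
# spark-defaults.conf (illustrative values only)
# Point the s3a connector at a Ceph RADOS Gateway instead of AWS S3.
spark.hadoop.fs.s3a.endpoint           http://ceph-rgw.example.com:7480
spark.hadoop.fs.s3a.path.style.access  true
spark.hadoop.fs.s3a.access.key         REPLACE_WITH_ACCESS_KEY
spark.hadoop.fs.s3a.secret.key         REPLACE_WITH_SECRET_KEY
```

Every short-lived cluster then reads the same `s3a://bucket/path` data sets, which is what lets a team shut a cluster down and spin it up again without any copy scripts.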
>> You know what's funny is we were talking at the kickoff. We were looking back nine years. We've been at this event for nine years now. We made the prediction there would be no Red Hat of big data. John, years ago, said, unless it's Red Hat. You guys got dragged into this by your customers, really, is how it came about. >> Customers and partners, of course, with your recent guest from Hortonworks, the announcement that Red Hat, Hortonworks, and IBM had on Monday of this week. Dialing up the agility even further, okay, OpenStack is great for agility, private cloud, utility-based computing and storage with OpenStack and Ceph, great. OpenShift dials up that agility another notch. Of course, we heard from the CEO of Hortonworks how much they love the agility that a Kubernetes-based substrate provides their analytics customers. >> That's essentially how you're creating that sort of same-same experience between on-prem and multi-cloud, is that right? >> Yeah, OpenShift is deployed pervasively on AWS, on-premises, on Azure, on GCE. >> It's a multi-cloud world, we see that for sure. Again, the validation was at VMworld. AWS CEO Andy Jassy announced RDS, which is their product, on VMware on-premises, which they've never done. Amazon's never done any product on-premises. We were speculating it would be a hardware device. We missed that one, but it's software. But this is the validation: seamless cloud operations, on-premises and in the cloud, really is what people want. They want one standard operating model and they want to abstract away the infrastructure, as you were saying, as the big trend. The question that we have is, okay, go to the next level. From a developer standpoint, what is this modern developer using for tools in the infrastructure? How can they get that agility, spinning up isolated, multi-tenant infrastructure all the time? This is the demand we're seeing, that's an evolution.
Question for Red Hat is, how does that change your partnership strategy? Because you mentioned Rob Bearden. They've been hardcore enterprise and you guys are hardcore enterprise. You kind of know the little things that customers want that might not be obvious to people: compliance, certification, a decade of support. How is Red Hat's partnership model changing with this changing landscape, if you will? You mentioned the IBM and Hortonworks release this week, but in general, how does the partnership strategy look for you? >> The more it changes, the more it looks the same. When you go back 20 years ago, what Red Hat has always stood for is any application on any infrastructure. But back in the day it was we had n thousand applications that were certified on Red Hat Linux and we ran on anybody's server. >> Box. >> Running on a box, exactly. It's a similar play, just in 2018, in the world of hybrid, multi-cloud architectures. >> Well, you guys have done some serious heavy lifting. Don't hate me for saying this, but you're kind of like the mules of the industry. You do a lot of stuff that nobody either wants to do or knows how to do, and it's really paid off. You just look at the ascendancy of the company, it's been amazing. >> Well, multi-cloud is hard. Look at what it takes to do multi-cloud in DevOps. It's not easy, and a lot of pretenders will fall out of the way; you guys have done well. What's next for you guys? What's on the horizon? What's happening for you guys these next couple of months for Red Hat and technology? Any new announcements coming? What's the vision, what's happening? >> One of the announcements that you saw last week was Red Hat, Cloudera, and Eurotech. Analytics in the data center is great. Increasingly, the world's businesses run on data-driven decisions. That's great, but analytics at the edge for more realtime industrial automation, et cetera.
Per the announcements we did with Cloudera and Eurotech, we haven't even talked about Red Hat's middleware platforms, such as AMQ Streams, now based on Kafka, a Kafka distribution, and Fuse, an integration platform, effectively bringing Red Hat technology to the edge of analytics so that you have the ability to do some processing in realtime before calling all the way back to the data center. That's an area that you'll also see: pushing some analytics to the edge through our partnerships such as those announced with Cloudera and Eurotech. >> You guys got the Red Hat Summit coming up next year. theCUBE will be there, as usual. It's great to cover Red Hat. Thanks for coming on theCUBE, Brent. Appreciate it, thanks for spending the time. We're here in New York City live. I'm John Furrier with Dave Vellante, stay with us. All day coverage today and tomorrow in New York City. We'll be right back. (upbeat music)

Published Date: Sep 12, 2018



Mark Little & Mike Piech, Red Hat | Red Hat Summit 2018


 

>> Announcer: From San Francisco, it's theCUBE. Covering Red Hat Summit 2018, brought to you by Red Hat. >> Hello everyone and welcome back to theCUBE's exclusive coverage of Red Hat Summit 2018, live in San Francisco, California at Moscone West. I'm John Furrier, your cohost of theCUBE, with John Troyer, co-founder of the Tech Reckoning advisory and community development firm. Our next two guests are Mike Piech, Vice President and General Manager of middleware at Red Hat, and Mark Little, Vice President of Software Engineering for middleware at Red Hat. This is the stack wars right here. Guys, thanks for coming back, good to see you guys again. >> Great to see you too. >> So we love middleware, because Dave Vellante and I and Stu always talk about how the real value is going to be created in abstraction layers. You're seeing examples of that all over the place: Kubernetes, containers, multi-cloud conversations. Obviously you say glue, I say middleware, but you know it's where the action is. So I got to ask you, super cool that you guys have been leading in there, but the new stuff's happening. So let's just go review last year, or was it this year? What's different this year, new things happening within the company? We see CoreOS in there, you guys got OpenShift humming along beautifully. What's new in the middleware group? >> There's a few things. I'll take one, and maybe Mike can think of another while I'm speaking, but when we were here this time last year we were talking about functions as a service, or serverless, and we had a project of our own called Funktion, with a K. Between then and now the developer affinity around functions as a service has just grown. Lots of people are now using it and starting to use it in production. We did a review of what we were doing back then and looked around at other efforts that were in the market space, and we decided actually we wanted to get involved with a large community of developers and try and move that in a direction that was pretty beneficial for everybody, but clearly for ourselves. And we've decided, and we announced this publicly last year, that we're now involved with a project called Apache OpenWhisk instead of Funktion. OpenWhisk is a project that IBM originally kicked off. We got involved; it was tied very closely to Cloud Foundry, so one of the first things that we've been doing is making it more Kubernetes native and allowing it to run on OpenShift. In fact we're making some announcements this week around our functions as a service based on Apache OpenWhisk. But that's probably one of the bigger things that's changed in the last 12 months.
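To make the functions-as-a-service idea concrete, a minimal OpenWhisk action is just a function that takes a dictionary of invocation parameters and returns a dictionary. This sketch is illustrative only; the greeting logic is invented for the example, not anything from the interview:

```python
# A minimal Apache OpenWhisk action, sketched in Python.
# OpenWhisk invokes main() with the invocation parameters as a dict
# and serializes the returned dict to JSON as the activation result.
def main(params):
    name = params.get("name", "world")
    return {"greeting": "Hello, " + name + "!"}
```

Deployed with the `wsk` CLI (for example, `wsk action create greet greet.py`), the platform spins instances up on demand and back down to zero, which is the "running on demand, going up and down" behavior the guests describe.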
We did a review of what we were doing back then and looked around at other efforts that were in the market space and we decided actually we wanted to get involved with a large community of developers and try and move that in a direction that was pretty beneficial for everybody but clearly for ourselves. And we've decided, and we announced this publicly last year but we're now involved with a project called Apache OpenWhisk instead of Funktion. And OpenWhisk is a project that IBM originally kicked off. We got involved, it was tied very closely to cloud foundering so one of the first things that we've been doing is making it more Kubernetes native and allowing it to run on OpenShift. In fact we're making some announcements this week around our functions are service based on Apache OpenWhisk. But that's probably one of the bigger things that's changed in the last 12 months. >> I would just add to that that across the rest of the middleware portfolio which is as you know, a wide range of different technologies, different products, in our integration area we continue to push ahead with containerizing, putting the integration technologies in the containers, making it easier to basically connect the different components of applications and different applications to each other together through different integration paradigms whether it's messaging or more of a bus style. So with our Jboss Fuse and our AMQ we've made great progress in continuing to refine how those are invoked and consumed in the Openshift environment. Forthcoming very shortly, literally in the next week or two is our integration platform as a service based on the Fuse and AMQ technologies. In addition we've continued to charge ahead with our API management solution based on the technology we acquired from Threescale a couple of years ago. So that is coming along nicely, being very well adopted by our customers. 
Then further up the stack, on the process automation front, so some of the business process management types of technologies, we've continued to push ahead with containerizing, and being higher up the stack and a bit bigger in scale, that technology was a little bit more complex to really set up for the containerized world, but we've got our Process Automation 7.0 release coming out in the next few weeks. That includes some exciting new technology around case management, so really bringing all of those traditional middleware capabilities forward into the cloud native, containerized environment has been, I would say, the most significant focus of our efforts over the last year. >> Go ahead. >> Can you contextualize some of that a little bit for us? OpenShift is obviously a big topic of conversation here, the new thing that everyone's looking at, and Kubernetes, but these service layers, these layers it takes to build an app, are still necessary. JBoss, a piece of this stack, is 17, 18 years old, right? So can you contextualize it a little bit for people thinking about, okay, we've got OpenStack on the bottom, we've got OpenShift, where does the middleware and the business process fit, and how has that had to be modernized? And how are people, the Java developers, still fitting into the equation? >> Mark: So a lot of that contextualization, if we go back about four or five years, we announced an initiative called xPaaS, which was to essentially take the rich middleware suite of products and capabilities we had and decompose them into independently consumable services, kind of like what you see when you look at AWS. They've got the simple queuing service, simple messaging service.
We have those capabilities, but in the past they were bundled together in an app server, so we worked to pull them apart and allow people to use them independently, so if you wanted transactions, or you wanted security, you didn't have to consume the whole app server; you actually had these as independent services. So that was xPaaS. We've continued on that road for the past few years, and a lot of those services are now available as part and parcel of OpenShift. To get to the developer side of things, we then put language veneers on top of those, because we're a Java company, well at least middleware is, but there's a lot more than Java out there. There's a lot of people who like to use Perl or PHP or JavaScript or Go, so we can provide language-specific clients for them to interact with. At the end of the day, your JavaScript developer who's using bulletproof, high-performing messaging doesn't need to know that most of it is implemented in Java. It's just a completely opaque box to them in a way. >> John F: So this is the trend of microservices, this granularity concept of decomposition; the things that you guys are doing line up with what people want, to work with services directly. >> Absolutely right, to give developers the entire spectrum of granularity. So they can basically architect at a granularity that's appropriate for the given part of the job they're working on; it's not a one-size-fits-all proposition. It's not like throw all the monoliths out and decompose every last workload into its finest-grained possible pieces. There's a time and a place for ultra-fine granularity and there's also a time and a place to group things together, and with the way that we're providing our runtimes and the reference architectures and the general design paradigm that we're sort of curating and recommending for our customers, it really is all about not just the right tool for the job but the right granularity for the job.
>> It's really choice too, I mean people can choose, and then based on their architecture they can manage it the way they want from a design standpoint. Alright, I got to get your guys' opinion on something. Certainly we had a great week in Copenhagen last week, in Denmark, around KubeCon, the Kubernetes conference, CloudNativeCon, whatever it's called, they're called two things. There was a rallying cry around Kubernetes, and really the community felt like that Linux moment or that TCP/IP moment where people talk about standards but, like, when will we just do something? We got to get behind it and then differentiate and provide all kinds of coolness around it. Kubernetes as a core de facto standard is opening up all kinds of new creative license for developers, and it's also bringing accelerated growth. Istio's right around the corner, Kubeflow, all the cool stuff on how software's being built. >> Right. >> So very cool rallying cry. What is the rallying cry in middleware, in your world? Is there a similar impact going on and what is that? >> Yeah. >> Because you guys are certainly affected by this, this is how software will be built. It's going to be orchestrated, composed, granularity options, all kinds of microservices, what's the rallying cry in the middleware? >> So I think the rallying cry, two years ago at Summit we announced something called MicroProfile with IBM, with Tomitribe, another app server vendor, Payara, and a few quite large Java user groups, to try and do something innovative and microservices-specific with Enterprise Java. It was incredibly successful, but the big elephant in the room who wasn't involved in that was Oracle, who at the time was still controlling Java EE, and a lot of what we do is dependent on Java EE; a lot of what other vendors who don't necessarily talk about it do is also dependent on Java EE to one degree or another. Even Pivotal with Spring Boot requires a lot of core services like messaging and transactions that are defined in Java EE.
So two years further forward, where we are today, we've been working with IBM and Oracle and others, and we've actually moved, or are in the process of moving, all of Java EE away from the old process, away from a single vendor's control, into the Eclipse Foundation, and although that's going to take us a little while longer to do, we've been on that path for about four or five months. The amount of buzz and interest in the community, and from companies big and small who would never have got involved in Java EE in the past, is immense. We're seeing new people get involved with the Eclipse Foundation, and new companies get involved with the Eclipse Foundation, on a daily basis so that they can get in there and start to innovate in Enterprise Java in a much more agile and interesting way than they could have done in the past. I think that's kind of our rallying call, because like I said we're getting lots of vendors; Pivotal's involved, Fujitsu. >> John F: And the impact of this is going to be what? >> A lot more innovation, a lot quicker innovation, and it's not going to be at the slow speed of standards, it's going to be at the fast, upstream, open source innovative speed that we see in the likes of Kubernetes. >> And Eclipse has got a good reputation as well. >> Yeah, the other significant thing here, in addition to the faster innovation, is it's a way forward for all of that existing Java expertise; it's a way for some of the patterns and some of the knowledge that they have already to be applied in this new world of cloud native. So you're not throwing all that out and having to essentially retrain double-digit millions of developers around the world. >> John F: It's an instant developer base actually, and plus Java's a great language, it's the bulldozer of languages, it can move a lot, it does a lot of heavy lifting. >> Yep. >> And there's a lot of developers out there.
Okay, final question, I know you guys got to go. Thanks for spending the time on theCUBE, really appreciate it; certainly very relevant, middleware is key to all the action. Lot of glue going on in those layers. What's going on at the show here for you guys? What's hot, what should people pay attention to? What should they look for? >> Mark: I'll give my take, what's hot is any talk to do with middleware. >> (laughs) Biased. >> But kind of seriously, we do have a lot of good stuff going on with messaging and Kafka. Kafka's really hot at the moment. We've just released our own project, which is eventually going to become a product, called Strimzi, integrated with OpenShift so it's cloud native from the get-go; it's available now. We're integrating that with OpenWhisk, which we talked about earlier, and also with our own reactive async platform called Vert.x, so there's a number of sessions on that, and if I get a chance I'm hoping to sit in on one. >> John F: So real quick though, I mean streaming is important because you talk about granularity; people are going to start streaming services, with service meshes right around the corner, the notion of streaming asynchronously is going to be a huge deal. >> Absolutely, absolutely. >> Mark: And tapping into that stream at any point in time, and then pulling the plug, and then doing the work based on that. >> Also real quick, Kubernetes, obviously the momentum is phenomenal in cloud native, but becoming a first-class citizen in the enterprise, still some work to do. Thoughts on that real quick? Would you say Kubernetes-native is coming faster? Will it ever be? Certainly I think it will be, but. >> I think this is the year of Kubernetes and of enterprise Kubernetes. >> Mike: I mean you just look at the phenomenal growth of OpenShift, and that in a way speaks directly to this point. >> Mike, what's hot, what's hot? What are you doing at the show, what should we look at?
I'd add to that, I certainly would echo the points Mark made, and in addition to that I would take a look at any session here on API management. Again, within middleware the 3scale technology we acquired is still going gangbusters; the customers are loving that, finding it extremely helpful as they start to navigate the complexity of doing essentially distributed computing using containers and microservices. Getting more disciplined about API management is of huge relevance in that world, so that would be the next thing I'd add. >> Congratulations guys, finally the operating system called the cloud is taking over the world. It's basically a distributed computer all connected together, it sounds like. >> All that stuff we learned in the eighties, right? (laughs) >> It's a systems world, the middleware is changing the game, modern software construction of applications all being done in a new way, looking at orchestration, serverless, service meshes all happening in real time. Guys, congratulations on all the work at Red Hat. Be keeping it in the open, Java EE coming around the corner as well. It's theCUBE bringing it out in the open here in San Francisco, I'm John Furrier with John Troyer, we'll be back with more live coverage after this short break.
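For readers curious what the Strimzi project Mark mentions looks like in practice, it manages Kafka on Kubernetes and OpenShift through custom resources: you declare a cluster and an operator reconciles it. This is a hedged, abbreviated sketch; the API version and field names vary by Strimzi release, and the values here are placeholders:

```yaml
# Abbreviated Strimzi "Kafka" custom resource (illustrative only).
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
```

Applied with `kubectl apply` (or `oc apply` on OpenShift), the Strimzi operator creates and maintains the brokers, which is what makes Kafka "cloud native from the get-go" in Mark's phrasing.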

Published Date: May 8, 2018

