
Search Results for KubeFlow:

Abhinav Joshi & Tushar Katarki, Red Hat | KubeCon + CloudNativeCon Europe 2020 – Virtual


 

>> Announcer: From around the globe, it's theCUBE with coverage of KubeCon + CloudNativeCon Europe 2020 Virtual brought to you by Red Hat, the Cloud Native Computing Foundation and Ecosystem partners. >> Welcome back I'm Stu Miniman, this is theCUBE's coverage of KubeCon + CloudNativeCon Europe 2020, the virtual event. Of course, when we talk about Cloud Native we talk about Kubernetes there's a lot that's happening to modernize the infrastructure but a very important thing that we're going to talk about today is also what's happening up the stack, what sits on top of it and some of the new use cases and applications that are enabled by all of this modern environment and for that we're going to talk about artificial intelligence and machine learning or AI and ML as we tend to talk in the industry, so happy to welcome to the program. We have two first time guests joining us from Red Hat. First of all, we have Abhinav Joshi and Tushar Katarki they are both senior managers, part of the OpenShift group. Abhinav is in the product marketing and Tushar is in product management. Abhinav and Tushar thank you so much for joining us. >> Thanks a lot, Stu, we're glad to be here. >> Thanks Stu and glad to be here at KubeCon. >> All right, so Abhinav I mentioned in the intro here, modernization of the infrastructure is awesome but really it's an enabler. We know... I'm an infrastructure person the whole reason we have infrastructure is to be able to drive those applications, interact with my data and the like and of course, AI and ML are exciting a lot going on there but can also be challenging. So, Abhinav if I could start with you bring us inside your customers that you're talking to, what are the challenges, the opportunities? What are they seeing in this space? Maybe what's been holding them back from really unlocking the value that is expected? >> Yup, that's a very good question to kick off the conversation. So what we are seeing as an organization they typically face a lot of challenges when they're trying to build an AI/ML environment, right? And the first one is like a talent shortage. There is a limited amount of the AI, ML expertise in the market and especially the data scientists that are responsible for building out the machine learning and the deep learning models. So yeah, it's hard to find them and to be able to retain them and also other talents like a data engineer or app DevOps folks as well and the lack of talent can actually stall the project. And the second key challenge that we see is the lack of the readily usable data. So the businesses collect a lot of data but they must find the right data and make it ready for the data scientists to be able to build out, to be able to test and train the machine learning models. If you don't have the right kind of data to the predictions that your model is going to do in the real world is only going to be so good. So that becomes a challenge as well, to be able to find and be able to wrangle the right kind of data. And the third key challenge that we see is the lack of the rapid availability of the compute infrastructure, the data and machine learning, and the app dev tools for the various personas like a data scientist or data engineer, the software developers and so on that can also slow down the project, right? Because if all your teams are waiting on the infrastructure and the tooling of their choice to be provisioned on a recurring basis and they don't get it in a timely manner, it can stall the projects. 
And then the next one is the lack of collaboration. So you have all these kinds of teams that are involved in the AI project, and they have to collaborate with each other because the work one of the teams does has a dependency on a different team. Like, say for example, the data scientists are responsible for building the machine learning models and then what they have to do is they have to work with the app dev teams to make sure the models get integrated as part of the app dev processes and ultimately rolled out into production. So if all these teams are operating in, say, silos and there is a lack of collaboration between the teams, this can stall the projects as well. And finally, what we see is the data scientists, they typically start the machine learning modeling on their individual PCs or laptops and they don't focus on the operational aspects of the solution. So what this means is when the IT teams have to roll all this out into a production kind of deployment, they get challenged to take all the work that has been done by the individuals and then be able to make sense out of it, be able to make sure that it can be seamlessly brought up in a production environment in a consistent way, be it on-premises, be it in the cloud or be it, say, at the edge. So these are some of the key challenges that we see that the organizations are facing as they try to take the AI projects from pilot to production. >> Well, some of those things seem like repetition of what we've had in the past. Obviously silos have been the bane of IT moving forward and of course, for many years we've been talking about that gap between developers and what's happening on the operations side. So Tushar, help us connect the dots, containers, Kubernetes, the whole DevOps movement. How is this setting us up to actually be successful for solutions like AI and ML? >> Sure, Stu. I mean, in fact you said it right: in the world of software, in the world of microservices, in the world of app modernization, in the world of DevOps in the past 10, 15 years, we have seen this evolution, revolution happen with containers and Kubernetes driving more DevOps behavior, driving more agile behavior, so this, in fact, is what we are trying to say here can ease the path to AI/ML also. So the various containers, Kubernetes, DevOps and OpenShift for software development are directly applicable to AI projects, to make them more agile, to get them into production, to make them more valuable to the organization so that they can realize the full potential of AI. We already touched upon a few personas so it's useful to think about who the users are, who the personas are. Abhinav and I talked about data scientists; these are the people who obviously do the machine learning itself, do the modeling. Then there are data engineers who do the plumbing, who provide the essential data. Data is so essential to machine learning and deep learning, and so there are data engineers. Then there are app developers who in some ways will then use the output of what the data scientists have produced in terms of models and then incorporate them into services and of course, none of these things are purely cast in stone; there's a lot of overlap. You could find that data scientists are app developers as well, and you'll see some app developers being data scientists or, later, data engineers.
So it's a continuum rather than strict boundaries, but regardless, what all of these personas, groups of people or experts, need is self-service to their preferred tools and compute and storage resources to be productive, and then let's not forget the IT, engineering and operations teams that need to make all this happen in an easy, reliable, available manner and something that is really safe and secure. So containers help you, they help you quickly and easily deploy a broad set of machine learning tools, data tools across the cloud, the hybrid cloud from data center to public cloud to the edge in a very consistent way. Teams can therefore iteratively modify, change shared container images, machine learning models with (indistinct) and track changes. And this could be applicable to both containers as well as to the data, by the way, and be transparent, and transparency helps in collaboration but also it could help with the regulatory reasons later on in the process. And then with containers, because of the inherent process isolation, resource control and protection from threats, they can also be very secure. Now, Kubernetes takes it to the next level: first of all, it forms a cluster of all your compute and data resources, and it helps you to run your containerized tools and whatever you develop on them in a consistent way with access to these shared compute and centralized compute and storage and networking resources from the data center, the edge or the public cloud. They provide things like resource management, workload scheduling, multi-tenancy controls so that you can be proper neighbors if you will, and quota enforcement, right? Now that's Kubernetes; now if you want to up-level it further, if you want to enhance what Kubernetes offers, then you go into how do you write applications? How do you actually make those models into services? And how do you lifecycle them? And that's sort of where the power of Helm and, furthermore, Kubernetes operators really comes into the picture, and while Helm helps in installing some of this, for a complete life cycle experience a Kubernetes operator is the way to go, and they simplify the acceleration and deployment and life cycle management from end-to-end of your entire AI, ML tool chain. So all in all, organizations, therefore you'll see that they need to dial up and define models rapidly just like applications; that's how they get value out of it quickly. There is a lack of collaboration across teams, as Abhinav pointed out earlier; as you noticed, that has happened still in the world of software also. So we're talking about how do you bring those best practices here to AI, ML: DevOps approaches for machine learning operations, or what many analysts and others have started calling MLOps. So how do you kind of bring DevOps to machine learning, and foster better collaboration between teams, application developers and IT operations, and create this feedback loop so that the time to production and the ability to take more machine learning into production, and ML-powered applications into production, increase significantly. So that's kind of where I wanted to shine the light on what you were referring to earlier, Stu. >> All right, Abhinav of course one of the good things about OpenShift is you have quite a lot of customers that have deployed the solution over the years, bring us inside some of your customers, what are they doing for AI, ML, and help us understand really what differentiates OpenShift in the marketplace for this solution set.
>> Yeah, absolutely that's a very good question as well and we're seeing a lot of traction in terms of all kinds of industries, right? Be it financial services, healthcare, automotive, insurance, oil and gas, manufacturing and so on, for a wide variety of use cases. And what we are seeing is at the end of the day, like, all these deployments are focused on helping improve the customer experience, be able to automate the business processes and then be able to help them increase the revenue, serve their customers better, and also be able to save costs. If you go to openshift.com/ai-ml it's got like a lot of customer stories in there, but today I will touch on three of the customers we have in terms of the different industries. The first one is Royal Bank of Canada. So they are a top global financial institution based out of Canada and they have more than 17 million clients globally. So they recently announced that they built out an AI-powered private cloud platform that was based on OpenShift as well as the NVIDIA DGX AI compute system, and this whole solution is actually helping them to transform the customer banking experience by being able to deliver AI-powered intelligent apps and also at the same time being able to improve the operational efficiency of their organization. And now with this kind of a solution, what they're able to do is they're able to run thousands of simulations and be able to analyze millions of data points in a fraction of the time as compared to the solution that they had before. Yeah, so like a lot of great work going on there, but now the next one is HCA Healthcare. So HCA is one of the leading healthcare providers in the country and they're based out of Nashville, Tennessee. And they have more than 184 hospitals as well as more than 2,000 sites of care in the U.S. as well as in the UK. So what they did was they developed a very innovative machine learning-powered data platform on top of our OpenShift to help save lives. The first use case was to help with the early detection of sepsis, which is a life-threatening condition, and then more recently they've been able to use OpenShift in the same kind of stack to be able to roll out the new applications that are powered by machine learning and deep learning, let's say, to help them fight COVID-19. And recently they did a webinar as well that had all the details on the challenges they had, like how did they go about it? Like the people, process and technology and then what the outcomes are. And we are proud to be a partner in the solution to help with such a noble cause. And the third example I want to share here is the BMW Group and our partner DXC Technology. What they've done is they've actually developed a very high performing data-driven data platform, a development platform based on OpenShift, to be able to analyze the massive amount of data from the test fleet at speed, to help speed up the autonomous driving initiatives. And what they've also done is they've redesigned the connected drive capability that they have on top of OpenShift, and that's actually helping them provide various use cases to help improve the customer experience. With that, the customers are able to leverage a lot of different value-add services directly from within the car, their own cars. And then like last year at the Red Hat Summit they had a keynote as well, and then this year at Summit, they were one of the Innovation Award winners.
And we have a lot more stories but these are the three that I thought are actually compelling that I should talk about here on theCUBE. >> Yeah Abhinav just a quick follow up for you. One of the things of course we're looking at in 2020 is how has the COVID-19 pandemic, people working from home how has that impacted projects? I have to think that AI and ML are one of those projects that take a little bit longer to deploy, is it something that you see are they accelerating it? Are they putting on pause or are new project kicking off? Anything you can share from customers you're hearing right now as to the impact that they're seeing this year? >> Yeah what we are seeing is that the customers are now even more keen to be able to roll out the digital (indistinct) but we see a lot of customers are now on the accelerated timeline to be able to say complete the AI, ML project. So yeah, it's picking up a lot of momentum and we talk to a lot of analyst as well and they are reporting the same thing as well. But there is the interest that is actually like ramping up on the AI, ML projects like across their customer base. So yeah it's the right time to be looking at the innovation services that it can help improve the customer experience in the new virtual world that we live in now about COVID-19. >> All right, Tushar you mentioned that there's a few projects involved and of course we know at this conference there's a very large ecosystem. Red Hat is a strong contributor to many, many open source projects. Give us a little bit of a view as to in the AI, ML space who's involved, which pieces are important and how Red Hat looks at this entire ecosystem? >> Thank you, Stu so as you know technology partnerships and the power of open is really what is driving the technology world these days in any ways and particularly in the AI ecosystem. And that is mainly because one of the machine learning is in a bootstrap in the past 10 years or so and a lot of that emerging technology to take advantage of the emerging data as well as compute power has been built on the kind of the Linux ecosystem with openness and languages like popular languages like Python, et cetera. And so what you... and of course tons of technology based in Java but the point really here is that the ecosystem plays a big role and open plays a big role and that's kind of Red Hat's best cup of tea, if you will. And that really has plays a leadership role in the open ecosystem so if we take your question and kind of put it into two parts, what is the... what we are doing in the community and then what we are doing in terms of partnerships themselves, commercial partnerships, technology partnerships we'll take it one step at a time. In terms of the community itself, if you step back to the three years, we worked with other vendors and users, including Google and NVIDIA and H2O and other Seldon, et cetera, and both startups and big companies to develop this Kubeflow ecosystem. The Kubeflow is upstream community that is focused on developing MLOps as we talked about earlier end-to-end machine learning on top of Kubernetes. 
So Kubeflow right now is at 1.0, that happened a few months ago, and now it's actually at 1.1; you'll see that at KubeCon here. And so that's the Kubeflow community. In addition to that, we are augmenting that with the Open Data Hub community, which is something that extends the capabilities of the Kubeflow community to also add some of the data pipelining stuff and some of the data stuff that I talked about, and forms a reference architecture on how to run some of this on top of OpenShift. So the Open Data Hub community also has a great way of including partners from a technology partnership perspective, and then tie that with something that I mentioned earlier, which is the idea of Kubernetes operators. Now, if you take a step back, as I mentioned earlier, Kubernetes operators help manage the life cycle of the entire application or containerized application, including not only the configuration on day one but also day-two activities like updates and backups, restores, et cetera, whatever the application needs for proper functioning; that's what an "operator" needs to make sure of. So anyways, the Kubernetes operators ecosystem is also flourishing and we have surfaced that with OperatorHub.io, which is a community marketplace if you will; I don't call it a marketplace, it's a community hub, because it's just comprised of community operators. So the Open Data Hub actually can take community operators and can show you how to run that on top of OpenShift and manage the life cycle. Now that's the reference architecture. Now, the other aspect of it really is, as I mentioned earlier, the commercial aspect of it. That is, from a customer point of view, how do I get certified, supported software? And to that extent, what we have is at the top of the... from a user experience point of view, we have certified operators and certified applications from the AI, ML, ISV community in the Red Hat Marketplace. And the Red Hat Marketplace is where it becomes easy for end users to easily deploy these ISVs and manage the complete life cycle as I said. Some of the examples of these kinds of ISVs include startups like H2O, although H2O is kind of well known in certain sectors, PerceptiLabs, Cnvrg, Seldon, Starburst et cetera, and then on the other side, we do have other big giants also in this, which includes partnerships with NVIDIA, Cloudera et cetera that we have announced, including also SAS, I've got to mention. So anyways these provide... create that rich ecosystem for data scientists to take advantage of. At Red Hat Summit back in April, we along with Cloudera, SAS and Anaconda showcased a live demo that shows all these things working together on top of OpenShift with this operator kind of idea that I talked about. So I welcome people to go and take a look; the openshift.com/ai-ml page that Abhinav already referenced should have a link to that, and a simple Google search might find it if you need some of that. But anyways, the other part of it is really our work with the hardware OEMs, right? And so obviously NVIDIA GPUs are hardware, and that acceleration is really important in this world, but we are also working with other OEM partners like HP and Dell to produce this accelerated AI platform, turnkey solutions to run your data, to create this open AI platform for "private cloud" or the data center.
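To make the operator mechanics Tushar describes a little more concrete, here is a minimal sketch of how an operator gets subscribed to on an OpenShift cluster through Operator Lifecycle Manager, the machinery behind OperatorHub.io and the embedded catalogs. The package, channel, and catalog source names below are illustrative assumptions and will differ by cluster and operator version.

```yaml
# Illustrative OLM Subscription: ask the cluster to install, and keep updating,
# an operator (here, an Open Data Hub style operator) from a catalog source.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: opendatahub-operator          # subscription name (assumed)
  namespace: openshift-operators
spec:
  name: opendatahub-operator          # package name in the catalog (assumed)
  channel: stable                     # update channel (assumed)
  source: community-operators         # catalog source (assumed)
  sourceNamespace: openshift-marketplace
```

Once the Subscription exists, the operator takes over both the day-one install and the day-two updates on the chosen channel, which is the life-cycle role described above.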
The other thing obviously is IBM; IBM Cloud Pak for Data is based on OpenShift, has been around for some time and is seeing very good traction. If you think about a very turnkey solution, IBM Cloud Pak is definitely kind of well ahead in that. And then finally, Red Hat is about driving innovation in the open-source community. So, as I said earlier, we are doing the Open Data Hub, which is that reference architecture that showcases a combination of upstream open source projects and all these ISV ecosystems coming together. So I welcome you to take a look at that at opendatahub.io. So I think that would be kind of the sum total of how we are not only doing open and community building but also doing certifications and providing to our customers that assurance that they can run these tools in production with the help of a rich certified ecosystem. >> And the customer is always key to us, so that's the other thing: the goal here is to provide our customers with a choice, right? They can go with open source or they can go with a commercial solution as well. So you want to make sure that they get the best in cloud experience on top of our OpenShift and our broader portfolio as well. >> All right great, great note to end on. Abhinav thank you so much, and Tushar, great to see the maturation in this space, such an important use case. Really appreciate you sharing this with theCUBE and KubeCon community. >> Thank you, Stu. >> Thank you, Stu. >> Okay thank you and thanks a lot and have a great rest of the show. Thanks everyone, stay safe. >> Thank you, and stay with us for a lot more coverage from KubeCon + CloudNativeCon Europe 2020, the virtual edition. I'm Stu Miniman and thank you as always for watching theCUBE. (soft upbeat music plays)

Published Date : Aug 18 2020



Andrey Rybka, Bloomberg | KubeCon + CloudNativeCon NA 2019


 

(upbeat music) >> Announcer: Live from San Diego, California, it's theCUBE covering KubeCon and CloudNativeCon brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. >> Welcome back to KubeCon CloudNativeCon here in San Diego. I'm Stu Miniman and my co-host is Justin Warren. And one of the things we always love to do is really dig in to some of the customer use cases. And joining us to do that, Andrey Rybka, who's the head of Compute Architecture in the CTO Office at Bloomberg. Andrey, thanks so much for joining us. >> Thank you. >> All right, so just to set the stage, last year we had your colleague Steven Bauer come and talk about how your company's been using Kubernetes for a number of years. You're a member of the CNCF as one of those end users there and you're even an award winner. So, congratulations on all the progress. You've been doing it for years, so all the problems, I'm sure, are already solved, so now we just have a big party, right? >> Yes, well I mean certainly we are at the stage where things are quite mature and there's a lot of workloads that are running Kubernetes. We run Kubernetes on-premises. Steven has an excellent data science platform that does machine learning with GPUs and bare metal. We also have a really excellent team that runs basically Platform as a Service, generic Platform as a Service, not GPUs but effectively runs any kind of stateless app or service, and that's been extremely successful and, you know, there's a lot of interest in that. And we also run Kubernetes in Public Cloud. So, a lot of workloads, for like Bloomberg.com, actually are backed now by Kubernetes. >> Yeah, so we want to spend a bunch of time talking about the applications, the data, the services, that you've built some PaaS's there. Yes, so step us back for a second if you would, and give us the, What led to Kubernetes? And as you said, you've got your on-premises environment, you've got Public Cloud, where was that when you started and what's the role of Kubernetes in that today? >> Sure, we started back in 2015, evaluating all kinds of sort of container orchestration platforms. It's very clear that developers love containers for their portability and just the ability to have the same environment that runs kind of on-premises or on your laptop and runs on the actual deployment environment, the same thing, right? So, we looked at Mesos, Marathon, Cloud Foundry, even OpenShift before it was Kubernetes. And we, in no specific order, continuously evaluate all different options and once we make a decision, we recommend to the engineering team and work in partnership with engineers. So all of those awards and everything, actually I want to say, that this is really a kudos to our engineering team. We're just a small part of the puzzle. Now as far as like how we made the Kubernetes selection, it was a bit risky. We started with a pre-alpha version and you know I read the Borg paper, how Google actually did Borg. And when I sort of realized, well, they're trying to do the same thing with Kubernetes, it was very clear, this is kind of, you know, we're going to build on mature experience, right. So, somewhat it was risky but also a safe bet because you know there was some good computer science and engineering behind the product. So we started with the alpha version; the consumer web groups actually were one of the first deployments of their kind of Kubernetes and they presented at the first KubeCon.
It was an excellent talk on how we did Kubernetes and you know we came a long way since then. We've got sort of now, probably about 80 to 100 clusters running and you know, they run full high availability, DR -1. I would say it is one of the most reliable environments that we have, you know. We have frequently, you know infrastructure outages, hypervisors, you know, obviously hardware fails, which is normal, and we rarely see any issues and actually you know no like any major issues whatsoever. So, the things we expected out of Kubernetes, the things like reliability, elastic infrastructure, auto-scaling, the multi-tenancy it all worked out. Higher density of sort of packing the nodes, you know that's another great sort of value add that we expected but now we finally realizing that. >> So, one question I've had from a lot of customers, particularly traditional enterprises who are used to doing things and have a lot of virtual machine infrastructure. They're looking at Kubernetes but they're finding it somewhat opaque, a little bit scary. Talk us through, How did you convince the business that this was the choice that we should make and that we need to change the way that we're developing applications and deploying applications and we want to do this with Kubernetes? How did you convince them that this was going to be okay in the end? >> Yes, yes, that's a really good question. A lot of people were scared and you know they were, is this going to break things or you know is this just a shiny new thing. And there was a lot of education that had to occur. We've shown a lot of POCs now. The way we exposed Kubernetes was not just like raw Kubernetes. We actually wanted to keep it safe, so we sort of stayed away from some, like more alpha type of workloads and moved towards kind of like the more stable things. And so, we exposed it Platform as a Service. So, the developers did not actually get to necessarily like kubectl you know, apply a config and just deploy the app. We actually had a really good sort of offering where we had kind of, almost like Git-flow kind of environment where you have, you know your source control, then you have CICD pipeline and then once it goes through all those check and balances, you deploy your containers. So from that perspective, we actually hid quite a bit of things that made things a bit dangerous or potentially a little bit more complicated. And that's proven to be the right strategy because right now as far as the reliability I would say this is probably one of the most reliable environments that we have. And this is by design, you know. We basically tell the developers, by default you're supposed to run at least two replicas at least two Data Centers by default or two, you know, regions or two availability zones, and you can't change that. There's some people who are asking me like can I just deploy just in one Data Center, I'm like, I'm sorry, no. Like by default its like that. And auto-scaling on so if one Data Center goes and you need DR -1, so if you started with two minimum replicas then it auto-scales to four or whatever that will be set. So, you know, I think we've basically put a prototype of a proof of concept relatively fast. And We've got with the initial Platform as a Service, you know from zero to actual delivery in about three months. A lot of building blocks were there and we just put kind of the pieces of the puzzle together. 
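As a rough sketch of the defaults Andrey describes, the manifests below pin a service to a two-replica minimum, spread the replicas across availability zones, and layer an autoscaler on top. This is written against current Kubernetes APIs rather than the 2019-era ones, and every name, label, image, and limit is illustrative, not Bloomberg's actual configuration.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service               # hypothetical service
spec:
  replicas: 2                         # platform-enforced minimum
  selector:
    matchLabels:
      app: example-service
  template:
    metadata:
      labels:
        app: example-service
    spec:
      topologySpreadConstraints:      # keep replicas in different zones / data centers
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: example-service
      containers:
      - name: app
        image: registry.example.com/example-service:1.0   # hypothetical image
        resources:
          requests:
            cpu: 500m
            memory: 512Mi
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-service
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-service
  minReplicas: 2                      # never drop below the two-replica default
  maxReplicas: 8
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

If one zone or data center is lost, the remaining replica keeps serving and the autoscaler brings the count back up, which is the DR minus one behavior described above.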
>> All right, that does echo a lot of the discussion that was had in the keynote today, which was about looking at making Kubernetes easier to consume, essentially by having all of these sensible defaults like you mentioned. You will have two replicas. It will run in these two different zones. And kind of removing some of that responsibility for those decisions from the developers. >> Andrey: Yes. >> How does that line up with the idea of DevOps, which seems to be partly about making the developers a bit more responsible for their service and how it runs in production? It sounds like you've actually taken a lot of that effort away from them by, we've done all this work for you so you don't have to think about that anymore. >> I mean, a little bit of background, we have about 5,500 engineers. So, expecting everybody to learn DevOps and Kubernetes is not realistic, right? And most developers really want to write applications and services that add business value, right? Nobody wants to really manage networking at the lower level; you know, there's still a lot of complexity in this environment, right? So, you know, as far as DevOps, we've built shared kind of teams that have basically, like, think of like centralized SRE teams that build the core platform components. We have a world class kind of software infrastructure group which builds those types of components, on top of the sort of, the technology infrastructure team that caters to the hardware and the virtualization infrastructure built on OpenStack. So you know, there is very much kind of a lot of common services/shared services teams that build that as a platform for developers and that is how we can scale. Because, you know, it's very hard to do that if every team is just sort of duplicating each one of those things. >> So Andrey, let's talk a little bit about your application portfolio. >> Andrey: Sure. >> Bloomberg must have thousands of applications out there. >> Andrey: Yes, yes. >> From what you were describing, is this only for kind of net new applications? If I want to use it, do I have to build something new, replacing something else? Or can you walk us through kind of what percentage is on this platform today and how is that migration or transition? >> And some is not net new, we actually did port quite a bit of the sort of the classic Bloomberg services that developers expect to the platform. And it's seamless to the developers. So, we've been doing quite a bit of sort of Linux migration, meaning from things like Solaris, AIX, and this platform was built purposefully to help developers to migrate their services. Now, they're not sort of lift and shift type of migrations. You can't just expect the, you know, classic C++ shared memory app to suddenly, like, jump and start being in containers, right? So there are some architectural changes, differences that had to be made. The type of applications that we see, you know, they're just sort of microservices oriented. Bloomberg has been around since 1981 and they've been doing service-oriented architecture since like the early 90s. So, you know, things were already kind of in a services kind of framework and mentality. And before, you know, we had service meshes, Bloomberg had its own kind of paradigm of service meshes. So, all we do is kind of retrofit the same concepts with new frameworks. And what we did is we brought in sort of like a new mentality of open source first.
So, most new systems that we built, we look for, kind of, you know, we look for open source components that can fit in this particular problem set. So the applications that we have right now: we have quite a bit of data services, data transformation pipelines, machine learning, you know, there's quite a bit of the machine learning as far as like the actual learning part of training, and then there is the inference part that runs quite a bit. We have quite a few content services, like, I mentioned Bloomberg.com, and many sort of things that you would normally think of like content delivery services that run on Kubernetes. And I mean, at this point, we certainly try to be a little bit conscious about stateful services, so we don't run as many databases and things like that. Eventually, we will get there once we prove the reliability and resiliency around StatefulSets in Kubernetes. >> Yeah, do you have an internal estimate or goals as to what percentage of your applications are on this platform now and a roadmap going forward? >> I mean, it's hard to say, but going forward, I see the majority of all services migrating to Kubernetes because for us, Kubernetes has become, essentially, a standardized compute fabric. You know, one thing that we've been missing, you know, a lot of open source projects deliver, you know, virtualized infrastructure. But, you know, that's not quite enough, right. You need other sort of concepts to be there and Kubernetes did deliver that for us. And more importantly, it also delivered us kind of a, almost like a multi-cloud strategy, you know, kind of accidentally, because, you know, none of the cloud providers have any standard APIs of any sort, right? Like, so even if you use Terraform, that's not necessarily multi-cloud, it's just like you've got to write HCL for each cloud provider. In Kubernetes, more or less, that becomes kind of a really solved problem. >> So which, what flavor of Kubernetes are you using? Do you leverage any of the services from the Public Cloud on Kubernetes? >> Yeah, I mean, excellent question. So, you know, we want to leverage managed offerings as much as possible because things like patching and the security of, you know, CVEs, and things like that, I want somebody to take care of that for me and harden things out of the box. So, the key to our multi-cloud strategy is use managed offerings, but based on open source software. So if you want to deploy services, deploy them on Kubernetes as much as possible. If you want to use databases, use a managed database, but based on open source software, like Postgres or MySQL. And that makes it portable, right, to an extent; I mean, there's going to be some slight differences, but I do believe that managed is better than if I'm going to go and bootstrap VMs and manage my own control plane and the workers and things like that. >> Yeah, and it is a lot of additional work; I think organizations genuinely did try to roll their own and do everything themselves. There's a lot more understanding since the advent of cloud, essentially, that actually making someone else do this for what is essentially the undifferentiated heavy lifting. If you can get someone else to do that for you, >> Andrey: Absolutely >> it's a much better experience. Which is actually what you've built with the Kubernetes services for your developers. You are becoming that managed service for your app developers. I think a few enterprise organizations have tried to do that a little bit with centralized IT.
They haven't quite got that service mentality there, where I'm the product owner and I need to create something which my developers find valuable to use so that they want to use it. >> This is exactly spot on. When I joined Bloomberg six years ago, one of the things we wanted to do is effectively offer public cloud-like services on-premises, and now we're there. We actually have a lot of managed offerings, whether you want Kafka as a service, queuing as a service, or you know, cache as a service, or even Kubernetes, but we don't necessarily want to expose Kubernetes as a service, we want to expose Platform as a Service. So, you hit the nail on the head, because effectively developers want kind of the same things that they see in the Public Cloud. I want, you know, function as a service, I want Lambda, something like this. Well, that's a type of Platform as a Service. So, you're spot on. >> Yeah, Andrey, last question I have for you. You know, you talked about the maturity of the managed offerings there; something we've seen a lot this year is companies asking, how am I going to manage across, you know, various environments? There we saw, you know, Microsoft with Azure, or VMware with Tanzu, what do you think of that? Is that something that interests you, or anything else in the ecosystem that you still think needs to mature to help your business? >> Sure, sure, I mean, I think that the use cases they're trying to address are definitely near and dear to my heart. Because we are trying to be multi-cloud. And in order to be a truly mature multi-cloud sort of company, we need to have a sort of mature kind of multi-cloud control plane. That has to kind of address the deployment, address the CI/CD pipeline, then it needs to address security, not just day one but day two, logging and monitoring and all of that; you know, if I were just to have three different portals to look at, it is very complicated, you're going to miss things. I want one pane of glass, right. So, what these companies are addressing is extremely important and I see a lot of value in it. Now from my point of view, in general, what we prefer is if it was an open source project that we could contribute to and we could collaborate on; we still want to pay money for the support and whatnot, we don't want to just be free riders, right? But if it's an open source product and we can be part of it, it's not just read-only open source, that is definitely something that I would be very much interested in participating in. And the majority of the developers that we have are very happy to participate in open source. I think you've seen some of our contributors here. We have some people contributing to Kubeflow. There's many other projects; we have quite a bit of Kubernetes projects, like the chaos engineering with PowerfulSeal. If somebody wants to check it out, we've got some really interesting things. >> Andrey, really appreciate you sharing what you and your engineering teams are doing. >> Thank you. >> Thank you for all the contributions back to the community. >> Yep. >> For Justin Warren, I'm Stu Miniman, back with more of our three day wall to wall coverage here at KubeCon CloudNativeCon. Thank you for watching theCube. (dramatic music)

Published Date : Nov 21 2019



Tom Phelan, HPE | KubeCon + CloudNativeCon NA 2019


 

Live from San Diego, California it's theCUBE! covering KubeCon and CloudNativeCon brought to you by Red Hat a CloudNative computing foundation and its ecosystem partners. >> Welcome back, this is theCube's coverage of KubeCon, CloudNativeCon 2019 in San Diego I'm Stu Miniman with my co-host for the week, John Troyer, and happy to welcome to the program, Tom Phelan, who's an HPE Fellow and was the BlueData CTO >> That's correct. >> And is now part of Hewlett-Packard Enterprise. Tom, thanks so much for joining us. >> Thanks, Stu. >> All right, so we talked with a couple of your colleagues earlier this morning. >> Right. >> About the HPE container platform. We're going to dig in a little bit deeper later. >> So, set the table for us as to really the problem statement that HP is going to solve here. >> Sure, so Blue Data which is what technologies we're talking about, we addressed the issues of how to run applications well in containers in the enterprise. Okay, what this involves is how do you handle security how do you handle Day-2 operations of upgrade of the software how do you bring CI and CD actions to all your applications. This is what the HPE container platform is all about. So, the announcement this morning, which went out was HPE is announcing the general availability of the HPE container platform, an enterprise solution that will run not only CloudNative applications, are typically called microservices applications, but also Legacy applications on Kubernetes and it's supported in a hybrid environment. So not only the main public cloud providers, but also on premise. And a little bit of divergence for HPE, HPE is selling this product, licensing this product to work on heterogeneous hardware. So not only HPE hardware, but other competitors' hardware as well. >> It's good, one of the things I've been hearing really over the last year is when we talked about Kubernetes, it resonated, for the most part, with me. I'm an infrastructure guy by background. When I talk in the cloud environment, it's really talking more about the applications. >> Exactly. >> And that really, we know why does infrastructure exist? Infrastructure is just to run my applications, it's about my data, it's about my business processes >> Right. >> And it seems like that is a y'know really where you're attacking with this solution. >> Sure, this solution is a necessary portion of the automated infrastructure for providing solutions as a service. So, um, historically, BlueData has been specializing in artificial intelligence, machine learning, deep learning, big data, that's where our strong suit came from. So we, uh, developed a platform that would containerize those applications like TensorFlow, um, Hadoop, Spark, and the like, make it easy for data scientists to stand up some clusters, and then do the horizontal scalability, separate, compute, and storage so that you can scale your compute independent of your storage capacity. What we're now doing is part of the HPE container platform is taking that same knowledge, expanding it to other applications beyond AI, ML, and DL. >> So what are some of those Day-2 implications then uh what is something that folks run into that then now with an HPE container platform you think will eliminate those problems? >> Sure, it's a great question, so, even though, uh, we're talking about applications that are inherently scalable, so, AI and ML and DL, they are developed so they can be horizontal- horizontally scalable, they're not stateless in the true sense of the word. 
When we say a stateless application, that means that, uh, there is no state in the container itself that matters. So if you destroy the container, reinstate it, there's no loss of continuity. That's a true stateless or CloudNative application. Uh, AI and ML and DL applications tend to have configuration information and state information that's stored in what's known as the Root Storage of the compute node, okay, what's in slash, so you might see, um, per-node configuration information in a configuration file in the /etc directory. Okay, today, if you just take standard off-the-shelf Kubernetes, if you deploy, um, Hadoop for example, or TensorFlow, and you configure that, you lose that state when the container goes down. With the HPE container platform, we are, we have been moving forward with, or driving, an open source project known as KubeDirector. A portion of KubeDirector, of the functionality, is to preserve that, uh, Root Storage so that if a container goes down, we are allowed- we are enabled to bring up another instance of that container and have it have the same Root Storage. So it'll look like just a reboot to the node rather than a reinstall of that node. So that's a huge value when you're talking about these, um, machine learning and deep learning applications that have the state in root. >> All right, so, Tom, how does KubeDirector fit, to compare and contrast it, does it kind of sit beside something like Rook, which was talked about in the keynote, talking about being able to really have that, uh, that kind of universal backplane across all of my clusters >> Right, you're going to have to be >> Is that specific for AI and ML or is this >> I, well, that's a great question, so KubeDirector itself is a Kubernetes operator, okay, uh, and we have implemented that, with the open-source community joining in, so, but what it allows us, KubeDirector is, um, application agnostic, so, you could author a YAML file with some pertinent information about the application that you want to deploy on Kubernetes. You give that YAML file to the KubeDirector operator, it will then deploy the application on your Kubernetes cluster and then manage the Day-2 activities, so this is beyond Helm, or beyond KubeFlow, which are deployment engines. So this also has, well, what happens if I lose my container? How do I bring the services back up, and those services are dependent upon the type of application that's there. That's what KubeDirector does. So, KubeDirector allows a new application to be deployed and managed on Kubernetes without having to write an operator in Go code. Makes it much easier to bring a new application to the platform. >> Gotcha, so Tom, kind of a two-part question, first part, so, uh, you were one of the co-founders of BlueData >> And now with HPE, there's, sometimes I think with technology, some of them are kind of invented in a lab, or in a graduate student's head, others come out of real world experience. And, uh, you're smiling 'cause I think BlueData was really built around, uh, y'know, at least your experience was building these BlueData apps.
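As a rough illustration of the YAML-driven flow Tom describes, a request for a KubeDirector-managed cluster looks something like the sketch below. The API group/version, the app name, and the role fields are approximations based on the upstream KubeDirector examples and may not match any given release.

```yaml
# Hypothetical KubeDirectorCluster: instantiate a registered application type
# ("app") as a set of roles, each with a member count and resource requests.
apiVersion: kubedirector.hpe.com/v1beta1   # assumed group/version
kind: KubeDirectorCluster
metadata:
  name: training-cluster
spec:
  app: spark221e2            # refers to a KubeDirectorApp definition (example name)
  roles:
  - id: controller
    members: 1
    resources:
      requests:
        cpu: "2"
        memory: 4Gi
  - id: worker
    members: 2
    resources:
      requests:
        cpu: "2"
        memory: 8Gi
```

The operator reads this, stands up the members for each role, and is what aims to handle the day-two behavior described above, bringing a lost member back with its root storage intact, without any application-specific Go code.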
So we did that, and we suffered all the slings and arrows of how to make the, um, security of the container, uh, to meet enterprise class standards. How do we automatically integrate with active directory and LDAP, and Kerberos, with a single sign on all those things that enterprises require for their infrastructure, we learned that the hard way through working with, y'know, international banking organizations, financial institutions, investment houses, medical companies, so our, our, all our customers were those high-demand enterprises. Now that we're apart of HP, we're taking all that knowledge that we acquired, bringing it to Kubernetes, exposing it through KubeDirector, where we can, and I agree there will be follow on open-source projects, releasing more of that technology to the open-source community. >> Mhm that was, that was actually part-two of my question is okay, what about, with now with HPE, the apps that are not AI, ML and you nailed it, right, >> Yeah. >> All those enterprise requirements. >> Same problems exist, right, there is secure data, you have secure data in a public cloud, you have it on premise, how do you handle data gravity issues so that you store, you run your compute close to your data where it's necessary you don't want to pay for moving data across the web like that. >> All right, so Tom, platforms are used for lots of different things, >> Yes. >> Bring us inside, what do you feel from your early customers, some of the key use cases that should be highlighted? >> Our key use cases were those customers who were very interested, they had internal developers. So they had a lot of expertise in house, maybe they had medical data scientists, or financial advisors. They wanted to build up sandboxes, so we helped them stand up, cookie-cutter sandboxes within a few moments, they could go ahead and play around with them, if they screwed them up, so what? Right, we tear them down and redo it within moments, they didn't need a lot of DevOps, heavy weight-lifting to reinstall bare-metal servers with these complex stacks of applications. The data scientist that I want to use this software which just came out of the open-source community last week, deployed in a container and I want to mess it up, I want to tighten, y'know, really push the edge on this and so we did that. We developed this sandboxing platform. Then they said, okay, now that you've tested this, I have it in queue A, I've done my CI/CD, I've done my testing, now I want to promote it into production. So we did that, we allowed the customer to deploy and define different quality of service depending on what tier their application was running in. If it was in testing dev, it got the lowest tier. If it was in CI/CD, it got a higher level of resource priority. Once it got promoted to production, it got guaranteed resource priority, the highest solution, so that you could always make sure that the customer who is using the production cluster got the highest level of access to the resources. So we built that out as a solution, KubeDirector now allows us to deploy that same sort of thing with the Kubernetes container orchestrator. >> Tom, you mentioned blue metal, uh, bare-metal, we've talked about VMs, we've been hearing a lot of multicloud stories here, already today, the first day of KubeCon, it seems like that's a reality out in the world, >> Can you talk about where are people putting applications and why? 
>> Well, clearly, uh, the best practices today are to deploy virtual machines and then put containers in virtual machines, and they do that for two very legitimate reasons. One is concern about the security, uh, plane for containers. So if you had a rogue actor, they could break out of the container, and if they're confined within the virtual machine, you can limit the impact of the damage. One very good, uh, reason for virtual machines; also there's a, uh, feeling that it's necessary to maintain, um, the container's state running in a virtual machine, and then be allowed to upgrade the PROM code, or the host software itself. So you want to be able to vMotion a virtual machine from one physical host to another, and then maintain the state of the containers. What KubeDirector brings, and what BlueData and HP are stating, is we believe we can provide both of those functionalities on containers on bare-metal. Okay, and we've spoken a bit today already about how KubeDirector allows the Root File System to be preserved. That is a huge component of why vMotion is used to move the container from one host to another. We believe that we can do that with a reboot. Also, um, the HPE container platform runs all containers at, um, reduced privilege. So you're not, we're not giving root privilege or privileged priority to those containers. So we minimize the attack plane of the software running in the container by running it as an unprivileged user and then tight control of the container capabilities that are configured for a given container. We believe it's just enough priority or just enough functionality which is granted to that container to run the application and nothing more. So we believe that we are limiting the attack plane of that through the, uh, and that's why we believe we can validly state we can run these containers on bare-metal without, without the enterprise having to compromise in areas of security or persistence of the data. >> All right, so Tom, the announcement this week, uh, is the HP container platform available today? >> It will be a- we are announcing it. It's a limited availability to select customers. It'll be generally available in Q1 of 2020. >> All right, and y'know, give us, y'know, we come back to KubeCon, which will actually be in Boston >> Yes. >> Next year in November >> When we're sitting down with you and you say hugely successful >> Right. >> Give us some of those KPIs as to y'know >> Sure. >> What are your teams looking at? >> So, we're going to look at how many new customers, these are not the historic BlueData customers, how many new customers have we convinced that they can run their production workloads on Kubernetes. And we're talking about, I don't care how many POCs we do or how many testing dev things, I want to know about production workloads that are the bread and butter for these enterprises that HP is helping run in the industry. And that will be not only, as we've talked about, CloudNative applications, but also the Legacy, J2EE applications that they're running today on Kubernetes. >> Yeah, I, uh, I don't know if you caught the keynote this morning, but Dan Kohn, y'know, runs the CNCF, uh, was talking about, y'know, a lot of the enterprises, equating them with second graders.
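A minimal, illustrative pod spec along the lines Tom describes, an unprivileged user with tightly controlled Linux capabilities, might look like this; the UID, image, and the single capability added back are assumptions, not HPE's actual defaults.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-workload            # hypothetical workload
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 10001                   # arbitrary non-root UID
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    image: registry.example.com/ml-service:1.0   # hypothetical image
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
        add: ["CHOWN"]                 # add back only what the workload actually needs
```

Dropping everything and adding back the bare minimum is what keeps the attack surface small when the container shares a bare-metal host rather than a dedicated virtual machine.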
Y'know, we need to get over the fact that, y'know, things are going to break and we're worried about making changes. Y'know, the software world that we've been talking about for a number of years, absolutely things will break, but software needs to be a resilient, distributed system. So, y'know, what advice do you give the enterprise out there to be able to dive in and participate? >> It's a great question, we get it all the time. The first thing is identify your most critical use case. Okay, that we can help you with, and don't try to boil the ocean. Let's get the container platform in there, we will show you how you have success with that one application, and then once that's done, you'll build up confidence in the platform, and then we can run the rest of your applications in production. >> Right, well Tom Phelan, thanks so much for the updates. >> Thank you, Stu. >> Congratulations on the launch >> Thank you. >> with the HPE container platform, and we look forward to seeing the results in 2020. >> Well, I hope you invite me back 'cause this was really fun and I'm glad to speak with you today. Thank you. >> All right, for John Troyer, I'm Stu Miniman, still lots more to go here at KubeCon + CloudNativeCon 2019. Thanks for watching theCUBE. (energetic music)
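Tom's "just enough capability" point earlier in the conversation can be made concrete with a small sketch. The snippet below uses plain Kubernetes through the official Python client, not HPE's or KubeDirector's actual implementation, to run a container as a non-root user with every Linux capability dropped except one the workload is assumed to need; the image name, pod name, and the single capability added back are placeholders for illustration.

```python
# A minimal sketch (an illustration, not HPE's or KubeDirector's code) of the
# "just enough capability" idea: run the container as a non-root user and
# drop every Linux capability except the ones the workload genuinely needs.
from kubernetes import client, config

def build_locked_down_pod(name: str, image: str) -> client.V1Pod:
    security = client.V1SecurityContext(
        run_as_non_root=True,              # refuse to start if the image wants root
        allow_privilege_escalation=False,  # no setuid-style escalation inside the container
        capabilities=client.V1Capabilities(
            drop=["ALL"],                  # start from zero capabilities...
            add=["NET_BIND_SERVICE"],      # ...and add back only what's required (assumed here)
        ),
    )
    container = client.V1Container(name=name, image=image, security_context=security)
    spec = client.V1PodSpec(containers=[container], restart_policy="Never")
    return client.V1Pod(metadata=client.V1ObjectMeta(name=name), spec=spec)

if __name__ == "__main__":
    config.load_kube_config()  # assumes a local kubeconfig pointing at a test cluster
    pod = build_locked_down_pod("locked-down-demo", "nginx:1.17")
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

The same general idea extends to the quality-of-service tiers Tom describes, where production pods would additionally carry guaranteed resource requests and limits while test and dev pods would not.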

Published Date : Nov 20 2019


Arijit Mukherji, SignalFx | CUBEConversation, August 2019


 

(groovy music) >> From our studios in the heart of Silicon Valley, Palo Alto, California, this is a CUBE Conversation. >> Everyone, welcome to this special CUBE Conversation here in Palo Alto Studios for theCUBE. I'm John Furrier, host for theCUBE. We're here with special guest, Arijit Mukherji, who's the CTO of SignalFx, hot startup that's now growing very very fast in this cloud native world. Arijit, great to see you. Thanks for coming on. >> Great to see you too again, John. >> So cloud growth is changing the landscape of the enterprise. We're seeing it obviously, it's no real surprise, Cloud 1.0 has happened, public cloud. Cloud 2.0 as we're calling it is changing the game, where you're seeing enterprise cloud really the focus. We're seeing cloud native really move the needle. Kubernetes has kind of created that abstraction, kind of standard, defacto standard, if you will, people getting around. So you're seeing the game changing from how apps are built. >> Right. >> To security and everything in between. So a new set of services, web services at scale has certainly change the game. You guys are in the middle of this with monitoring and observability. And I want you to help us understand the core problem enterprises are having today because they know it's coming. They know it's here. They got investments out there. The cloud has changed the game for the enterprise, what's the problem? >> Yeah, so you're absolutely right, John. So everybody's moving to this sort of the new way of doing things, right. So monoliths are gone. Microservices are in, containers are in. And you're going to have to do that because we know if you don't do that, you're going to get lapped right, by the competition. And so the challenge right now is how do you make that successful? And the challenge there is these new environments are much much more complex as you mentioned. And the question is unless you understand how these systems behave, how will you be able to run them successfully? So the challenge as far as monitoring and observability is concerned is I think it's critical that it be there for it to be able to sort of do this cloud transformation successfully. But it's a far more complex and hard challenge than it used to be. >> We've seen the evolution. Yeah, we've been covering that for 10 years. It's the 10th year of theCUBE. We'll be celebrating that at VMworld this year. You've seen wave one, lift and shift, do some basic stuff, not a lot of heavy lifting. No tinkering with some of the tech in there. Monitoring is great. And then you've got rearchitect. Let's get some cloud native. Let's see some Kubernetes. And then the next path is this complete microservices. This is where everyone's really excited about. >> That's right. >> This is where the complexity is. So I got to ask you, that changes the notion of monitoring and observability. So given that this shift is happening, rearchitecting to full microservices, what is observability in that equation? >> Right, so there's a very interesting difference between I think, monitoring and observability that I think I would like to touch upon here. So you know, back in the days of the monolith, it was what we called classic monitoring. Monitoring is about looking at things, looking for things that you know might happen. So for example, if I know my server might fall down, I will run a probe to make sure that it's up or not, right? 
But when you move to this new world, I mean, have you, if you look at any cloud native environment with all the microservices and containers, for example Amazon's S3 has 120 different microservices powering it behind it. Now, if you were to go and ask an engineer like what is the map or how are the data flow happening in that environment? I guarantee you, no one person can probably do that well. So then, monitoring doesn't work because I don't even know what to look for. So what's important is I be able to gather telemetry, have the information available so that the unknowns, the kind of things that I'm not expecting because it's just too complex or just unanticipated. Like that data will allow us to figure out what went wrong. So observability is about gathering telemetry and information so that we can deal with that complexity, understand problems as they behave because the world is no longer simple anymore. >> So overall, observability is just monitoring in a dynamic environment, what you're saying. 'Cause monitoring used to be simple. You know it's going on. Static routes. >> That's right. >> Set policy, get some alarms. Network management, basic stuff. >> Exactly like Nagios checks and what not, yup. >> Now, you're saying there's unknowns happening, unexpected things going on around the services. What would that be just as an example? >> Yeah, so for example, again with microservices, why are we doing it? Because we want smaller teams to be able to innovate quicker, faster, right. So instead of my monolith, let's say I have whatever, SignalFx has 50 different microservices powering it. Now each of these teams, they are deploying software on their own because the whole idea of Cloud 2.0 is that we are able to move faster. So what that means is individual chunks of my overall service are adapting or changing over time or evolving. And so that's the complexity, like it's actually a changing landscape. Like my map does not stay the same on an ongoing basis. That is fundamentally a big challenge. The other challenge that I would mention too is that how ephemeral things are getting. So all these microservices that are themselves adapting, they're also being deployed in containers and by Kubernetes. Where these containers, they keep popping up and down all the time. Like even on infrastructure on which we are running it's extremely dynamic, right. Containers, Lambdas, sort of serverless is another great example of that. So it's a very shifting sands is what we're standing on, in some sense, right. >> And a lot of times, we cover a lot of real time. And you can't just throw in logs, you got to have that in there. This begs the question, okay, so I get the complexity. I'm a customer or I'm someone who wants to really go down this observability track with you. Why is it important? What's in it for me? >> So in the end, without it, how will you succeed? So it's almost like will a pilot with blinders on, will he be able to fly an aircraft? The answer is no. Similarly, I mean we may want to move to this modern awesome environment, which lets us move fast but unless you have visibility into it, unless you can find when problems are happening, unless you can, when those problems happen, be able to find the root cause and remediate them quickly, you're not really going to be successful. 
And so that's really why observability is important because it allows us to not only sort of run this well but it also allows us to understand the user experience because in the end, we are all service providers, we have users, right. And so understand what the user's experience is like. So that's important. Understand the key business metrics. If you look at a lot of the talk track that's been going around in the circuit around error budgets, and SLIs, and SLOs, which are sort of important things. The whole idea is that we want to measure and monitor what's important to the business, to the user. And that's kind of what observability allows us to get. >> You know gone are the days of a few application servers and a database. >> That's right. >> So on the why is it important, I got to follow up and say from an operations perspective, what is the new reality, okay? Because we know there's going to be a lot of databases out there, and a lot of different applications. You mentioned some of the containerization, dynamic microservices. But what's the impact and what's the importance to the operation side of the equation with observability? >> So what's happening now is again, back in the monolith days, the operators, the IT staff, who were running those infrastructure, they were the ones who would implement monitoring, right. But if you see the way now these environments are structured, these organizations are structured, it's the developers who are building tools. They are the ones who are also running them. And in order for an organization to be able to move fast, they need to give powerful tools to their developers to do their job. And because there is no one person who knows the right way of doing things. So it's really about sort of democratizing that capability. So you will need to give powerful observability tools to the developers, the operators, who are also the new operators, to sort of make with it what they will, in the sense that they are ones who best understand the meaning of the data that's being collected. Because it's all very specific to individual microservices. So that's really a powerful observability platform is one that allows you to easily collect a lot of information, allows you to analyze, visualize it, and sort of treat it in a way to sort of it helps you answer the questions you want to answer. >> So you're saying that okay, ops gets monitoring and observability. But a new persona, user is a developer. >> That is correct. >> And what do they care about? 'Cause they just want it to be abstracted away. They're not really probably wake up and say, "Hey, I can't wait to look at observability." So is it more of a use, so talk about the developer dynamic 'cause this is, that seems like a new trend. >> Yes, it totally is. So things are becoming less about black box testing, and more about sort of observability being an end-to-end process. So let me tell you what I mean. So back in the day, let's say, I implemented, I deployed a monolith. It was a Java server. There were standard ways to check them as a black box. I could run probes, et cetera, to run a health check end point and whatnot, life was great. But now, that's obviously not good enough. Because as I mentioned, because of interactions, because of complexity, a black box testing doesn't even work because like I said, the whole environment is very dynamic. 
So what the pattern now is that as I said, observability is an end-to-end process in the sense that developers care about observability when they're writing the software. It is not an afterthought anymore. So as I'm writing, as I'm developing software, I think about well, when this thing goes out into the wild, how will I monitor it? What are the things that I care about as a developer? Because I understand the system the best. And so you instrument, you build systems for observability is I think a big change that's happening. And once that happens, when you are the one person, who also is able to best read that data. >> So while they're developing, they get these benefits inherently right there on the spot. >> That is correct. >> This is kind of consistent with the live programming trend that's really popular in some languages. Rather than doing all the debugging, post event, coming back to it. >> That's right. >> So making it very efficient seems to be a use case. >> Yes, you are absolutely right. It's one of the things I'll actually talk about a lot actually is you know, observability, what is it for? Is it just for telling me when my production is not working well? The answer is no. Even when I'm developing, I may want to know well, did I have a performance degradation? How do I know that the code that I wrote is good? So I use again the same telemetry that I'm going to use later, even during the development process to make sure that the code that I wrote works well. We do the same thing during deployment. Again, I deploy a version or a canary or a few of them. Are they running well, right? So it is not just about what's happening in production, it's about end-to-end from development and deployment up to production. >> And that's what developers want. They want it in the moment, right when they're coding. >> That's right. >> Taken care of. >> It's instant gratification, like everybody else wants. >> And more efficiency. They know it's going to break, they know the consequences, they can deal with that. >> Yes. >> This is awesome, so the next question I have for you is how do you implement observability? >> That's a great question. So you can think of it as sort of in two ways. One is the means through which you get it. So you get observability through metrics, through logs, through traces, through probes, et cetera. That's one way. Another one is I think I alluded to a little bit earlier is what are your goals? Because everybody's goals are different, right? And if you think about in that sense, then the sort of the purpose of observability are a few. A, is it allowing your teams to move faster? So I spoke about some of the process just about earlier. Are they able to deploy code with confidence, faster? When problems happen, how quickly are we able to then triage them? So the whole incident review process. That's kind of important, observability better help me with that. The user experience is also something very important. As I mentioned, observability is going sort of more up the stack, so to speak. And so being able to understand what the user experience is, is very important. Similarly, understanding from the business point of view, what does the business care about? For example, when I had that outage, how much loss did I have? How many eyeballs did I miss on my side? So I think one way to think about it, you need to have good processes, good tools. 
At the same time, you need to be clear about what your goals are, and make sure that sort of whatever you're implementing, sort of furthers those to some extent. >> I'd like to play a little CUBE game here with you, and walk through the observability myths and reality. >> Sure. >> I'll say the myth, and you can tell me the reality. Myth number one, having observability reduces incidents. >> No, actually it might increase it. Let me put it that way, I'll tell you why. So it's almost like in a human, I may be measuring someone's temperature or pulse rate like every day. Does that make the person less or more prone to health problems? Chances are it's going to be the same. I might actually find things that I was not aware of, right. So in that sense, just having observability does not necessarily change anything about the process. But what it will do though is when a problem does happen, it I have this treasure trove of data, which I can then use to quickly isolate the problem. So what it does is it shrinks the outage time, which is in the end, what's very very important. So while it may not reduce your outages, it will definitely make them better from the end user point of view. >> Second myth, buying a tool means you have observability. Reality? >> No. Having a doctor doesn't mean I am healthy. In the sense that I think a tool is very important. It's a very very important step but the question is how well adopted is the tool? What kind of data am I sending to the tool? How are the users, my engineers, how well versed are they in using it, right? And so there's a lot of other stuff associated with it. So tool selection is very important but I think adoption and making it a success within the organization is also very very important. >> Okay, final myth, observability is free or cheap. >> I wish, so. >> Well you're a for-profit company. >> That's exactly right. No, I think, I really feel that observability is almost like a, it's an associated function. As I mentioned earlier, if I'm going to be successful flying this plane, I need commensurate amount of other services that sort of help me make that successful. So in a way, one way to think about it is it is it scales up as complex and as large as your environment gets and justifiably so. Because there's various other reasons too because in a way, adopting new technologies all the time. So tools are getting just more and more complicated. My requirements are getting more complicated. And then another thing I would add is the quality of my service like the level, the quality of service that I provide, the higher bar I want, I probably am spending more on observability. But it's a justified cost. So I think it's not a fixed cost obviously. It grows with your complexity and the kind of quality that you want to provide. >> Well, it's also, I mean I think the observability challenge with the complexity is there's a hidden cost though if you're not observing the right areas. >> Yes. >> The cost of not having that visibility, as a blind spot, could have business benefits, I mean not benefits, but consequences in a sense of outages, security, I mean there's a lot of different things that you got to have the observation space being enterprise. >> You totally have to do that. It's actually one thing we like to say is that instrument first, ask questions later. Now, coming from a for-profit vendor, it may sound sort of self-serving. But it's kind of not so too in the sense that I mean hindsight is 20-20. 
If I am stuck in a bad situation, I have no telemetry to sort of fall back on. Then where do I go with it? So I think we should be more conservative and sort of try and instrument the things that we think might be applicable, the kind of questions I may want to ask when a rainy day comes. So I think you're absolutely right. So it has to be something. It's a philosophy that developers, engineers, should sort of imbibe and they should then practice it as part of their sort of own workflows. >> I want to get into where company should invest in observability but I want to just throw a wildcard question at you, which is when you look at the big data space, even go back 10 years ago to now, cloud, there's always been they're talking about tool versus platform. And anything that's been data centric tend to be platform like conversations, not a tool. Tool can be like okay, it does a thing, does it really well, a hammer, everything's a nail with the hammer. But there's more dynamic range required 'cause you're talking about observation space, talking about cloud, horizontally scalable, hybrid on-premises. >> That's right. >> So again, it kind of feels like a platform technical challenge. >> You're absolutely right. So I think two factors at play if you ask my opinion. One is if you were interested in monitoring, a tool is perfect, right? Because you kind of know what you want. If a tool does that well, there's more power to it. But if you don't know what you want, if you are basically collecting stuff and you're depending on it as a way to answer questions on the unknown unknowns as Charity Majors like to say, then you do need a platform. Because that platform needs to be sort of inclusive. It must have data of different types, all be able to come into it. It's not really meant for a specific purpose. It's meant to be a generic tool. So we do see this trend in the industry towards more sort of a platform approach to this. Obviously, they will have tool-like capabilities because they're answering sort of particular use cases, et cetera. But the underlying platform, the more powerful it is, I think the better it is in the long run. >> Yeah and the argument there could be if it's an enabling platform, you can create abstraction layers for visualization. >> That's correct. >> Or APIs, other services. >> That's exactly right, yes. >> Okay, so now, I'm interested. I want to get started. How do I invest in good observability? I'm crossing the chasm, I'm going to full microservices. I've done a rearchitect, my team has got cultural buy-in. We're hiring, we're building our own stack, we're going to have on-premise, we'll be in the cloud. In some cases, fully cloud. What do I do, what do I invest? What's my play book? >> Sure, so I think we talked about the first one a little bit. So you have to choose the right tool. And the right tool in my opinion is not the one that does the best job now. When maybe I'm small, I'm not fully there yet. We have to think about what's the right tool or the right platform for when I'm, where for where I do want to go. Now, that may be commercial, that may be open source, that's not the point. But the point is that we need to have a very considered thought about what is it that we're betting the farm on, right. So that's number one. But that's not good enough. So as I mentioned, we need to make sure that there is a usage and understanding of the tool within the organization. 
So a big part of it is just around the practice and the culture of observability within the organization. So for example, good habits like every time we have an incident, you speak about these are the things that we measure, this is how we use our observability tool, here are the dashboards that we depended on. Sort of reinforcing those concepts over and over again so that those who are on board, they are obviously doing it right. Those who are not, they see the value and they start sort of using those good practices. That's kind of very important. The third thing I would say is start moving towards more higher level monitoring, observability. So measure the user experience, measure what's important to the business. Stuff like that are important. The fourth area that's sort of very very key is around sort of the whole incident management process. This is actually a very active topic. A lot of discussion going on out there right now is you know, it's great that I have this great tool, I have all this telemetry. I found there was a problem. But when that incident happened, there are a lot of again good practices. This is part of the whole culture and process of observability is how do we make that process smooth and standardized, and sort of become more efficient at it, right? And let's say you were to do that, the final sort of end goal is as we like to call is a self-running or a self-automated cloud, where do we really need humans in all of this? How can we sort of run remediation in a more and more automated fashion? At least for the stuff that maybe does not require a human intelligence, right? Actually, you'll find that 80, 90% of issues probably don't. If you think hard about it, don't require human. So I think this move towards automation is also another sort of very fantastic trend. I've seen that being very successful in the past. Some of my old companies. And I think that's going to be a trend in later stage. >> Yeah, for known processes, people and process. >> Yup. >> That's where problems come. Bad process or code and people mistakes. You can automate that. Mundane tasks or on differentiated kind of heavy lifting. >> That is exactly right because you know, there was this interesting study done that was commissioned by Stripe, where they found that among the CXOs, they value engineers and developers more than money. Because getting them, because they are such a scarce resource. So if you do get them, you probably don't want them to sort of run and do mundane things. That's not what you hired them for. So you want them to do the work of the business, and the more you can isolate them from sort of the mundane and they have automation come into play, I think it's just better across the board. >> Oh, they're investing more from the CSOs and the CIOs we talked to. >> Right. >> There's more investments in-house now for real development, real software development, real projects. And those top talent, they want to work on the toughest problem. >> That's exactly, that's why we're moving to a SASS future right? Because all the function that are not core to my business, I want to farm off and have somebody else take care of them. >> So you got me sold on observability, I'm a big believer in the observation space. And I certainly think cloud as they're horizontally scalable, elastic resource, and certainly, the Kubernetes trends with service measures and all the stuff going on, Kubeflow, and a bunch of other things. More and more services are going to be very dynamic. >> Yes. 
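Arijit's point about measuring SLIs, SLOs, and error budgets rather than only low-level signals can be illustrated with a little arithmetic. The sketch below uses invented request counts and an assumed 99.9% availability objective; it simply compares observed failures against the failures that objective allows.

```python
# A rough, self-contained sketch of the error-budget arithmetic behind the SLO discussion.
# The SLO target and the request counts are made-up numbers for illustration only.

SLO_TARGET = 0.999          # assumed availability objective: 99.9% of requests succeed

def error_budget_report(total_requests: int, failed_requests: int) -> dict:
    """Compare observed failures against the failures the SLO allows (the error budget)."""
    allowed_failures = total_requests * (1.0 - SLO_TARGET)
    availability = 1.0 - (failed_requests / total_requests)
    budget_consumed = failed_requests / allowed_failures if allowed_failures else float("inf")
    return {
        "availability": round(availability, 5),
        "allowed_failures": round(allowed_failures, 1),
        "observed_failures": failed_requests,
        "budget_consumed": round(budget_consumed, 2),  # 1.0 means the budget is fully spent
    }

if __name__ == "__main__":
    # e.g. 2.5M requests this month, 1,800 of them failed
    print(error_budget_report(total_requests=2_500_000, failed_requests=1_800))
```

When the budget_consumed figure approaches 1.0, teams typically slow releases down; when it is well under 1.0, they have room to move faster, which is the trade-off the error-budget conversation is really about.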
>> So you're going to have a lot of unknown and unusual patterns. >> That's right. >> That's just the way the internet works now. So you had me sold on that. Now, I need to get my team to the next level. They bought into devops. How do I take the temperature of where our IQ is in the life cycle of observability? Because I got to know where I am. Is there a way I can track my maturity or progress? >> Yes, yes, it's a topical question because I go out and I meet a lot of prospects and customers. And it's part of my job. And a lot of times, because sort of we are sort of in the leading edge, they will ask us, they look to us to tell them like are we doing it right? Or how did you guys do it? So just so that sharing of information, how can we get better? So as part of that, we actually, at SignalFx, we actually built a maturity model. It's a way for us to sort of evaluate ourselves across various dimensions to see how well are we doing? Not only, it's not just the score, it's also about how well are we doing? But how can we improve? Like if you wanted to go to the next level of maturity for example, like what are some of the things that we can do? And it spans multiple dimension. It starts with how are we even collecting data, right? How easy is it, in other words, for somebody starting a new service or using a new software to get to the business of observability? That's kind of important. You got the data, how will am I able to visualize it? Because effective visualization is as you can understand, very important. The next part comes with alerting. So well, things are running. I know how they're going. How well can I detect when problems are happening? How soon can I detect when problems are happening? What kind of items can I monitor? Can I monitor the low-level things? Or can I monitor the higher-level constructs? When the problem happen, let's talk about remediation. How quickly can I triage the problem to find out where it was? What kind of tools and slice-and-dice capabilities do I have? That's an important part of it. Let's say I did it. After that comes things like remediation. So I found the problem, how well can I remediate so like we talked about automation. So there is multiple different categories where we sort of, we talked about what, we've seen in the field in terms of what people are doing as well as some of the best practices. >> So you're going to make this tool available to customers? >> Yes, we have, it's actually available on our website. And if you come to our website, you'll be able to sort of run the assessment, as well as sort of see all of it yourself. >> Well, we've been following you guys since you launched. You've got a great management team, great technical chops, we've covered that. And this observability is a real trend as it moves into more complexity as we talked about. Most customers that are getting into this are trying to sift this from the signal, from the noise, and trying to think, decide who is the leader, and who is not. So how would you describe what a leader in enterprise observability looks like from a supplier's standpoint. You guys are one, you want to be the leader. You're the market leader. >> Right. >> What does a leader look like from a customer's standpoint? What are the things that have to be in place? What are the table stakes to be that leader? >> Sure, that's a great question. And yeah, so we did, SignalFx, we did build SignalFx to be a leader in this space, frankly. 
And there's a lot of different aspects that goes behind like what creates a good supplier in this space. One is I think you have to be open and flexible. Like you have to be, it's a platform play. You better be able to collect data from all the systems that are out there. The kind of the quality of the integration is very important. Another big thing we're finding is scale. A lot of these systems might not work when you move to sort of large numbers. And the problem that we are seeing is while I may have a hundred servers, I may be running 10,000 containers in those hundred servers. So now, everybody is a scale player, right. So the question is will your platform really be able to handle the complexity and the load? So that's an important one. Analytics as we mentioned is another very very important capability. I'll like to say that the ability to do analytics is not just good enough. How easy is it to use? Like are you developers and engineers, are they even using it? So the easy and the capability of analytics is important because that's kind of what allows us to measure those KPIs, those SLIs, those business metrics. And so that's kind of important. Slice-and-dice capability like how fast is the tool? Because when that outage happens, I don't want to run a five-minute query to sort of find some suspicion or you know to. And so the question is how quickly will it answer these ad hoc questions for me when the problem happens? So sort of the whole triage process that I talked about. The ability to support automation is one. The ability to, as I mentioned to take in different types of data, traces, metrics, be able to play with logs. All of those are sort of important aspects of it, yeah. >> Final question for you. If a customer says, "I'm going to cross the bridge "to the future with SignalFx." What's some of the head room? What's some of the futures that you would expect the customer to imagine or expect down the road as observability becomes more scalable? I can imagine the metrics are going to be all over the place. >> Yes. >> A lot of unusual patterns. New apps could come in that could be hits. And new data comes in. So as you take them today in observability, what's the next level, to cross that bridge to the future? >> Sure, sure. >> What's the next expectation? >> I think one thing will be to expect the unexpected. Because the world is changing so fast, I think you would probably be running things that you won't expect later. But a few things, I would say yes. So I think the proliferation of metrics and traces is like a big trend, where we see there used to be this dependence on sort of these monoliths and APM that sort of is transforming a little bit. There is this also this concept of using data science and artificial intelligence to come to bear on this space. So that's actually an interesting trend we see, where the idea is that it's hard because it's a complex system. It's hard for humans to define exactly what they want. But if the system can help them, can help identify things, that's actually really fantastic. Another one that we sort of briefly touched upon is automation or self-automated systems, where I think well, the time you're going to see that platforms like ours are going to help you automate much of this in a safe manner, because these are controlled systems, where if you, things can go awry and that's not a good position to be. So these are some exciting areas, where I think you will see some development down the road. 
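One way to picture the end-to-end, developer-driven instrumentation Arijit describes, and the kind of telemetry a platform like this would ingest, is a service that exports its own metrics as it runs. The sketch below uses the open-source prometheus_client library purely as a stand-in, not SignalFx's own SDK, and the metric names, failure rate, and port are invented for illustration.

```python
# A small sketch of "instrumenting as you write the code", using the open-source
# prometheus_client library as a stand-in backend-agnostic exporter.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("checkout_requests_total", "Checkout requests handled", ["outcome"])
LATENCY = Histogram("checkout_latency_seconds", "Checkout handler latency")

@LATENCY.time()                              # records how long each call takes
def handle_checkout() -> None:
    time.sleep(random.uniform(0.01, 0.05))   # stand-in for real work
    if random.random() < 0.02:               # simulate an occasional failure
        REQUESTS.labels(outcome="error").inc()
        raise RuntimeError("payment backend unavailable")
    REQUESTS.labels(outcome="success").inc()

if __name__ == "__main__":
    start_http_server(8000)                  # exposes /metrics for whatever system scrapes it
    while True:
        try:
            handle_checkout()
        except RuntimeError:
            pass
```

The point of the example is not the library choice but the habit: the person writing handle_checkout decides, at development time, which counters and latencies will matter when the service misbehaves in production.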
>> And we've been seeing a lot of conversation around correlation and causation, and the interplay between those as these services are being stood up, torn down, stood up, torn down. You can look at the numbers all day long but you got to know causation, correlation. >> You bet, you bet because I think a lot of times, we naively think about this as a data problem, right? Where I find the kink in the graph, and if I go looking, I'll probably going to find a hundred different things that were sort of also correlated. Some of them may or may not be related to it. So I think a good tool is one that sort of gives you a sense. It sort of creates a boundary around the data set that it needs to look at, that is sort of relevant to your problem, and able to give you clues to causation. That you're exactly right because again, complexity is a hard problem to deal with. And anything that we can do to sort of help you short-circuit some of the pain is awesome. >> And I think you're on the right track with this developer focus because devops has proven that the developers want to code, build apps, and abstract away the complexity. And certainly, it's complex. >> That's right, that's right. It's fairly complex. >> Arijit, thanks for coming on. Arijit Mukherji, the CTO of SignalFx here inside the special CUBE Conversation breaking down the future of observability, where monitoring is going to the next level, certainly with cloud, impact to enterprise cloud. I'm John Furrier, here on theCUBE. Thanks for watching. >> Thank you. (groovy music)
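The closing exchange about correlation and causation can be illustrated with a toy calculation: ranking which candidate metrics moved together with a latency spike. All the series below are synthetic, and as both speakers note, a high correlation is only a clue about where to look, never proof of a cause.

```python
# A toy illustration of the correlation-vs-causation point: rank which metrics
# moved with a latency spike. The data is synthetic and the metric names are invented.
import numpy as np

rng = np.random.default_rng(seed=7)
minutes = 60
latency = np.concatenate([rng.normal(100, 5, 40), rng.normal(180, 10, 20)])  # spike at minute 40

candidate_metrics = {
    "db_connections": np.concatenate([rng.normal(50, 2, 40), rng.normal(95, 5, 20)]),
    "gc_pause_ms": rng.normal(12, 3, minutes),
    "queue_depth": np.concatenate([rng.normal(10, 1, 40), rng.normal(35, 4, 20)]),
}

scores = {
    name: float(np.corrcoef(latency, series)[0, 1])
    for name, series in candidate_metrics.items()
}

for name, score in sorted(scores.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name:>15}: correlation with latency = {score:+.2f}")
```

A tool that "creates a boundary around the data set", as Arijit puts it, is doing a smarter version of this: narrowing the candidates to the services and signals actually related to the failing request path before asking a human to judge cause.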

Published Date : Aug 1 2019


Exclusive Google & Cisco Cloud Announcement | CUBEConversations April 2019


 

(upbeat jazz music) >> Woman: From our studio's, in the heart of Silicon Valley Palo Alto California this is a CUBE conversation. >> John: Hello and welcome to this CUBE conversation here, exclusive coverage of Google Next 2019. I'm John Furrier, host of theCUBE. Big Google Cisco news, we're here with KD who's the vice president of the data center for compute for Cisco and Kip Compton, senior vice president of Cloud Platform and Solutions Group. Guys, welcome to this exclusive CUBE conversation. Thanks for spending the time. >> KD: Great to be here. >> So Google Next, obviously, showing the way that enterprises are now quickly moving to the cloud. Not just moving to the cloud, the cloud is part of the plan for the enterprise. Google Cloud clearly coming out with a whole new set of systems, set of software, set of relationships. Google Anthos is the big story, the platform. You guys have had a relationship previously announced with Google, your role in joint an engineering integrations. Talk about the relationship with Cisco and Google. What's the news? What's the big deal here? >> Kip: Yeah, no we're really excited. I mean as you mentioned, we've been working with Google Cloud since 2017 on hybrid and Multicloud Kubernetes technologies. We're really excited about what we're able to announce today, with Google Cloud, around Google Cloud's new Anthos system. And we're gonna be doing a lot of different integrations that really bring a lot of what we've learned through our joint work with them over the last few years, and we think that the degree of integration across our Data Center Portfolio and also our Networking and Security Portfolios, ultimately give customers one of the most secure and flexible Multicloud and hybrid architectures. >> One of the things we're seeing in the market place, I want to get your reactions to this Kip because I think this speaks to what's going on here at Google Next and the industry, is that the company's that actually get on the Cloud wave truly, not just say they're doing Cloud, but ride the wave of the enterprise Cloud, which is here. Multicloud is big conversation. Hybrids and implementation of that. Cloud is big part of it, the data center certainly isn't going away. Seeing a whole new huge wave. You guys have been big behind this at Cisco. You saw what the results are with Microsoft. Their stock has gone from where it was really low to really high because they were committed to the Cloud. How committed is Cisco to this Cloud Wave, what specifically are you guys bringing to the table for Enterprises? >> Oh we're very committed. We see it as the seminal IT transformation of our time, and clearly on of the most important topics in our discussions with CIO's across our customer base. And what we're seeing is, really not as much enterprises moving to the Cloud as much as enterprises extending or expanding into the Cloud. And their on-prem infrastructures, including our data centers as you mentioned, certainly aren't going away, and their really looking to incorporate Cloud into a complete system that enables them to run their business and their looking for agility and speed to deliver new experiences to their employees and to their customers. So we're really excited about that and we think sorta this Multicloud approaches is absolutely critical and its one of the things that Google Cloud and Cisco are aligned on. >> I'd like to get this couple talk tracks. 
One is the application area of Multicloud and Hybrid, but first let's unpack the news of what's going on with Cisco and Google. Obviously Anthos is the new system, essentially it's just the Cloud platform, but that's what they're calling it, Google's Anthos. How is Cisco integrating into this? 'Cause you guys had great integration points before. Containers was a big bet that you guys had made. >> That's right. >> You certainly have, under the covers we learned at Cisco Live in Barcelona around what's going on with HyperFlex and ACI programmability, DevNet developer program going on. So good stuff going on at Cisco. What does this connect in with Google, because ya got containers, you guys have been very full throttle on Kubernetes. Containers, Kubernetes, where does this all fit? How should your customers understand the relationship of how Cisco fits with Google Cloud? What's the integration? >> So let me start with backing it up to the higher level, right? Philosophically we've been talking about Multicloud for a long time. And Google has a very different and unique view of how Cloud should be architected. They've gone around the open-source Kubernetes path. They've embraced Multicloud much more so than we would've expected. That's the underpinning of the relationship. Now you bring to that our deep expertise with serving Enterprise IT and our knowledge of what Enterprise IT really needs to productize some of these innovations that are born elsewhere. You get those two ingredients together and you have a powerful solution that democratizes some of the innovations that's born in the Cloud or born elsewhere. So what we've done here with Anthos, with Cisco's HyperFlex, with our Security Portfolio, our Networking Portfolio, is created a mechanism for Enterprise IT to serve their constituent developers who are wanting to embrace containers, a readily packaged and easily consumable solution that they can deploy really easily. >> One of the things we're hearing is that this, the difference between moving to the Cloud versus expanding to and with the Cloud, and two kinds of areas pop up. Operations, and developers. >> Yep. >> People that operate IT, you mentioned democratizing IT, certainly with automation and scale Cloud's a great win there. But you gotta operate it at that level and at the same time serve developers, so it seems that we're hearing from customers it's complicated, you got open source, you got developers who are pushing code every day, and then you gotta run it over networks which have security challenges that you need to be managing every day. It's a hardcore ops problem meets frictionless development. >> Yeah, so let's talk about both of these pieces. What do developers want? They want the latest framework. They want to embrace some of the new, the latest and greatest libraries out there. They want to get on the cutting edge of the stuff. It's great to experiment with open source, it's really really hard to productize it. That's what we're bringing to the table here. With Anthos delivering a managed service, with Cisco's deep expertise in taking complex technologies, packaging it, creating validated architectures that can work in an enterprise, it takes that complexity out of it. Secondly, when you have an enterprise IT operator, let's talk about the complexities there, right? You've gotta tame this wild wild west of open source. You can't have drops every day. You can't have things changing every day, you need a certain level of predictability.
You need the infrastructure to slot into a management framework that exists in the data center. It needs to slot into a sparing mechanism, to a workflow that exists. On top of that, you've got security and networking on multiple levels, right? You've got physical networking, you've got container networking, you've got software-defined networking, you've got application-level networking. Each layer has complexity around policy and intent that needs to marry across those layers. Well, you could try to stitch it together with products from different vendors but it's gonna be a hot stinking mess pretty soon. Driving consistency across those layers from a vendor who can work in the data center, who can work across the layers of networking, who can work with security, we've got that product set. Between ACI and Stealthwatch Cloud providing the security and networking pieces, our container networking expertise, HyperFlex as a hyperconverged infrastructure appliance that can be delivered to IT, stood up, it's scale-out, it's easy to deploy. That provides the underpinning for running Anthos, and now you've got a smooth simple solution that IT can take to its developers and say, hey, you know what? You wanna do containers? I've got a solution for you. >> And I think one of the things that's great about that is, you know, just as enterprises are extending into the Cloud, so is Cisco. So a lot of the capabilities that KD was just talking about are things that we can deliver for our customers in our data centers but then also in the Cloud. With things like ACI Anywhere. Bringing that ACI policy framework that they have on-prem into the Cloud, and across multiple Clouds, so that they get that consistency. The same with Stealthwatch Cloud. We can give them a common security model across their on-prem workloads and multiple public Cloud workload areas. So, we think it's a great complement to what Google's doing with Anthos and that's one of the reasons that we're partners. >> Kip, I want to get your thoughts on this, because one of the things we've seen over the past years is that Public Cloud was a great greenfield, people, you know, born in the Cloud, no problem. (Kip laughs) And Enterprise would want to put workloads in the Cloud, and kind of eliminate some of the compute pieces, and some benefits that they could put in the cloud have been great. But the data center never went away, and they're a large enterprise. It's never going away. >> Kip: Yep. >> As we're seeing. But it's changing. How should your customers be thinking about the evolution of the data center? Because certainly compute's become a commodity, okay, need some Cloud for compute. Google's got some stuff there, but the network still needs to move packets around. You still got to store stuff, you still need security. There may not be a perimeter, but you still have the nuts and bolts of networking, software, these roles need to be taking place, how should these customers be thinking about Cloud, compute, integration on premises? >> That is a great point, and what we've seen is actually Cloud makes the network even more important, right? So when you have workloads and SaaS services in the Cloud that you rely on for your business, suddenly the reliability and the performance and latency of your network is more important in many ways than it was before, and so that's something many of our customers have seen, it's driving a lot of interest in offerings like SD-WAN from Cisco.
But to your point on the data center side, we're seeing people modernize their data centers, and they're looking to take a lot of the simplicity and agility that they see in a Public Cloud and bring it home, if you will, into the data center. 'Cause there are lots of reasons why data centers aren't going away. And I think that's one of the reasons we're seeing HyperFlex take off so much, is it really simplifies multiple different layers and actually multiple different types of technology, storage, compute, and networking, together into a sort of a very simple solution that gives them that agility, and that's why it's the centerpiece of many of our partnerships with the Public Cloud players, including Anthos. Because it really provides a Cloud-like workload hosting capability on-prem. >> So the news here is that you guys are expanding your relationship with Google. What does it mean? Can you guys summarize the impact to your customers and the industry? >> Well, I think that, I mean the impact for our customers is that you've got two leaders working together, and in fact they're two leaders who believe in open technology and in a Multicloud approach. And we believe that both of those are fundamentally more aligned with our customers and the market than other approaches, and so we're really excited about that and what it means for our customers in the future. You know, and we are expanding the relationship, I mean there's not only what we're doing with Google Cloud's Anthos but also associated advances we've made about expanding our collaboration, actually in the collaboration area, with our Webex capabilities as well as Google's G Suite. So we're really excited about all of this and what we can enable together for our customers. >> You guys have a great opportunity, I always say latency is important, and with low latency, moving stuff around, and that's your wheelhouse. KD, talk about the relationship expanding with Google, what specifically is going on? Let's get down and dirty, is it tighter integration? Is it policy? Is it extending HyperFlex into Google? Google coming in? What's actually happening in the relationship that's expanding? >> So let me describe it in three ways. And we've talked a little bit about this already. The first is, how do we drive Cloud-like simplicity on-prem? So what we've taken is HyperFlex, which is a scale-out appliance, dead simple, easy to manage. We've integrated that with Anthos. Which means that now you've got not only a hyperconverged appliance that you can run workloads on, you can deliver to your developers a Kubernetes ecosystem and toolset that is best in class, comes from Google, it's managed from the Cloud, and it's not only the Kubernetes piece of it, you can deliver the service mesh pieces of it, a lot of the other pieces that come as part of that Anthos relationship. Then we've taken that and said, well, to be enterprise grade, you've gotta make sure the networking is enterprise grade at every single layer, whether that is at the physical layer, container layer, virtual machine layer, at the software-defined networking layer, or in the service layer. We've been working with the teams on both sides, we've been working together to develop that solution and bring it to market for our customers. The third piece of this is to integrate security, right?
So Stealthwatch Cloud was mentioned, we're working with the other pieces of our portfolio to integrate security across these offerings, to make sure those flows are as secure as possible, and if we detect anomalies, we flag them. The second big theme is driving this from the Cloud, right? So between Anthos, which is driving the Kubernetes on-prem from the Cloud, our SD-WAN technology, Cisco's SD-WAN technology, driven from the Cloud, being able to terminate those VPNs at the end location. Whether that be a data center, whether that be an edge location, and being able to do that seamlessly, driven from the Cloud. Intersight, which takes the management of that infrastructure and drives it from the Cloud. Again, a Cisco innovation, first in the industry. All of these marry together with driving this infrastructure from the Cloud, and what did it do for our eventual customers? Well, it gave them now a data center environment that has no boundaries. You've got an on-prem data center that's expanding into the Cloud. You can build an application in one place, deploy it in another, have it communicate with another application in the Cloud, and suddenly you've kinda demolished those boundaries between the data center and the Cloud, between the data center and the edge, and it all becomes a continuum, and no other company other than Cisco can do something like that. >> So if I hear you right, what you're saying is you're bringing the software and security capabilities of Cisco in the data center and around campus et cetera, and SD-WAN, to Google Cloud. So the customer experience would be, a Cisco customer can deploy Google Cloud, and Google Cloud runs best on Cisco. That's kinda, is that kind of the guiding principle here to this deal? Is that you're integrating in a deep meaningful way where it's plug and play? Google Cloud meets Cisco infrastructure? >> Well, we certainly think that with the work that we've done and the integrations that we're doing, that Cisco infrastructure, including software capabilities like Stealthwatch Cloud, will absolutely be the best way for any customer who wants to adopt Google Cloud's Anthos to consume it, and to have really the best experience in terms of some of the integration simplicity that KD talked about, but also frankly security's very important, and being able to bring that consistent security model across Google Cloud, the workloads running there, as well as on-prem through things like Stealthwatch Cloud, we think will be very compelling for our customers, and somewhat unique in the marketplace. >> You know, one of the things that's interesting, TK, the new CEO of Google Cloud, and I had this question for Diane Greene, she had enterprise chops from VMware, Google's been hiring a lot of strong enterprise people lately and you can see the transformation, and we've interviewed a lot of them, I have personally. They're good people, they're smart, and they know what they're doing. But Google still gets dinged for not having those enterprise chops because you just can't have a trajectory of those economies of scale overnight, you can't just buy your way into the enterprise. You got to earn it, there's a certain track record, it seems like Google's getting a lot with you guys here. They're bringing Cloud to the table for sure for your customer base, but you're bringing Cisco's complete customer footprint to Google Cloud. That seems to be a great opportunity for Google. >> Well, I mean I think it's a great opportunity for both of us.
I mean because we're also bringing a fantastic open Multicloud hybrid solution to our customer base. So I think there's a great opportunity for our customers and we really focus on, at the end of the day, our customers and what do we do to make them more successful, and we think that what we're doing with Google will contribute to that. >> KD talk about, real quickly summarize, what's the benefit to the customers? Customers watching the announcements, seeing all the hype and all the buzz on this Google Next, this relationship with Cisco and Google, what's the bottom line for the customer? They're dealing with complexity. What are you guys solving, what's the big takeaway for your customers? >> So it's three things. First of all, we've taken the complexity out of the equation, right? We've taken all the complexity around networking, around security, around bridging to multiple Clouds, packaged it in a scale-out appliance delivered in an enterprise consistent way. And for them, that's what they want. They want that simplicity of deployment of these next gen technologies, and the second thing is as IT serves their customers, the developers in house, they're able to serve those customers much better with these latest generation technologies and frameworks, whether it's Containers, Kubernetes, HDL, some of these pieces that are part of the Anthos solution. They're able to develop that, deliver it back to their internal stakeholders and do it in a way that they control, they feel comfortable with, they feel they're secure, and the networking works and they can stand behind it without having to choose or have doubts on whether they should embrace this or not. At the end of the day, customers want to do the right things to develop fast. To be nimble, to act, and to do the latest and greatest, and we're taking all those hurdles out of the equation. >> It's about developers. >> It is. >> Running software on secure environments for the enterprise. Guys that's awesome news. Google Next obviously gonna be great conversations. While I have you here I wanna get to a couple talk tracks that are important around the themes we're covering around Google Next and certainly challenges and opportunities for enterprises, that is the application area, Multicloud, and Hybrid Cloud. So let's start with applications. You guys are enabling this application revolution, that's the sound bite we hear at your events and certainly that's been something that you guys have been publicly talking about. What does that mean for the marketplace? Because certainly everyone's developing applications now, (Kip laughs) you got mobile apps, you got blockchain apps, we got all kinds of new apps coming out all the time. Software's not going away, it's a renaissance, it's happening. (Kip laughs) How is the application revolution taking shape? How is, and what's Cisco's role in it? >> Sure, I mean our role is to enable that. And that really comes from the fact that we understand that the only reason anyone builds any kind of infrastructure is ultimately to deliver applications and the experiences that applications enable. And so that's why, you know, we pioneered ACI, which is Application Centric Infrastructure. We pioneered that and started focusing on the implications of applications in the infrastructure many years ago. You know, we think about that and the experience that we can deliver at each layer in the infrastructure and KD talked a little bit about how important it is to integrate those layers, but then we also bring tools like AppDynamics.
Which really gives our customers the ability to measure the performance of their applications, understand the experience that they're delivering to customers and then actually understand how each piece of the infrastructure is contributing to and affecting that performance, and that's a great example of something that customers really wanna be able to do across on-prem and multiple Clouds. They really need to understand that entire thing and so I think something like AppD exemplifies our focus on the application. >> It's interesting, storage and compute used to be the bottlenecks, with developers having to stand that up. Cloud solved that problem. >> Kip: That's right. >> Stu Miniman and I always talk about on theCUBE, networking's the bottleneck. Now with ACI, you guys are solving that problem, you're making it much more robust and programmable. >> It is. >> This is a key part for application developers because all that policy work can be now automated away. Is that kinda part of that enablement? >> It sure is. I mean if you look at what's happening to applications, they're becoming more consumerized, they're becoming more connected. Whether it's microservices, it's not just one monolithic application anymore, it's all of these applications talking to each other. And they need to become more secure. You need to know what happens, who can talk to whom. Which part of the application can be accessed from where. To deliver that, when my customers tell me, listen, you deliver the data center, you deliver security, you deliver networking, you deliver multicloud, you've got AppDynamics. Who else can bring this together? And that's what we do. Whether it's ACI that specifies policy and makes that programmable, delivers that programmable framework for networking, whether it's our technologies like Tetration, like AppDynamics as Kip mentioned. All of these integrate together to deliver the end experience that customers want, which is if my application's slow, tell me where, what's happening, and help me deliver this application that is not a monolith anymore, it's all of these bits and pieces that talk to each other. Some of these bits and pieces will reside in the Cloud, a lot of them will be on-prem, some of them will be on the edge. But it all needs to work together-- >> And developers don't care about that, they just care about do I get the resources I need, and you guys kinda take care of all the heavy lifting underneath the covers. >> Yeah and we do that in a modern programmable way. Which is the big change. We do it in an intent-based way. Which means we let the developers describe the intent and we control that via policy. At multiple levels. >> And that's good for the enterprises, they want to invest more in developing, building applications. Okay track number two, talk track number two, Multicloud. It's interesting, during the hype cycle of Hybrid Cloud, which was a while, I think now people realize Hybrid Cloud is an implementation thing and so it's beyond hype now, getting into reality. Multicloud never had a hype cycle because people generally woke up one day and said yeah I got multiple Clouds. I'm using this over here, so it wasn't like a, there was no real socialization around the concept of Multicloud, they got it right away. They can see it, >> Yep. >> They know what they're paying for. So Multicloud has been a big part of your strategy at Cisco and certainly plays well into what's happening at Google Next. What's going on with Multicloud? Why's the relationship with Google important?
And where do you guys see Multicloud going from a Cisco perspective? >> Sure enough, I think you're right. The latest data we saw, or have, is 94 percent of enterprises are using or expect to use multiple Clouds and I think those surveys have probably more than six points of potential error so I think for all intents and purposes it's 100 percent. (John and KD laughing) I've not met a customer who's uni-cloud, if that's a thing. And so you're right, it's an incredibly authentic trend compared with some of these things that seem to be hype. I think what's happening though is the definition of what a Multicloud solution is, is shifting. So I think we started out, as you said, with a realization, oh wait a second, we're all Multicloud, this really is a thing and there's a set of problems to solve. I think you're seeing players get more and more sophisticated in how they solve those problems. And what we're seeing is that solving those problems is not about homogenizing all the Clouds and making them all the same, because one of the reasons people are using multiple Clouds is to get to the unique capabilities that are in each Cloud. So I think early on there were some approaches where they said okay, well, we're gonna put down like a layer across all these Clouds and try to make them all look the same. That doesn't really achieve the point. The point is Google has unique capabilities in Google Cloud, certainly the TensorFlow capabilities are one that people point to. AWS has unique capabilities as well and so does Azure. And so customers wanna access all of that innovation. So that kind of answers your question of why is this relationship important to us, it's for us to meet our customers' needs, we need to have great relationships, partnerships, and integrations with the Clouds that are important to our customers. >> Which is all the Clouds. >> And we know that Google Cloud is important. >> Well not just Google Cloud, which I think in this relationship's got my attention, because you're creating a deep relationship with them on the development side. Providing your expertise on the network and other areas you're experts at, but you also have to work with other Clouds because, >> That's right, we do. >> You're connecting Clouds, that's the-- >> And in fact we do. I mean we have solutions for Hybrid with AWS and Azure already launched in the marketplace. So we work with all of them, and what our role, we see, really is, is to make this simpler for our customers. So there are things like networking and security, application performance management with things like AppDynamics, as well as some aspects of management, that our customers consistently tell us, can you just make this the same? Like these are not the areas of differentiation or unique capabilities. These are areas of friction and complexity and if you can give me a networking framework, whether it's SD-WAN or ACI Anywhere, that helps me connect those Clouds and manage policy in a consistent way, or you can give me application performance the same over these things or security the same over these things, that's gonna make my life easier, it's gonna be lower friction and I'm expecting it, since you're Cisco, you'll be able to integrate with my on-prem environment. >> Yeah, so then we went from hard to simple and easy, is a good business model. >> Kip: Absolutely. >> You guys have done that in the past and you certainly have the, from routing, everything up to switches and storage.
KD, but talk about the complexity, because this is where it sounds complex on paper but when you actually unpack the technologies involved, you know, different Cloud suppliers, different technologies and tools. Throw open source into the mix and it's even more complex. So Multicloud, although it sounds like a simple reality, the complexity's pretty significant. Can you just share your thoughts on that? >> It is, and that's where we excel. We excel, I think, at taking complexity and distilling it down and making it simple. One other thing that we've done is, because each Cloud is unique and brings some unique capabilities, we've worked with those vendors along those dimensions that they're really, really passionate about and strong in. So for example, with Google we've worked on the container front. They are maybe one of the pioneers in that space, they've certainly delivered a lot of technologies into that domain. We've worked with them on the Kubeflow front, on the AI front, in fact we are one of the biggest contributors to the open source project Kubeflow. And we've taken those technologies and then created a simple way for enterprise IT to consume them. So what we've done with Anthos, with Google, takes those technologies, takes our networking constructs, whether it's ACI Anywhere, whether it's other networking pieces on different parts of it, whether it's SD-WAN and so forth. And it creates that environment which makes an enterprise IT feel comfortable with embracing these technologies. >> You said you're contributing to Kubeflow. A lot of people don't look at Cisco and would instantly come to the reaction that you guys are heavily contributing into open source. Can you just share, you know, the level of commitment you guys are making to open source? Just get that out there, and why? Why are you doing it? >> Yeah. For us, some of these technologies are really in need of incubation and nurturing, right? So Kubeflow is early, it's really promising technology. People, in fact there's a lot of buzz about AI-- >> And you're contributing to Kubeflow, significantly? >> Yes, yeah. >> Cisco? >> We're the number three contributor actually. Behind Google. >> Okay so you're up there? You're up at the top of the list? >> Yeah, one of the top three. >> Top of the list. >> And why? Is this getting more collaborative? More Multicloud fabric-- >> Well I mean, again it comes back to our customers. We think Kubeflow is a really interesting framework for AI and ML and we've seen with our customers that workload type is becoming more and more important to them. So we're supporting that because it's something we think will help our customers. In fact, Kubeflow figures into how we think about Hybrid and Multicloud with Google and the Anthos system in terms of giving customers the ability to run those workloads in Google Cloud with TPUs or on-prem with some of the incredible appliances that we've delivered in the data centers using GPUs to accelerate those workloads. >> And it also certainly is compatible with the whole Multicloud mission as well-- >> Exactly, yeah. >> That's right. >> So you'll see us, we're committed to open source, but that commitment comes through the lens of what we think our customers need and want. So it really, again, it comes back to the customer for us, and so you'll see us very active in open source areas. Sometimes, I think to your point, we should be louder about that. Talk more about that, but we're really there to help our customers.
>> DevNet, DevNet Create that Susie Wee's been working on has been a great success. I mean we've witnessed it first hand, seeing it at the Cisco Live packed house. >> In Barcelona. >> You've got developers developing on the network, it's a really big shift. >> Yeah absolutely. >> That's a positive shift. >> Well it's a huge shift, I think it's natural as you see Cisco shifting more and more towards software, you see much, much more developer engagement and we're thrilled with the way DevNet has grown. >> Yeah, and networking guys in your target audience gravitate easily to software, it seems to be a nice fit. So good stuff there. Third talk track, Hybrid. You guys have a deep bench of tech and people on networking, security, data center, and all the things involved in the years and years of enterprise evolution. Whether it's infrastructure and all the way through the facilities, a lot of expertise. Now Hybrid comes onto the scene. Went through the little hype cycle, people now get it, you gotta operate across Clouds, on-prem to the Cloud and now multiple Clouds, so what's the current state of the Cisco-Google relationship with Hybrid? How is that fitting in, Google Next and beyond? >> So let me tease that out in the context of some history, right? So if we go back, say 10 years, virtualization was the buzzword of the day. Things were getting virtualized. We created the best data center infrastructure for virtualization in our UCS platforms. Completely programmable, infrastructure as code, a very programmable environment that can pack a lot of density of virtual machines, right? Roll forward three or four years, storage and compute were getting unwieldy. There was complexity there to be solved. We created the category of converged infrastructure, became the leader of that category where we worked with EMC and other players. Roll forward another four or five years, we got into the hyperconverged infrastructure space with the most performant HCI appliance on the market anywhere. And most performant, most consistent, deeply engineered across all the stacks. We took that complexity, took our learnings and DNA in networking and married it together to create something unique for the industry. Now you think, do other domains come together? Now it's the Cloud and on-prem. And if that comes together we see similar kinds of complexity. Complexity in security, complexity in networking, complexity in policy and enforcement across layers. Complexity, frankly, in management, and how do you make that management much more simple and consumerized? We're taking that complexity and distilling it down into developing a very simple appliance. So what we're trying to deliver to the customer is a simple appliance that they can stand up and procure and set up much in the way that they're used to, but now the appliance is scale-out. It's much more Cloud-like. It's managed from the Cloud. So it's got that consumer, modern feel to it. Now you can deliver on this a container environment, a container development environment, for your developer stakeholders. You can deliver security that's plumbed through and across multiple layers, networking that's plumbed through and across multiple layers, at the end of the day we've taken those boundaries between Cloud and data center and blown them away. >> And you've merged operational constructs of the old data center operations to Cloud-like operations, >> Yeah.
>> Everything's just a service, you got Microservices coming, so you didn't really lose anything, you'd mentioned democratizing IT earlier, you guys are bringing HyperFlex and ACI to the table so you now can let customers run, is that right? Am I getting it right? >> That's right. It's all about how do you take new interesting technologies that are developed somewhere, that may have complexity because it's open source and changing all the time, or it may have complexity because it was built for a different environment, not for the on-prem environment. How do you take that innovation and democratize it so that everybody, all of the hundreds of thousands and millions of enterprise customers, can use it and feel comfortable using it and feel comfortable actually embracing it in a way that gives them the security, gives them the networking that's needed, and gives them a way that they can serve their internal stakeholders very easily. >> Guys thanks for taking the time for this awesome conversation. One final question, getting you both to weigh in on, here at Google Next 2019, we're in 2019. Cloud's going a whole other level here. What's the most important story that customers should pay attention to with respect to expanding into the Cloud, taking advantage of the growing developer ecosystem as open source continues to go to the next level? What's the most important thing happening around Google Next and the industry with respect to Cloud and for the enterprise? >> Well I think certainly here at Google Next the Google Cloud's Anthos announcement is going to be of tremendous interest to enterprises 'cause as you said they are extending into the Cloud and this is another great option for enterprises who are looking to do that. >> Yeah and as I look at it, suddenly IT has a set of new options. They used to be able to pick networking and compute and storage, now they can pick Kubeflow for AI or they can pick Kubernetes for container development, Anthos for an on-prem version. Their shopping list has suddenly gone up. We're trying to keep that simple and organized for them so that they can pick the best ingredients they can and build the best infrastructure they can, and they can do it. >> Guys thanks so much. Kip Compton, senior vice president, Cloud Platform and Solutions Group, and KD, vice president of the Data Center compute group for Cisco. It's been an exclusive CUBE conversation around the Google-Cisco big news at Google Next 2019 and I'm John Furrier, thanks for watching. (upbeat jazz music)
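The multi-cloud thread running through that conversation comes down to treating every environment, HyperFlex on-prem or Anthos in Google Cloud, as just another Kubernetes API endpoint that one operator can reach. Purely as a sketch of that idea, not anything specific to Cisco's or Google's tooling, the snippet below uses the official Kubernetes Python client to walk every context in a local kubeconfig and report how many nodes each cluster has; the context names in the comments are hypothetical placeholders.

```python
# Sketch only: enumerate every cluster referenced in the local kubeconfig
# and report its node count. Context names such as "hyperflex-onprem" or
# "anthos-gke" are illustrative placeholders, not real endpoints.
from kubernetes import client, config

def survey_clusters(kubeconfig_path=None):
    contexts, active = config.list_kube_config_contexts(config_file=kubeconfig_path)
    for ctx in contexts:
        name = ctx["name"]  # e.g. "hyperflex-onprem", "anthos-gke", "eks-prod"
        api_client = config.new_client_from_config(
            config_file=kubeconfig_path, context=name)
        nodes = client.CoreV1Api(api_client).list_node().items
        marker = "*" if name == active["name"] else " "
        print(f"{marker} {name}: {len(nodes)} nodes")

if __name__ == "__main__":
    survey_clusters()
```

The same loop could just as easily push a network or security policy to each context, which is the kind of consistent, policy-driven management the interview keeps returning to.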

Published Date : Apr 9 2019


Dr. Thomas Scherer, Telindus Luxembourg & Dave Cope, Cisco | Cisco Live EU 2019


 

>> Live from Barcelona, Spain. It's theCUBE covering Cisco Live Europe, brought to you by Cisco and its ecosystem partners. >> Hi, everybody. Welcome back to Barcelona. This is Cisco Live. I'm Dave Vellante with Stu Miniman. And you're watching theCUBE, the leader in live tech coverage. We go out to the events, we extract the signal from the noise. Dr. Thomas Scherer is here, the chief architect of Telindus Luxembourg, and David Cope is back. He's a senior director of marketing development for the Cisco Cloud Platform and Solutions Group. Gentlemen, welcome to theCUBE. Thank you. Thanks. So you're very welcome. So, Telindus. Tell us about Telindus. >> So Telindus, we are actually an integrator, a cloud operator, and a tech company. And we're partnering over the years with Cisco, with all the products that they have notably, and lately we are moving also into the public cloud. We have a private cloud offering, but we see our first appetite coming up with our customers in the public cloud, which are heavily regulated industries. And there we are working notably with the team of Dave to have an offering there that enables them to move into the clouds. >> So these guys are a customer or a partner? >> Well, you know, what's special about them. They're actually both. So they're a big customer of Cisco offerings, CloudCenter and other offerings. The Cisco Container Platform. But they also use those to provide services to their customers. Exactly, so they are a great sounding board about what the market needs and how our products are working. >> So Thomas, Telindus has been around since, if I saw right, nineteen seventy-nine. So you know, we weren't talking multi-cloud back then, but it is a big discussion point here at the show. You said private, public, you're using CloudCenter, maybe explain to us what multi-cloud means to you and your customers today. >> I would say most customers that we have are large organizations where we manage the IT infrastructure. We're also doing integration projects, but those customers, they are normally not really technology companies, you know, they are searching to work with us because we deal with a good part of their IT operations. So at these companies, they come from a private infrastructure; they have, these days, their VMware installations, their private clouds, and I think also it will stay like this for a good amount of time. So there's no good reason to just go into the cloud because it's fancy, or because there is something that you cannot have, certainly there is. But that's a stable progression that they are following. So what we need is actually to catch the low hanging fruits that exist in a public cloud for our customers. But in such a way that it satisfies their day-to-day IT operations, and sometimes it's our IT operations who is doing that, since we are managing this. So for us, actually, hybrid cloud, to say it short, is actually the standard, or multicloud. >> So I wonder, we're almost two years into GDPR, one year into the fines. How has GDPR affected you and your customers, and what's it like out there these days? >> GDPR is for me not the main reason for public, private, multicloud installations. For us, what counts is the regulation that we are in, and that involves GDPR. So our customers are notably from the financial sector, and they're very strict on conservative security rules, for good reason, because their main business is they're selling trust. There is not much other business where you trust somebody as much as a bank.
They know everything about you, and that's something they cannot sacrifice. Now, in Europe, we have the advantage that there is that strict regulation which puts kind of standards in place. And that involves obviously also the GDPR thing. But if I look into the standards that regulation imposes, it's very technical. They say, for example, please make sure, if you move into the clouds, then avoid a lock-in, be confident on what will be your exit cost, what will be your transition cost, and don't get married to anyone. And that's where Dave's team comes into the game, because they provide that solution, actually. >> I mean, that's music to your ears, I would think. I mean, I have to be honest. If I were a public cloud provider, I'd say no, don't do multi-cloud. We have one cloud, it does it all. But no customer speaks like that. >> You're right. And I think, to me, what I love about Telindus and the way they use the product is they work in such a highly regulated environment, where managing common policies across very different environments becomes critical. So how do I manage access control and security profiles and placement policies all across very different multicloud environments? That's hard, and that's been one of the cornerstones that we've focused on in CloudCenter. >> Yeah, so look, double click on that. We were talking to a guest earlier, and I was asking them, sort of poking at it. There's a lot of people who want that business because it's a huge business opportunity. It's, um, some big, well established companies. Cisco's coming at it from a position of strength, which of course is the network. But I'll ask you the same question. What gives you confidence that Cisco is in the best position for customers? To earn, right? To manage their multi-cloud data environment? >> I think it's a great question. I mean, from my perspective, and I love our customers' perspective. But if you think about Cisco's heritage around the network and security, I think most people would agree they're very strong there. It's a very natural extension to have Cisco be a leader in multicloud because, after all, it's how do I securely connect very diverse environments together. And now it goes a little further. Now, how do I help customers manage workloads, whether they be existing or new cloud native workloads? So we find it's a very natural extension to our core strength, and through both development and acquisition Cisco's got a very, very broad and deep portfolio to do that. >> So your thoughts on that? >> Yes, Cisco is coming from a networking history. But if you now look into the components, there is, actually, yeah, the networking foundation, there is UCS, which we have, for example, in our infrastructure, there is HyperFlex, there are then solutions like CCP that you can run, a DevOps organization can combine it with CloudCenter to make it hybrid. And just today I learned a new thing, which is Kubeflow. I just recognized Cisco is the first one that is coming up with a platform as a service enabled private cloud. So if you go private cloud, you usually talk about running VMs. But now with CCP and its open source project Kubeflow, which I think will be very interesting to see in conjunction with CCP, and I heard that it's going to happen, actually Cisco is the first one delivering such a solution to the market. >> So it's good that they have a thing for the CNCF, so Kubeflow... >> They don't have to send a cease and desist letter, right?
>> CCP, that's Cisco Container Platform, rolled out some while ago, on-prem Kubernetes stack. Right. So, Thomas, you know, with the update on CloudCenter Suite now containerized, you got microservices. It's built with Kubernetes underneath and using Kubeflow. I'm guessing that's meaningful to you. There's a lot of things in this announcement that it's like, okay, it sounds good, but in the real world, you know what? What are you super excited for? The containerization? You know, I would think things like the Action Orchestrator and the Cost Optimizer would have value, but, you know, please tell us yourself. >> The CloudCenter was already valuable before, you know, we did an investigation about what kind of cloud brokering and cloud orchestration solutions exist back in those days when it was called CliQr CloudCenter, and me and my colleagues knew that CliQr team back then as well as now at Cisco, and we appreciated that they became one family now. For me, CloudCenter fulfills certain requirements that I simply have to fulfill for our customer. And it's a mandatory aspect that I have to fulfill for them, like being able to ensure and guarantee portability. Implementing policies, segregation of duties where necessary, things like that. I have to say, now that it becomes containerized, that's a lot of ease in managing CloudCenter as a solution by itself, and also you have the flexibility to have it better migratable. It's an important key point that CloudCenter is a non cloud-centric product, that you can run it on-prem, that you don't have to lock your orchestration in there and have it on-prem, but now can easily move it onto things such as GKE, because it's a container based solution. But I think also there's a SaaS option available so you can just subscribe to it. So you have a full range of flexibility, so that a day-to-day management workflow engine doesn't become a day-to-day management thing by itself. >> So I wonder if you could paint a picture for us of your environment. You've been around since nineteen seventy-nine. So you must have a lot of stuff, a lot of IT that you've developed over the years. But you mentioned that you're starting to look at public clouds. You just mentioned your customer base, largely financial services. So they're highly regulated and maybe a little nervous about the cloud. But so paint a picture of your, maybe not for certain workloads, paint a picture of your environment and where you want to go, from an architecture and an infrastructure perspective. >> We have our own what we call private managed cloud. That's a product we call U-flex, which is a FlexPod reference architecture, that's Cisco networking, NetApp storage, Cisco UCS in conjunction with VMware, as the compute. This we have used for many years, and as I already have said, the regulated market started opening up towards public cloud. So what does it mean? European Banking Authority, so EBA, who's the umbrella organization on the European level, they send out a recommendation. Dear countries, please, your financial institutions, if they go into the cloud, they have to do ABC. The countries have put in place those regulations, they have put in place those controls, and for them, they are mostly now in that, let's investigate what is its influence in the public cloud. They come from their private infrastructure. They are in our infrastructure, which is like private infrastructure virtualized and managed by us, mainly VM based.
And now the new things on top that they investigate are things like big data, artificial intelligence and things like that, which you mostly don't have in private infrastructure. So that combination is what we have to provide to our customers, but they're mostly in an investigative mode. >> And okay. And Cisco is your policy engine, management engine, across all those clouds, is that right? >> Yes, we are able to manage those workloads with CloudCenter. Sometimes it depends also on the operating model. The customer himself is the one using CloudCenter, you know, so it depends, since we are an integrator, a cloud operator and also offer our services in the public cloud. It's always the question about who has to manage what. >> One of the things, if I could just add on that, we see people providing our products as a service. We were just talking about Kubernetes. Customers today are starting to move Kubernetes just from being like development now into production. And what we're seeing is that these new Kubernetes based applications have non containerized dependencies, reach out to another traditional app, reach out to PaaS, a database. And what we try to do is to say, how do you give your customers the ability to get the new and the old working together? Because it'll be that way for quite some time. And that's a part of sort of the new CloudCenter capabilities also. >> That's a valid reason. So you have those legacy services and you don't want to just, you cannot just replace them now and say, now let's go all in, let's be cloud native. So you have always these interoperability things to handle and yeah, that's true. Actually, you can build quite some migration path using containerization. >> Yeah, I mean, customers can't just over rotate to all the new fun buzzwords. They got a business to run. Yeah, so this-- >> And how do I apply security policies and access control to this very mixed environment now, common policies, and that becomes challenging. >> But it's also part of our business. Yes, there are, for example, financial institutions that are not IT companies. That's where we come in as a provider, too. In such an industry, we highly value the partnership with Cisco, where we can build new services together. We had that early adopters program, for example, regarding CCP. So Cisco is bringing a service provider into the loop to build what's just right for the customer, for them, on their behalf. Yes, you described that as very challenging, in some cases it's chaos. But that's the opportunity I heard this morning that you guys are going after pretty hard. >> No, it's right. And you've got one set of desires for developers, but now we move into production. Now IT ops gets involved, the CISO gets involved. And how do we have then well thought out integrations into security and network management? Those are all of the things that we're trying to really focus on. >> Well, we're in the DevNet Zone. So you were surrounded by infrastructure as code. So it fits right in. Guys, thanks so much for coming to theCUBE and telling your story. Really appreciate it. Thank you. Enjoyed. Thank you. Alright, keep it right there, everybody. Stu Miniman and Dave Vellante. Today we're live from Cisco Live Barcelona. You're watching theCUBE, right back.
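The portability Thomas keeps stressing, knowing your exit and transition costs before a regulator asks about them, is ultimately exercised against the plain Kubernetes API that both an on-prem cluster and a public cloud cluster expose. As a minimal sketch of that idea, and not a depiction of Telindus's or Cisco's actual tooling, the snippet below pushes one identical Deployment to two kubeconfig contexts; the context names and the container image are hypothetical placeholders.

```python
# Sketch only: apply the same Deployment to two different Kubernetes API
# endpoints, e.g. an on-prem CCP cluster and a GKE cluster. The context
# names and image below are illustrative placeholders.
from kubernetes import client, config

def make_deployment(name="portability-demo", image="nginx:1.25"):
    container = client.V1Container(
        name=name, image=image,
        ports=[client.V1ContainerPort(container_port=80)])
    spec = client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": name}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": name}),
            spec=client.V1PodSpec(containers=[container])))
    return client.V1Deployment(
        api_version="apps/v1", kind="Deployment",
        metadata=client.V1ObjectMeta(name=name), spec=spec)

def deploy_everywhere(contexts=("ccp-onprem", "gke-eu-west")):
    body = make_deployment()
    for ctx in contexts:
        api = config.new_client_from_config(context=ctx)
        client.AppsV1Api(api).create_namespaced_deployment(
            namespace="default", body=body)
        print(f"applied {body.metadata.name} to {ctx}")

if __name__ == "__main__":
    deploy_everywhere()
```

Because both endpoints speak the same API, the exit cost the regulators ask about reduces to re-pointing a kubeconfig context rather than rewriting the application.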

Published Date : Jan 30 2019


Stephan Fabel, Canonical | KubeCon 2018


 

>> Live from Seattle, Washington. It's theCUBE, covering KubeCon and CloudNativeCon, North America 2018, brought to you by Red Hat, the Cloud Native Computing Foundation and its ecosystem partners. >> Welcome back everyone. We're live here in Seattle for theCUBE's exclusive coverage of KubeCon and CloudNativeCon 2018. I'm John Furrier with Stu Miniman. Our next guest is Stephan Fabel, who is the Director of Product Management at Canonical. CUBE alumni, welcome back. Good to see you. >> Thank you. Good to see you too. Thanks for having me. >> You guys are always in the middle of all the action. It's fun to talk to you guys. You have a pulse on the developers, you have a pulse on the ecosystem. You've been deep in it for many, many years. Great value. What's hot here, what's the announcement, what's the hard news? Let's get the hard news out of the way. What's happening? What's happening here at the show for you guys? >> Yeah, we've had a great number of announcements, a great number of threads of work that came to fruition over the last couple of months, and now just last week, where we announced hardware reference architectures with our hardware partners, Dell and SuperMicro. We announced ARM support, ARM64 support, for Kubernetes. We released version 1.13 of our Charmed Distribution of Kubernetes last week. And we also released, very proud to release, MicroK8s: Kubernetes in a single snap for your workstation, in the latest release 1.13. >> Maybe explain that, 'cause we often talk about scale, but there is big scale, and then we're talking about edge, we're talking about so many of these things. >> That's right. >> That small scale is super important, so- >> It really is, it really is, so, MicroK8s came out of this idea that we want to enable a developer to just quickly stand up a Kubernetes cluster on their workstation. And it really came out of this idea to really enable, for example, AI/ML workloads, locally from development on the workstation all the way to on-prem and into the public cloud. So that's kind of where this whole thing started. And it ended up being quite obvious to us that if we do this in a snap, then we actually can also tie this into appliances and devices at the edge. Now we're looking at interesting new use cases for Kubernetes at the edge as an actual API endpoint. So it's quite nice. >> Stephan talk about ... I want to take a step back. There's kind of dynamics going on in the Kubernetes wave, which by the way is phenomenal, 8000 people here at KubeCon, up from 4000. It's got that hockey stick growth. It's almost like a Moore's Law, if you will, for the events. You guys have been around, so you have a lot of existing big players that have been in the space for a while, doing a lot of work around cloud, multi-cloud, whatever ... That's the new word, but again, you guys have been there. You got like the Ciscos of the world, you guys, big players actively involved, a lot of new entrants coming in. What's your perspective of what's happening here? A lot of people looking at this scratching their head saying: Okay I get Kubernetes, I get the magic. Kubernetes enables a lot of things. What's the impact to me? What's in it for me as an enterprise or a developer? How do you guys see this marketplace developing? What's really going on here? >> Well I think that the draw to this conference and to technology and all the different vendors et cetera, it's ultimately a multi-cloud experience, right?
It is about enabling workload portability and enabling the operator to operate Kubernetes, independently of where that is being deployed. That's actually also the core value proposition of our charmed Kubernetes. The idea that a single operational paradigm allows you to experience, to deploy, lifecycle manage and administer Kubernetes on-prem, as well as any of the public clouds, as well as on other virtual substrates, such as VMware. So ultimately I think the consolidation of application delivery into a single container format, such as Docker and other compatible formats, OCI formats right? That was ultimately a really good thing, 'cause it enabled that portability. Now I think the question is, I know how to deploy my applications in multiple ways, 'cause it's always the same API, right? But how do I actually manage a lot of Kubernetes clusters and a lot of Kubernetes API end points all over the place? >> So break down the hype and reality, because again, a lot of stuff looks good on paper. Love the soundbites of people saying, "Hey, Kubernetes," all this stuff. But people admitting some things that need to be done, work areas. Security is a big concern and people are working on that. Where is the reality? Where does the rubber meet the road when it comes down to, "Okay, I'm an enterprise. What am I buying into with Kubernetes? How do I get there?" We heard Lyft take an approach that's saying, "Look, it solved one problem." Get a beachhead and take the incremental approach. Where's the hype, where's the reality? Separate that for us. >> I think that there is certainly a lot of hype around the technology aspect of Kubernetes. Obviously containerization is invoked. This is how developers choose to engage in application development. We have Microservices architecture. All of those things we're very well aware of and have been around for quite some time and in the conversation. Now looking at container management, container orchestration at scale, it was a natural fit for something like Kubernetes to become quite popular in this space. So from a technology perspective I'm not surprised. I think the rubber meets the road, as always, in two things: In economics and in operations. So if I can roll out more Kubernetes clusters per day, or more containers per day, then my competitor ... I gain a competitive advantage, that the cost per container is ultimately what's going to be the deciding factor here. >> Yeah, Stephan, when I think about developers how do I start with something and then how do I scale it out in the economics of that? I think Canonical has a lot of experience with that to share. What are you seeing ... What's the same, what's different about this ecosystem, CloudNative versus, when we were just talking about Linux or previous ways of infrastructure? >> Well I think that ultimately Kubernetes, in and of itself, is a mechanism to enable developers. It plays one part in the whole software development lifecycle. It accelerates a certain part. Now it's on us, distributors of Kubernetes, to ensure that all the other portions of this whole lifecycle and ecosystem around Kubernetes, where do I deploy it? How do I lifecycle manage it? If there's a security breach like last Monday, what happens to my existing stack and how does that go down? That acceleration is not solved by Kubernetes, it's solved for Kubernetes. >> Your software lives in lots and lots of environments. 
Maybe you can help clarify for people trying to understand how Kubernetes fits, and when you're playing with the public cloud, your Kubernetes versus their Kubernetes. The distinction I think is, there's a lot of nuance there that people may need help with. >> That's true, yeah. So I think that, first of all, we always distance ourselves from the notion of having our Kubernetes. I think we have a distribution of Kubernetes. I think there are conformance tests that are in place, and they're in place for a reason. I think it is the right approach, and we won't install a forked version of Kubernetes anytime soon. Certainly, that is one of the principles we adhere to. What is different about our distribution of Kubernetes is the operational tooling and the ability to really cookie-cutter out Kubernetes clusters that feel identical, even though they're distributed and spread across multiple different substrates. So I think that is really the fundamental difference of our Kubernetes distribution versus others that are out there on the market. >> The role of developers now, 'cause obviously you're seeing a lot of different personas emerging in this world. I'm just going to lay them out there and I want to get your reaction. The classic application developer, the ones who are sitting there writing code inside a company. It could be a consumer company like Lyft or an enterprise company that needs ... They're rebuilding inside, so it's clear that CIOs or enterprises, CXOs or whatever the title is, they're bringing more software in-house, bringing that competitive advantage under application development. You have the IT pro expert, practitioner kind of role, classic IT, and then you got the opensource community vibe, this show. So you got these three things inter-playing with each other, this show, to me feels a lot like an opensource show, which it is, but it also feels a lot like an IT show. >> Which it also is. >> It also is, and it feels like an app development show, which it also is. So, opportunity, challenge, is this a marketplace condition? What's your thoughts on these kinds of personas? >> Well I think it's really a question of how far are you willing to go in your implementation of devops cultural change, right? If you look at that notion of devops and that movement that has really taken hold in people's minds and hearts over the last couple of years, we're still far off in a lot of ways and a lot of places, right? Even the places who are saying they're doing devops, they're still quite early, if at all, on that adoption curve. I think bringing operators, developers and IT professionals together in a single show is a great way for the community and for the market to actually engage in a larger devops conversation, without the constraint of the individual enterprise that those teams find themselves in. If you can just talk about how you should do something better and how would that work, and there are other kinds of personas and roles at the same table, it is much better that you have the conversation without the constraint of like a deadline or a milestone, or some outage somewhere. Something is always going on. Being able to just have that conversation around a technology and really say, "Hey, this is going to be the one, the vehicle that we use to solve this problem and further that conversation," I think it's extremely powerful. >> Yeah, and we always talk about who's winning and who's losing. It's what media companies do. We do it on theCUBE, we debate it.
At the end of the day we always like ... There's no magic quadrant for this kind of market, but the scoreboard can be customers. Amazon's got over 5000 reputable customers. I don't know how many CNCF has. It's probably a handful, not 5000. The customer implications are really where this is going. Multi-cloud equals choice. What are your conversations like with customers? What do you see on the customer landscape in terms of appetite, IQ, or progress for devops? We were talking, not everyone's on serverless yet and that's so obvious, that's going to be a big thing. Enterprises are hot right now and they want the tech. Seeing the cloud growth, where's your customer base? What are those conversations like? Where are they in the adoption of CloudNative? >> It's an extremely interesting question actually, because it really depends on whether they started with PaaS or not. If they ever had a PaaS strategy then they're mostly disillusioned. They came out, they thought it was going to solve a huge problem for them and save them a lot of money, and it turns out that developers want more flexibility than any PaaS approach really was able to offer them. So ultimately they're saying, "You know what, let's go back to basics." I'll just give you a Kubernetes API endpoint. You already know how to deal with everything else beyond that, and actually you're not cookie-cuttering out PostgreSQL- >> Kubernetes is a reset to PaaS. >> It really does. It kind of disrupted that whole space, and took a step back. >> All right, Stephan, how about Serverless. So a lot of discussion about Knative here. We've been teasing out where that fits compared to functions from AWS and Azure. What's the Canonical take on this? What are you hearing from your customers? >> So Serverless is one of those ... Well it's certainly a hot technology and a technology of interest to our customers, but we have longstanding partnerships with Galactic Fog and others in place around Serverless. I haven't seen real production deployments of that yet, and frankly it's probably going to take a little bit longer before that materializes. I do think that there's a lot of effort right now in containerization. Lots of folks are at that point where they are ready to, and are already running containerized workloads. I think they're busy now implementing Kubernetes. Once they have done that, I think they'll think a little bit more about Serverless. >> One of the things that interests me about this ecosystem is the rise of Kubernetes, the rise of choice, the rise of a lot of tools, a lot of services, trying to fend off the tsunami wave that's hit the beach out of Amazon. I've always said in theCUBE that that's ... They're going to take as much inland territory on this tsunami unless someone puts up a sea wall. I think this is this community here. The question is, is that ... And I want to get your expert opinion on this, because the behemoths, the big guys are getting richer. The innovation's coming from them, they have scale. You mentioned that as a key point in the value of Kubernetes, is scale, as one of those players, I would consider in the big size, not like a behemoth like an Amazon, you got a unique position. How can the industry move forward with disruption and innovation, with the big guys dominating? What has to happen? Is it going to change the size of certain TAMs? Is there going to be new service providers emerging?
Something's got to give, either the big guys get richer at the expense of the little guys, or the market expands with new categories. How do you guys look at that? Developers are out there, so is it promising to look to new categories? But your thoughts. >> I think it's ... So from a technology perspective, certainly there could be a disruptive technology that comes in and just eats their lunch, which I don't believe is going to happen, but I think it might actually be more of a market function actually. If it goes down to the economics, and as they start to compete, there will be a limit to the race to the bottom. So if I go in on an economical advantage point as a public cloud, then I can only take that so far. Now, I can still take it a lot further, but there's going to be a limit to that ultimately. So, I would say that all of the public clouds, we see that increasingly happening, are starting to differentiate. So they're saying, "Come to me for AI/ML." "Come to me for a rich service catalog." "Come to me for workload portability," or something like that, right? And we'll see more differentiation as time goes on. I think that will develop in a little bit of a bubble, to the point where actually other players who we're not watching, for example, Chinese clouds, right? Very large, very influential, very rich in services, they can come in and disrupt their market in a totally different way than a technology ever could. >> So key point you mentioned earlier, I want to pivot on that and get to the AI conversation, but scale is a competitive advantage. We've seen that on theCUBE, we see it in the marketplace. Kubernetes by itself is great but at scale it gets better, got knobs and policy. AI is a great example of where a dormant computer science concept that has not yet been unleashed ... Well, it gets unleashed by cloud. Now that's proliferating. AI, what else is out there? How do you see this trend around just large-scale Kubernetes, AI and machine learning coming on around the corner? That's going to be unique, and is new. So you mentioned the Chinese cloud could be a developer here. It's a lever. >> Absolutely, we've been involved with Kubeflow since the early days. Early days, it's barely a year, so what early days? It's a year old. >> It's yesterday. >> So a year ago we started working with Kubeflow, and we published one of the first tutorials of how to actually get that up and running and started on Ubuntu, and with our distribution of Kubernetes, and it has since been a focal point of our distribution. We do a couple of things with Kubeflow. So the first thing, something that we can bring as a unique value proposition is, because we're the operating system for almost all of GKE, all of AKS, all of EKS, such a strong standing as an operating system, and we have strong partnerships with folks like NVIDIA. It was kind of one of the big milestones that we tried to achieve and we've since completed, actually as another announcement since last week, is the full automatic deployment of GPU enablement on Kubernetes clusters, and have that identical experience happen across the public clouds. So, GPGPU enablement on Kubernetes, as one of the key enablers for projects like Kubeflow, which gives you machine learning stacks on demand, right?
And then in parallel, we've been working with kubeflow in the community, very active, formed a steering committee to really get the industry perspective into the needs of kubeflow as a community and work with everybody else in that community to make sure that kubeflow releases on time, and hopefully soon, and a 1.0, which is due this summer, but right now they're focused on 0.4. That's a key area of innovation though, opportunity. >> Oh, absolutely. >> I see Amazon's certainly promoting that. What else is new? I've got one last question for you. What's next for you guys? Get a quick plug in for Canonical. What's coming around the corner, what's up? >> We're definitely happy to continue to work on GPGPU enablement. I think that is one of the key aspects that needs to stay ... That we need to stay on top of. We're looking at Kubernetes across many different use cases now, especially with our IoT, Ubuntu Core operating system, which we'll release shortly, and here actually having new use cases for AI/ML inference. For example, out at the edge looking at drones, robots, self-driving cars, et cetera. We're working with a bunch of different industry partners as well. So increased focus on the devices side of the house can be expected in 2019. >> And that's key, these data, in a way that's really relevant. >> Absolutely. >> All right, Stephan, thanks for coming on theCUBE. I appreciate it, Canonical. Great insight here, bringing in more commentary to the conversation here at KubeCon, CloudNativeCon. Large-scale deployments as a competitive advantage. Kubernetes really does well there: Data, machine learning, AI, all a part of the value and above and below Kubernetes. We're seeing a lot of great advances. CUBE coverage here in Seattle. We'll be back with more after this short break. (digital music)
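A rough illustration of the GPU enablement Fabel describes above: once a cluster's nodes advertise NVIDIA GPUs as a schedulable resource, a workload (for example one launched by Kubeflow) only has to ask for them in its resource limits. The sketch below uses the official Kubernetes Python client; it assumes the NVIDIA device plugin is already installed so nodes expose nvidia.com/gpu, and the pod name and container image are placeholders rather than anything Canonical ships.

    # Minimal sketch: schedule a pod onto a GPU node by requesting "nvidia.com/gpu".
    # Assumes access to a cluster where the NVIDIA device plugin is running.
    from kubernetes import client, config

    config.load_kube_config()  # use load_incluster_config() when running inside the cluster
    core = client.CoreV1Api()

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="gpu-demo"),  # placeholder name
        spec=client.V1PodSpec(
            restart_policy="Never",
            containers=[
                client.V1Container(
                    name="trainer",
                    image="tensorflow/tensorflow:latest-gpu",  # example image only
                    command=["python", "-c",
                             "import tensorflow as tf; print(tf.test.is_gpu_available())"],
                    resources=client.V1ResourceRequirements(
                        limits={"nvidia.com/gpu": "1"}  # ask the scheduler for one GPU
                    ),
                )
            ],
        ),
    )

    core.create_namespaced_pod(namespace="default", body=pod)

Because the same resource request works on GKE, AKS, EKS, or an on-prem cluster, this is the mechanical piece behind the "identical experience across the public clouds" point.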

Published Date : Dec 13 2018


Lew Tucker, Cisco | KubeCon 2018


 

>> Live from Seattle, Washington it's theCUBE covering KubeCon and CloudNativeCon, North America 2018. Brought to you by Red Hat, The CloudNative Computing Foundation, and its ecosystem partners. (upbeat music) >> Hey everyone, welcome back to theCUBE. Day two live coverage here in Seattle of the CNCF KubeCon and CloudNativeCon. I'm John Furrier, host of theCUBE with Stu Miniman here all week for three days as multiple years we've been covering KubeCon. We've been covering this community, all the way back to the OpenStack days to now CloudNative and Kubernetes, rise of Kubernetes, and KubeCon has been great. CloudNative Computing Foundation and the center of it has been an individual CUBE alumni that we've talked to many times, Lew Tucker, VP and CTO of Cloud Computing at Cisco Systems. Great to have Lew on, good to see you. >> Great to be back again. >> We got a great history of conversations and every year we kind of have a pinch me moment where it's like it's so awesome right now, the technology's coming together, now more than ever, the standardization, the maturation of Kubernetes and what's going on around it, is probably one of the most exciting trends. It's not just about Kubernetes, it's about what that's enabling, ecosystems, storage, networking and compute, now that is working, magically creating a lot of value. So, we've talked about it, what's the update from your perspective, how do you see it evolving now? >> I see it very much the same way, I had a short little keynote today, yesterday, and was talking about I think we've entered this kind of golden age of software where because of the number of projects that are now going into the CNCF for example, and elsewhere, and GitHub repositories, we just have a major driving force which is the accumulation of the software that's used now to power the cloud, power data centers, totally transforming infrastructure. We're no longer cabling; as I sort of say, it has now become code. >> Yeah. >> And that's all about the software, and it's coming through the open source communities. >> We've been talking about before we came on camera about the, and we've had other conversations about the historical waves of innovation. AI's been around for a while, you know all these things have kind of been around but now with cloud computing and the resources available in terms of compute power, storage, and networking now programmable, it's creating a lot of innovation right. And this has been a tailwind for some and a headwind for others, companies that have transformed and understood that have been leveraging it. We've seen conversations from Net App, Cisco, you guys transformed, you turned it into a tailwind, for Cisco, because now all that magic can come in for the programmability on the networking side. >> Exactly right, yeah. We see AI as having a big impact across the board on all of these, we're big contributors also into Kubeflow, for example, because on top of Kubernetes, the biggest issue we're going to have in AI going forward is we don't have enough AI engineers. We don't have enough people who are trained in that. So we need to create these tools and the services that we see coming out in the cloud now for AI are designed to make it easy to consume AI. You don't have to be an AI expert in order to use it and that sort of thing is really exciting.
>> How is the CloudNative environment changing IT investments 'cause again, the old days I'd have to throw a machine at something, I got to buy this and siloed, you got now horizontal capabilities, you got the vertical specialization with machine learning and AI as you just referenced. How is it changing investments, people now are looking at re-imagining their infrastructure, they're re-imagining how apps are built. How is Kubernetes, CloudNative impacting IT investments? >> So we've found for example when we talk to our customers and everything else, they're all using multiple clouds. So I think what we're getting to see arise here now is this multi-cloud environment that we have. And so Cisco with what we've been doing with our hybrid solutions for AWS and hybrid solutions that we're having with Google is making it so that you can have the same environment within your data center as you have in the cloud, and then we connect the two so that now the IT infrastructure really is looking like a cloud and there's many clouds, multiple clouds in your own data center, in multiple service providers. That makes it easier for IT to really consume CloudNative technology. >> I wonder if you can drill us down a level from what we're talking- you talk about Kubeflow and machine learning, remember back to big data, was like okay, well what do we have to do with the network? Well, I need some more buffering, but you know, what is just the base infrastructure layer, and where Kubernetes and this ecosystem just becomes the platform for all of the modern applications, and what has to be done differently, I wonder if you could help- >> Yeah so one of the big challenges I think is this how do we connect the different clouds together with your own data center. And that's why we, the hybrid solutions, where Cisco's driving now are designed specifically to make that easy because it's scary for IT organizations to say they're going to open up some part of their firewall to have connections coming in, and so we provide a solution that makes it easy for people. And that means that things such as Kubeflow, and things like that, they can be running, perhaps they might do some of their research in a hybrid- in a public cloud provider, such as AWS or Google. And then they want to run it now in production within their own data center, and they don't want to change a thing. And at the same time, we're seeing other capabilities. You want to access some service in the cloud as a part of your enterprise app. >> Yeah one of the things people have a hard time understanding is what is just kind of standardized, okay I've got compliant Kubernetes it can run all these places and then there's areas where Cisco has done deep integration work with both Google Cloud and with AWS, maybe help understand what are the standard pieces and what's the extra engineering work needed to be done to support some of these? >> Well I think what has helped us all is the fact that Kubernetes has really taken off. So we really are seeing if you have a Kubernetes platform and you adhere to the public APIs of Kubernetes and everything else like that, you then can have the portability of applications, back in the Java days we were going after that, and now we're seeing it with Kubernetes. And so what we've developed with the Cisco Container Platform is an on-premise managed Kubernetes environment that looks identical to what you find in the Kubernetes environment at AWS or at Google.
So the same interfaces are there, the IT doesn't have to relearn things, they can actually get the advantage of that standardization. >> And that's key for operations and IT because that is the promise of cloud operations. Similar on both platforms, on premises and in the cloud. And the next question is okay from a networking perspective, we've had many conversations with Suzie Wee at Cisco around network programmability or NetDevOps as you guys call it, which is kind of a play on dev ops. This is the future because with multi-cloud the apps don't need to know about where to provision workloads, which cloud when, is it better region over here, latency, network factors come in, you still got to move things around, put A to B, edge of the network for IoT. Talk about the importance of network programmability now more than ever with CloudNative, why it's so important. >> Well the first and foremost, it has to be driven by APIs. The old days of actually going out and having people configure network switches to make connectivity or open up provisions and firewalls and things like that, that's behind us. Now we have that all happening because of programmability of the network through what we've been doing with ACI and other technologies, we can make it so we can connect these clouds and maintain the security. We're also seeing other things such as Istio and edge-based computing and things like that come into play, where again, the ordinary developer doesn't have to learn all of the details of networking and security, but the operations people need it to be secure, need it to be able to be moved around, need to be able to have telemetry so they can tell what's going on. >> One of the things we've been talking about on theCUBE, Stu and I were yesterday riffing on this but for a while, but it's also now trickled into the Silicon Valley conversations around some of the tech elite people around architecture. Cloud architects are in high demand and there's two schools of thought. There's a persona around a systems architect, more of a systems view, operating systems kind of view, that's cloud that's operating, environment, serverless, advanced, these are kind of concepts that is a systems-oriented thinker. And then you have the application developer that looks like an app server kind of world. Those are all paradigms that we've lived through. >> Right. >> Now coming together now in one, horizontally scaled both cloud that's a system, vertical specialization around the apps, and with dev ops layer, having these guys work together. Talk about this dynamic, your thoughts on it, how it shapes employee selection, people who lead projects. 'Cause the CTO and architect role's now more important, but the software side's just as important. >> Yeah so I think one thing that's become very clear is that we need to make it easier for the domain experts in an application area to just take care of their part. And so that's why like one of the previous episodes we talked about here was about Istio, where we've actually separated out essentially the data plane, the transport of data around with security, encryption, identity, and everything else from the actual application code of the microservice. That makes it much easier because now the engineering teams are too large, you can't have everybody know everything anymore, like you say, we've got specialists in different areas.
We need to be able to provide, then, underlying systems that connect these things and that underlying system then has to be managed by your operations people. So we've got dev ops where the application people are writing code actually that the operations people use, so that we can actually have this kind of uniform infrastructure that is maintainable. >> And security is super important and all that good stuff. >> Yeah so Lew it's interesting, we've been watching so many of the pieces we've worked on OpenStack, it was really from the bottoms up building the infrastructure, we've seen the dynamic the last two years, Kubernetes some, and serverless even more, coming from the top down. We want to get your thoughts on that, we've been digging in and trying to tease out some of the Knative pieces that are being discussed here, versus some of the functions things that are happening, especially in Amazon and Microsoft, I'd love to get your take. >> I think we're always seeing this progression in platforms for computing, and programming languages, and PaaS we've talked about years ago. All of these things are designed always to make it easier. So you're right we've got for example Knative now really coming on as saying can we standardize a way specifically helping Kubernetes people move into this area. Like I've mentioned before, the Kubeflow again, how can we start to standardize these pieces? The beauty of this is, the standardized pieces are coming out in open source. So everybody gets it, and that means it's deployable in your public clouds, it's deployable in your data center, and then through a lot of the hybrid technology that Cisco's working, you can connect those together. But you're right we're going to continue to see innovation, that's great, because we need that, we need that constantly. What we need to be able to do is make it easier to consume and then integrate into these systems. And that's where I think Kubernetes has a lot to do with how we make it easier. >> Final question on Cisco then I want to go on a more personal note with you on your situation which is news breaking here on theCUBE. Cisco has successfully transformed its direction, it's been always a great leader in networking, always a great business, billions and billions of dollars in revenue. Now with CloudNative and Kubernetes, the relationship I saw with Amazon, you got Google, you guys have taken that systems view in making things programmable. Explain the Cisco strategy from your perspective as a CTO and as a legend in the industry, for the people that know Cisco, know the old Cisco, what is the new Cisco? And how does Kubernetes and how does all this CloudNative fit into the new Cisco? >> I think the new Cisco really is focused now on where customers are taking their computing resources and it is in this multi-cloud world where we're seeing it's not a fight anymore. You can't say I have a reason to keep things here in my data center, I'm never going to go to cloud, and other customers are saying I'm never going to have a data center, now everybody's saying we're probably going to have both. And Cisco as a networking company, this plays right into our strength because what you have to be able to do is now connect those environments in a secure way, in a manageable way. And so this plays right into where Cisco's growth I think is going to be, it'll be in much more of these kinds of services that allow that to happen, and in the relationships and partnerships that we have with the major cloud providers.
>> This basically, the decomposition of monolithic applications into sets of microservices is connected by the network. >> Exactly right. >> This is the fundamental beauty of where you guys see that tailwind. >> Exactly. >> Awesome. Well Lew you've been a legend in the industry, I've been following your career from the beginning. You've been- you have product that's in the Computers Museum you've done amazing work at Sun Microsystems, I mean just a great story career, the work you've done at Cisco, you've been on theCUBE so many times, I don't know that number. You've really contributed to the industry and this news now about your situation, share the news about what's happening with you. >> Well I made announcements at our CNCF board and our OpenStack board meetings that I'm leaving Cisco and so I'm having to withdraw from the board positions as well as Cloud Foundry and that's sad in a way because I have relationships with those people, but it many ways after I want to spend some time to really see where the future is again, because as you know in my career I've changed several times. And I'm so looking forward to actually, now going into sort of a new direction which may be much more moving up the stack. I think there's very exciting things going on in AI, there's exciting things going on in genomics. There's a lot of activity going on so we've been building this technology for a purpose to allow us to have those kinds of things. Now I want to start focusing much more directly. >> And you're leaving Cisco on what date? >> Leaving Cisco beginning of January. >> Well congratulations, great work and I think one of the trends I think this speaks to is I see a lot of computer scientists, a lot of people who have some DNA from the old ways like you do, and been there, and contributed at a seminal level, just some great contributions. Seeing computer science as an opportunity to solve problems. This is kind of a renaissance from seasoned pioneers and young people coming together. This is a great opportunity, is that kind of what you're thinking, you're just going to attack the problem? >> There's 8000 people here, this show's sold out and this is all developers so people who have background in computer science or are getting online and learning it themselves, this is an opportunity and the time to get in. >> You've been a great mentor to many, you've been a great contributor in the open source community, again, your contributions at the systems level and you understand certainly what's going on with CloudNative, looking forward to following up and congratulations. >> Yep, well I hope to be back again. >> Of course, you're VIP CUBE alumni. Lew Tucker, exciting news, Cisco's transformed. He's moving on to- taking on some big new challenges, thanks for coming on theCUBE really appreciate it. Lew Tucker, Vice President CTO systems, Cisco systems, moving on to some new endeavors. Here in theCUBE we're covering the live coverage here at KubeCon CloudNative I'm John Furrier, Stu Miniman, back with more day two interviews after this short break. (upbeat music)
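As a concrete footnote to Tucker's point earlier in the conversation about Istio separating the data plane (encryption, identity, telemetry) from application code: in practice that separation is usually switched on per namespace, so operations teams can opt services in without developers changing a line. A minimal sketch with the Kubernetes Python client follows; it assumes Istio and its automatic sidecar injector are already installed, and the namespace name is a placeholder.

    # Sketch: label a namespace so Istio injects its Envoy sidecar into new pods,
    # adding mTLS, identity, and telemetry without touching the microservice code.
    # Assumes Istio's sidecar injector is already running in the cluster.
    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    patch = {"metadata": {"labels": {"istio-injection": "enabled"}}}
    core.patch_namespace(name="payments", body=patch)  # "payments" is a placeholder namespace

From then on, every pod created in that namespace gets the proxy alongside it, which is the operational half of the developer/operations split described above.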

Published Date : Dec 12 2018


Janet Kuo, Google, KubeCon | CUBEConversation, October 2018


 

(spirited orchestral music) >> Hello and I'm John Furrier, cohost of theCUBE, founder of SiliconANGLE Media. I'm here at Palo Alto studios for CUBE Conversation as a preview for upcoming, the CNCF-sponsored KubeCon event coming up in Shanghai and in Seattle. I'm here with Janet Kuo, who is a software engineer at Google and recently named the co-chair of KubeCon, the main event around Kubernetes, multi-cloud, all the things happening in cloud-native. Janet, thanks for joining me today. >> Thanks for having me. So you were recently named co-chair, Kelsey was previously the co-chair and he always had those good demos but the program has been changing a lot and you're the new co-chair, what's it like? What's happening? What's the focus this year? What's the content going to look like? Tell us what's happening >> So we get a lot of overwhelming number of submissions, much more than last year, and I see a lot of interesting case studies and also I see that because Kubernetes is actually help you extract the infrastructure away and it runs anywhere so I see a lot of people are actually deploying it everywhere, multi-cloud, hybrid, and even in Edge. For example, I see Chick-Fil-A, they are going to talk about how they deploy Kubernetes in their Edge restaurants and the store owners, they are not tech expert, as you can expect. >> Yeah, I mean that's the edge of the network, a Chick-Fil-A, and you know, great retail example. We run a lot of Chick-Fil-A certainly out here in California it's like In-N-Out Burger, they go hand in hand. But this is a good use case of Edge and this is real world, so Kubernetes has certainly grown up. We know from the growth of KubeCon, the event itself has gotten to be pretty massive, the number of people involved has been great, how has Kubernetes grown up? Because we're seeing the conversation move from we love containers, Kubernetes is great for orchestrating everything, but now people are starting to really start really cranking it up a notch, is that the trend that you're seeing as well, and is that some of the content you'll be focused on? >> So I see, I took a lot at the Google trend for search for Kubernetes and it's still going way up since the beginning and also I look at a recent CNCF survey and I realize that about 40% of people who'll respond to their survey and they work in a enterprise and they said they run Kubernetes in production so that's a huge number. >> That's awesome. Well, now that you're the new co-chair, tell us a little bit about yourself, how, what's your background, how did you get there? >> I started working at Google in 2015 and that's before Kubernetes 1.0 was released and before CNCF and before the first KubeCon and when I joined Google, it's Kubernetes is a way, very new concept and not like it's fixed and it's already adopted by everyone so we work very hard to get the ease of use and get more people adoption and we get a lot of feedback from people and then Kubernetes is getting more and more popular, so after that I decided that I want to submit my first ever conference talk to KubeCon and I got selected and then I start to feel like I enjoy this and I did, and other CNCF hosted events, for example, a panel in San Francisco and I think that might be how I was selected. >> What was your first talk about, that you talked about? >> So I talk about running workloads in Kubernetes and I did an overview of the workloads API because I am the developer of that workloads API. 
>> So that's also, you got hooked on Kubernetes like everybody else, it's like the Kubernetes drug. So how did you get involved in open source? Were you always developing with open source? How did you get involved in the open source community? >> So Kubernetes is actually my first open source project and before that, I had a phone call with Tim Hawkins, he's the principal engineer at Google and he sold me the idea of Kubernetes and we need to be open and let people choose the best technology for them and he sold me the idea and I think Kubernetes is the future and also I want to work on open source but I just didn't have the chance to work on it yet. >> So we had a good fun time in Copenhagen for the last KubeCon, and we, theCUBE, has been at all the KubeCons as you know. We love this community, we think it's really special, not only because we've been there from the beginning, but we've gotten to see the people involved and the people have been very close-knit but yet so open and inclusive, we're seeing a lot of input, and then at the same time, so that's always great, open source, inclusive, and fun, but then the companies are coming in in waves, a massive amount of waves of commercial vendors jumping in, and I think this foundation's done a great job of balancing being a good upstream and good project but that dynamic is very interesting. It's probably the fastest open source kind of commercial, yet good vibes, commercial open source, how does that change or affect you guys as you pick and look at the data, 'cause you get surveys, you see what people want, vendors, users, industry participants, developers, what is the data telling you? What's all this data coming from the different KubeCons and how is that changing the selections and what's the trend I guess, what's the trends coming from the community? >> So from selecting talks, because we want to focus on make Kubernetes, make KubeCon, still community-focused conference so when we pick talks, we pick the ones that not just doing vendor pitch or sales pitch but we pick the ones that we think the community is going to benefit from and especially when they are talking about a solution that others could adopt or is it open source or not, then that affect our choice and then we also see a lot of people start customizing Kubernetes for their own needs and a lot of people are starting using Kubernetes API to managing resources outside of Kubernetes and that's a very interesting trend because with that, you can have Kubernetes to manage everything your infrastructure, lot of things running on Kubernetes. >> So what are some of those examples that are outside Kubernetes? So for example, you can use, so Kubernetes has a concept called custom resource that you can register a custom API in Kubernetes and so you can use that, you can register an API and you can implement a controller to manage anything you want, for example, different cloud resources or VMs, I even saw people use Kubernetes API to manage robots. >> Wow, so this is real world, so you mentioned you were working workload API at Google, the big trend that we're seeing on theCUBE and that crosses all the different events, not just cloud-native, is workload management, managing workloads and workloads are changing and it's very dynamic, it's not a static world anymore. 
So managing workloads to the infrastructure is where we see this nice activity happening from containers, Kubernetes, to service meshes, so there's a lot of activity going on there and some of the stuff is straightforward, I won't say straightforward, but containers and Kubernetes is easy to work with but services meshes are difficult. Istio, for instance, Kubeflow or Hot Projects, there's a real focus of stateless has been there, but stateful is hard, is there going to be talks about stateful applications, are you guys looking at some of the Istio, is service mesh going to be a focus this year? >> Yeah, we still see a lot of submissions from service meshes and so you can use service mesh to manage your service easily and secure them easily and we also see a lot of talks for stateful workloads, for example, how you customize something that manage your stateful workloads or what that best practice is and there is a pattern that's popular in the community which is called operator and the concept is that you write a controller, use the custom API that I just mentioned, and you just embed the knowledge of a human operator into that controller and let the controller do the automation for you. >> So it's putting intelligence, like an operator, into the software and letting that ride? >> Yeah and it will do all the work for you and you only need to write it once. >> And automation's a big trend, so if you could stack or rank the top three trends that we expect to see at KubeCon this year, what would they be? >> In the top three, I would say customize and multi-cloud and then service mesh or serverless they're both pretty popular, yeah. >> Is storageless coming? So if we have serverless, will there be storageless (laughs) I made that up, I tweeted that the other day, if there's servers, there's no servers, there's going to be no storage. I mean, service and storage go together so again, this is where the fun action is, the infrastructure is being programmable. And I think one of the things I like about what KubeCon has done is they've really enabled developers to be more efficient with DevOps, the DevOps trend, which is the cloud-native trend. The question I want to ask you is specifically kind of a Google question because I think this is important and Google cloud, I really love the trend of how application developers are being modernized, that's so cool, I love that, but the SRE concept that Google pioneered is becoming more of a trend as more of an operator role, not in the sense of what we just talked about but like an SRE, businesses are starting to look at that kind of scale out infrastructure where there's a need for kind of like an SRE, does that come up at KubeCon at all or is that too operator-oriented? Is that on the agenda? Does that come up in the KubeCon selection criteria, the notion of having operators or SRE-like roles? >> So we have a track called operations, so some of the operator, human operator, talks are submitting through that, to that topic, but we didn't see... >> Might be too early. >> Yeah, too early. 
It might be a little bit too early, that's what I think, alright and then since I brought up some of the tracks, we're always interested in knowing about startups 'cause there seems to be a lot of startup activity, doing a lot of AI stuff or applications, AI ops, and some new things going on, is there a startup activity involved that you're seeing, is there features of startups at all, do you guys look at that, is there going to be an emphasis of emerging companies and startups involved or is it mostly coming from the community? >> We definitely see a lot of startups submitting talks and also you just mentioned machine learning, we also see several talks on and about machine learning and AI submitting to both the Shanghai event and Seattle event. So projects like Kubeflow and Spark, that's being used a lot and we still, we see a lot of submissions from those. >> So those are the popular ones? >> Yeah, the popular ones and those are from Shanghai, I saw some AI submissions and I'm excited about those. >> Okay, so now back to the popular question, everyone wants to know where the popular parties are, what's the popular projects if you had to, in terms of contributors, activity, do you guys have like a rating like here's the most popular project? Do you guys look at just number of contributors? How do you rank the popularity of the projects? >> Or how would you rank them? >> We didn't actually look at the popularity of the projects because are you talking about CNCF projects or any projects? >> CNCF and KubeCon, let me ask the question differently... If I go to Shanghai or Seattle, what's going on? What do I engage, what should I pay attention to, what can I expect if I'm a user and I come to the event, what's going to happen at Shanghai and Seattle? What's the format? >> We separate all the talks in tracks so you can look up the track that you are interested in, for example, do you want to know all the case studies, then you can go to case studies and if you're interested in observability then you go to the observability track and there'll be a lot of different projects, they are presenting their own solutions and you can go and figure out which one fits you the best. >> And so multi-cloud's high, I'll ask you a multi-cloud question 'cause one of the things that we're tracking is what is multi-cloud and how is that different from hybrid? How would you describe that 'cause there are people that talk about hybrid cloud all the time but multi-cloud seems to have different definitions. Is there a different definition to hybrid cloud versus multi-cloud? >> So I think hybrid includes things that's not cloud, for example, your on-prem versus you have your on-premise solutions and you also use some cloud solutions and that's hybrid... >> And multi-cloud is multiple clouds so workloads on different clouds or sharing workloads across clouds? >> Workloads on different clouds. >> Yeah, so Office 365, that's Azure, a TensorFlow on Google and something, okay. I always want to know, comparing running workloads between clouds, that would be the ideal scenario. Here's the tough question for you, put you on the spot here, what is your favorite open source project in the CNCF and favorite track at KubeCon? >> My favorite project is of course Kubernetes and my favorite track would be case studies because I care a lot about user experience and I love to hear user stories.
So for Seattle we picked a lot of user stories that we think are interesting and we also pick some keynote speakers that are going to talk about their large-scale usage of Kubernetes and that's very exciting for me, I can't wait to hear their story. >> Yeah, we love the end user stories too, 'cause it really puts the real world scenario around it. Okay, final question for you Janet, I wanted to ask you about diversity at KubeCon, what's going on and what can you share around that program? >> Yeah, we care about diversity a lot. We look at that when we select talks to accept and also we have a diversity scholarship that allows people to apply for a scholarship, we're going to cover the ticket to conference and also the travel to conference and also we have a diversity luncheon on December 12 and that will be sponsored by both Google and Heptio. >> So December 12 in Seattle? And that was a great, by the way, you did a great job last year, the program with scholarship got I think a standing ovation, so that's awesome. Thanks for doing that. >> Thank you, thanks. For the folks watching that might not be really deep on Kubernetes, in your opinion, why is Kubernetes so important and why should IT leaders, developers, and people in mainstream tech who are now new to Kubernetes and seeing the trends, why should they pay attention to Kubernetes, what's the relevance, what's the impact, why should they pay attention to Kubernetes? >> Because Kubernetes allows you to easily adopt cloud, because it abstracts the infrastructure level away and allows you to easily run your infrastructure anywhere and most importantly, because a lot of people on different cloud and different stack of development, for example, CI/CD, service mesh, they put a lot of energy to integrate with Kubernetes so if you have Kubernetes you have everything. >> You have Kubernetes, you have everything. We love the work you're doing, thanks for co-chairing the KubeCon event, we love going there, CNCF's been very successful, been a great relationship, we love working with them, obviously it's a content-rich environment and I think everyone who is interested in cloud-native should go to the CNCF, there's a lot of sponsors, and more and more logos come on every day, so you guys are doing a good job. Thanks for doing that, appreciate it. Maybe we'll do two cubes this year. Janet Kuo, who is a software engineer at Google is joining me here at theCUBE. She's also the co-chair for KubeCon, the event put on by the CNCF and the industry around cloud-native and all things Kubernetes, multi-cloud, and really applications' workloads for a cloud environment. I'm John Furrier here in theCUBE studios in Palo Alto, thanks for watching. (spirited orchestral music)
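Kuo's description of custom resources and the operator pattern can be sketched in a few lines: register a custom API, then run a controller that watches those objects and reconciles the real world toward their spec. The sketch below uses the Kubernetes Python client; the group, version, and plural ("example.com", "v1", "databases") are hypothetical names for illustration, and a real operator would also need the matching CustomResourceDefinition installed plus far more careful error handling.

    # Sketch of the operator pattern: watch a (hypothetical) "Database" custom
    # resource and reconcile toward its declared spec. This is where the "human
    # operator" knowledge lives: creating backing workloads, backups, upgrades.
    from kubernetes import client, config, watch

    config.load_kube_config()
    api = client.CustomObjectsApi()

    GROUP, VERSION, NAMESPACE, PLURAL = "example.com", "v1", "default", "databases"

    def reconcile(obj):
        name = obj["metadata"]["name"]
        replicas = obj.get("spec", {}).get("replicas", 1)
        # Placeholder for the real work: ensure the backing resources exist.
        print(f"ensuring database {name} runs with {replicas} replica(s)")

    w = watch.Watch()
    for event in w.stream(api.list_namespaced_custom_object,
                          GROUP, VERSION, NAMESPACE, PLURAL):
        if event["type"] in ("ADDED", "MODIFIED"):
            reconcile(event["object"])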

Published Date : Oct 18 2018


DD, Cisco + Han Yang, Cisco | theCUBE NYC 2018


 

>> Live from New York, It's the CUBE! Covering theCUBE, New York City 2018. Brought to you by SiliconANGLE Media and its Ecosystem partners. >> Welcome back to the live CUBE coverage here in New York City for CUBE NYC, #CubeNYC. This coverage of all things data, all things cloud, all things machine learning here in the big data realm. I'm John Furrier and Dave Vellante. We've got two great guests from Cisco. We got DD who is the Vice President of Data Center Marketing at Cisco, and Han Yang who is the Senior Product Manager at Cisco. Guys, welcome to the Cube. Thanks for coming on again. >> Good to see ya. >> Thanks for having us. >> So obviously one of the things that has come up this year at the Big Data Show, used to be called Hadoop World, Strata Data, now it's called, the latest name. And obviously CUBE NYC, we changed from Big Data NYC to CUBE NYC, because there's a lot more going on. I heard hallway conversations around blockchain, cryptocurrency, Kubernetes has been said on theCUBE already at least a dozen times here today, multicloud. So you're seeing the analytical world try to be, in a way, brought into the dynamics around IT infrastructure operations, both cloud and on premises. So interesting dynamics this year, almost a dev ops kind of culture to analytics. This is a new kind of sign from this community. Your thoughts? >> Absolutely, I think data and analytics is one of those things that's pervasive. Every industry, it doesn't matter. Even at Cisco, I know we're going to talk a little more about the new AI and ML workload, but for the last few years, we've been using AI and ML techniques to improve networking, to improve security, to improve collaboration. So it's everywhere. >> You mean internally, in your own IT? >> Internally, yeah. Not just in IT, in the way we're designing our network equipment. We're storing data that's flowing through the data center, flowing in and out of clouds, and using that data to make better predictions for better networking application performance, security, what have you. >> The first topic I want to talk to you guys about is around the data center. Obviously, you do data center marketing, that's where all the action is. The cloud, obviously, has been all the buzz, people going to the cloud, but Andy Jassy's announcement at VMworld really is a validation that we're seeing, for the first time, hybrid multicloud validated. Amazon announced RDS on VMware on-premises. >> That's right. This is the first time Amazon's ever done anything of this magnitude on-premises. So this is a signal from the customers voting with their wallet that on-premises is a dynamic. The data center is where the data is, that's where the main footprint of IT is. This is important. What's the impact of that dynamic, of data center, where the data is with the option of a cloud. How does that impact data, machine learning, and the things that you guys see as relevant? >> I'll start and Han, feel free to chime in here. So I think those boundaries between this is a data center, and this a cloud, and this is campus, and this is the edge, I think those boundaries are going away. Like you said, data center is where the data is. And it's the ability of our customers to be able to capture that data, process it, curate it, and use it for insight to take decision locally. A drone is a data center that flies, and boat is a data center that floats, right? >> And a cloud is a data center that no one sees. >> That's right. So those boundaries are going away. 
We at Cisco see this as a continuum. It's the edge cloud continuum. The edge is exploding, right? There's just more and more devices, and those devices are cranking out more data than ever before. Like I said, it's the ability of our customers to harness the data to make more meaningful decisions. So Cisco's take on this is the new architectural approach. It starts with the network, because the network is the one piece that connects everything- every device, every edge, every individual, every cloud. There's a lot of data within the network which we're using to make better decisions. >> I've been pretty close with Cisco over the years, since '95 timeframe. I've had hundreds of meetings, some technical, some kind of business. But I've heard that term edge the network many times over the years. This is not a new concept at Cisco. Edge of the network actually means something in Cisco parlance. The edge of the network >> Yeah. >> that the packets are moving around. So again, this is not a new idea at Cisco. It's just materialized itself in a new way. >> It's not, but what's happening is the edge is just now generating so much data, and if you can use that data, convert it into insight and make decisions, that's the exciting thing. And that's why this whole thing about machine learning and artificial intelligence, it's the data that's being generated by these cameras, these sensors. So that's what is really, really interesting. >> Go ahead, please. >> One of our own studies pointed out that by 2021, there will be 847 zettabytes of information out there, but only 1.3 zettabytes will actually ever make it back to the data center. That just means an opportunity for analytics at the edge to make sense of that information before it ever makes it home. >> What were those numbers again? >> I think it was like 847 zettabytes of information. >> And how much makes it back? >> About 1.3. >> Yeah, there you go. So- >> So a huge compression- >> That confirms your research, Dave. >> We've been saying for a while now that most of the data is going to stay at the edge. There's no reason to move it back. The economics don't support it, the latency doesn't make sense. >> The network cost alone is going to kill you. >> That's right. >> I think you really want to collect it, you want to clean it, and you want to correlate it before ever sending it back. Otherwise, sending that information, of useless information, that status is wonderful. Well that's not very valuable. And 99.9 percent, "things are going well." >> Temperature hasn't changed. (laughs) >> If it really goes wrong, that's when you want to alert or send more information. How did it go bad? Why did it go bad? Those are the more insightful things that you want to send back. >> This is not just for IoT. I mean, cat pictures moving between campuses cost money too, so why not just keep them local, right? But the basic concepts of networking. This is what I want to get in my point, too. You guys have some new announcements around UCS and some of the hardware and the gear and the software. What are some of the new announcements that you're announcing here in New York, and what does it mean for customers? Because they want to know not only speeds and feeds. It's a software-driven world. How does the software relate? How does the gear work? What's the management look like? Where's the control plane? Where's the management plane? Give us all the data. >> I think the biggest issues starts from this. 
Data scientists, their task is to explore different data sources, find out the value. But at the same time, IT is somewhat lagging behind. Because as the data scientists go from data source A to data source B, it could be 3 petabytes of difference. IT is like, 3 petabytes? That's only from Monday through Wednesday? That's a huge infrastructure requirement change. So Cisco's way to help the customer is to make sure that we're able to come out with blueprints. Blueprints enabling the IT team to scale, so that the data scientists can work beyond their own laptop. As they work through the petabytes of data that's come in from all these different sources, they're able to collaborate well together and make sense of that information. Only by scaling with IT helping the data scientists to work the scale, that's the only way they can succeed. So that's why we announced a new server. It's called a C480 ML. Happens to have 8 GPUs from Nvidia inside helping customers that want to do that deep learning kind of capabilities. >> What are some of the use cases on these as products? It's got some new data capabilities. What are some of the impacts? >> Some of the things that Han just mentioned. For me, I think the biggest differentiation in our solution is things that we put around the box. So the management layer, right? I mean, this is not going to be one server and one data center. It's going to be multiple of them. You're never going to have one data center. You're going to have multiple data centers. And we've got a really cool management tool called Intersight, and this is supported in Intersight, day one. And Intersight also uses machine learning techniques to look at data from multiple data centers. And that's really where the innovation is. Honestly, I think every vendor is bending sheet metal around the latest chipset, and we've done the same. But the real differentiation is how we manage it, how we use the data for more meaningful insight. I think that's where some of our magic is. >> Can you add some color to that, in terms of infrastructure for AI and ML, how is it different than traditional infrastructures? So is the management different? The sheet metal is not different, you're saying. But what are some of those nuances that we should understand? >> I think especially for deep learning, multiple scientists around the world have pointed out that if you're able to use GPUs, they're able to run the deep learning frameworks faster by roughly two orders of magnitude. So that's part of the reason why, from an infrastructure perspective, we want to bring in the GPUs. But for the IT teams, we didn't want them to just add yet another infrastructure silo just to support AI or ML. Therefore, we wanted to make sure it fits in with a UCS-managed unified architecture, enabling the IT team to scale but without adding more infrastructures and silos just for that new workload. But having that unified architecture, it helps the IT to be more efficient and, at the same time, is better support of the data scientists. >> The other thing I would add is, again, the things around the box. Look, this industry is still pretty nascent. There is lots of start-ups, there is lots of different solutions, and when we build a server like this, we don't just build a server and toss it over the fence to the customer and say "figure it out." No, we've done validated design guides.
And so it's all of those integrations, those partnerships, all the way through our systems integrators, to really understand a customer's AI and ML environment and can fine tune it for the environment. >> So is that really where a lot of the innovation comes from? Doing that hard work to say, "yes, it's going to be a solution that's going to work in this environment. Here's what you have to do to ensure best practice," etc.? Is that right? >> So I think some of our blueprints or validated designs is basically enabling the IT team to scale. Scale their stores, scale their CPU, scale their GPU, and scale their network. But do it in a way so that we work with partners like Hortonworks or Cloudera. So that they're able to take advantage of the data lake. And adding in the GPU so they're able to do the deep learning with Tensorflow, with Pytorch, or whatever curated deep learning framework the data scientists need to be able to get value out of those multiple data sources. These are the kind of solutions that we're putting together, making sure our customers are able to get to that business outcome sooner and faster, not just a-- >> Right, so there's innovation at all altitudes. There's the hardware, there's the integrations, there's the management. So it's innovation. >> So not to go too much into the weeds, but I'm curious. As you introduce these alternate processing units, what is the relationship between traditional CPUs and these GPUs? Are you managing them differently, kind of communicating somehow, or are they sort of fenced off architecturally. I wonder if you could describe that. >> We actually want it to be integrated, because by having it separated and fenced off, well that's an IT infrastructure silo. You're not going to have the same security policy or the storage mechanisms. We want it to be unified so it's easier on IT teams to support the data scientists. So therefore, the latest software is able to manage both CPUs and GPUs, as well as having a new file system. Those are the solutions that we're putting forth, so that ARC-IT folks can scale, our data scientists can succeed. >> So IT's managing a logical block. >> That's right. And even for things like inventory management, or going back and adding patches in the event of some security event, it's so much better to have one integrated system rather than silos of management, which we see in the industry. >> So the hard news is basically UCS for AI and ML workloads? >> That's right. This is our first server custom built ground up to support these deep learning, machine learning workloads. We partnered with Nvidia, with Google. We announced earlier this week, and the phone is ringing constantly. >> I don't want to say godbot. I just said it. (laughs) This is basically the power tool for deep learning. >> Absolutely. >> That's how you guys see it. Well, great. Thanks for coming out. Appreciate it, good to see you guys at Cisco. Again, deep learning dedicated technology around the box, not just the box itself. Ecosystem, Nvidia, good call. Those guys really get the hot GPUs out there. Saw those guys last night, great success they're having. They're a key partner with you guys. >> Absolutely. >> Who else is partnering, real quick before we end the segment? >> We've been partnering with software sci, we partner with folks like Anaconda, with their Anaconda Enterprise, which data scientists love to use as their Python data science framework. 
We're working with Google, with their Kubeflow, which is an open source project integrating TensorFlow on top of Kubernetes. And of course we've been working with folks like Cloudera as well as Hortonworks to access the data lake from a big data perspective. >> Yeah, I know you guys didn't get a lot of credit. Google Cloud, we were certainly amplifying it. You guys were co-developing the Google Cloud servers with Google. I know they were announcing it, and you guys had Chuck on stage there with Diane Greene, so it was pretty positive. Good integration with Google can make a >> Absolutely. >> Thanks for coming on theCUBE, thanks, we appreciate the commentary. Cisco here on theCUBE. We're in New York City for theCUBE NYC. This is where the world of data is converging in with IT infrastructure, developers, operators, all running analytics for future business. We'll be back with more coverage, after this short break. (upbeat digital music)
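One way to picture the "no separate silo for GPUs" point Yang makes above: once accelerators are exposed through the same cluster API as CPU and memory, inventorying them is the same call. A small sketch with the Kubernetes Python client follows, assuming the NVIDIA device plugin is publishing nvidia.com/gpu on the nodes; it is illustrative only, not a Cisco or Intersight API.

    # Sketch: list CPU, memory, and GPU capacity per node from one API,
    # illustrating unified management of accelerators with the rest of the fleet.
    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    for node in core.list_node().items:
        cap = node.status.capacity or {}
        print(node.metadata.name,
              "cpu:", cap.get("cpu"),
              "memory:", cap.get("memory"),
              "gpu:", cap.get("nvidia.com/gpu", "0"))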

Published Date : Sep 12 2018


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave Vellante | PERSON | 0.99+
Nvidia | ORGANIZATION | 0.99+
Cisco | ORGANIZATION | 0.99+
Han Yang | PERSON | 0.99+
Google | ORGANIZATION | 0.99+
New York | LOCATION | 0.99+
Diane Greene | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
Dave | PERSON | 0.99+
Hortonworks | ORGANIZATION | 0.99+
2021 | DATE | 0.99+
New York City | LOCATION | 0.99+
Andy Jassy | PERSON | 0.99+
8 GPUs | QUANTITY | 0.99+
847 zettabytes | QUANTITY | 0.99+
John Furrier | PERSON | 0.99+
99.9 percent | QUANTITY | 0.99+
Monday | DATE | 0.99+
SiliconANGLE Media | ORGANIZATION | 0.99+
3 petabytes | QUANTITY | 0.99+
Anaconda | ORGANIZATION | 0.99+
Wednesday | DATE | 0.99+
DD | PERSON | 0.99+
first time | QUANTITY | 0.99+
one server | QUANTITY | 0.99+
Cloudera | ORGANIZATION | 0.99+
Python | TITLE | 0.99+
first topic | QUANTITY | 0.99+
one piece | QUANTITY | 0.99+
VMworld | ORGANIZATION | 0.99+
'95 | DATE | 0.98+
1.3 zettabytes | QUANTITY | 0.98+
NYC | LOCATION | 0.98+
both | QUANTITY | 0.98+
one | QUANTITY | 0.98+
this year | DATE | 0.98+
Big Data Show | EVENT | 0.98+
Caldera | ORGANIZATION | 0.98+
two waters | QUANTITY | 0.97+
today | DATE | 0.97+
Chuck | PERSON | 0.97+
One | QUANTITY | 0.97+
Big Data | ORGANIZATION | 0.97+
earlier this week | DATE | 0.97+
Intersight | ORGANIZATION | 0.97+
hundreds of meetings | QUANTITY | 0.97+
CUBE | ORGANIZATION | 0.97+
first server | QUANTITY | 0.97+
last night | DATE | 0.95+
one data center | QUANTITY | 0.94+
UCS | ORGANIZATION | 0.92+
petabytes | QUANTITY | 0.92+
two great guests | QUANTITY | 0.9+
Tensorflow | TITLE | 0.86+
CUBE NYC | ORGANIZATION | 0.86+
Han | PERSON | 0.85+
#CubeNYC | LOCATION | 0.83+
Strata Data | ORGANIZATION | 0.83+
Kubeflow | TITLE | 0.82+
Hadoop World | ORGANIZATION | 0.81+
2018 | DATE | 0.8+

Stephan Fabel, Canonical | OpenStack Summit 2018


 

(upbeat music) >> Announcer: Live from Vancouver, Canada. It's The Cube covering Openstack Summit, North America, 2018. Brought to you by Red Hat, The Open Stack Foundation, and its ecosystem partners. >> Welcome back to The Cube's coverage of Openstack Summit 2018 in Vancouver. I'm Stu Miniman with cohost of the week, John Troyer. Happy to welcome back to the program Stephan Fabel, who is the Director of Ubuntu product and development at Canonical. Great to see you. >> Yeah, great to be here, thank you for having me. Alright, so, boy, there's so much going on at this show. We've been talking about doing more things and in more places, is the theme that the Open Stack Foundation put into place, and we had a great conversation with Mark Shuttleworth, and going to dig in a little bit deeper in some of the areas with you. >> Stephan: Okay, absolutely. >> So we have the Cube, and we're going to go into all of the Kubernetes, Kubeflow, and all those other things that we'll mispronounce how they go. >> Stephan: Yes, yes, absolutely. >> What's your impression of the show first of all? >> Well I think that it's really, you know, there's a consolidation going on, right? I mean, we really have the people who are serious about open infrastructure here, serious about OpenStack. They're serious about Kubernetes. They want to implement, and they want to implement at a speed that fits the agility of their business. They want to really move quick with the upstream release. I think the time for enterprise hardening delays and inertia there is over. I think people are really looking at the core of OpenStack, that's mature, it's stable, it's time for us to kind of move, get going, get success early, get it soon, then grow. I think most of the enterprise, most of the customers we talk to adopt that notion. >> One of the things that sometimes helps is help us lay out the stack a little bit here because we actually commented that some of the base infrastructure pieces we're not talking as much about because they're kind of mature, but OpenStack very much at the infrastructure level, your compute, storage, and network need to understand. But then when we start doing things like Kubernetes as well, I can either do or, or on top of, and things like that, so give us your view as to what'd you put, what Canonical's seeing, and what customers-- how you lay out that stack? >> I think you're right, I think there's a little bit of path-finding here that needs to be done on the Kubernetes side, but ultimately, I think it's going to really converge around OpenStack being operator-centric, and operator-friendly, working and operating the infrastructure, scaling that out in a meaningful manner, providing multitenancy to all the different departments. Having Kubernetes be developer-centric and really help to on-board and accelerate the workload adoption of the next gen initiatives, right? So, what we see is absolutely a use case for Kubernetes and OpenStack to work perfectly well together, be an extension of each other, possibly also sit next to each other without being too encumbering there. But I think that ultimately having something like Kubernetes' container-based developer APIs that are providing that orchestration layer are the next thing, and they run just perfectly fine on Canonical OpenStack. >> Yeah, there certainly has been a lot of talk about that here at the show. Let's see, let's go a level above that, things we run on Kubernetes, I wanted to talk a little bit about ML and AI and Kubeflow.
It seems like we're, I'd almost say that we're, this is like, if we were a movie, we're in a sequel like AI-5; this time, it's real. I really do see real enterprise applications incorporating these technologies into the workflow for what otherwise might be kind of boring, you know, line of business, can you talk a little bit about where we are in this evolution? >> You mean, John, only since we've been talking about it since the mid-1800s, so yeah. >> I was just about to point that out, I mean, AI's not new, right? We've seen it since about 60 years. It's been around for quite some time. I think that there is an unprecedented amount of sponsorship of new startups in this area, in this space, and there's a reason why this is heating up. I think the reason why ultimately it's there is because we're talking about a scale that's unprecedented, right? We thought the biggest problem we had with devices was going to be the IP addresses running out, and it turns out, that's not true at all, right? At a certain scale, and at a certain distributed nature of your rollout, you're going to have to deal with just such complexity and interaction between the underlying, the under-cloud, the over-cloud, the infrastructure, the developers. How do I roll this out? If I spin up 1000 BMs over here, why am I experiencing dropped calls over there? It's those types of things that need to be self-correlated. They need to be identified, they need to be worked out, so there's a whole operator angle just to be able to cope with that whole scenario. I think there's projects that are out there that are trying to ultimately address that, for example, Acumos (mumbles) Then, there is, of course, the new applications, right? Smart cities to connect to cars, all those car manufacturers who are, right now, faced with the problem: how do I deal with mobile, distributed inference rollout on the edge while still capturing the data continually, train my model, update, then again, distribute out to the edge to get a better experience. How do I catch up to some of the market leaders here that are out there? As the established car manufacturers are going to come and catch up, put more and more miles autonomously on the asphalt, we're going to basically have to deal with a whole lot more of proctization of machine-learning applications that just have to be managed at scale. And so we believe for all certain good company in that belief that having to manage large applications at scale, that containers and Kubernetes is a great way to do that, right? They did that for web apps. They did that for the next generation applications. This is one example where with the right operators in mind, the right CRDs, the right frameworks on top of Kubernetes managed correctly, you are actually in a great position to just go to market with that. >> I wonder if you might have a customer example that might go to walk us through kind of where they are in this discussion, talk to many companies, you know, the whole IOT even pieces were early in this. So what's actually real today, how much is planning, is this years we're talking before some of these really come to fruition? 
>> So yeah, I can't name a customer, but I can say that every single car manufacturer we're talking to is absolutely interested in solving the operational problem of running machine-learning frameworks as a service, making sure those are up and running and up to speed at any given point in time, spin them up in a multitenant fashion, make sure that the GPU enablement is actually done properly at all layers of the virtualization. These are real operational challenges that they're facing today, and they're looking to solve with us. Pick a large car manufacturer you want. >> John: Nice. We're going down to something that I can type on my own keyboard then, and go to GitHub, right? One of the places to go to run TensorFlow, the machine-learning framework, on Kubernetes is Kubeflow, and you talked about that a little bit yesterday on stage, you want to talk about that maybe? >> Oh, absolutely, yes. That's the core of our current strategy right now. We're looking at Kubeflow as one of the key enablers of machine-learning frameworks as a service on top of Kubernetes, and I think they're a great example because they can really show how that as a service can be implemented on top of a virtualization platform, whether that be KVM, pure KVM, on bare metal, on OpenStack, and actually provide machine-learning frameworks such as TensorFlow, PyTorch, Seldon Core. You have all those frameworks being supported, and then basically start mixing and matching. I think ultimately it's so interesting to us because the data scientists are really not the ones that are expected to manage all this, right? Yet they are the core of having to interact with it. In the next generation of the workloads, we're talking to PhDs and data scientists that have no interest whatsoever in understanding how all of this works on the back end, right? They just want to know this is where I'm going to submit my artifact that I'm creating, this is how it works in general. Companies pay them a lot of money to do just that, and to just do the model because that's where, until the right model is found, that is exactly where the value is. >> So Stephan, does Canonical go talk to the data scientists, or is there a class of operators who are facilitating the data scientists? >> Yes, we talk to the data scientists who understand their problems, we talk to the operators to understand their problems, and then we work with partners such as Google to try and find solutions to that. >> Great, what kind of conversations are you having here at the show? I can't imagine there's too many of those, great to hear if there are, but where are they? I think everybody here knows containers, very few know Kubernetes, and how far up the stack of building new stuff are they? >> You'd be surprised, I mean, we put this out there, and so far, I want to say the majority of the customer conversations we've had took an AI turn and said, this is what we're trying to do next year, this is what we're trying to do later in the year, this is what we're currently struggling with. So glad you have an approach because otherwise, we would spend a ton of time thinking about this, a ton of time trying to solve this in our own way that then gets us stuck in some deep end that we don't want to be. So, help us understand this, help us pave the way. >> John: Nice, nice. I don't want to leave without talking also about MicroK8s, that's a Kubernetes snap you can just download. Can we talk a little bit about that? >> Yeah, glad to.
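To make the "submit my artifact and let the platform worry about the rest" idea a few lines above concrete, here is a rough Python sketch of handing a training job to Kubeflow's TensorFlow training operator. The container image, namespace, and replica counts are placeholder assumptions, and the exact TFJob schema should be checked against the Kubeflow version actually deployed; this is not Canonical's tooling.

```python
# Hedged sketch: submit a TFJob custom resource to a cluster running Kubeflow's
# training operator. Image, namespace, and replica counts are placeholders.
from kubernetes import client, config

config.load_kube_config()  # use the kubeconfig the platform team provides

tfjob = {
    "apiVersion": "kubeflow.org/v1",
    "kind": "TFJob",
    "metadata": {"name": "example-training-job", "namespace": "kubeflow"},
    "spec": {
        "tfReplicaSpecs": {
            "Worker": {
                "replicas": 2,
                "restartPolicy": "OnFailure",
                "template": {
                    "spec": {
                        "containers": [{
                            "name": "tensorflow",
                            # Placeholder image holding the data scientist's training code.
                            "image": "registry.example.com/team/train:latest",
                        }]
                    }
                },
            }
        }
    },
}

api = client.CustomObjectsApi()
api.create_namespaced_custom_object(
    group="kubeflow.org", version="v1", namespace="kubeflow",
    plural="tfjobs", body=tfjob,
)
print("Submitted TFJob example-training-job")
```

The data scientist's only contact points are the container image and this manifest; scheduling, GPU placement, and restarts stay the platform's concern.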
This was an idea that we conceived that came out of this notion of alright, well if I do have, talking to a data scientist, if I do have a data scientist, where does he start? >> Stu: Does Kubernetes have a learning curve today? >> It does, yeah, it does. So here's the thing, as a developer, you have, what options do you have right when you get started? You can either go out and get a cluster stood up on one of the public clouds, but what if you're on the plane, right? You don't have a connection, you want to work on your local laptop. Possibly, that laptop also has a GPU, and you're a data scientist and you want to try this out because you know you're going to submit this training job now to a (mumbles) that runs on-prem behind the firewall with a limited training set, right? This is the situation we're talking about. So ultimately, the motivation for creating MicroK8s was we want to make this very, very equivalent. Now you can deploy Kubeflow on top of MicroK8s today, and it'll run just fine. You get your TensorBoard, you have a Jupyter notebook, and you can do your work, and you can do it in a fashion that will then be compatible with your on-prem and public machine-learning framework. So that was our original motivation for why we went down this road, but then we noticed you know what, this is actually a wider need. People are thinking about local Kubernetes in many different ways. There are a couple of solutions out there. They tend to be cumbersome, or more cumbersome than developers would like. So we actually said, you know, maybe we should turn this into a more general purpose solution. So hence, MicroK8s. It works as a snap on your machine, you kick that off, you have the Kubernetes API, and in under 30 seconds, or a little longer if your download speed plays a factor here, you enable DNS and you're good to go. >> Stephan, I just want to give you the opportunity, is there anything in the Queens Release that your customers have been specifically waiting for or any other product announcements before we wrap? >> Sure, we're very excited about the Queens Release. We think Queens Release is one of the great examples of the maturity of the code base and really the nod towards the operator, and that, I think, was the big challenge back in the olden days of OpenStack, where it took a long time for the operators to be heard, and to establish that conversation. We'd like to say and to see that OpenStack Queens has matured in that respect, and we like things like Octavia. We're very excited about (mumbles) as a service, taking on its own life and being treated as a first-class citizen. I think that it was a great decision of the community to get on that road. We're supporting it as a part of our distribution. >> Alright, well, appreciate the update. Really fascinating to hear about all, you know, everybody's thinking about it and really starting to move on all the ML and AI stuff. Alright, for John Troyer, I'm Stu Miniman. Lots more coverage here from OpenStack Summit 2018 in Vancouver. Thanks for watching The Cube. (upbeat music)
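As a sketch of what that local loop can look like once the snap described above is installed, the same Kubernetes client code a developer would use against a cloud cluster can point at the laptop. The only assumption here is that the MicroK8s kubeconfig has been exported to the default location the client library reads.

```python
# Hedged sketch: confirm a local single-node cluster (such as MicroK8s) is up,
# using the standard Kubernetes Python client against the default kubeconfig.
from kubernetes import client, config

config.load_kube_config()  # assumes the MicroK8s kubeconfig was exported here
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    print("node:", node.metadata.name)

for pod in v1.list_namespaced_pod("kube-system").items:
    print("system pod:", pod.metadata.name, pod.status.phase)
```

Because the API surface is the same, a notebook or training job prototyped this way can later be pointed at an on-prem or public cloud cluster without code changes, which is the portability argument being made in the interview.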

Published Date : May 22 2018


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Stephan | PERSON | 0.99+
Mark Shuttleworth | PERSON | 0.99+
John | PERSON | 0.99+
John Troyer | PERSON | 0.99+
Stephan Fabel | PERSON | 0.99+
Red Hat | ORGANIZATION | 0.99+
Google | ORGANIZATION | 0.99+
Vancouver | LOCATION | 0.99+
Open Stack Foundation | ORGANIZATION | 0.99+
Kubernetes | TITLE | 0.99+
Canonical | ORGANIZATION | 0.99+
Stu Miniman | PERSON | 0.99+
Vancouver, Canada | LOCATION | 0.99+
OpenStack | TITLE | 0.99+
next year | DATE | 0.99+
mid-1800s | DATE | 0.99+
yesterday | DATE | 0.99+
Tru Miniman | PERSON | 0.99+
under 30 seconds | QUANTITY | 0.99+
OpenStack Summit 2018 | EVENT | 0.99+
GitHub | ORGANIZATION | 0.98+
Queens | ORGANIZATION | 0.98+
Openstack Summit 2018 | EVENT | 0.98+
one example | QUANTITY | 0.98+
One | QUANTITY | 0.98+
OpenStack Summit 2018 | EVENT | 0.98+
Kubeflow | TITLE | 0.97+
Openstack Summit | EVENT | 0.97+
1000 BMs | QUANTITY | 0.97+
TensorFlow | TITLE | 0.96+
about 60 years | QUANTITY | 0.96+
one | QUANTITY | 0.96+
Jupyter | ORGANIZATION | 0.94+
The Cube | ORGANIZATION | 0.94+
Stu | PERSON | 0.94+
today | DATE | 0.94+
asphalt | TITLE | 0.93+
North America | LOCATION | 0.92+
Ubuntu | ORGANIZATION | 0.88+
The Open Stack Foundation | ORGANIZATION | 0.87+
Kubernetes | ORGANIZATION | 0.86+
Cube | COMMERCIAL_ITEM | 0.77+
Queens Release | TITLE | 0.77+
single car | QUANTITY | 0.76+
Seldon Core | TITLE | 0.75+
Pipe Torch | TITLE | 0.72+
Kubeflow | ORGANIZATION | 0.7+
The Cube | TITLE | 0.69+
Octavia | TITLE | 0.67+
first | QUANTITY | 0.57+
couple | QUANTITY | 0.5+
Microcades | ORGANIZATION | 0.5+
Kubenetes | ORGANIZATION | 0.49+
2018 | DATE | 0.48+
TensorBoard | TITLE | 0.48+
Kubernetes | COMMERCIAL_ITEM | 0.42+
Release | TITLE | 0.4+

Jonathan Donaldson, Google Cloud | Red Hat Summit 2018


 

(upbeat electronic music) >> Narrator: Live from San Francisco, it's The Cube, covering Red Hat Summit 2018. Brought to you by Red Hat. >> Hey, welcome back, everyone. We are here live, The Cube in San Francisco, Moscone West for the Red Hat Summit 2018 exclusive coverage. I'm John Furrier, the cohost of The Cube. I'm here with my cohost, John Troyer, who is the co-founder of Tech Reckoning, an advisory and community development firm. Our next guest is Jonathan Donaldson, Technical Director, Office of the CTO, Google Cloud. Former Cube Alumni. Formerly was Intel, been on before, now at Google Cloud for almost two years. Welcome back, good to see you. >> Good to see you too, it's great to be back. >> So, had a great time last week with the Google Cloud folks at KubeCon in Denmark. Kubernetes, rocking the world. Really, when I hear the word de facto standard and abstraction layers, I start to get, my bells go off, let me look at that. Some interesting stuff. You guys have been part of that from the beginning, with the CNCF, Google, Intel, among others. Really created a movement, congratulations. >> Yeah, thank you. It really comes down to the fact that we've been running containers for almost a dozen years. Four billion a week, we launch and collapse. And we know that at some point, as Docker and containers really started to take over the new way of developing things, that everyone is going to run into that scalability wall that we had run into years and years and years ago. And so Craig and the team at Google, again, I wasn't at Google at this time, but they had a really, let's take what we know from internally here and let's take those patterns and let's put them out there for the world to use, and that became Kubernetes. And so I think that's really the massive growth there, is that people are like, "Wow, you've solved a problem, "but not from a science project. "It's actually from something "that's been running for a decade." >> Internally, that's called bore. That's tools that Google used, that their SRE cyber lab engineers used to massively provision manage. And they're all software engineers, so it's not like they're operators. They're all Google engineers. But I want to take a minute, if you can, to explain. 'Cause you're new to Google Cloud. You're in the industry, you've been around, you helped form the CNCF, which is the Cloud Native Foundation. You know cloud, you know tech. Google's changed a lot, and Google Cloud specifically has a narrative of, they're one big cloud and they have an application called Google stuff and enterprises are different. You've been there now for almost a year or more. >> Jonathan: Little over a year, yeah. >> What's Google Cloud like right now? Break the myths down around Google Cloud. What's the current status? I know personally, a lot of cloud DNA is coming in from the industry. They've been hiring, making some great progress. Take a minute to explain the Google Cloud. >> Yeah, so it's really interesting. So again, it comes back from where you started from. So Google itself started from a scale consumer SAS type of business. And so that, they understood really well. And we still understand, obviously, uptime and scalability really, really well. And I would say if you backtrack several years ago, as the enterprise really started to look at public clouds and Google Cloud itself started to spin up, that was probably not, they probably didn't understand exactly all of the things that an enterprise would need. 
Really, at that point in time, no one cloud understood any of the enterprise specifically. And so what they did is they started hiring in people like myself and others that are in the group that I'm in. They're former CIOs of large enterprise companies or former VPs of engineering, and really our job in the Office of the CTO for Google Cloud is to help with the product teams, to help them build the products that enterprises need to be able to use the public cloud. And then also work with some of those top enterprise customers to help them adopt those technologies. And so I think now that if you look at Google Cloud, they understand enterprise really, really well, certainly from the product and the technology perspective. And I think it's just going to get better. >> I interviewed Jennifer Lynn, I had a one-on-one with her. I didn't publish it, it was more of a briefing. She runs Product Management, all on security side. >> Jonathan: Yeah, she's fantastic. >> So she's checking the boxes. So the table stakes are set for Google. I know you got to do some basic things to catch up to get in the cloud. But also you have partnerships. Google Next is coming up, The Cube will be there. Red Hat's a partner. Talk about that relationship with Red Hat and partners. So you're very partner-centric with Google Cloud. >> Jonathan: We are. >> And that's important in the enterprise, but so what-- >> Well, there tends to be two main ares that we focus on, from what we consider the right way to do cloud. One of them is open source. So having, which again, aligns perfectly with Red Hat, is putting the technologies that we want customers to use and that we think customers should use in open source. Kubernetes is an example, there's Istio and others that we've put out that are examples of those. A lot of the open source projects that we all take for granted today were started from white papers that we had put out at one point in time, explaining how we did those things. Red Hat, from a partner perspective, I think that that follows along. We think that the way that customers are going to consume these technologies, certainly enterprise customers are, through those partners that they know and trust. And so having a good, flourishing ecosystem of partners that surround Google Cloud is absolutely key to what we do. >> And they love multicloud too. >> They love multicloud. >> Can't go wrong with it. >> And we do too. The idea is that we want customers to come to Google Cloud and stay there because they want to stay there, because they like us for who we are and for what we offer them, not because they're locked into a specific service or technology. And things like Kubernetes, things like containers, being open sourced allows them to take their tool chains all the way from their laptop to their own cloud inside their own data center to any cloud provider they want. And we think hopefully they'll naturally gravitate towards us over time. >> One of the things I like about the cloud is that there's a flywheel, if you will, of expertise. Like I look at Amazon, for instance. They're getting a lot of metadata of the kinds of workloads that are on their cloud, so they can learn from that and turn that into an advantage for them, or not, or for their customers, and how they could do that. That's their business decision. Google has a lot of flywheel action going on. A lot of Android devices connected in the Google system. You have a lot of services that you can bring to bear in the cloud. 
How are you guys looking at, say, from a security standpoint alone, that would be a very valuable service to have. I can tap into all the security goodness of Google around what spear phishing is out there, things of that nature. So are you guys thinking like that, in terms of services for customers? How does that play out? >> So where we, we're very consistent on what we consider is, privacy is number one for our customers, whether they're consumer customers or whether they're enterprise customers. Where we would use data, you had mentioned a lot of things, but where we would use some data across customer bases are typically for security things, so where we would see some sort of security impact or an attack or something like that that started to impact many customers. And we would then aggregate that information. It's not really customer information. It's just like you said, metadata, themes, or trends. >> John Furrier: You're not monetizing it. >> Yeah, we're not monetizing it, but we're actually using it to protect customers. But when a customer actually uses Google Cloud, that instance is their hermetically sealed environment. In fact, I think we just came out recently with even the transparency aspects of it, where it's almost like the two key type of access, for if our engineers have to help the customer with a troubleshooting ticket, that ticket actually has to be opened. That kind of unlocks one door. The customer has to say, "Yes," that unlocks the other door. And then they can go in there and help the customer do things to solve whatever the problem is. And each one of those is transparently and permanently logged. And then the customer can, at any point in time, go in and see those things. So we are taking customer privacy from an enterprise perspective-- >> And you guys are also a whole building from Google proper, like it's a completely different campus. So that's important to note. >> It is. And a lot of it just chains on from Google proper itself. If you understood just how crazy and fanatical they are about keeping things inside and secret and proprietary. Not proprietary, but not allowing that customer data out, even on the consumer side, it would give a whole-- >> Well, you got to amplify that, I understand. But what I also see, a good side of that, which is there's a lot of resources you're bringing to bear or learnings. >> Yeah, absolutely. >> The SRE concept, for instance, is to me, really powerful, because Google had to build that out themselves. This is now a paradigm, we're seeing a cloud scale here, with the Cloud Native market bringing in all-new capabilities at scale. Horizontally scalable, fully synchronous, microservices architecture. This future is a complete game-changer on functionality at the different scale points. So there's no longer the operator's room, provisioning storage here. >> And this is what we've been doing for years and years and years. That's how all of Google itself, that's how search and ads and Gmail and everything runs, in containers all orchestrated by Borg, which is our version of Kubernetes. And so we're really just bringing those leanings into the Google Cloud, or learnings into Google Cloud and to our customers. >> Jonathan, machine learning and AI have been the big topic this week on OpenShift. Obviously that's a big strength of Google Cloud as well. Can you drill down on that story, and talk about what Google Cloud is bringing on, and machine learning on OpenShift in general? Give us a little picture of what's running. 
>> Yeah, so I think they showed some of the service broker stuff. And I think, did they show some of the Kubeflow stuff, which is taking some machine learning and Kubernetes underneath OpenShift. I think those are very, very interesting for people that want to start getting into using AutoML, which is kind of roll-your-own machine learning, or even the voice or vision APIs to enhance their products. And I think that those are going to be keys. Easing the adoption of those, making them really, really easy to consume, is what's going to drive the significant ramp on using those types of technologies. >> One of the key touchpoints here has been the fact that this stuff is real-world and production-ready. The fact that the enterprise architecture now rolling out apps within days or weeks. One of those things that's now real is ML. And even in the opening keynote, they talked about using a little bit of it to optimize the scheduling and what sessions were in which rooms. As you talk to enterprises, it does seem like this stuff is being baked into real enterprise apps today. Can you talk a little bit about that? >> Sure, so I certainly can't give any specific examples, because what I think what you're saying is that a lot of enterprises or a lot of companies are looking at that like, "Oh, this is our new secret sauce." It always used to be like they had some interesting feature before, that a competitor would have to keep up with or catch up with. But I think they're looking at machine learning as a way to enhance that customer experience, so that it's a much more intimate experience. It feels much more tailored to whomever is using their product. And I think that you're seeing a lot of those types of things that people are starting to bake into their products. We've, again, this is one of these things where we've been using machine learning for almost 10 years inside Google. Things like for Gmail, even in the early days, like spam filtering, something just mundane like that. Or we even used it, turned it on in our data centers, 'cause it does a really good job of lowering the PUE, which is the power efficiency in data centers. And those are very mundane things. But we have a lot of experience with that. And we're exposing that through these products. And we're starting to see people, customers gravitate to grab onto those. Instead of having to hard code something that is a one to many kind of thing, I may get it right or I may have to tweak it over time, but I'm still kind of generalizing what the use cases are that my customers want to see, once they turn on machine learning inside their applications, it feels much more tailored to the customer's use cases. >> Machine learning as a service seems to be a big hot button that's coming out. How are you guys looking at the technical direction from the cloud within the enterprise? 'Cause you have three classes of enterprise. You have the early adopters, the power, front, cutting-edge. Then you have the fast followers, then you have everybody else. The everybody else and fast followers, they know about Kubernetes, some might not even, "What is Kubernetes?" So you have kind of-- >> Jonathan: "What containers?" >> A level of progress where people are. How are you guys looking at addressing those three areas, because you could blow them away with TensorFlow as a service. "Whoa, wowee, I'm just trying to get my storage LUNs "moving to a cloud operation system." There's different parts of this journey. Is there a technical direction that addresses these? 
What are you guys doing? >> So typically we'll work with those customers to help them chart the path through all those things, and making it easy for them to use and consume. Machine learning is still, unless you are a stats major or you're a math major, a lot of the algorithms and understanding linear algebra and things like that are still very complex topics. But then again, so is networking and BGP and things like OSPF back a few years ago. So technology always evolves, and the thing that you can do is you can just help pull people along the continuum there, by making it easy for them to use and to provide a lot of education. And so we work with customers on all ends of the spectrum. Even if it's just like, "How do I modernize my applications, "or how do I even just put them into the cloud?" We have teams that can help do that or can educate on that. If there are customers that are like, "I really want to go do something special "with maybe refactoring my applications. "I really want to get the Cloud Native experience." We help with that. And those customers that say, "I really want to find out this machine learning thing. "How can I actually make that an impactful portion of my company's portfolio?" We can certainly help with that. And there's no one, and typically you'll find in any large enterprise, because there'll be some people on each one of those camps. >> Yeah, and they'll also want to put their toe in the water here and there. The question I have for you guys is you got a lot of goodness going on. You're not trying to match Amazon speed for speed, feature for feature, you guys are picking your shots. That is core to Google, that's clear. Is there a use case or a set of building blocks that are highly adopted with you guys now, in that as Google gets out there and gets some penetration in the enterprise, what's the use, what are the key things you see with successes for you guys, out of the gate? Is there a basic building? Amazon's got EC2 and S3. What are you guys seeing as the core building blocks of Google Cloud, from a product standpoint, that's getting the most traction today? >> So I think we're seeing the same types of building blocks that the other cloud providers are, I think. Some of the differences is we look at security differently, because of, again, where we grew up. We do things like live migration of virtual machines, if you're using virtual machines, because we've had to do that internally. So I think there are some differences on just even some of the basic block and tackling type of things. But I do think that if you look at just moving to the cloud, in and of itself is not enough. That's a stepping stone. We truly believe that artificial intelligence and machine learning, Cloud Native style of applications, containers, things like service meshes, those things that reduce the operational burdens and improve the rate of new feature introduction, as well as the machine learning things, I think that that's what people tend to come to Google for. And we think that that's a lot of what people are going to stay with us for. >> I overheard a quote I want to get your reaction to. I wrote it down, it says, "I need to get away from VPNs and firewalls. "I need user and application layer security "with un-phishable access, otherwise I'm never safe." So this is kind of a user perspective or customer perspective. Also with cloud there's no perimeters, so you got phishing problems. Spear phishing's one big problem. Security, you mentioned that. 
And then another quote I had was, "Kubernetes is about running frameworks, "and it's about changing the way "applications are going to be built over time." That's where, I think, SRE and Istio is very interesting, and Kubeflow. This is a modern architecture for-- >> There's even KubeVirt out there, where you can run a VM inside a container, which is actually what we do internally too. So there's a lot of different ways to slice and dice. >> Yeah, how relevant is that, those concepts? Because are you hearing that as well on the customers? 'Cause that's pain point, but also the new modern software development's future way to do things. So there's pain point, I need some aspirin for that. And then I need some growth with the new applications being built and hiring talent. Is that consistent with how you guys see it? >> So which one should I tackle? So you're talking about. >> John Furrier: VPN, do the VPNs first. >> The VPNs first, okay. >> John Furrier: That's my favorite one. >> So one of the most, kind of to give you the backstory, so one of the most interesting things when I came to Google, having come from other large enterprise vendors before this, was there's no VPNs. We don't even have it on our laptop. They have this thing called BeyondCorp, which is essentially now productized as the Identity-Aware Proxy. Which is, it actually takes, we trust no one or nothing with anything. It's not the walled garden style of approach of firewall-type VPN security. What we do is, based upon the resource you're going to request access for, and are you on a trusted machine? So on one that corporate has given you? And do you have two-factor authentication that corporate, not only your, so what you have and what you know. And so they take all of those things into awareness. Is this the laptop that's registered to you? Do you have your two-factor authentication? Have you authenticated to it and it's a trusted platform? Boom, then I can gain access to the resources. But they will also look for things like if all of a sudden you were sitting here and I'm in San Francisco, but something from some country in Asia pops up with my credentials on it, they're going to slam the door shut, going, "There's no way that you can be in two places at one time." And so that's what the Identity-Aware Proxy or BeyondCorp does, kind of in a nutshell. And so we use that everywhere, internally, externally. And so that's one of the ways that we do security differently is without VPNs. And that's actually in front of a lot of the GCP technologies today, that you can actually leverage that. So I would say we take-- >> Just rethinking security. >> It's rethinking security, again, based upon a long history. And not only that, but what we use internally, from our corporate perspective. And now to get to the second question, yeah. >> Istio, Kubeflow, is more of the way software gets run. One quote from one of the ex-Googlers who left Google then went out to another company, she goes, she was blown away, "This is the way you people ship software?" Like she was a fish out of water. She was like, "Oh my god, where's Borg?" "We do Waterfall." So there's a new approach that opens doors between these, and people expect. That's this notion of Kubeflow and orchestration. So that's kind of a modern, it requires training and commitment. That's the upside. Fix the aspirin, so Identity Proxy, cool. Future of software development architecture. 
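As a toy illustration of the kind of per-request decision a BeyondCorp-style proxy makes, the checks described above (trusted device, two-factor, plausible location) can be sketched as a small policy function. The signals, their names, and the thresholds here are illustrative assumptions, not Google's actual Identity-Aware Proxy logic.

```python
# Toy zero-trust access check (illustrative only). Real systems evaluate many
# more signals and re-evaluate them on every request, not just at login.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    device_registered: bool      # is this a corporate-managed machine?
    second_factor_passed: bool   # hardware key / OTP verified?
    country: str                 # where this request appears to come from
    last_seen_country: str       # where the same credentials were just used

def allow(req: AccessRequest) -> bool:
    if not (req.device_registered and req.second_factor_passed):
        return False
    # Crude "impossible travel" heuristic: same credentials appearing in two
    # distant countries at effectively the same time -> deny.
    if req.last_seen_country and req.country != req.last_seen_country:
        return False
    return True

print(allow(AccessRequest("jdoe", True, True, "US", "US")))  # True
print(allow(AccessRequest("jdoe", True, True, "US", "KR")))  # False
```

The point made in the conversation is that the perimeter moves from the network to each authenticated request.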
>> I think one of the strong things that you're going to see in software development is I think the days of people running it differently in development, and then sandbox and testing, QA, and then in prod, are over. They want to basically have that same experience, no matter where they are. They want to not have to do the crossing your fingers if it, remember, now it gets reddited or you got slash-dotted way back in the past and things would collapse. Those days of people being able to put up with those types of issues are over. And so I think that you're going to continue to see the development and the style of microservices, containers, orchestrated by something that can do auto scaling and healing, like Kubernetes. You're going to see them then start to use that base layer to add new capabilities on top, which is where we see Kubeflow, which is like, hey, how can I go put scalable machine learning on top of containers and on top of Kubernetes? And you even see, like I said, you see people saying, "Well, I don't really want to run "two different data planes and do the inception model. "If I can lay down a base layer "of Kubernetes and containers, then I can run "bare metal workloads against the bare metal. "If I need to launch a virtual machine, "I'll just launch that inside the container." And that's what KubeVirt's doing. So we're seeing a lot of this very interesting stuff pop. >> John Furrier: Yeah, creativity. >> Creativity. >> Great, talk about your role in the Office of the CTO. I know we got a couple of minutes left. I want to get out there, what is the role of the CTO? Bryan Stevens, formerly a Red Hat executive. >> Yeah, Bryan's our CTO. He used to run a big chunk of the engineering for Google Cloud, absolutely. >> And so what is the office's charter? You mentioned some CIOs, former CIOs are in there. Is it the think tank? Is it the command and control ivory tower? What's the role of the office? >> So I think a couple of years ago, Diane Greene and Bryan Stevens and other executives decided if we want to really understand what the enterprise needs from us, from a cloud perspective, we really need to have some people that have walked in those shoes, and they can't just be Diane or can't just be Bryan, who also had a big breadth of experience there. But two people can't do that for every customer for every product. And so they instituted the Office of the CTO. They tapped Will Grannis, again, had been in Boeing before, been in the military, and so tapped him to build this thing. And they went and they looked for people that had experience. Former VPs of Engineering, former CIOs. We have people from GE Oil and Gas, we have people from Boeing, we have people from Pixar. You name it, across each of the different verticals. Healthcare, we have those in the Office of the CTO. And about, probably, I think 25 to 30 of us now. I can't remember the exact numbers. And really, what our day to day life is like is working significantly with the product managers and the engineering teams to help facilitate more and more enterprise-focused engineering into the products. And then working with enterprise customers, kind of the big enterprise customers that we want to see successful, and helping drive their success as they consume Google Cloud. So being the conduit, directly into engineering. >> So in market with customers, big, known customers, getting requirements, helping facilitate product management function as well. >> Yeah, and from an engineering perspective. 
So we actually sit in the engineering organization. >> John Furrier: Making sure you're making the good bets. >> Jonathan: Yes, exactly. >> Great, well thanks for coming on The Cube. Thanks for sharing the insight. >> Jonathan: Thanks for having me again. >> Great to have you on, great insight, again. Google, always great technology, great enterprise mojo going on right now. Of course, The Cube will be at Google Next this July, so we'll be having live coverage from Google Next here in San Francisco at that time. Thanks for coming on, Jonathan. Really appreciate it, looking forward to more coverage. Stay with us for more of day three, as we start to wrap up our live coverage of Red Hat Summit 2018. We'll be back after this short break. (upbeat electronic music)

Published Date : May 10 2018


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Jonathan | PERSON | 0.99+
John Furrier | PERSON | 0.99+
John Troyer | PERSON | 0.99+
Google | ORGANIZATION | 0.99+
Jennifer Lynn | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
Jonathan Donaldson | PERSON | 0.99+
Red Hat | ORGANIZATION | 0.99+
Asia | LOCATION | 0.99+
Bryan Stevens | PERSON | 0.99+
Bryan | PERSON | 0.99+
25 | QUANTITY | 0.99+
San Francisco | LOCATION | 0.99+
Craig | PERSON | 0.99+
Will Grannis | PERSON | 0.99+
Diane Greene | PERSON | 0.99+
second question | QUANTITY | 0.99+
Denmark | LOCATION | 0.99+
Intel | ORGANIZATION | 0.99+
One | QUANTITY | 0.99+
Cloud Native Foundation | ORGANIZATION | 0.99+
two places | QUANTITY | 0.99+
Diane | PERSON | 0.99+
two key | QUANTITY | 0.99+
Tech Reckoning | ORGANIZATION | 0.99+
One quote | QUANTITY | 0.99+
Office of the CTO | ORGANIZATION | 0.99+
Pixar | ORGANIZATION | 0.99+
Red Hat Summit 2018 | EVENT | 0.99+
OpenShift | TITLE | 0.99+
GE Oil and Gas | ORGANIZATION | 0.99+
Gmail | TITLE | 0.98+
one | QUANTITY | 0.98+
30 | QUANTITY | 0.98+
CNCF | ORGANIZATION | 0.98+
one time | QUANTITY | 0.98+
last week | DATE | 0.98+
Boeing | ORGANIZATION | 0.98+
almost 10 years | QUANTITY | 0.97+
Android | TITLE | 0.97+
today | DATE | 0.97+
Kubernetes | TITLE | 0.97+
Google Cloud | ORGANIZATION | 0.97+
Four billion a week | QUANTITY | 0.97+
day three | QUANTITY | 0.97+
two-factor | QUANTITY | 0.97+
The Cube | ORGANIZATION | 0.96+

Dietmar Fauser, Amadeus | Red Hat Summit 2018


 

>> Announcer: From San Francisco, it's theCUBE. Covering Red Hat Summit 2018. Brought to you by Red Hat. >> Hey, welcome back everyone. This is theCUBE live here in San Francisco at Moscone West Fourth, Red Hat Summit 2018. I'm John Furrier, the co-host of theCUBE with John Troyer, the co-founder of TechReckoning, an advisory firm in the area of open source communities and technology. Our next guest is Cube alumni Dietmar Fauser, head of core platforms and middleware at Amadeus, experienced Red Hatter, event go-er, and practitioner. Great to have you back, great to see you. >> Thank you, good to be here. >> So why are you here, what's going on? Tell us the latest and greatest. What's going on in your world? Obviously, you've been on theCUBE. You go on YouTube, there's a lot of videos on there, you go into great detail on. You been on the Docker journey. You got Red Hat, you got some Oracle. You got a complex environment. You're managing cloud native-like services. Tell us about it. >> We do so, yes, so this time I am here mostly to feed back some experience of concrete implementation out there in the Cloud and on premise so. Paul told me that the theme was mostly hybrid cloud deployments so we have chosen two of our really big applications to explain how concretely this works out with you and when you deploy on the Cloud. >> So you were up on stage this morning in the keynote. I think the scale of your operation maybe raised some eyebrows as well. You're talking about over a trillion transactions. Can you talk a little bit about, talk about your multi-cloud stance and what you showed this morning. >> Okay, so first to frame a bit of the trillion transactions. It's not traditional data based transactions. It's individual data access and highly in-memory cached environment. So I'd say that's a very large number and it's a significant challenge to produce this system. So we're talking about like more than 100,000 core deployments of this applications so. Response time matters extremely in this game because at the end what we are talking here about is the back end that powers large P2C sites, like Kayak, some major search engines, online travel agencies. So it just has to respond in a very fast way. Which pushed us to deploy the solutions very close to where the transactions are really originating to avoid our historical data centers in Germany. We just want to take out the back and forth travel under the Atlantic basically to create a better end user experience at the end. >> Furrier: So you had to drive performance big time? >> We, very much. It's either performance or higher availability or both actually. >> This is a true hybrid cloud, right? You're on prem, you're in AWS, and you're in Google Cloud. So could you talk a little bit about that? All powered by OpenShift. >> OpenShift is the common denominator of the solutions. Some of our core design goals is to build the applications in a platform agnostic way. So an application should not know what's its deployment topology, what's the underlying infrastructure. Which is why I believe that platforms like OpenShift and Kubernetes underneath are so important, because they take over the role of a traditional operating system, but at a larger scale. Either in big Cloud deployments or on premise, but the span of operations that you get with these environments is just like an OS but on a bigger scale. It's not a surprise that people talked about this like a data center operating system for a while. 
We use it this way so OpenShift is clearly the masterpiece, I would say of the deployment. >> That's the key though, I think, thinking about it as an operating system or an operating environment is the kind of the architectural mindset that you have to be in. Because you've got to look at these resources and connections, link them together. You've got all these team systems constant. So you've got to be a systems person kind of design. How does someone get there that may or may not have traditional systems experience? Like us surly generation systems folks have gone through. Because you have devops automating away things. You have more of an SRE model that Google's talking about. Talking about large scale, it's not a data center anymore, it's an operating environment. How do people get there? What's your recommendation, how do I learn more. What do I do to deploy architecturally? >> That's a key question I think. I think there were two sections to your question, how to get there, so. I think at Amadeus we are pretty good at catching early big trends in the industry. We are very close to large engineering houses like Google and Facebook and others like Red Hat of course and so, it was pretty quickly clear to us, at least to a small amount of these decision-makers that the combination of Red Hat and Google was kind of, a game-changing event, which is why we went there, so. It's, I mean. >> Furrier: The containers have been important for you guys. >> Containers were coming along, so, when this happened Docker became big, our development teams, they wanted to do containers. It was not something that the management has had to push for, it was grassroots type of adoption here. So different pieces fed together that gave us some form of certainty, or a belief that these platforms would be around for a decade to come. >> Developers love Kubernetes, and I mean that, containers, it's like a fish to water, it's just natural. Now talk about Kubernetes now, OpenShift made a bet with Kubernetes, obviously, a few years ago. People were like, what is that about? Now it's obvious why. How are you looking at the Kubernetes trade, obviously it creates a de facto capability, you can wrap services around it, there's a notion of service meshes coming, Istio is the hottest product in the Linux Foundation, CNCF, KubeFlow is right behind it, I mean these are kind of thinking about service and micro-services and workload management. How do you view that, what's your opinion on that direction? >> I'm afraid there is no simple answer to this, because if you start new solutions from scratch, going directly to Kubernetes, OpenShift is the natural way. Now the big thing in large corporations is we all have legacy applications, whatever we call legacy applications, in our case these are pretty large C++ environments that are relatively modern but they are not strictly micro-service based and they are a bit fatter, they have an enterprise service bus on top of this, and so it's not, and we have very awkward, old network protocols, so going straight to the mesh for these applications and micro-services is not a possibility because there is significant re-engineering needed in our own applications before we believe it makes sense to throw them onto a container platform. We could stick all of this in a container but you have to wonder whether you get the benefit you really want to. >> Furrier: Time ROI, return on investment, on the engineering, retrofitting it for service mesh. 
>> Yes, I mean, the interesting thing is Kubernetes or not, we would have touched these applications anyway to cut them into more manageable pieces. We call this compartmentalization. Other people may call this micro-service-ification, or however we want to call this. So that's, to me this is work that is independent from the cloud strategy in itself. Some of our applications, to move faster, we have decided to put them more or less as they are onto OpenShift, others we take some more time to say, okay let's do the engineering homework first so that we reap the full benefits of this platform, and the benefit really is, what is fundamental for developers, efficiency and agility is that you have relatively small, independent load sets, so that you can quickly load small pieces, you can roll them in. >> Time to production, time from developer to production. >> But also quality, the less isolated, the more you isolate the changes, the less you run the risk that a change is cross-impacting things that are in the same delivery basically. It's a lot about, smaller chunks of software that are managed and for this obviously a micro-service platform is absolutely ideal. So it helps us to push the spirit of the company in this direction, no more monolithical applications, fast daily loads. >> Morale's higher, people happy. >> Well, it's a long journey, so some are happy, some are impatient like me to move faster. Some are still a bit reluctant, it's normal in larger organizations. >> Talk about the scale, I'm really interested in your reaction and experience, let's talk about the scale. I think that's a big story. As cloud enables more horizontally scalable applications, the operating aperture is bigger. It's not like managing systems here, it's a little bit bigger picture. How are you guys looking at the operational framework of that, because now you're essentially a site reliable engineering role, that's what Google talks, in SRE, but now you're operating but you're still developing code, and you're writing applications. So, talk about that dynamic and how you see that playing out going forward. >> So, what we try to do is to separate the platform aspects from the application aspects, so I'm leading the platform engineering unit, including platform operations, so this means that we have the platform SRE role, if you want, so we oversee frontline operations 24 by seven stability of the global system. To me, the game is really about trying to separate and isolate as much as we can from the applications to put it on the platform because we have, like, close to 100 applications running on the platform and if we can fix stuff on the platform for all the applications without being involved in the individual load cycles and waiting for them to integrate some features, we just move much faster. >> You can decouple the application from some core platform features, make them highly cohesive, sounds like an operating system to me. >> It is, and I'll come to the second thought of the SRE a bit later, but currently the big bulk of the work we are doing with OpenShift is now to bring our classical platform stuff under OpenShift. And by classical application, I mean our internal components like security, business rule engines, communication systems, but also the data management side of the house. 
And I think this is what we're going to witness over the next two or three years, is how can we manage, like, in our case Couchbase, Kafka, all of those things, we want them to be managed as applications under OpenShift with descriptive blueprints, descriptive configurations which means you define the to-be state of a system and you leave OpenShift to ensure that if the to-be state is like, I need 1,000 pods for a given application, is violated, OpenShift will automatically repair the system. >> That's interesting, you bring up a dynamic that's a trend we're seeing, I want to get your thoughts on this. And it hasn't really been kind of crystallized and yet I haven't heard a good explanation but, the trend seems to be to have many databases. In other words, we're living in a world where there's a database for everything, but not one database. So, like, if I got an application at the edge of the network, it can have its own database, so we shouldn't have to design around a database concept, it should be, concept should still be databases everything, living and growing and managing it. How are, first of all do you believe that, and if so, how do you architect the platform to manage a potentially ubiquitous amount of different kinds of databases where the apps are kind of driving their own database role, and working with the core platform? Seems to be an area people are really talking about, because this is where AI shines if you get that right. >> So I agree with you that there are a lot of solutions out there. Sometimes a bit confusing choice, which type of solutions to choose. In our case we have quite a mature, what we call a technical policy, a catalog of technologies that application designers can choose from, so there are several data management stores in there. Traditionally speaking we use Oracle, so Oracle is there and is a good solution for many use cases. We were very early in the NoSQL space so we have introduced Couchbase for highly scalable environments, Mongo for more sophisticated objects or operations. We try to educate, or to talk with our application people not to go outside of this. We also use Redis for our platform internal things, so we try to narrow their choices down. >> Stack the databases, what about the glue layer? Any kind of glue layer standards, gluing things together? >> In general we always put an API layer on top of the solutions, so we use our own infrastructure independence layer when we talk to the databases, so we try not to have those native bindings in the application, it's always about disentangling platform aspects from the application. >> So Dietmar, you did talk about this architectural concept, right, of these layers, and you're protecting the application from the platform, what about underneath, right? You're running on multiple clouds. What have been the challenges of, in theory, you know, there's a separation layer there and OpenShift is underneath everything, you've got OpenStack, you've got the public clouds, have there been some challenges operationally in making sure everything runs the same? >> There are multiple challenges, so to start with, the different infrastructures do not behave exactly the same, so just taking something from Google to Amazon, it works in theory but practically speaking the APIs are not exactly the same, so you need to remap the APIs. The underlying behavior is not exactly the same.
In general from an application design point of view, and we are pretty used to this anyway because we are distributed systems specialists, but the learning curve comes from the fact that you go to an infrastructure that is, in itself, much less reliable if you look at individual pieces of it. It works fine if you use the availability zone concepts well and you start with the mindset that you can lose availability zones or even complete regions and take this as a given, natural event that will happen. If you are in this mindset there aren't so many surprises, OpenShift operates very well with the unreliability of virtual machines. We even contract, in the case of Google, what are called preemptible VMs so they get restarted anyway very frequently because they have a different value proposition so if you can run with less reliable stuff you pay less, basically. So if you can take advantage of this, you have another advantage using those. >> Dietmar, it's great to hear your stories, congratulations on your success and all the work you're doing, it sounds like really cutting-edge and great work. You've been to many Red Hat Summits. What's the revelation this year? What's the big thing that people should know about that's happening in 2018? Is it Kubernetes? What should people pay attention to, in your opinion? >> I think we can take Kubernetes now as a given. That's very good news for me and for Amadeus, it was quite a bet at the beginning but we see this now as the de facto standard, and so I think people can now relax and say, okay this is one of the pieces that will be predominant for the decade to come. Usually I'm referring to IT decades, only three years long, not 10 years. >> Okay, and as we're moving to an operating system environment, I love that analogy. I think it's totally right from the data that we see. We're living in a cloud native world, hybrid cloud on-premise, still true private cloud as Wikibon calls it and really it's an operating system concept architecturally, and IoT is coming fast. It's just going to create more and more data. >> So, what I believe, and what we believe in general at Amadeus is that the next evolution of systems, the big architectural design approach will be to create applications that are much more streaming oriented because it allows you to decouple the different computing steps much more. So rather than waiting for a transaction, you subscribe to an event, and any number of processes can subscribe to an event, the producer doesn't have to know who is consuming what, so we go streaming data-centric and massively asynchronous. Which yields smoother throughput, less hiccups because in transactional systems you always have something that slows down temporarily a little bit, it's very difficult to architect systems with absolute separation of concerns in mind, so sometimes a slowdown of a disk might trigger impacts to other systems. With a streaming and asynchronous approach the systems tend to be much more stable with higher throughput. >> And a lot more scalable. There's the horizontally scalable nature of the cloud, you've got to have the streaming and this architecture in place. This is a fundamental mistake we see with people out there, they don't think like this but then when they hit scale points, it breaks.
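
The event-driven decoupling Dietmar describes here, where producers publish events without knowing who consumes them and any number of processes subscribe independently, is the pattern Kafka (which he brings up next) is built for. A minimal illustrative sketch, assuming the kafka-python client; the broker address, topic name, and event fields are placeholders rather than anything Amadeus actually runs:

    import json
    from kafka import KafkaProducer, KafkaConsumer  # assumes the kafka-python package

    # Producer side: publish a booking event without knowing who will consume it.
    producer = KafkaProducer(
        bootstrap_servers="broker:9092",  # placeholder broker address
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )
    producer.send("booking-events", {"booking_id": "ABC123", "status": "confirmed"})
    producer.flush()

    # Consumer side: each consumer group gets its own copy of the stream and
    # processes events at its own pace, fully decoupled from the producer.
    consumer = KafkaConsumer(
        "booking-events",
        bootstrap_servers="broker:9092",
        group_id="reporting-service",
        auto_offset_reset="earliest",
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    )
    for message in consumer:
        print(message.value)  # react asynchronously to each event

Adding a second consumer group is just another KafkaConsumer with a different group_id; the producer does not change at all, which is the separation of concerns Dietmar is after.
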
>> Absolutely, and so, I mean we are a highly transactional shop but many of our use cases already are asynchronous so we go a deep step further on this and we currently work on bringing Kafka massively under OpenShift because we're going to use Kafka to connect data center footprints for all types of data that we have to stream to the application that are out in the public cloud, or on premise basically. >> We should call you professor because this was such a great segment, thanks for sharing an awesome amount of insight on theCube. Thanks for coming on, good to see you again. Dietmar Fauser, head of core platforms and middleware at Amadeus. You know, down and dirty, getting under the hood really at the architecture of scale, high availability, high performance of the systems to be scalable with cloud, obviously open source is powering it, OpenShift and Red Hat. It's theCube bringing you all the power here in San Francisco for Red Hat Summit 2018. I'm John Furrier and John Troyer, we'll be back with more after this short break. (electronic music)
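
To make the descriptive-blueprint idea from earlier in the conversation concrete: you declare a to-be state, say 1,000 pods for an application, and the platform keeps repairing any drift from it. Below is a deliberately simplified reconciliation loop in Python that illustrates the pattern Kubernetes and OpenShift controllers implement; it is not their actual code, and the observe/start/stop functions are hypothetical stand-ins:

    import time

    DESIRED_REPLICAS = 1000  # the declared to-be state for one application

    def observe_running_replicas():
        """Hypothetical stand-in for querying the cluster's actual state."""
        raise NotImplementedError

    def start_replica():
        """Hypothetical stand-in for scheduling one more pod."""
        raise NotImplementedError

    def stop_replica():
        """Hypothetical stand-in for removing a surplus pod."""
        raise NotImplementedError

    def reconcile_forever():
        # The controller never deploys and forgets: it keeps comparing observed
        # state against desired state and corrects the difference, which is how
        # a violated blueprint gets repaired without anyone paging an operator.
        while True:
            running = observe_running_replicas()
            if running < DESIRED_REPLICAS:
                for _ in range(DESIRED_REPLICAS - running):
                    start_replica()
            elif running > DESIRED_REPLICAS:
                for _ in range(running - DESIRED_REPLICAS):
                    stop_replica()
            time.sleep(5)  # real controllers are mostly event-driven rather than polling
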

Published Date : May 8 2018


Kelsey Hightower, Google Cloud Platform | KubeCon + CloudNativeCon EU 2018


 

>> Announcer: Live from Copenhagen, Denmark, it's theCUBE covering KubeCon and CloudNativeCon Europe 2018. Brought to you by the Cloud Native Computing Foundation and its ecosystem partners. >> Hello, everyone, welcome back to theCUBE's exclusive coverage here in Copenhagen, Denmark for coverage of KubeCon 2018, part of the CNCF CloudNative Compute Foundation, part of the Linux Foundation, I'm John Furrier with my cohost, Lauren Cooney, the founder of Spark Labs. We're here with Kelsey Hightower, co-chair of the program as well as a staff engineer, developer, advocate, at Google Cloud Platform, a celebrity in the industry, dynamic, always great to have you on, welcome back. >> Awesome, good to be back. >> How are you feeling, tired? You've got the energy, day two? >> I'm good, I finished my keynote yesterday. My duties are done, so I get to enjoy the conference like most attendees. >> Great. Keynote was phenomenal, got good props. Great content format, very tight, moving things along. A little bit of a jab at some of the cloud providers. Someone said, "Oh, Kelsey took a jab at the cloud guys." What was that about, I mean, there was some good comments on Twitter, but, keeping it real. >> Honestly, so I work at a cloud provider, so I'm part of the cloud guys, right? So I'm at Google Cloud, and what I like to do is, and I was using Amazon's S3 in my presentation, and I was showing people basically like the dream of, in this case, serverless, here's how this stuff actually works together right now. We don't really need anything else from the cloud providers. Here's what you can do right now, so, I like to take a community perspective, When I'm on the stage, so I'm not here only to represent Google and sell for Google. I'm here to say, "Hey, here's what's possible," and my job is to kind of up-level the thinking. So that was kind of the goal of that particular presentation is like, here's all this stuff, let's not lock it all down to one particular provider, 'cause this is what we're here for, KubeCon, CloudNativeCon, is about taking all of that stuff and standardizing it and making it accessible. >> And then obviously, people are talking about the outcome, that that's preferred right now in the future, which is a multi-cloud workload portability. Kubernetes is playing a very key role in obviously the dev ops, people who have been doing it for many many years, have eaten glass, spit nails, custom stuff, have put, reaped the benefits, but now they want to make it easy. They don't want to repeat that, so with Kubernetes nice formation, a lot of people saying here on theCUBE and in the hallways that a de facto standard, the word actually said multiple times here. Interesting. >> Yeah, so you got Kubernetes becoming the de facto standard for computes, but not events, not data, not the way you want to compute those events or data, so the job isn't complete. So I think Kubernetes will solve a large portion of compute needs, thumbs up, we're good to go. Linux has done this for the virtualization layer, Kubernetes is doing it for the containerization, but we don't quite have that on the serverless side. So it's important for us all to think about where the industry is going and so it's like, hey, where the industry is moving to, where we are now, but it's also important for us to get ahead of it, and also be a part of defining what the next de facto standard should be. 
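
One concrete attempt at the missing standard Kelsey points to here, for events rather than compute, is the CloudEvents effort he comes back to below: a provider-neutral envelope so a consumer can react to an event without caring which cloud produced it. A rough Python sketch; the attribute names follow the CloudEvents spec as it was later finalized at 1.0 (the draft being discussed at the time used different names), and the event type, source, and payload values are made up:

    import json
    import uuid
    from datetime import datetime, timezone

    # A provider-neutral event envelope in the spirit of CloudEvents: standardized
    # context attributes on the outside, provider-specific payload on the inside.
    event = {
        "specversion": "1.0",
        "id": str(uuid.uuid4()),
        "type": "com.example.storage.object.created",  # hypothetical event type
        "source": "s3://example-bucket",                # where the data lives (made up)
        "time": datetime.now(timezone.utc).isoformat(),
        "datacontenttype": "application/json",
        "data": {"bucket": "example-bucket", "key": "reviews/123.json"},
    }

    def handle(evt):
        # A consumer only needs the standard envelope: it dispatches on the event
        # type, not on which cloud produced the event.
        if evt["type"].endswith("object.created"):
            obj = evt["data"]
            print("new object %s in %s, kicking off translation" % (obj["key"], obj["bucket"]))

    handle(json.loads(json.dumps(event)))  # round-trip to show it is just JSON on the wire
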
>> And you mentioned community, which is important, because I want to just bring this up, there's a lot of startups in the membership of CNCF, and when you have that first piece done, you mentioned the other work to be done, that's an opportunity to differentiate. This is the commercialization opportunity to strike that balance. Your reaction to that, how do you see that playing out? Because it is an opportunity to create some value. >> Honestly I'm wearing a serverless.com T-shirt right now, right, that's the startup in the space. They're trying to make serverless easy to use for everyone, regardless of the platform. I think no matter what side of the field you stand on, we need these groups to be successful. They're independent companies, they're going for ambition, they're trying to fill the gaps in what we're all doing, so if they're successful, they just make a bigger market for everyone else, so this is why not only do we try to celebrate them, we try to give them this feedback, like, "Hey, here's what we're doing, here's what the opportunities are," so I think we need them to be successful. If they all die out every time they start something, then we may not have people trying anymore. >> And I think there's actually a serverless SIG in the CNCF, right? And I think that they're doing a lot of great work to kind of start to figure out what's going on. I mean, are you aware what those guys are up to? >> Exactly, so the keynote yesterday was largely about some of the work they're doing. So you mentioned the serverless SIG, and CNCF. So some of the work that they're doing is called CloudEvents. But they wanted to standardize the way we take these events from the various providers, we're not going to make them all work the same way, but what we can do is capture those events in a standard way, and then help define a way to transport those between different providers if you will, and then how those responses come back. So at least we can start to standardize at least that part of the layer, and if Google offers you value, or Amazon offers you value, you own the data, and that data generates events, you can actually move it wherever you want, so that's the other piece, and I'm glad that they're getting in front of it. >> Well I think the goal is, obviously, if I'm using AWS, and then I want to use Azure, and then I want to go to Google Cloud, or my development teams are using different components, and features, in all of them, right? You want to be able to have that portability across the cloud-- >> And we say together, so the key part of that demo was, if you're using one cloud provider for a certain service, in this case, I was using Google Translate to translate some data, but maybe your data lives in Amazon, the whole point was that, be notified that your data's in Amazon, so that an event can be fired off into Google, a function runs a translation, and writes the data back to Amazon. There are customers that actually do this today, right? There are different pieces of stacks that they want to be able to access, our goal is to make sure they can actually do that in a standard way, and then, show them how to do it. >> A lot of big buzz too also going on around Kubeflow, that Google co-chaired, or co-founded, and now part of the CNCF, Istio service meshes, again, this points to the dots that are connecting, which is okay, I got Kubernetes, we got containers, now Istio, what's your vision on that, how did that play out?
An opportunity certainly to abstract the weight of complexity, what's your thoughts on Istio? >> So I think there's going to be certain things, things like Istio, there are parts of Istio that are very low level, that if done right, you may never see them. That's a good thing, so Istio comes in, and says, "Look, it's one thing to connect applications together, which Kubernetes can help you do with this built-in service discovery, how does one app find the other app," but then it's another thing to lock down security and implement policy, this app can talk to this app under these conditions. Istio comes in, brings that to the playing field. Great, that's a great addition. Most people will probably wrap that in some higher-level platform, and you may never see it! Great! Then you mention Kubeflow, now this is a workflow, or at least an opinionated workflow, for doing machine-learning, or some analytics work. There's too many pieces! So if we start naming every single piece that you have to do, or we can say, "Look, we know there's a way that works, we'll give it a name, we'll call it Kubeflow," and then what's going to happen there is the community's going to rally around actually more workflow, we have lots of great technology wrapped underneath all of that, but how should people use it? And I think that's what I'm actually happy to see now that we're in like year four or five of this thing, as people are actually talking about how do people leverage all of these things that fall below? >> As the IQ starts to increase with cloud-native, you're seeing enterprises, and there's levels of adoption, the early adopters, you know, the shiny new toy, are pushing the envelope, fast followers coming in, then you got the mainstream coming in, so mainstream, there's a lot of usage and consumption of containers, very comfortable with that, now they're bumping into Kubernetes, "Oh wow, this is great," different positions of the adoption. What's your message to each one, mainstream, fast followers, early adopters, the early adopters keep pushing, keep bringing that community together, form the community, fast forward. What's the position, what's the Kelsey Hightower view of each one of those points of the evolution? >> So I think we need a new model. So I think that model is kind of out now. Because if you look at the vendor relationships now, so the enterprise typically buys off the shelf when it's mature and ready to go. But at this point now, a lot of the library is all in the programming languages, if you see a language or library that you need, if it's on GitHub, you look around, it's like, "We're going to use this open-source library, 'cause we got to ship," right? So, they started doing early adoption maybe at the library level. Now you're starting to see it at the service level. So if I go to my partner or my vendor, and they say, "Hey, the new version of our software requires Kubernetes." Now, that's a little bit early for some of these enterprises to adopt, but now you're having the vendor relationship saying, "We will help you with Kubernetes." And also, a lot of these enterprises, it's early? Guess what, they have contributors to these projects. They helped design them. I remember back in the day, when I was in financial services, JPMC came out with their own messaging standard, so banks could communicate with each other. They gave that to Red Hat, and Red Hat turns it into a product, and now there's a new messaging standard.
That kicked off ten years ago, and now we're starting to see these same enterprises contribute to Kubernetes. So I think now, there's a new model where, if it's early, enterprises are becoming the contributors, donating to the foundations, becoming members of things like CNCF, and on the flip side, they may still use their product, but they want a say in their future. >> So you can jump in at any level as a company, you don't need to wait for the mainstream, you can have a contributor, and in the front wave, to help shepherd through. >> Yeah, you need more say, I think when people bought typical enterprise software, if there wasn't a feature in there, you waited for the vendor to do it, the vendor comes up with their feature, and tells you it's going to cost another 200 million dollars for this add-on, and you have no say into the progress of it, or the speed of it. And then we moved to a world where there was APIs. Look, here's APIs, you can kind of build your own thing on top, now, the vendor's like, "You know what? "I'm going to help actually build the product that I rely on," so if vendor A is not my best partner right now, I could pick a different vendor and say, "Hey, I want a relationship, around this open-source "ecosystem, you have some features I like right now, "but I may want to able to modify them later." I think that's where we are right now. >> Well I think also the emergence of open-source offices, and things like that, and, you know, enterprises that are more monolithic, have really helped to move things forward with their users and their developers. I'm seeing a lot of folks here that are actually coming from larger companies inside of Europe, and they're actually trying to learn Kubernetes now, and they are here to bring that back into their companies, that they want to know about what's going on, right? >> That's a good observation-- >> It's great. >> That open-source office is replacing the I'm the vendor management person. >> Well you need legal-- >> Exactly. >> And you need all of those folks to just get the checkmarks, and get the approval, so that folks can actually take code in, and if it's under the right license, which is super important, or put code back out. >> And it seemed to be some of the same people that were managing the IBM relationship. The people that were managing the big vendor relationship, right? This thing's going to cost us all this cash, we got to make sure that we're getting the right, we're complying with the licensing model, that we're not using more than we paid for, in case we get an audit, the same group has some of the similar skills needed to shepherd their way through the open-source landscape, and then, in many cases, hiring in some of those core developers, to sit right in the organization, to give back, and to kind of have that first-tier support. >> That's a really good point, Lauren. I think this is why I think CNCF has been so successful is, they've kind of established the guardrails, and kind of the cultural notion of commercializing, while not foregoing the principles of open-source, so the operationalizing of open-source is really huge-- >> I'm kind of laughing over here, because, I started the open-source organization at Cisco, and Cisco was not, was new to open-source, and we had to put open data into the Linux Foundation, and I just remember the months of calls I was on, and the lawyers that I got to know, and-- >> You got scar tissue to prove it, too. 
>> I do, and I think when we did CNCF, I was talking to Craig years ago when we kind of kicked that off, it was really something that we wanted to do differently, we wanted to fast track it, we had the exact license that we wanted, we had the players that we wanted, and we really wanted to have this be something community-based, which I think, Kelsey, you've said it right there. It's really the communities that are coming together that you're seeing here. What else are you seeing here? What are the interesting projects that you see, that are kind of popping up, we have some, but are there others that you see? >> Well, so now, these same enterprises, now they have the talent, or at least not letting the talent leave, the talent now is like, "Well, we have an idea, and it's not core to our business, let's open-source it." So, Intuit just acquired this workflow, small little start-up project, Argo, they're Intuit now, and maybe they had a need internally, suck in the right people, let the project continue, throw that Intuit logo there, and then sometimes you just see tools that are just being built internally, also be productized from this open-source perspective, and it's a good way for these companies to stay engaged, and also to say, "Hey, if we're having this problem, so are other people," so this is new, right? This open-source usually comes from the vendors, maybe a small group of developers, but now you're starting to see the companies say, "You know what, let's open-source our tool as well," and it's really interesting, because also they're pretty mature. They've been banked, they've been used, they're real, someone depends on them, and they're out. Interesting to see where that goes. >> Well yeah, Dirk Hohndel, from VMware, former Linux early guy, brought up the same question. He says, "Don't confuse project with product." And to your point about being involved in the project, you can still productize, and then still have that dual relationship in a positive way, that's really a key point. >> Exactly, we're all learning how to share, and we're learning what to share. >> Okay, well let's do some self awareness here, well, for you, program's great, give you some props on that, you did a great job, you guys are the team, lot of high marks, question marks that are here that we've heard is security. Obviously, love Kubernetes, everyone's high-fiving each other, got to get back to work to reality, security is a conversation. Your thoughts on how that's evolving, obviously, this is a front and center conversation, with all these service meshes and all these new services coming up, security is now being fought in the front end of this. What's your view? >> So I think the problem with security from certain people is that they believe that a product will come out that they can buy, to do security. Every time some new platform, oh, virtualization security. Java security. Any buzzword, then someone tries to attach security. >> It's a bolt-on. >> It's, yeah. So, I mean, most people think it's a practice. The last stuff that I've seen in the security space still applies to the new stack, it's not that the practice changed. Some of the threat models are the same, maybe some new threat models come up, or new threat models are aggravated because of the way people are using these platforms. But I think a lot of companies have never understood that. It's a practice, it will never be solved, there's nothing you can buy or subscribe to-- >> Not a silver bullet. >> Like antivirus, right?
I'm only going to buy antivirus, as long as I run it, I should never get a virus. It's like, "No!" That's not how that works. The antivirus will be able to find things it knows about. And then you have to have good behavior to prevent having a problem in the first place. And I think security should be the same way, so I think what people need to do now, is they're being forced back into the practice of security. >> John: Security everywhere, basically. >> It's just a thing you have to do no matter what, and I think what people have to start doing with this conversation is saying, "If I adopt Kubernetes, does my threat model change?" "Does the container change the way I've locked down the VM?" In some cases, no, in some cases, yes. So I think when we start to have these conversations, everyone needs to understand the question you should ask of everyone, "What threat model should I be worried about, "and if it's something that I don't understand or know," that's when you might want to go look for a vendor, or go get some more training to figure out how you can solve it. >> And I think, Tyler Jewell was on from Ballerina, and he was talking about that yesterday, in terms of how they actually won't, they assume that the code is not secure. That is the first thing that they do when they're looking at Ballerina in their programming language, and how they actually accept code into it, is just they assume it's not secure. >> Oh exactly, like at Google we had a thing, we called it BeyondCorp. And there's other aspects to that, if you assume that it's going to be bad if someone was inside of your network, then pretend that someone is already inside your network and act accordingly. >> Yep, exactly, it's almost the reverse of the whitelisting. Alright, so let me ask you a question, you're in a unique position, glad to have you here on theCUBE, thanks for coming on and sharing your insights and perspective, but you also are the co-chair of this progress, so you get to see the landscape, you see the 20 mile stare, you have to have that long view, you also work at Google, which gives a perspective of things like BeyondCorp, and all of the large-scale work at Google, a lot of people want to, they're buying into the cloud-native, no doubt about it, there's still some educational work on the peoples' side, and process, and operationalizing it, with open-source, et cetera, but they want to know where the headroom is, they want to know, as you said, where's the directionally correct vector of the industry. So I got to ask you, in your perspective, where's all this going? For the folks watching who just want to have a navigation, paint the picture, what's coming directionally, shoot the arrow forward, as service meshes, as you start having this service layer, highly valuable, creative freedom to do things, what's the Kelsey vision on-- >> So I think this world of computing, after the mainframe, the mainframe, you want to process census data, you walk up, give it, it spits it back out. To me, that is beautiful. That's like almost the ultimate developer workflow. In, out. Then everyone's like, "I want my own computer, "and I want my own programming language, "and I want to write it in my basement, "without the proper power, or cords, or everything, "and we're all going to learn how "to do computing from scratch." And we all learnt, and we have what we call a legacy. All the mistakes I've made, but I maintain, and that's what we have! 
But the ultimate goal of computing is like the calculator, I want to be able to have a very simple interface, and the computer should give me an answer back. So where all this is going, Istio, service mesh, Kubernetes, cloud-native, all these patterns. Here's my app, run it for me. Don't ask me about auto scale groups, and all, run it for me. Give me a security certificate by default. Let's encrypt. Makes it super easy for anyone to get a tailored certificate rotated to all the right things. So we're slowly getting to a world where you can ask the question, "Here's my app, run it for me," and they say, "Here's the URL, "and when you hit this URL, we're going to do "everything that we've learned in the past "to make it secure, scalable, work for you." So that may be called open-shift, in its current implementation with Red Hat, Amazon may call it Lambda, Google Cloud may call it GKE plus some services, and we're never going to stop until the experience becomes, "Here's my app, run it for me." >> A resource pool, just programmability. And it's good, I think the enterprises are used to lifting and shifting, I mean, we've been through the evolution of IT, as we build the legacy, okay, consolidation, server consolidation, oh, hello VMs, now you have lift and shift. This is not a lift and shift kind of concept, cloud-native. It is a-- >> It doesn't have to be a lift and shift. So some people are trying to make it a lift and shift thing, where they say, "Look, you can bolt-on some of the stuff "that you're seeing in the new," and some consultants are like, "Hey, we'll sit their and roll up the sleeves, "and give you what we can," and I think that's an independent thing from where we're pushing towards. If you're ready, there's going to be a world, where you give us your code, and we run it, and it's scary for a lot of people, because they're going to be like, "Well, what do I do?" "What knobs do I twist in that world?" So I think that's just, that's where it's going. >> Well, in a world of millions of services coming out on the line, it's in operating, automation's got to be key, these are principles that have to go get bought into. I mean, you got to understand, administration is the exception, not the rule. This is the new world. It's kind of the Google world, and large-scale world, so it could be scary for some. I mean, you just bump into people all the time, "Hey Kelsey, what do I do?" And what do you say to them? You say, "Hey, what do I do?" What's the playbook? >> Often, so, it's early enough. I wasn't born in the mainframe time. So I'm born in this time. And right now when you look at this, it's like, well, this is your actual opportunity to contribute to what it should do. So if you want to sit on the sidelines, 'cause we're in that period now, where that isn't the case. And everyone right now is trying to figure out how to make it the case, so they're going to come up with their ways of doing things, and their standards, and then maybe in about ten years, you'll be asked to just use what we've all produced. Or, since you're actually around early enough, you can participate. That's what I tell people, so if you don't want to participate, then you get the checkpoints along the way. Here's what we offer, here's what they offer, you pick one, and then you stay on this digital transformation to the end of time. Or, you jump in, and realize that you're going to have a little bit more control over the way you operate in this landscape. 
>> Well, jumping in the deep end of the pool has always been the philosophy, get in and learn, and you'll survive, with a lot of community support, Kelsey, thanks for coming on, final question for you, surprise is, you're no longer going to be the co-chair, you've co-chaired up to this point, you've done a great job, what surprised you about KubeCon, the growth, the people? What are some of the things that have jumped out at you, either good, surprise, what you did expect, not expect, share some commentary on this movement, KubeCon and CloudNative. >> Definitely surprised that it's probably this big this fast, right? I thought people, definitely when I saw the technology earlier on, I was like, "This is definitely a winner," "regardless of who agrees." So, I knew that early on. But to be this big, this fast, and all the cloud providers agreeing to use it and sell it, that is a surprise, I figured one or two would do it. But to have all of them, if you go to their website, and you read the words Kubernetes' strong competitors, well alright, we all agree that Kubernetes is okay. That to me is a surprise that they're here, they have booths, they're celebrating it, they're all innovating on it, and honestly, this is one of those situations that, no matter how fast they move, everyone ends up winning on this particular deal, just the way Kubernetes was set up, and the foundation as a whole, that to me is surprising that it's still true, four years later. >> Yeah, I mean rising tide floats all boats, when you have an enabling, disruptive technology like Kubernetes, that enables people to be successful, there's enough cake to be eating for everybody. >> Awesome. >> Kelsey Hightower, big time influencer here, inside theCUBE cloud, computing influencer, also works at Google as a developer advocate, also co-chair of KubeCon 2018, I wish you luck in the next chapter, stepping down from the co-chair role-- >> Stepping down from the co-chair, but always in the community. >> Always in the community. Great voice, great guy to have on theCUBE, check him out online, his great Twitter feed, check him out on Twitter, Kelsey Hightower, here on theCUBE, I'm joined here by Lauren Cooney, be right back with more coverage here at KubeCon 2018, stay with us, we'll be right back. (bright electronic music)
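
Kelsey's closing picture, "here's my app, run it for me," is easiest to see from the developer's side: the entire artifact you hand over can be a single handler, with scaling, certificates, and URLs left to the platform. A minimal sketch of that shape, written against the Python HTTP-function convention used by platforms such as Google Cloud Functions; other providers use slightly different signatures, and the translation step is just a placeholder:

    # The whole deployable artifact: one HTTP handler. Google Cloud Functions'
    # Python runtime passes a Flask request object; other platforms differ slightly.
    def translate_review(request):
        payload = request.get_json(silent=True) or {}
        text = payload.get("text", "")
        # A real function would call a translation API here; this stands in for it.
        translated = text.upper()
        return {"original": text, "translated": translated}
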

Published Date : May 3 2018


Stephan Fabel, Canonical | KubeCon + CloudNativeCon EU 2018


 

>> Announcer: Live from Copenhagen, Denmark, it's the CUBE, covering KubeCon and Cloud Native Con Europe 2018. Brought to you by the Cloud Native Computing Foundation and its ecosystem partners. (busy music) >> Welcome back, everyone, live here in Copenhagen, Denmark, it's the CUBE's coverage of KubeCon 2018. I'm John Furrier, the host of the CUBE, along with Lauren Cooney, who's the founder of Spark Labs. She's been co-host with me two days, two days of wall to wall coverage. Stephan Fabel, Product Strategy Lead at Canonical, is here inside the CUBE, and from San Francisco. Again, welcome to the CUBE, thanks for coming. >> Thank you, thanks so much for having me. >> I've got to, you guys have been around the block, you know about open source software platforms, you get and do it for a while. Interesting time here at KubeCon. Kubernetes, Istio, Kubeflow, Cloud Native, they've still got the brand name CloudNativeCon and KubeCon. Modern application architecture's now in play. I see this notion of an interoperability model coming in that's certainly going to be a de facto standard. People are already kind of declaring it a de facto standard. It really shows a path to multi-cloud, but also frees up developers from a lot of the heavy lifting. Lou Tucker from Cisco was saying they don't want to do networking. Let's just have that be infrastructure as code, that's DevOps, that's what we want. >> Stephan: That is exactly right. >> What are you guys doing here? What's the story with Canonical and how does that fit into the megatrends? >> Yeah, I mean, there's a couple of things that we at Canonical always believe to be one of the core sort of tenets in our distribution of Kubernetes. As you know, we've been very active in this space fairly early on, and have been an active distributor of Kubernetes and a certified distributor of our version of Kubernetes. Pure upstream, remain conformant to the main public clouds, such as to enable that workload migration and mobility from on prem up to any of the other providers to accommodate all kinds of use cases, right. >> You guys made a bet on Kubernetes, obviously, good call. >> Stephan: Right. >> Right. What's the progress now, what's next? Because that's, the bets are paying off. I saw Red Hat had a great bet with what they did with Kubernetes, changed what OpenShift became. You guys had a bet in Kubernetes, what has that become for Canonical? >> Yeah, so based on the pure upstream distribution that we have, we really feel that enabling the ecosystem in a standards compliant way so that all of the landscape projects that are part of the CNCF can be deployed on top of Kubernetes, on top of our distribution of Kubernetes in just the same way that they would be developed or deployed in any of the large containers of service offerings that are out there is one of the big benefits that our customers would gain from using our Kubernetes. >> What's your differentiator for the distribution of Kubernetes that you have versus others? >> Well, there's two. The first one, I think, is the notion that deploying Kubernetes on premise is something that you want to do in a repeatable fashion, operationally efficient with the right capex opex mix, so we believe that there is a place for Kubernetes as a product, just deploy it, it works on any substrate that you've got available to you. But then also, for mainstream America, right, you may want to have a managed service on top of Kubernetes as well. 
We offer that, too, just a way to get started and kick the tires and see where that takes you as far as the developers are concerned. Now, on prem, you will find that there are a couple of challenges when deploying Kubernetes that are really the key differentiator. The first one, I would say, is things like integration into the storage that's local, integration into the network that's local, and integration into all of those services that should be available in the Cloud Native microservices architecture platform, such as low bouncers, right, elasticity, object store, etc. The second, and most importantly, because it is a key enabler for those next generation workloads, is the GPGPU enablement work that we're doing with partners such as NVIDIA. When you deploy the Canonical distribution of Kubernetes, you actually get the NVIDIA acceleration out of the box the way that NVIDIA envisions this on top of Kubernetes and the way that it is, by the way, being deployed on the public clouds. >> You bring a lot of your goodness to the table inside the Kubernetes distribution. OK, what are some customers doing? Give some use cases of some customers' Kubernetes, what are some of the things that they're doing with it, what's the early indication? What's the feedback? >> Sure. We have a ton of customers that are using our version of Kubernetes to do the machine learning applications and the AI of the next gen workloads in use cases such as smart cities or connected cars, where, when you look at self-driving cars, right, as the next gen that's coming out of the valley, they put in 300,000, 150,000, 400,000 miles a year on the road these days just optimizing the models that are being used to actually take over one day. Enabling those kinds of workloads in a distributed fashion requires DevOps expertise. Now, the people who are actually writing those applications are not DevOps people, they're data scientists, right. They shouldn't have to learn how to deploy Kubernetes, how to create a container and all those things. They should just be able to deploy the application on top an attractive substrate that actually supports that distributed application use case, and so that is where we come in. >> This is interesting, because what you're basically doing is making an application developer a DevOps developer overnight. >> Stephan: That's exactly right. >> That's really important. I was just talking with the co-chair of CNCF. We're talking about, Liz Rice and I were talking about why everyone's so, like, excited here. One of the things I said was, because people who are doing DevOps were hardcore, and they had to build everything from scratch, and all the scar tissue. But the benefits, once you got through the knothole there, the benefits were amazing, right. You go, okay, you don't want to do that again, but now there's a way to make it easier. There's kind of a shared experience even though no one's met each other, so there's kind of a joint community. >> I agree. I think it is increasingly about enabling developers who are experts in their field to actually leverage Kubernetes and the advantages that it brings in a more intuitive fashion. Just take it up a notch. >> How did the Kubernetes vibe integrate in with Canonical? I'm sure, given the background of the company, it probably was a nice fit, people embraced it. You guys were early. >> Stephan: Yeah. >> What's the internal scuttlebutt on the vibe with Kubernetes? >> Oh, we love Kubernetes as a technology. 
Ubuntu was always close to the developer and close to where the innovation happens. It was a natural fit to actually support all that workflow now in this new world of Kubernetes. We embraced OpenStack for the same reason, and in a similar fashion, Kubernetes has really driven the point home, containerized applications with a powerful orchestration framework such as Kubernetes are the next step for all the developers that are out there, and so as a consequence, this was a perfect match. >> It's also a no-brainer if you think about it, software methodology moving to the next level. This is a total step-up function for productivity for developers. That's really a key thing. What's your observation of that trend? Because at the end of the day, there's now Kubernetes, which does a lot of great things, but one of the hottest areas is Istio service meshes, and then you've got Kubeflow orchestration, a lot of other things that are happening around Kubernetes. What are you guys seeing that's important for Canonical's customers, what you're doing product-wise? Where's the order of operations, what's next? What are you guys focused on, what's the priorities? >> Well, our biggest priority right now is enabling things like Kubeflow, which, by the way, is also using Istio internally, right, to actually enable those data scientists who actually deploy their AI workloads. We work very closely with Google to try and enable this in an on prem fashion out of the box, which is something you can actually do today. >> John: You guys are doing this now inside this. >> We're doing this right now. This is also where we're going to double and triple down. >> This is actually your best practice, too, if you think about it, you want to take it in house, and then get a feel for it. What's the internal vibe on that, positive? >> Oh, absolutely. I mean, we always saw infrastructure as code and actually as intelligent infrastructure as something that we wanted to build our conceptual framework around, so very concretely, right. We've always had this notion of composable building blocks adding up to, sum of one being greater than two, right, like those types of scenarios. Actually using things like Kubernetes as an effective building block to then build out web applications that use things like machine learning algorithms underneath, that's a perfect use case for a next gen workload, and also something that we might use ourselves internally. >> Well, hey, that whole building block thing, it's happening. >> Stephan: Yeah. >> News flash. >> Stephan: Exactly, right? >> I mean, it's almost a pinch me moment for the people in the industry like, oh my god, it's going to go to a whole other level. How do you guys envision that next level going? Beyond the building blocks, is it, I mean, what's the vision that you guys have? Obviously, infrastructure as code programmability, but now, you're talking about infrastructure as code was great, but now you've got microservices growth coming on top of it, it's a services market now. >> It is, it is. I think that the biggest challenge will be the distribution of the workloads, right. You have edge compute coming along in the telco space, you have, like I said, smart cities, right, the sensors will be everywhere, and they will feed data back, and how do you manage that at scale, right? How do you manage that across various different hardware perspectives?
We have hardware platforms such as ARM 64 picking up, right, and actually playing a very significant role at the edge, and increasingly, even in the core. We've always believed that providing that software and the distribution of IS such as Kubernetes and others on top of those additional architectures would make a huge difference, and that is clearly paying off. What we see is, the increased need of managing hybrid workloads across multi-cloud scenarios that could be composed of different architectures, not just x86, the future is not homogeneous at all. It'll be all over the place. All those use cases and all those particular situation require that building block principle, like all the way from the OS up to the application. >> John: That's a great use case for containers. Kubernetes, Istio, Kubeflow. >> Absolutely. >> All stacking in line beautifully from an evolution standpoint. I've got to ask you a personal question. I mean, I was at Canonical, great company, I want to thank Canonical for being a sponsor of the CUBE over the years. We've had Mark Shuttleworth on the CUBE had an OpenStack going way back when. You guys are a great participant in the community as a company and the people there been phenomenal. You're new. >> I'm new. >> What attracted you to Canonical? What was the motivating force? What drew you in? You're now running Products, a big job. You've got a lot in front of you. Obviously, it's a great market, so you're a great company. Just share, just color and why Canonical, what attracted you there? >> I've always been a user of Ubuntu, I've been a user since the first hour. I've used Ubuntu in my research. I did robotics based on Ubuntu way before it was cool. I built all kinds of things on top of Ubuntu throughout my entire career. Working for Canonical, which is a company that always exhibited great vision into the future and great predictions into trends that would prove to become true was just, for me, something that was very attractive. >> Their leadership has a good eye on the prize. They had good 20 mile stare, as we say, they can see the roadmap ahead and then make either course corrections or tweaks. >> Yeah. >> Great, awesome. Well, I mean, what's new there? What's your, take a minute to explain what's new at Canonical, role here at KubeCon, what are some of the conversations you're having? >> Yeah, so I mean, for us at KubeCon, it's always been an important part of our outreach to the community, great opportunity for us to have great conversations with our partners in the field. I think it is really about enabling the ecosystem in a more straightforward way. There's no better place to have those types of conversations than here, where everybody comes together and really establishes those relationships. For us, it is about, again, enabling the developer and really staying close to that innovation and supporting that in an optimal way. Yes, I mean, that, to us, is the role that we play. You've got a lot of end users here who are building stuff. >> Oh, absolutely, yeah. They, I mean, I had a talk today about Kubeflow with Google, and after the talk, lots of folks came up to me and said, hey, how can I use this at home, right? >> Sometimes with, whether it's timing, technology, all the above, Kubernetes really hit it strong with the timing, industry was ready for it. Containers had a nice gestation period. People know about containers. >> Stephan: Absolutely. >> Engineers know containers, know about those kinds of concepts. 
Now we're at a whole other operating environment. >> Stephan: Absolutely. >> You guys are at the forefront. Thanks for coming on the CUBE. >> Oh, thank you, I appreciate it. >> Stephan sharing the perspective, Stephan Fabel. Running Product and Strategy for Canonical, building stuff, this is what's going on in Kubernetes in KubeCon, end users are actually building and orchestrating workloads. Multi-cloud is what people are talking about and the tech to make it happen is here. I'm John Furrier with the CUBE. Stay with us for more live coverage here at KubeCon 2018, part of the CNCF CUBE coverage. We'll be right back after this short break. (busy music)

Published Date : May 3 2018

SUMMARY :

it's the CUBE, covering KubeCon I'm John Furrier, the host of the CUBE, from a lot of the heavy lifting. and have been an active distributor of Kubernetes What's the progress now, what's next? so that all of the landscape projects and kick the tires and see where that takes you What's the feedback? and the AI of the next gen workloads This is interesting, because what you're basically doing and all the scar tissue. and the advantages that it brings How did the Kubernetes vibe integrate in with Canonical? We embraced OpenStack for the same reason, Because at the end of the day, which is something you can actually do today. This is also where we're going to double and triple down. What's the internal vibe on that, positive? and also something that we might use ourselves internally. Well, hey, that whole building block thing, for the people in the industry like, and the distribution of IS such as Kubernetes and others John: That's a great use case for containers. of the CUBE over the years. what attracted you there? into the future and great predictions into trends Their leadership has a good eye on the prize. what are some of the conversations you're having? and really staying close to that innovation and after the talk, lots of folks came up to me and said, all the above, Kubernetes really hit it strong know about those kinds of concepts. Thanks for coming on the CUBE. and the tech to make it happen is here.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Lauren CooneyPERSON

0.99+

StephanPERSON

0.99+

Liz RicePERSON

0.99+

JohnPERSON

0.99+

CanonicalORGANIZATION

0.99+

Lou TuckerPERSON

0.99+

Stephan FabelPERSON

0.99+

San FranciscoLOCATION

0.99+

Cloud Native Computing FoundationORGANIZATION

0.99+

Mark ShuttleworthPERSON

0.99+

John FurrierPERSON

0.99+

CiscoORGANIZATION

0.99+

GoogleORGANIZATION

0.99+

two daysQUANTITY

0.99+

twoQUANTITY

0.99+

NVIDIAORGANIZATION

0.99+

CNCFORGANIZATION

0.99+

20 mileQUANTITY

0.99+

Spark LabsORGANIZATION

0.99+

KubernetesTITLE

0.99+

Copenhagen, DenmarkLOCATION

0.99+

KubeConEVENT

0.99+

UbuntuTITLE

0.99+

AmericaLOCATION

0.99+

oneQUANTITY

0.99+

todayDATE

0.99+

DevOpsTITLE

0.98+

secondQUANTITY

0.98+

Cloud Native Con Europe 2018EVENT

0.98+

CUBEORGANIZATION

0.98+

first oneQUANTITY

0.98+

KubeCon 2018EVENT

0.98+

first hourQUANTITY

0.98+

OneQUANTITY

0.98+

300,000, 150,000, 400,000 miles a yearQUANTITY

0.96+

greater than twoQUANTITY

0.96+

OpenStackTITLE

0.95+

IstioORGANIZATION

0.95+

KubeflowORGANIZATION

0.92+

CloudNativeCon EU 2018EVENT

0.92+

Liz Rice, Aqua Security & Janet Kuo, Google | KubeCon + CloudNativeCon EU 2018


 

>> Announcer: Live from Copenhagen, Denmark, it's theCUBE. Covering KubeCon and CloudNativeCon Europe 2018. Brought to you by the Cloud Native Computing Foundation and its ecosystem partners. >> Hello, everyone. Welcome back to theCUBE's exclusive coverage here in Copenhagen, Denmark for KubeCon 2018, part of the CNCF Cloud Native Compute Foundation, which is part of the Linux Foundation. I'm John Furrier, your host. We've got two great guests here, we've got Liz Rice, the co-chair of KubeCon and CloudNativeCon, kind of a dual naming because it's Kubernetes and it's Cloud Native and also technology evangelist at Aqua Security. She's co-chairing with Kelsey Hightower who will be on later today, and CUBE alumni as well, and Janet Kuo who is a software engineer at Google. Welcome to theCUBE, thanks for coming on. >> Yeah, thanks for inviting us. >> Super excited, we have a lot of energy even though we've got interviews all day and it's kind of, we're holding the line here. It's almost a celebration but also not a celebration because there's more work to do with Kubernetes. Just the growth of the CNCF continues to hit some interesting good performance KPIs on metrics. Growth's up on the membership, satisfaction is high, Kubernetes is being called a de facto standard. So by all kind of general qualitative metrics and quantitative, it's doing well. >> Lauren: It's doing great. >> But it's just the beginning. >> Yeah, yeah. I talked yesterday a little bit in, in the keynote, about project updates, about how Kubernetes has graduated. That's a real signal of maturity. It's a signal to the end-user companies out there that you know, the risk, nothing is ever risk-free, but you know, Kubernetes is here to stay. It's stable, it's got stable governance model, you know, it's not going away. >> John: It's working. >> It's going to continue to evolve and improve. But it's really working, and we've got end users, you know, not only happy and using it, they're prepared to come to this conference and share their stories, share their learnings, it's brilliant. >> Yeah, and Janet also, you know, you talk about China, we have announcement that, I don't know if it's formally announced, but Shanghai, is it out there now? >> Lauren: It is. >> Okay, so Shanghai in, I think November 14th, let me get the dates here, 14th and 15th in Shanghai, China. >> Janet: Yeah. >> Where it's going to be presented in either English or in Chinese, so it's going to be fully translated. Give us the update. >> Yeah, it will be fully translated, and we'll have a CFP coming soon, and people will be submitting their talks in English but they can choose to present either in English or Chinese. >> Can you help us get a CUBE host that can translate theCUBE for us? We need some, if you're out there watching, we need some hosts in China. But in all seriousness, this is a global framework, and this is again the theme of Cloud Native, you know. Being my age, I've seen the lift and shift IT world go from awesome greatness to consolidation to VMwares. I've seen the waves. But this is a different phenomenon with Cloud Native. Take a minute to share your perspectives on the global phenomenon of Cloud Native. It's a global platform, it's not just IT, it's a global platform, the cloud, and what that brings to the table for end users. >> I think for end users, if we're talking about consumers, it actually is, well what it's doing is allowing businesses to develop applications more quickly, to respond to their market needs more quickly. 
And end users are seeing that in more responsive applications, more responsive services, improved delivery of tech. >> And the businesses, too, have engineers on the front lines now. >> Absolutely, and there's a lot of work going on here, I think, to basically, we were talking earlier about making technology boring, you know, this Kubernetes level is really an abstraction that most application developers don't really need to know about. And making their experience easier, they can just write their code and it runs. >> So if it's invisible to the application developer, that's the success. >> That's a really helpful thing. They shouldn't have to worry about where their code is running. >> John: That's DevOps. >> Yeah, yeah. >> I think the container in Kubernetes technology or this Cloud Native technology that brings developer the ability to, you know, move fast and give them the agility to react to the business needs very quickly. And also users benefit from that and operators also, you know, can manage their applications much more easily. >> Yeah, when you have that abstraction layer, when you have that infrastructure as code, or even this new abstraction layer which is not just infrastructure, it's services, micro-services, growth has been phenomenal. You're bringing the application developer into an efficiency productivity mode where they're dictating the business model through software of the companies. So it's not just, "Hey build me something "and let's go sell it." They're on the front lines, writing the business logic of businesses and their customers. So you're seeing it's super important for them to have that ability to either double down or abandon quickly. This is what agile is. Now it's going from software to business. This, to me, I think is the highlight for me on this show. You see the dots connecting where the developers are truly in charge of actually being a business impact because they now have more capability. As you guys put this together and do the co-chair, do you and Kelsey, what do you guys do in the room, the secret room, you like, "Well let's do this on the content." I mean, 'cause there's so much to do. Take us through the process. >> So, a little bit of insight into how that whole process works. So we had well over 1,000 submissions, which, you know, there's no, I think there's like 150 slots, something like that. So that's a pretty small percentage that we can actually accept. We had an amazing program committee, I think there were around 60 people who reviewed, every individual reviewer looked at a subset. We didn't ask them to look at all thousand, that would be crazy. They scored them, that gave us a kind of first pass, like a sort of an ability to say, "Well, anything that was below average, "we can only take the top 15%, "so anything that's below average "is not going to make the cut." And then we could start looking at trying to balance, say, for example, there's been a lot of talk about were there too many Istio talks? Well, there were a lot of Istio talks because there were a lot of Istio submissions. And that says to us that the community wants to talk about Istio. >> And then number of stars, that's the number one project on the new list. I mean, Kubeflow and Istio are super hot. >> Yeah, yeah, Kubeflow's another great example, there are lots of submissions around it. 
We can't take them all but we can use the ratings and the advice from the program committee to try and assemble, you know, the best talks to try and bring different voices in, you know, we want to have subject matter experts and new voices. We want to have the big name companies and start-ups, we wanted to try and get a mix, you know. A diversity of opinion, really. >> And you're a membership organization so you have to balance the membership needs with the content program so, challenging with given the growth. I mean, I can only imagine. >> Yeah, so as program co-chairs, we actually have a really free hand over the content, so it's one of the really, I think, nice things about this conference. You know, sponsors do get to stand on stage and deliver their message, but they don't get to influence the actual program. The program is put together for the community, and by doing things like looking at the number of submissions, using those signals that the community want to talk about, I hope we can carry on giving the attendees that format. >> I would just say from an outsider perspective, I think that's something you want to preserve because if you look at the success of the CNCF, one thing I'm impressed by is they've really allowed a commercial environment to be fostered and enabled. But they didn't compromise the technical. >> Lauren: Yeah. >> And the content to me, content and technical tracks are super important because content, they all work together, right? So as long as there's no meddling, stay in your swim lane, whatever, whatever it is. Content is really important. >> Absolutely, yeah. >> Because that's the learning. >> Yeah, yeah. >> Okay, so what's on the cut list that you wish you could have put back on stage? Or is that too risque? You'll come back to that. >> Yeah. >> China, talk about China. Because obviously, we were super impressed last year when we went to go visit Alibaba just to the order of magnitude to the cultural mindset for their thinking around Cloud Native. And what I was most impressed with was Dr. Wong was talking about artistry. They just don't look at it as just technology, although they are nerdy and geeky like us in Silicon Valley. But they really were thinking about the artistry 'cause the app side of it has kind of a, not just design element to the user perspective. And they're very mobile-centric in China, so they're like, they were like, "This is what we want to do." So they were very advanced in my mind on this. Does that change the program in China vis a vis Seattle and here, is there any stark differences between Shanghai and Copenhagen and Seattle in terms of the program? Is there a certain focus? What's the insight into China? >> I think it's a little early to say 'cause we haven't yet opened the CFP. It'll be opening soon but I'm fully expecting that there will be, you know, some differences. I think the, you know, we're hoping to have speakers, a lot more speakers from China, from Asia, because it's local to them. So, like here, we tried to have a European flavor. You'll see a lot of innovators from Europe, like Spotify and the Financial Times, Monzo Bank. You know, they've all been able to share their stories with us. And I think we're hoping to get the same kind of thing in China, hear local stories as well. >> I mean that's a good call. I think conferences that do the rinse and repeat from North America and just slap it down in different regions aren't as effective as making it localized, in a way. >> Yeah. >> That's super important. 
>> I know that a lot of China companies, they are pretty invested pretty heavily into Kubernetes and Cloud Native technology and they are very innovative. So I actually joined a project in 2015 and I've been collaborating with a lot of Chinese contributors from China remotely on GitHub. For example, the contributors from Huawei and they've been invested a lot in this. >> And they have some contributors in the core. >> Yeah, so we are expecting to see submissions from those contributors and companies and users. >> Well, that's super exciting. We look forward to being there, and it should be excellent. We always have a fun time. The question that I want to ask you guys now, just to switch gears is, for the people watching who couldn't make it or might watch it on YouTube on Demand who didn't make the trip. What surprised you here? What's new, I'm asking, you have a view as the co-chair, you've seen it. But was there anything that surprised you, or did it go right? Nothing goes perfect. I mean, it's like my wedding, everything happens, didn't happen the way you planned it. There's always a surprise. Any wild cards, any x-factors, anything that stands out to you guys? >> So what I see from, so I attend, I think around five KubeCons. So from the first one it's only 550 people, only the small community, the contributors from Google and Red Hat and CoreOS and growing from now. We are growing from the inner circle to the outside circle, from the just contributors to also the users of it, like and also the ecosystem. Everyone that's building the technology around Cloud Native, and I see that growth and it's very surprising to me. We have a keynote yesterday from CERN and everyone is talking about their keynote, like they have I think 200 clusters, and that's amazing. And they said because of Kubernetes they can just focus on physics. >> Yeah, and that's a testimonial right there. >> Yeah. >> That was really good stories to hear, and I think maybe one of the things that surprises me, it sort of continues to surprise me is how collaborative, it's something about this kind of organization, this conference, this whole kind of movement, if you like. Where companies are coming in and sharing their learnings, and we've seen that, we've seen that a lot through the keynotes. And I think we see it on the conference floor, we see it in the hallway chat. And I think we see it in the way that the different SIGs and working groups and projects are all, kind of, collaborating on problem solving. And that's really exciting. >> That's why I was saying earlier in the beginning that there's a celebration amongst ourselves and the community. But also a realization that this is just the beginning, it's not a, it's kind of like when you get venture funding if you're a start-up. That's really when it begins, you don't celebrate, but you take a little bit of a pause. Now my personal take only to all of the hundreds of events we do a year is that I that think this community here has fought the hard DevOps battle. If you go back to 2008 timeframe, and '08, '09, '10, '11, '12, those years were, those were hyper scale years. Look at Google, Facebook, all the original DevOps engineers, they were eating glass and spitting nails. It was hard work. And it was really build your own, a lot of engineering, not just software development. So I think this, kind of like, camaraderie amongst the DevOps community saying, "Look, this is a really big "step up function with Kubernetes." Everyone's had some scar tissue. 
>> Yeah, I think a lot of people have learned from previous, you know, even other open source projects that they've worked on. And you see some of the amazing work that goes into the kind of, like, community governance side. The things that, you know, Paris Pittman does around contributor experience. It's so good to see people who are experts in helping developers engage, helping engineers engage, really getting to play that role. >> There's a lot of common experiences for people who have never met each other because there's people who have seen the hard work pay with scale and leverage and benefits. They see it, this is amazing. We had Sheryl from Google on saying, "When I left Google and I went out into the real world, "I was like, oh my God, "they don't actually use Borg," like, what? "What do they, how do they actually write software?" I mean, so she's a fish out of water and that, it's like, so again I think there's a lot of commonality, and it's a super great opportunity and a great community and you guys have done a great job, CNCF. And we hope to nurture that, the principles, and looking forward to China. Thanks for coming on theCUBE, we appreciate it. >> Yeah. >> Okay we're here at CNCF's KubeCon 2018, I'm John Furrier, more live coverage. Stay with us, day two of two days of CUBE coverage. Go to thecube.net, siliconangle.com for all the coverage. We'll be back, stay with us after this short break.

Published Date : May 3 2018

Michael Hausenblas & Diane Mueller, Redhat | KubeCon + CloudNativeCon EU 2018


 

>> Narrator: From Copenhagen, Denmark, it's theCUBE, covering KubeCon, and CloudNativeCon Europe 2018. Brought to you by the Cloud Native Computing Foundation, and its ecosystem partners. >> Okay, welcome back, everyone, live coverage here in theCUBE, in Europe, at Copenhagen, Denmark for KubeCon Europe 2018. This is theCUBE. We have the CNCF, at the Cloud Native Computing Foundation, part of the Linux Foundation. I'm John Furrier, co-host of theCUBE, with Lauren Cooney, the founder of SparkLabs, new venture around open source and innovation. Our analysts here, today with theCUBE, and our two guests are Michael Hausenblas, who's the direct developer advocate at Red Hat. Diane Meuller's the director of community development at Red Hat, talking about OpenShift, Red Hat, and just the rise and success of OpenShift. It's been really well-documented here on theCUBE, but certainly, in the industry, everyone's taking notice. Great to see you again, welcome to theCUBE, good to see you. >> Thank you. >> And wonderful to be here again. >> So, first of all, a lot of big news going on. CoreOS is now part of Red Hat, so that's exciting. I haven't had a chance to talk to you guys about that yet here on theCUBE, but great, great puzzle piece from the industry there for you guys, congratulations. >> Yeah, it's been a wonderful collaboration, having the CoreOS team as part of the Red Hat, and the OpenShift team, it's just a perfect fit. And the team from CoreOS, they've always been my favorite people. Alright, and Brandon Philips and the team over there are just awesome. And to have the expertise from Tectonics, the operator framework, which you'll hear more about here at KubeCon EU this week, to have Quay under the wings of Red Hat now, and Quay is a registry with OpenShift or with any other Kubernetes, you know, the stuff that they brought to the table, and the expertise, as well as the wonderful culture that they had, it was such a perfect fit with OpenShift. >> And you know, you guys bring a lot to the table, too. And I was, I mean, I've been kind of critical of CoreOS in the past, in a good way, 'cause I love those guys. I had good chats with them over the years, but they were so pure open-source guys, like Red Hat. >> Diane: Well, there's nothing wrong with being pure open-source. (laughing) >> No, no, I'm cool with that, but you guys have perfected the business more, you have great customers. So one of the things that they were always strong at was the open-source piece but when you start to monetize, and you start to get into the commercialization, it's hard for a start-up to be both, pure open-source and to monetize. You guys now have it together, >> Yeah. >> Great fit. >> So, it's a wonderful thing. We, on the OpenShift side, we have the OpenShift Commons, which is our open-source community, and we've sort of flipped the model of community development and that's at Red Hat. And one of the things is, they've been really strong, CoreOS, with their open-source projects, whether etcd, or you know, a whole myriad of other things. >> Well, let's double down on that. I want to get your thoughts. What is this OpenShift Commons? Take a minute to talk about what you guys had. You had an event Monday. It was the word on the streets, here in the hallways, is very positive. Take a minute to explain what happened, what's going on with that program? 
>> So OpenShift Commons is the open-source community around OpenShift Origin, but it also includes all the upstream projects that we collaborate with, with everybody from the Kubernetes world, from Prometheus, all the CNCF project leads, all kinds of people from the upstream projects that are part of the OpenShift Ecosystem, as well as all the service providers and partners, who are doing wonderful things, and all the hosts, like Google, and you know, Microsoft Azure folks are in there. But, we've kind of flipped the model of community development on its head. In the past, if you were a community manager, which is what I started out as, you were trying to get people to contribute to your own code base. And here, because there's so much cross-community collaboration going on, we've got people working on Kubernetes. We got Kubernetes people making commits to Origin. We work on the OCI Foundation, trying to get the container stuff all figured out. >> So when you say you flipped the model, you mean there's now multiple-project contributions going on, or? >> Yeah, we've got our fingers in lots of pies now, and we have to, the collaboration has to be open, and there has to be a lot of communication. So the OpenShift Commons is really about creating those peer-to-peer networks. We do a lot of stuff virtual. I host my own OpenShift Commons briefings twice a week, and I could probably go to three or four days a week, and do it, because there's so much information. There's a fire hose of new stuff, new features, new releases, and stuff. Michael just did one on FaaS. You did one before for the machine-learning SIG on OpenShift Commons. >> Hold on, I want to just get your thoughts, Michael, on this, because what came up yesterday on theCUBE, was integration glue layers are really important. So I can see the connection here. Having this Commons model allows people to kind of cross-pollinate, one. Two, talk about integration, because we've got Prometheus, I might use KubeFlow. So there's new things happening. What does this mean for the integration piece? Good for it, or accelerating it? What's your thoughts? >> Right, right, right. So, I mainly work upstream, which means on KubeFlow and other projects. And for me, these are the kind of areas where you can bring together both the developers and the end users, which is super important for us to get the feedback to see where we really are struggling. We hear a lot from those people that meet there, what their pain points are. And that is the best way to essentially shape the agenda, to say, well, maybe let's prioritize this over this other feature. And as you mention, integration being one big part, and Functions as a Service, which could be considered the Visual Basic of applications for Cloud Native Computing. It can act as this kind of glue between different things there. And I'm super excited about Commons. That's for me a great place to actually meet these people, and talk with them. >> So the Commons is almost a cross-pollination of folks that are actually using the code, building the code, and they see other projects that make sense to contribute to, and so it's an alignment where you allow for that cross-pollination. >> It's a huge series of conversations, and one of the things that is really important to all of the projects, as Michael said, is getting that feedback from production deployments. People who are working on stuff. So we have, I think we're at around 375 organizational members, so there's...
>> John: What percentage of end-user organizations, do you think? >> It's probably about 50/50. You know, you can go to Commons.OpenShift.org, and look up the participants list. I'm behind a little bit in getting everybody in there, but-- >> John: So it's a good healthy dose of end-users? >> It's a good healthy dose of end-users. There's some special interest groups. Our special interest groups are more around use cases. So, we just hosted a machine-learning reception two nights ago, and we had about 200 people in the room. I'd say 50% of them were from the KubeFlow community, and the other 50% were users, or people who are building frameworks for our people to run on OpenShift. And so our goal, as always, is to make OpenShift the optimal, the best place to run your, in this case, machine-learning workloads, or-- >> And I think that's super critical, because one of the things that I've been following a little bit, and you know, I have your blog entry in front of me, is the operator framework, and really what you're trying to do with that framework, and how it's progressing, and where it's going, and really, if you can talk a little bit about what you're doing there, I think that would be great for our viewers. >> So what I'm going to do is I'm going to make sure you get Brandon Philips here, on your KubeFlow, sometime this week, 'cause I don't want to steal the thunder from his keynote tomorrow morning-- >> Lauren: Well, drop a couple hints. (laughs) >> John: Share a little bit, come on. >> So the operator stuff that CoreOS, and they brought it to the table, so it's really their baby. They had done a lot of work to make sure that they had first-class access to be able to inject things into Kubernetes itself, and make it run. And they're going to do a better technical talk on it than I am, and make things run. And so what they've done is they've opened up and created an SDK for operators, so other people can build more. And we think this is a tipping point for Kubernetes, and I really don't want to steal any thunder here, or get in over my head, is the other part of it, too. >> I think Brandon is the right person to talk about that. >> Brandon, we'll drag Brandon over here. >> I'm super excited about it, but let's-- >> Yeah, let's talk about why you're super excited about it. Is there anything you can kind of tell us in terms of what? >> It enables people to run any kind of workload in Kubernetes, in a reliable, automated fashion. So you bring the experience that human operators have into software. So you automate that application, which makes it even more suitable to run your enterprise application that so far might not have been the best fit to run there. >> Lauren: That's great, yeah. >> And yeah, I'm also looking forward to Brandon explaining the details there. >> So I think it's great hearing about that, and we talk a lot about how it's great for users. It's great, you know, operators, developers, how they're building things out, and things along those lines. But one of the things that we are not hearing a ton about here, and we want to hear more about, is security. Security is increasingly important. You know, we're hearing bits and pieces but nothing's really kind of coming together here and what're your thoughts on that? >> Security, I was recently, when I blogged about it, and people on Twitter said, well, is that really true, you know, can this really be secured? It's like, well, all the pieces are there. You need to be aware of it. You need to know what you're doing.
But it is there, right? All the defaults might not be as you would expect, but you can enable it. And I think we did a lot of innovations there, as well. With RBAC, and security context, and so on. And, actually, Liz Rice and myself are working on putting together a security cookbook, for O'Reilly, that will come out later this year. We're trying to document the best practice, because it is early days, and it's quite a range of things. From building container images in a secure way, to access control, and so on, so there's a lot of stuff (mumbles). >> What're some of the end-user feedback sessions, or feedback data that you're getting from these sessions? What are some of the things you guys are hearing? What's the patterns? What's the things that are boiling up to the top? >> Well, there's so many. I mean, this conference is one of those ones where it's a cornucopia of talks, and trying to, I just wrote a little blog post called, The Hitchhiker's Guide to KubeCon. It's on blog.openshift.com. And because, you could spend all of your time here in a different track, and never leave it, like Security 1, or in Operations 1, or-- >> John: There's a lot of great content. >> I think the Istio stuff is probably the hottest thing I'm hearing people going to. There was a great deep-dive training session, hands-on on Monday, here, that got incredible feedback. IBM and Google did that one. We had a lot of customer talks and hands-on training sessions on Monday. Here, there are pretty much, there's a great talk coming up this afternoon, on Kube Controllers, that magic... I think that's at 11:45-ish. There's a lot of stuff around service mesh, and service brokers, that's really kind of the hot thing that people are looking for to get implemented. And we've got a lot of people from Red Hat working on that. There's, oh man, there's etcd updates, there's a bazillion things going-- >> John: It's exploding big time here. >> Yeah. >> No doubt about it. >> The number one thing that I'm seeing the last couple of months, being onsite with customers, and also here, is that given that Kubernetes is now the de facto standard of container orchestration, people are much more willing to go all-in, you know? >> Yeah. >> A lot of folks were on the fence, for a couple of years, going like, which one's going to make it? Now, it's kind of like, this is a given. You know, just as Linux is everywhere on the servers, that's the same with Kubernetes, and people are now happy to really invest, to like, okay, let's do it now, let's go all in. >> Yeah, and, what we're hearing, too, just stepping back and looking at the big picture is we see the trend, kind of hearing and connecting the dots, as the number of nodes is going to expand significantly. I mean, Sterring was on stage yesterday, and we heard their story, and it's still small, not a lot of huge, not a lot on a large scale. So, we think that the scale question is coming quickly. >> Well, I think it already came, alright? In the machine-learning reception that we had at night, one of the gentlemen, Willem Bookwalter, from Microsoft, and Diane Feddema, from Red Hat, and a whole lot of people are talking about how do we get, because machine-learning workloads have such huge, you know, GPU, and Google has their TPU requirements to get to scale, to run these things, that people are already pushing the envelope on Kubernetes. Jeremy Eater from Red Hat has done some incredible performance management work. And on the CNCF blog, they've posted all of that.
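Diane's point about machine-learning workloads pushing GPU and scale limits usually shows up first as a resource request on the pod spec. The sketch below is a hypothetical illustration, not something from the interview, using the Kubernetes Python client; it assumes a cluster where the NVIDIA device plugin exposes nvidia.com/gpu as an allocatable resource, and the image name and resource sizes are made up.

```python
# Hypothetical sketch: request a GPU for a training pod. Assumes the NVIDIA
# device plugin advertises "nvidia.com/gpu" on the cluster's nodes.
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="training-job"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="example.com/tf-trainer:latest",  # illustrative image
                resources=client.V1ResourceRequirements(
                    # Extended resources like GPUs are set in limits;
                    # the request is implied to match the limit.
                    limits={"nvidia.com/gpu": "1", "cpu": "4", "memory": "16Gi"},
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```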
To get the optimal performance, and to get the scale, is now, I think, one of the next big things, and there's a lot of talks that are on that. >> Yeah, and that's Istio's kind of big service mesh opportunity there, is to bring that to the next level. >> To the next level, you know, there's going to be a lot of things that people are going to experience trying to get the most out of their clusters, but also, I think we're still at the edge of that. I mean, someone said something about getting to 2,500 nodes. And I'm like, thinking, that's just the beginning, baby. >> Yeah, it's going to be more, add a couple zeroes. I got to ask you guys, I got to put you both on the spot here, because it's what we do on theCUBE. You guys are great supporters of theCUBE. We appreciate that, but we've had many conversations over the years with OpenShift, going back to OpenStacks, I don't know what year it was, maybe 2012, or I don't know. I forget what year it was. Now, the success of OpenShift was really interesting. You guys took this to a whole 'nother level. What's the reaction? Are you, as you look back now on where you were with OpenShift and where you are today, do you pinch yourself and say, damn? Or what's your view? >> Red Hat made a big bet on Kubernetes three years ago, three and a half years ago, when people thought we were crazy. You know, they hadn't seen it. They didn't understand what Google was trying to open-source, and some of the engineers inside of Red Hat, Clayton Coleman, Matt Hicks, a lot of great people, saw what was coming, reached out, worked with Google. And the rest of us were like, well, what about Ruby and Rails, and Mongo DB, and you know, doing all this stuff? And like, we invested so much in gears and cartridges. And then, once they explained it, and once Google really open-sourced the whole thing, making that bet as a company, and pivoting on that dime, and making version 3.0 of OpenShift and OpenShift Origin, as a Kubernetes-based platform, as a service, and then, switching over to being a container platform, that was a huge thing. And if you had talked to me back then, three years ago, it was kind of like, is this the right way to go? But, then, you know, okay. >> Well, it's important to history to document that point, because I remember we talked about it. And one of the things, you guys made a good bet, and people were scratching their head, at that time. >> Oh yeah. >> Big time. But also, you've got to give credit to the community, because the leaders in the community recognized the importance of Kubernetes early on. We've been in those conversations, and said, hey, you know, we can't screw this up, because it was an opportunity. People saw the vision, and saw it as a great opportunity. >> I think, as much as I like the technical bits, as an engineer, the API being written and go, and so on, I really think the community, that is what really makes the difference. >> Yeah, absolutely does. >> If you compare it with others, they're also successful. But here with CNCF, all the projects, all the people coming together, and I love the community, I really-- >> It's a case study of how to execute, in my opinion. You guys did a great job in your role, and the people didn't get in the way and try to mess it up. Great smart people understood it, shepherded it through, let it grow. >> And it really is kudos to the Kubernetes community, and the CNCF, for incubating all of this wonderful cross-community collaboration. They do a great job with their ambassadors program. 
The Kubernetes community does amazing stuff around their SIGs, and making sure that projects get correctly incubated. You know, they're not afraid to rejig the processes. They've just done a wonderful thing, changing the way that new projects come into Kubernetes, and I think that willingness to learn, learn from mistakes, to evolve, is something that's really kind of unique to the whole new way of thinking about open-source now, and that's the change that we've seen. >> And open-source, open movements, always have a defining moment. You know, the OSI model, remember? That stack never got fully standardized but it stopped at a really important point. TCP/IP became really important, and it created that interoperability world: Cisco, as we know, and others. This is that kind of moment where there's going to be a massive wealth creation, value creation opportunity because you have people getting behind something, as a de facto standard. And then, there's a lot of edge work around it that can be innovated on. I think, to me, this is going to be one of those moments we look back on. >> Yeah, and I think it's that willingness to adjust the processes, to work with the community, and you know, that Kubernetes, the ethos that's around this project, we've learned from a lot of other foundations' mistakes. You know, not that they're better or worse, but we've learned, and you can see it in the way we're bringing in new projects, and adding them on. We took a step back as a community, and said okay, this is, we're getting too many, too soon, too fast. And maybe, this is not quite the right way to go. And rather than doing the big tent umbrella approach, we've actually started doing some real re-thinking of our processes, and the governing board and the TOC of the CNCF have done an awesome job getting that done. >> When you got lightning in a bottle, you stop and you package it up, and you run with it, so congratulations. Red Hat Summit next week, we'll be there, theCUBE. >> Oh yeah. >> Looking forward to going deep on this. >> Well, the OpenShift Commons Gathering is the day before Red Hat Summit. We've completely sold out, so sorry, there's a waitlist. We've gone from being, our first one, I think we had 150 people come. There's over 700 people now coming to the Gathering, and 25 customers with production deployments speaking. This is the day before Red Hat Summit. And I lost count of how many OpenShift stories are being told at Red Hat Summit. It's going to be a crazy, jetlag-y week, next week, so-- >> Congratulations, you guys got a spring in your step, well done. OpenShift going to the next level, certainly the industry and Kubernetes, and service mesh with Istio. Lot of great coverage here on theCUBE, here in Europe for KubeCon 2018 in Copenhagen, Denmark. I'm John Furrier, and Lauren Cooney, the founder of SparkLabs. I'm with theCUBE, we'll be back with more live coverage. Stay with us! Day Two, here at KubeCon, we'll be right back. (upbeat techno music)
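The operator framework Diane and Michael tease in this segment boils down to a controller that watches a custom resource and reconciles the cluster toward whatever the resource declares. The loop below is a deliberately stripped-down, hypothetical sketch using the Kubernetes Python client; the CRD group, version, and plural are placeholders, and a real operator built with the operator SDK would also handle retries, status updates, and ownership.

```python
# Hypothetical sketch of the operator pattern: watch a custom resource and
# reconcile on every change. The group/version/plural below are placeholders.
from kubernetes import client, config, watch

config.load_kube_config()
api = client.CustomObjectsApi()

GROUP, VERSION, PLURAL = "example.com", "v1alpha1", "appcaches"  # assumed CRD

def reconcile(obj):
    """Drive the cluster toward the desired state declared in the resource."""
    name = obj["metadata"]["name"]
    desired = obj.get("spec", {})
    # A real operator would create or update Deployments, Services, Secrets,
    # and so on here, based on the desired spec.
    print(f"reconciling {name}: desired spec = {desired}")

w = watch.Watch()
for event in w.stream(api.list_cluster_custom_object, GROUP, VERSION, PLURAL):
    if event["type"] in ("ADDED", "MODIFIED"):
        reconcile(event["object"])
```

The point of the pattern is exactly what the interview describes: encoding what a human operator would do, so the application runs reliably without hand-holding.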

Published Date : May 3 2018

David Aronchick & JD Velasquez, Google | KubeCon + CloudNativeCon 2018


 

>> Announcer: Live, from Copenhagen, Denmark. It's theCUBE! Covering KubeCon and CloudNativeCon Europe 2018. Brought to you by the Cloud Native Computing Foundation, and its Ecosystem partners. >> Hi everyone, welcome back, this is theCUBE's exclusive coverage of the Linux Foundation's Cloud Native Compute Foundation KubeCon 2018 in Europe. I'm John Furrier, host of theCUBE and we're here with two Google folks. JD Velazquez who's the Product Manager for Stackdriver, got some news on that we're going to cover, and David Aronchick, who's the co-founder of Kubeflow, also with Google, news here on that. Guys, welcome to theCUBE, thanks for coming on. >> Thank you John. >> Thank you very much. >> So we're going to have Google Next coming out, theCUBE will be there this summer, looking forward to digging in to all the enterprise traction you guys have, and we had some good briefings at Google. Ton of movement on the Cloud for Google, so congratulations. >> JD: Thank you. >> Open source is not new to Google. This is a big show for you guys. What's the focus, you've got some news on Stackdriver, and Kubeflow. Kubeflow, not Cube flow, that's our flow. (laughing) David, share some of the news and then we'll get into Stackdriver. >> Absolutely, so Kubeflow is a brand new project. We launched it in December, and it is basically how to make machine learning stacks easy to use and deploy and maintain on Kubernetes. So we're not launching anything new. We support TensorFlow and PyTorch, Caffe, all the tools that you're familiar with today. But we use all the native APIs and constructs that Kubernetes rides to make it very easy and to let data scientists and researchers focus on what they do great, and let the I.T. Ops people deploy and manage these stacks. >> So simplifying the interactions and cross-functionality of the apps. Using Kubernetes. >> Exactly, when you go and talk to any researcher out there or data scientist, what you'll find is that while the model, TensorFlow, or Pytorch or whatever, that gets a little bit of the attention. 95% of the time is spent in all the other elements of the pipeline. Transforming your data, ingesting it, experimenting, visualizing. And then rolling it out toward production. What we want to do with Kubeflow is give everyone a standard way to interact with those, to interact with all those components. And give them a great workflow for doing so. >> That's great, and the Stackdriver news, what's the news we got going on? >> We're excited, we just announced the beta release of Stackdriver Kubernetes monitoring, which provides very rich and comprehensive observability for Kubernetes. So this is essentially simplifying operations for developers and operators. It's a very cool solution, it integrates many signals across the Kubernetes environment, including metrics, logs, events, as well as metadata. So what it allows is for you to really inspect your Kubernetes environment, regardless of the role, and regardless of where your deployment is running it. >> David is bringing up just the use cases. I just, my mind is exploding, 'cause you think about what Tensorflow is to a developer, and all the goodness that's going on with the app layer. The monitoring and the instrumentation is a critical piece, because Kubernetes is going to bring the people what is thousands and thousands of new services. So, how do you instrument that? I mean, you got to know, I want to provision this service dynamically, that didn't exist. 
How do you measure that, I mean this is, is this the challenge you guys are trying to figure out here? >> Yeah, for sure John. The great thing here is that we, and at Google primarily, many of our ancillary practices go beyond monitoring. It really is about observability, which I would describe more as a property of a system. How do you, are able to collect all these many signals to help you diagnose the production failure, and to get information about usage and so forth. So we do all of that for you in your Kubernetes environment, right. We take that toil away from the developer or the operator. Now, a cool thing is that you can also instrument your application in open source. You can use Prometheus, and we have an integration for that, so anything you've done in a Prometheus instrumentation, now you can bring into the cloud as needed. >> Tell about this notion, everyone gets that, oh my God, Google's huge. You guys are very open, you're integrating well. Talk about the guiding principles you guys have when you think about Prometheus as an example. Integrating in with these other projects. How are you guys treating these other projects? What's the standard practice? API Base? Is there integration plans? How do you guys address that question? >> Yeah, at a high level I would say, at Google, we really believe in contributing and helping grow open communities. I think that the best way to maintain a community open and portable is to help it grow. And Prometheus particularly, and Kubernetes of course, is a very vibrant community in that sense. So we are, from the start, designing our systems to be able to have integration, via APIs and so on, but also contributing directly to the projects. >> And I think that one thing that's just leveraging off that exact point, y'know, we realize what the world looks like. There's literally zero customers out there, like, "Well, I want be all in on one cloud. "Y'know, that 25 million dollar data center "I spent last year building. "Yeah, I'll toss that out so that I can get, "y'know, some special thing." The reality is, people are multi-cloud. And the only way to solve any problem is with these very open standards that work wherever people are. And that's very much core to our philosophy. >> Well, I mean, I've been critical of multi-cloud, by the definition. Statistically, if I'm on Azure, with 365, that's Azure. If I'm running something on Amazon, those are two clouds, they're not multi-cloud, by my definition. Which brings up where this is going, which is latency and portability, which you guys are really behind. How are you guys looking at that, because you mentioned observation. Let's talk about the observation space of clouds. How are you guys looking at, 'cause that's what people are talking about. When are we going to get to the future state, which is, I need to have workload portability, in real time, if I want to move something from Azure to AWS or Google Cloud, that would be cool. Can't do that today. >> That is actually the core of what we did around Kubeflow. What we are able to do is describe in code all the layers of your pipeline, all the steps of your pipeline. That works based on any conformant Kubernetes cluster. So, you have a Kubernetes conformant cluster on Azure, or on AWS, or on Google Cloud, or on your laptop, or in your private data center, that's great. And to be clear, I totally agree. I don't think that having single workloads spread across cloud, that's not just unrealistic, because of all the things you identified. 
Latency, variability, unknown failures, y'know. Cap theorem is a thing because, y'know, it's well-known. But what people want to do is, they want to take advantage of different clouds for the efforts that they provide. Maybe my data is here, maybe I have a legal reason, maybe this particular cloud has a unique chip, or unique service-- >> Use cases can drive it. >> Exactly, and then I can take my workload, which has been described in code and deploy it to that place where it makes sense. Keeping it within a single cloud, but as an organization I'll use multiple clouds together. >> Yeah, I agree, and the data's key, because if you can have data moving between clouds, I think that's something I would like to see, because that's going to be, because the metadata you mentioned is a real critical piece of all these apps. Whether it's instrumentation logging, and/or, y'know, provisioning new services. >> Yeah, and as soon as you have, as David is mentioning, if you have deployments on, y'know, with public or private clouds, then the difficult part is that of severability, that we were talking before. Because now you're trying to stitch together data, and tools to help you get that diagnosed, or get signals when you need them. This is what we're doing with Stackdriver Kubernetes monitoring, precisely. >> Y'know, we're early days in the cloud. It stills feels like we're 10 years in, but, y'know, a lot of people are now coming to realize cloud native, so. Y'know, I'm not a big fan of the whole, y'know, Amazon, although they do say Amazon's winning, they are doing quite well with the cloud, 'cause they're a cloud. It's early days, and you guys are doing some really specific good things with the cloud, but you don't have the breadth of services, say, Amazon has. And you guys are above board about that. You're like, "Hey, we're not trying to meet them "speed for speed on services." But you do certain things really, really well. You mentioned SRE. Site Reliability Engineers. This is a scale best practice that you guys have bringing to the table. But yet the customers are learning about Kubernetes. Some people who have never heard of it before say, "Hey, what's this Kubernetes thing?" >> Right. >> What is your perspectives on the relevance of Kubernetes at this point in history? Because it really feels like a critical mass, de facto, standard movement where everyone's getting behind Kubernetes, for all the right reasons. It feels a lot like interoperability is here. Thoughts on Kubernetes' relevance. >> Well I think that Alexis Richardson summed it up great today, the chairperson of the technical oversight committee. The reality is that what we're looking for, what operators and software engineers have been looking for forever, is clean lines between the various concerns. So as you think about the underlying infrastructure, and then you think about the applications that run on top of that, potentially services that run on top of that, then you think about applications, then you think about how that shows up to end users. Before, if you're old like me, you remember that you buy a $50,000 machine and stick it in the corner, and you'd stack everything on there, right? That never works, right? The power supply goes out, the memory goes out, this particular database goes out. Failure will happen. The only way to actually build a system that is reliable, that can meet your business needs, is by adopting something more cloud native, where if any particular component fails, your system can recover. 
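David's earlier point in this exchange, that a workload "described in code" can be deployed to whichever cluster makes sense, is easy to see with the Kubernetes Python client: the manifest stays the same and only the kubeconfig context changes. This is a hypothetical sketch; the context names and image are illustrative assumptions, not anything Google ships.

```python
# Hypothetical sketch of "describe the workload in code, run it where it makes
# sense": one manifest, applied to whichever kubeconfig context you choose.
from kubernetes import client, config

MANIFEST = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "pipeline-step"},
    "spec": {
        "replicas": 1,
        "selector": {"matchLabels": {"app": "pipeline-step"}},
        "template": {
            "metadata": {"labels": {"app": "pipeline-step"}},
            "spec": {"containers": [
                {"name": "step", "image": "example.com/step:latest"}
            ]},
        },
    },
}

def deploy(context_name: str) -> None:
    # Any conformant cluster reachable from the kubeconfig works the same way.
    config.load_kube_config(context=context_name)
    client.AppsV1Api().create_namespaced_deployment("default", body=MANIFEST)

deploy("gke-prod")      # e.g., a GKE context; the name is an assumption
# deploy("onprem-lab")  # ...or an on-prem cluster, with no code changes
```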
If you have business requirements that change, you can move very quickly and adapt. Kubernetes provides a rich, portable, common set of APIs, that do work everywhere. And as a result, you're starting to see a lot of adoption, because it gives people that opportunity. But I think, y'know and let me hand off to JD here, y'know, the next layer up is about observability. Because without observing what's going on in each of those stacks, you're not going to have any kind of-- >> Well, programmability comes behind it, to your point. Talk about that, that's a huge point. >> Yeah, and just to build on what David is saying, one thing that is unique about Google is that we've been doing for more than a decade now, we've been very good at being able to provide innovative services without compromising reliability. Right, and so what we're doing is in that commitment, and you see that with Kubernetes and Istio, we're externalizing many of our, y'know, opinionated infrastructure, and platforms in that sense, but it's not just the platforms. You need those methodologies and best practices. And now the toolset. So that's what we're doing now, precisely. >> And you guys have made great strides, just to kind of point out to the folks watching, in the enterprise, I know you've got a lot more work to do but you're pedaling as fast as you can. I want to ask you specifically around this, because again, we're still early days with the cloud, if you think about it, there are now table stakes that are on the table that you got to get done. Check boxes if you will. Certainly on the government side there's like, compliance issues, and you guys are now checking those boxes. What is the key thing, 'cause you guys are operating at a scale that enterprises can't even fathom. I mean, millions of services, on and on up a huge scale. That's going to be helpful for them down the road, no doubt about it. But today, what is the Google table stakes that are done, and what are enterprises need to have for table stakes to do cloud native right, from your perspective? >> Well, I think more than anything, y'know, I agree with you. The reality is all the hyperscale cloud providers have the same table stakes, all the check boxes are checked, we're ready to go. I think what will really differentiate and move the ball forward for so many people is this adoption of cloud native. And really, how cloud native is your cloud, right? How much do you need to spin up an entire SRE team like Netflix in order to operate in the Netflix model of, y'know, complete automation and building your own services and things like that. Does your cloud help you get cloud native? And I think that's where we really want to lean in. It's not about IAS anymore, it's about does your cloud support the reliability, support the distribution, all the various services, in order to help you move even faster and achieve higher velocity. >> And standing up that is critical, because now these applications are the business model of companies, when you talk about digital. So I tweeted, I want to get your reaction to this, yesterday I got a quote I overheard from a person here in the hallways. "I need to get away from VPNs and firewalls. "I need user application layer security "with unphishable access, otherwise I'm never safe." Again this talks about the perimeterless cloud, spearphishing is really hot right now, people are getting killed with security concerns. 
So, I'm going to stop if I'm enterprise, I'm going to say, "Hold on, I'm not going." Y'know, I'm going to proceed with caution. What are you guys doing to take away the fear, and also the reality that as you provision all these, stand up all this infrastructure, services for customers, what are you guys doing to prevent phishing attacks from happening, security concerns, what's the Google story? >> So I think that more than anything, what we're trying to do is exactly what JD just said, which is externalize all the practices that we have. So, for example, at Google we have all sorts of internal tools that we've used, and internal practices. For example, we just published a whitepaper about our security practices where you need to have two vulnerabilities in order to break out of any system. We have all that written up there. We just published a whitepaper about encryption and how to do encryption by default, encryption between machines and so on. But I think what we're really doing is, we're helping people to operate like Google without having to spin up an entire SRE team as big as Google's to do it. An example is, we just released something internally, we have something called BeyondCorp. It's a non-firewall, non-VPN based way for you to authenticate against any Google system, using two-factor authentication, for our internal employees. Externally, we just released it, it's called, Internet, excuse me, Identity-Aware Proxy. You can use it with literally any service that you have. You can provision a domain name, you can integrate with OAuth, you can, including Google OAuth or your own private OAuth. All those various things. That's simply a service that we offer, and so, really, y'know, I think-- >> And there's also multi, more than two-factor coming down the road, right? >> Exactly, actually Identity-Aware Proxy already supports two-factor. But I will say, one of the things that I always tell people, is a lot of enterprises say exactly what you said. "Jeez, this new world looks very scary to me. I'm going to slow down." The problem is they're mistaken, under the mistaken impression that they're secure today. More than likely, they're not. They already have firewall, they already have VPN, and it's not great. In many ways, the enterprises that are going to win are the ones that lean in and move faster to the new world. >> Well, they have to, otherwise they're going to die, with IOT and all these benefits, they're exposed even as they are, just operationally. >> Yep. >> Just to support it. Okay, I want to get your thoughts, guys, on Google's role here at the Linux Foundation's CNCF KubeCon event. You guys do a lot of work in open source. You've got a lot of great fan base. I'm a fan of what you guys do, love the tech Google brings to the table. How do people get involved, what are you guys connecting with here, what's going on at the show, and how does someone get on board with the Google train? Certainly TensorFlow has been, it's like, great open source goodness, developers are loving it, what's going on? >> Well we have over almost 200 people from Google here at the show, helping and connecting with people, we have a Google booth which I invite people to stop by and learn about the different projects we have. >> Yeah, and exactly like you said, we have an entire repo on GitHub. Anyone can jump in, all our things are open source and available for everyone to use no matter where they are. Obviously I've been on Kubernetes for a while.
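
For readers who want to see what the Identity-Aware Proxy looks like from an application's side: a backend behind IAP can check the signed assertion header the proxy attaches to every request before trusting it. The sketch below uses the google-auth Python library; the audience string is a placeholder, and the header name and public-key URL are the publicly documented ones rather than anything stated in the interview.

```python
# Minimal sketch: verifying the JWT that Identity-Aware Proxy attaches to
# forwarded requests, so a backend only trusts traffic that really came
# through the proxy. Requires: pip install google-auth requests
from google.auth.transport import requests as google_requests
from google.oauth2 import id_token

# Placeholder: the audience value is specific to your IAP-protected resource.
EXPECTED_AUDIENCE = "/projects/123456789/apps/example-project"
IAP_PUBLIC_KEYS_URL = "https://www.gstatic.com/iap/verify/public_key"

def verify_iap_request(headers: dict) -> str:
    """Return the authenticated user's email, or raise if the assertion
    is missing, expired, or was signed for a different audience."""
    assertion = headers.get("x-goog-iap-jwt-assertion")
    if assertion is None:
        raise PermissionError("request did not come through IAP")
    claims = id_token.verify_token(
        assertion,
        google_requests.Request(),
        audience=EXPECTED_AUDIENCE,
        certs_url=IAP_PUBLIC_KEYS_URL,
    )
    return claims["email"]
```
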
The Kubernetes project is on fire, TensorFlow is on fire, KubeFlow that we mentioned earlier is completely open source, we're integrating with Prometheus, which is a CNCF project. We are huge fans of these open source foundations and we think that's the direction that most software projects are going to go. >> Well congratulations, I know you guys invested a lot. I just want to highlight that. Again, to show my age, y'know, this younger generation has no idea how hard open source was in the early days. I call it open bar and open source, you guys are bringing so much, y'know, everyone's drunk on all this goodness. Y'know, just these libraries you guys are bringing to the table. >> David: Right. >> I mean TensorFlow is just the classic poster-child example. I mean, you're bringing a lot of stuff to the table. I mean, you invented Kubernetes. So much good stuff coming in. >> Yeah, I couldn't agree more. I hesitate to say we invented it. It really was a community effort, but yeah, absolutely-- >> But you opened it up, and you did it right, and did a good job. Congratulations. Thanks for coming on theCUBE, I'm going to see you at Google Next. theCUBE will be broadcasting live at Google Next in July. Of course we'll do a big drill-down on Google Cloud Platform at that show. It's theCUBE here at KubeCon 2018 in Copenhagen, Denmark. More live coverage after this short break, stay with us. (upbeat music)
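
On the observability thread that ran through this conversation -- Stackdriver Kubernetes monitoring on one side, the CNCF's Prometheus on the other -- the sketch below shows what exposing Prometheus metrics from a service looks like with the prometheus_client Python library. The metric names, label, and port are illustrative choices, not anything the guests described.

```python
# Minimal sketch: a service exposing Prometheus metrics on /metrics so a
# Prometheus server (or a hosted integration) can scrape them.
# Requires: pip install prometheus_client
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Placeholder metric names chosen for the example.
REQUESTS = Counter("app_requests_total", "Total requests handled", ["endpoint"])
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds",
                    ["endpoint"])

def handle_request(endpoint: str) -> None:
    """Pretend to serve a request while recording its count and latency."""
    REQUESTS.labels(endpoint=endpoint).inc()
    with LATENCY.labels(endpoint=endpoint).time():
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        handle_request("/checkout")
```
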

Published Date : May 2 2018


Wendy Cartee, VMware and Aparna Sinha, Google | CUBEConversation, March 2018


 

>> Hey welcome back to everybody, Jeff Frick here with theCUBE. We're in our Palo Alto studio for a CUBE conversation. The crazy conference schedule is just about ready to break over our heads, but we still have a little time to do CUBE conversations before we hit the road. But one show we're doing this summer that we've never done before is Kubecon Cloud Native Con, I got to get all the words. It used to be Cloud Native, now Kubecon's up front. But we're going to go to the European show first time ever. It's May 2nd through 4th at the Bella Center in Copenhagen, Denmark. We're really excited to go 'cause obviously a ton of activity around containers and Kubecon and Kubernetes, and we're excited to have a little preview of the show with two folks. We've got Wendy Cartee, she is the Senior Director Cloud Native Applications Marketing for VMware. Welcome. >> Thank you, it's a pleasure to be here. >> And also giving us a little preview on her keynote, maybe we can get something out of her, I don't know, Aparna Sinha, she is a Group Product Manager for Kubernetes and Google's Kubernetes Engine at Google, long title. Just see the Kubernetes shirt, that's all we need to see. Welcome. >> Thank you. Glad to be here. >> Absolutely. So for the folks that have not been to Kubecon before, let's go through some of the basics. How big is it? Who can they expect to be there? Do you have the fancy letter for them to give to their boss to get out of work for a week? >> Yeah, yeah. >> Give us the basics. >> This is going to be our biggest event in Europe yet. So we're expecting actually four thousand plus people. We expect that it'll be sold out. So, folks should register early. And who should go? It actually tends to be a mix of developers who want to contribute to the project as well as users. I think in Austin, which was our last conference, there was about a 50/50 mix of folks that were using Kubernetes. So it's a really great place to meet others that are using the software. >> Are there a couple of new themes this year? Or is it just kind of generic training and moving the platform along? Or are there some big announcements that people can expect? >> Yeah, I expect some big announcements. And I expect that there'll be a couple of themes around security, around Serverless, that's a major area, and around developer experience, and of course machine learning. So those are some of the things that are top of mind for the community. >> And probably Service Mesh will be another round of hot topics this year as well. >> Which one? >> Service Mesh. >> Jeff: What is that? >> It's a project that is a part of CNCF around Envoy. And it's essentially the notion of having a stack of services that provide everything from connectivity to API access for microservices. >> I ask because we had an old customer of Service Mesh saying they got bought by some services company... >> Yeah, this is, I think the term is an old term, so obviously when you start using Kubernetes it's really around breaking down your applications and having microservices. You get a proliferation of microservices. Service Mesh essentially enables you to manage those, so set up security and communication between those services and then manage them at scale, so that's really what a Service Mesh is. And Envoy is at the heart of that. And then there's a project called Istio. There will definitely be, and there was a lot of discussion around that at Kubecon in Austin. And there'll be some training before the conference this time.
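
To ground the Service Mesh description above: once Istio is installed, the traffic rules for those microservices are themselves just Kubernetes objects. The hedged sketch below uses the Kubernetes Python client's generic custom-objects API to create an Istio VirtualService that splits traffic between two versions of a service; the service name, subsets, and weights are illustrative, v1alpha3 is simply the CRD version current around this time, and the subsets would normally be defined in a companion DestinationRule.

```python
# Hedged sketch: declaring a 90/10 traffic split between two versions of a
# microservice as an Istio VirtualService, submitted through the generic
# Kubernetes custom-objects API. Requires: pip install kubernetes
from kubernetes import client, config

# All names, subsets, and weights below are illustrative placeholders.
VIRTUAL_SERVICE = {
    "apiVersion": "networking.istio.io/v1alpha3",
    "kind": "VirtualService",
    "metadata": {"name": "reviews"},
    "spec": {
        "hosts": ["reviews"],
        "http": [{
            "route": [
                {"destination": {"host": "reviews", "subset": "v1"}, "weight": 90},
                {"destination": {"host": "reviews", "subset": "v2"}, "weight": 10},
            ],
        }],
    },
}

def apply_traffic_split() -> None:
    """Submit the VirtualService so the mesh shifts 10% of traffic to v2."""
    config.load_kube_config()
    client.CustomObjectsApi().create_namespaced_custom_object(
        group="networking.istio.io",
        version="v1alpha3",
        namespace="default",
        plural="virtualservices",
        body=VIRTUAL_SERVICE,
    )

if __name__ == "__main__":
    apply_traffic_split()
```
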
There are several co-located events. There'll be some training beforehand. So for folks that want to learn, they're new to Kubernetes, they're new to the concept of Service Mesh, I would recommend coming a day early or two days early, 30th and 1st, there's a number of different workshops. >> It's pretty amazing just the growth and the momentum of containers and Serverless, and obviously Docker kind of came out of nowhere a couple three or four years ago. And then Kubernetes really kind of seemed to jump on the scene in terms of at least me paying attention, probably a couple two, three years ago. And it's phenomenal. And even only just to check it out, Google's putting on all these little development workshops. This one was at Santa Clara Convention Center probably a month ago that I went down. And the place was packed, packed. And it was, get out your laptop, get out your notes, and let's start going through and developing applications and really learning. I mean, why does this momentum continue to grow so strongly? >> From what we see, we have enterprises that are on the journey of digital, kind of going through the digital transformation. >> Jeff: Right. >> And to drive that faster business model they need technologies like Cloud Native to help them with faster development, to help them with driving new innovations in their application, and I think that that's what we see in the Kubernetes community. I think we see developers and contributors coming to conferences like Kubecon, especially to really learn from each other and find out what are some of the latest innovations in this space and how they can bring that back into their companies to drive faster development, and at the end of it, essentially driving better services, better experience for their end users as well. >> And it's really been interesting watching the VMware story particularly, because you know people were a little confused when the merger happened with Dell and EMC and how that was going to affect (mumbles) and VMware, and yet, the ecosystem is super vibrant. We do VMworld every single year. It's one of our biggest shows. The thing is packed with a really excited ecosystem, obviously you guys made big moves with Amazon last year. You're making moves with Google and Kubernetes, and it was funny. People were concerned a couple years ago. It's almost this rebirth of what's going on at VMworld and this adoption of really (mumbles) technology as well as open source technologies. Has the culture changed inside? Is this something that you guys figured you have to do or was it always there under the covers and maybe we just weren't paying enough attention? >> Yeah, I think it was always there. I think we are very close to the transformation and the journey that our customers are on. And obviously the customers themselves have a full stack solution deployed in their environment today. Many of them are using vSphere or vSAN or NSX, the vRealize portfolio, to build their business, and they're looking at how to transform and add containers as another layer on top of their software defined data center, to essentially breathe some of these newer technologies into their environment as well. >> Yeah, and Aparna, Google's been sharing open source stuff for a while. Even back to early Hadoop, Hadoop days.
So, as big and powerful as a company that it is and as much as scale is such an important piece of that competitive advantage, it's wild that you guys are opening things up and really embracing an open source developer kind of ethos, to acknowledge that, as smart as you are, as big as you are, as much power as you have, you don't have all the smartest people inside the four walls of Google. >> Well, Google has always contributed to open source. I think we have a very long and rich history of sharing software and, you know, really doing joint development. So Android is open source, Chrome, Chromium is open source. TensorFlow is open source. And Kubernetes really is, I think, different in that sense in that there is a thriving community around it and Google's been very, very active, and I've been very active personally, in developing that community and engaging in the project. And I think that goes back to what you were saying about the meetups. There are several meetups all around, so it's not just in one location. I think globally. And I think the reason it's so diverse and so many people are involved is because it does lead to, you know, Kubernetes enables a benefit that is meaningful in enterprises, large and small, where you can start rolling out applications multiple times a day. And it just gives developers that productivity. It's very accessible. And over the years, especially as the project has matured, it has become, it's like my daughter or my son can go and they can use it. It's really easy to use. So it's not hard to pick up either. >> And it's also interesting because we do a lot of shows, as you know, theCUBE goes to a ton of shows, and everybody wants the attention of developers if they haven't had (mumbles) everybody's got a developer track, a developer this, a developer that. Everybody wants to get to developers. It's very competitive. As a developer you have a lot of options of where you want to spend your time. But really, especially Google, kind of comes at it from, and always has, development first. Right? It's kind of developer first. So I'm curious, you talked about the community that's going to be gathered in Denmark when you've got contributors as well as users and contributors all kind of blended together. Not really forced together, but coming together around this universal gravity that is Kubernetes. What does that enable that you don't get if you're traditionally either a developer show or kind of a user show? >> Yes, I think that's really important and one of the beautiful things about open source is that you get what you see. And you can actually change it and own it and it's not some other entity that owns it. So we'll have many companies presenting, so Bookings.com, Spotify, New York Times, Ebay, Lyft. These are all companies that are using Kubernetes and also contributing to Kubernetes. And so it's a nice virtuous cycle. And what you get from that is you're in touch, you're in constant touch with your users. So a lot of them actually use Google Kubernetes Engine, and I know what they're looking for. And so we can then shape the project and shape the product accordingly. >> Then the other question I always think is interesting when you're working with open source projects and contributors, right? A lot of times it's a big part of who they are, especially if they're a good contributor. You know, it's part of their identity, it's part of the way they connect with their community, but they got to get work done for the company, too.
So in terms of kind of managing in the development world with contributing people, people contributing to open source projects as well as getting the work done that we're working on, too. How do you manage that? What are kind of the best practices for having a vibrant open source contributing staff that's also being very productive in getting their day job done? >> I think engineers love to learn from other engineers and developers, and I think that community is the reason why they come. And it's not only at conferences when everybody gets together at a conference like Kubecon, but there's a tremendous amount of activity day to day offline over conference calls like Zoom and, you know, I'm on some of the calls that Aparna is on and it's amazing. You have people from all over the world, developers from everywhere, who will meet on a weekly basis, and they'll Slack each other. And I think that that sense of community, that sharing of information and really learning some of the best practices and learning what others have done is why people come, and it's great to have a conference like Kubecon where people can finally come together and meet in person and just kind of enjoy each other's presence and communicate face to face, and really connect in person. We're very excited about Kubecon and kind of being part of that energy, that enthusiasm that is in the community. >> It's interesting, the Slack, the kind of cross-enterprise Slack phenomenon, which I hadn't really been exposed to until a couple of projects we got involved with, and I got invited into these other companies' Slack, which I didn't really know that that was a thing to open up that wall in between the two companies and enable a very similar type of interaction and engagement that I have with my peers inside the walls as I do now with my peers outside the walls. So that's a pretty interesting twist in enabling these tools to build community outside of your own company. >> Yes, it is, and Slack is a great tool for that. But even aside from the tooling, I think that the pace of software innovation is very, very fast these days. And if you stay within the walls of your company you miss out on so much innovation that is available, and I totally agree with Wendy. Contributors and developers in general, they like to know what's next. And they like to contribute to what's next. And you said you went to some of the meetups, so you can sort of see that you're actually benefiting from that, from both contributing as well as from meeting with and absorbing what others are doing. You're directly benefiting your company, you're directly benefiting in your own job because you're innovating. >> So before we let you go, any particular session or something that's happening at the show in Denmark that either you're super excited about or maybe is a little bit kind of flying underneath the radar that people should be aware of that maybe they didn't think to go to that type of session? >> Well I think there are a variety of excellent sessions at the Kubecon that's coming up. There are user topics. Aparna talked about some of the companies that will be there to share their experience. I've seen talks about communities and contributors and how they can contribute and build the community. I think there are SIG updates that I think would be very informative. And I also think that there are a lot of announcements that will be made at the event as well.
I think that's exciting for everybody to see the new innovations that are coming out that impact the community, the users, and in general the ecosystem as well. >> Aparna? >> Yeah, yeah, so if I were to lay it out, I mean definitely folks should register early 'cause it's going to sell out. There were a thousand plus submissions and 125 talks have been accepted. There are 31 Google talks. There's all manner of content. I would suggest users go a little bit early if they want to get the hands-on training in the workshops. And then as Wendy mentioned, I think on May 2nd there's a contributor summit, which is actually, that's the thing that's flying under the radar. It's a free event, and if you want to learn how to contribute to Kubernetes, that's where a lot of the training will be. And the SIGs, the special interest groups, in the community, each of them will be giving an introduction to what they do. So it's a really good event to meet maintainers, meet contributors, become one yourself. And then in terms of the agenda, I think I mentioned the topics. I'm giving a keynote. I think I'm giving the opening keynote there. It'll be about developer experience, because that's a big deal that we're working on in Kubernetes, and I think there are many new innovations in improving the developer experience with Kubernetes. I'll also be giving an overall project update. And then some of the other keynotes, there's a keynote on KubeFlow, which is a machine learning framework on top of Kubernetes. And then there's a series of talks on security and how to run securely in containers. >> All right, well I think we're almost ready. We got to register, we got to study up, and make a couple contributions before we're headin' over there, right? >> Absolutely. >> All right, Wendy, Aparna, thanks for taking a few minutes and look forward to seeing you across the pond in a month or so. It's May 2nd through 4th in Denmark at the Bella Center, Copenhagen, Denmark. Thanks again for stopping by. >> Wendy: Thank you. >> Aparna: Thank you. >> All right, I'm Jeff Frick, you're watching theCUBE from Palo Alto, we'll see you next time. Thanks for watchin'.

Published Date : Mar 23 2018
