Ian Buck, NVIDIA | AWS re:Invent 2021

(bright music) >> Well, welcome back to theCUBE's coverage of AWS re:Invent 2021. We're here joined by Ian Buck, general manager and vice president of Accelerated Computing at NVIDIA. I'm John Furrier, host of theCUBE. Ian, thanks for coming on. >> Oh, thanks for having me. >> So NVIDIA, obviously, great brand. Congratulations on all your continued success. Everyone who does anything in graphics knows that GPUs are hot, and you guys have a great brand, great success in the company. But AI and machine learning, we're seeing the trend significantly being powered by GPUs and other systems. So it's a key part of everything. So what are the trends that you're seeing in ML and AI that are accelerating computing to the cloud? >> Yeah. I mean, AI is kind of driving breakthroughs and innovations across so many segments, so many different use cases. We see it showing up with things like credit card fraud prevention, and product and content recommendations. Really, AI is the new engine behind search engines. People are applying AI to things like meeting transcriptions, virtual calls like this, using AI to actually capture what was said. And that gets applied in person-to-person interactions. We also see it in intelligent assistants for contact center automation, or chatbots, medical imaging, and intelligent stores, and warehouses, and everywhere. It's really amazing what AI has been demonstrating, what it can do, and its new use cases are showing up all the time. >> You know, Ian, I'd love to get your thoughts on how the world's evolved, just in the past few years alone, with cloud. And certainly, the pandemic's proven it. You had this whole kind of full-stack mindset, initially, and now you're seeing more of a horizontal scale, but yet, enabling this vertical specialization in applications. I mean, you mentioned some of those apps. The new enablers, this kind of horizontal play with enablement for, you know, specialization with data, this is a huge shift that's going on. It's been happening. What's your reaction to that? >> Yeah. The innovation's on two fronts. There's a horizontal front, which is basically the different kinds of neural networks or AIs, as well as machine learning techniques, that are just being invented by researchers and the community at large, including Amazon. You know, it started with these convolutional neural networks, which are great for image processing, but it has expanded more recently into recurrent neural networks, transformer models, which are great for language and language understanding, and then the new hot topic, graph neural networks, where the actual graph now is trained as a neural network. You have this underpinning of great AI technologies that are being invented around the world. NVIDIA's role is to try to productize that and provide a platform for people to do that innovation. And then, take the next step and innovate vertically. Take it and apply it to a particular field, like medical, like healthcare and medical imaging, applying AI so that radiologists can have an AI assistant with them that highlights different parts of the scan that may be troublesome or worrying, or require some more investigation. Using it for robotics, building virtual worlds where robots can be trained in a virtual environment, their AI being constantly trained and reinforced, learning how to do certain activities and techniques. So that the first time it's ever downloaded into a real robot, it works right out of the box.
To activate that, we are creating different vertical solutions, vertical stacks, vertical products, that talk the languages of those businesses, of those users. In medical imaging, it's processing medical data, which is obviously very complicated, large-format data, often three-dimensional voxels. In robotics, it's combining both our graphics and simulation technologies with the AI training and inference capabilities, in order to run in real time. Those are just two simple- >> Yeah, no. I mean, it's just so cutting-edge, it's so relevant. I mean, I think one of the things you mentioned about the neural networks, specifically, the graph neural networks. I mean, just go back to the late 2000s and how unstructured data, or object storage, was created; a lot of people realized a lot of value out of that. Now you've got graph value, you've got network effect, you've got all kinds of new patterns. You guys have this notion of graph neural networks that's out there. What is a graph neural network, and what does it actually mean from a deep learning and an AI perspective? >> Yeah. I mean, a graph is exactly what it sounds like. You have points that are connected to each other, that establish relationships. In the example of Amazon.com, you might have buyers, distributors, sellers, and all of them are buying, or recommending, or selling different products. And they're represented in a graph. If I buy something from you and from you, I'm connected to those endpoints, and likewise, more deeply across a supply chain, or warehouse, or other buyers and sellers across the network. What's new right now is that those connections now can be treated and trained like a neural network, understanding the relationship: how strong is that connection between that buyer and seller, or the distributor and supplier? You then build up a network to figure out and understand patterns across them. For example, what products I may like, 'cause I have this connection in my graph, what other products may meet those requirements? Or, also, identifying things like fraud, when buying patterns don't match what a graph neural network would say is the typical kind of graph connectivity, the different kinds of weights and connections between the two, captured by the frequency of how often I buy things, or how I rate them or give them stars, or other such use cases. This application of graph neural networks, which is basically capturing the connections of all things with all people, especially in the world of e-commerce, is a very exciting new application of AI: optimizing business, reducing fraud, and letting us, you know, get access to the products that we want, having our recommendations be things that excite us and make us want to buy more.
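[Editor's note: a minimal sketch of the idea Buck describes above, assuming nothing about NVIDIA's actual implementation. It runs one round of neural message passing over a toy buyer/product graph in PyTorch; the graph, the feature sizes, and the single linear layer are all illustrative.]

    import torch

    # Toy graph: nodes 0-1 are buyers, nodes 2-3 are products.
    # Each directed edge (src, dst) means "buyer src bought product dst".
    x = torch.randn(4, 8)                    # an 8-dim embedding per node
    edge_index = torch.tensor([[0, 1, 0],    # edge sources
                               [2, 3, 3]])   # edge destinations

    lin = torch.nn.Linear(8, 8)              # learnable message transform

    def message_pass(x, edge_index):
        src, dst = edge_index
        agg = torch.zeros_like(x)
        agg.index_add_(0, dst, lin(x[src]))  # sum incoming neighbor messages
        return torch.relu(x + agg)           # update each node's state

    h = message_pass(x, edge_index)          # h[3] now mixes in buyers 0 and 1

[Stacking several such rounds and training the weights against labels like "fraudulent" or "purchased" is, in essence, what it means to treat the connections themselves as a trainable network.]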
>> That's a great setup for the real conversation that's going on here at re:Invent, which is that new kinds of workloads are changing the game; people are refactoring their business, not just re-platforming, but actually using this to identify value. And also, your cloud scale allows you to have the compute power to, you know, look at a node and an arc and actually encode that. It's all science, it's all computer science, all at scale. So with that, that brings up the whole AWS relationship. Can you tell us how you're working with AWS, specifically? >> Yeah, AWS has been a great partner, and one of the first cloud providers to ever provide GPUs in the cloud. More recently, we've announced two new instances: the G5 instance, which is based on our A10G GPU, which supports the NVIDIA RTX technology, our rendering technology, for real-time ray tracing in graphics and game streaming. This is our highest-performance graphics-enhanced instance; it allows those high-performance graphics applications to be directly hosted in the cloud. And, of course, it runs everything else as well. It has access to our AI technology and runs all of our AI stacks. We also announced, with AWS, the G5g instance. This is exciting because it's the first Graviton, or Arm-based, processor connected to a GPU and available in the cloud. The focus here is Android gaming and machine learning inference. And we're excited to see the advancements that Amazon and AWS are making with Arm in the cloud. And we're glad to be part of that journey. >> Well, congratulations. I remember, I was just watching my interview with James Hamilton from AWS in 2013 and 2014. He was teasing this out, that they were going to build their own, get in there, and build their own connections to take that latency down and do other things. This is kind of the harvest of all that. As you start looking at these new interfaces, and the new servers, new technology that you guys are doing, you're enabling applications. What do you see this enabling? As this new capability comes out, new speed, more performance, but also, now it's enabling more capabilities so that new workloads can be realized. What would you say to folks who want to ask that question? >> Well, so first off, I think Arm is here to stay. We can see the growth and explosion of Arm, led of course by Graviton and AWS, but many others. And by bringing all of NVIDIA's rendering, graphics, machine learning, and AI technologies to Arm, we can help bring that innovation that Arm allows, that open innovation, because there's an open architecture, to the entire ecosystem. We can help bring it forward to the state of the art in AI, machine learning, and graphics. All of the software that we release is supported both on x86 and on Arm equally, including all of our AI stacks. So most notably, for inference, the deployment of AI models, we have the NVIDIA Triton Inference Server. This is our inference serving software, where after you've trained a model, you want to deploy it at scale on any CPU or GPU instance, for that matter. So we support both CPUs and GPUs with Triton. It's natively integrated with SageMaker and provides the benefit of all those performance optimizations, features like dynamic batching. It supports all the different AI frameworks, from PyTorch to TensorFlow, even generalized Python code. We're activating, and helping to activate, the Arm ecosystem, as well as bringing all those new AI use cases, and all those different performance levels, with our partnership with AWS and all the different cloud instances.
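[Editor's note: a hedged illustration of what querying a model deployed behind Triton can look like from Python, using the open source tritonclient package. The server address, model name, and tensor names below are placeholders, not details of any deployment discussed here.]

    import numpy as np
    import tritonclient.http as httpclient

    # Triton serves HTTP on port 8000 by default; "recommender" is hypothetical.
    client = httpclient.InferenceServerClient(url="localhost:8000")

    inp = httpclient.InferInput("INPUT__0", [1, 8], "FP32")
    inp.set_data_from_numpy(np.random.rand(1, 8).astype(np.float32))

    # Many small requests like this one can be coalesced server-side by
    # Triton's dynamic batching to keep the GPU busy.
    result = client.infer(model_name="recommender", inputs=[inp])
    print(result.as_numpy("OUTPUT__0"))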
>> And you guys are making it really easy for people to use the technology. That brings up the next kind of question I wanted to ask you. I mean, a lot of people are really going in, jumping in big-time into this. They're adopting AI, or they're moving it from prototype to production. There's always some gaps, whether it's, you know, knowledge, skills gaps, or whatever. But people are accelerating into AI and leaning into it hard. What advancements has NVIDIA made to make it more accessible for people to move faster through the system, through the process? >> Yeah. It's one of the biggest challenges. You know, the promise of AI, all the publications that are coming out, all the great research, you know, how can you make it more accessible or easier to use by more people, rather than just AI researchers, which is obviously a very challenging and interesting field, but not one that's directly connected to the business? NVIDIA is trying to provide a full-stack approach to AI. So as we discover or see these AI technologies become available, we produce SDKs to help activate them or connect them with developers around the world. We have over 150 different SDKs at this point, serving industries from gaming, to design, to life sciences, to earth sciences. We even have stuff to help simulate quantum computing. And of course, all the work we're doing with AI, 5G, and robotics. So we actually just introduced about 65 new updates, just this past month, on all those SDKs. Some of the newer stuff that's really exciting is the large language models. People are building some amazing AI that's capable of understanding the corpus of, like, human understanding. These language models are trained on literally the content of the internet to provide general-purpose or open-domain chatbots, so the customer is going to have a new kind of experience with the computer or the cloud. We're offering those large language models, as well as AI frameworks, to help companies take advantage of this new kind of technology. >> You know, Ian, every time I do an interview with NVIDIA or talk about NVIDIA, my kids and friends, the first thing they say is, "Can you get me a good graphics card?" They all want the best thing in their rig. Obviously the gaming market's hot and known for that. But there's a huge software team behind NVIDIA. This is well-known. Your CEO is always talking about it in his keynotes. You're in the software business. And you do have hardware, you are integrating with Graviton and other things. But it's a software practice. This is software. This is all about software. >> Right. >> Can you share, kind of, more about the NVIDIA culture, the cloud culture, and specifically around the scale? I mean, you hit every use case. So what's the software culture there at NVIDIA? >> Yeah, NVIDIA actually has more software people than hardware people, but people don't often realize this. It just starts with the chip, and obviously, building great silicon is necessary to provide that level of innovation. But it's expanded dramatically from there. Not just the silicon and the GPU, but the server designs themselves. We actually do entire server designs ourselves, to help build out this infrastructure. We consume it and use it ourselves, and build our own supercomputers to use AI to improve our products. And then, all that software that we build on top, we make available, as I mentioned before, as containers in our NGC container registry, which is accessible from AWS, to connect to those vertical markets. It's not just opening up the hardware and letting the ecosystem develop on it, which they can do with the low-level programmatic stacks that we provide with CUDA. We believe that those vertical stacks are the way we can help accelerate and advance AI, and that's why we make them so available. >> And programmable software is so much easier. I wanted to get that plug in; I think it's worth noting that you guys are heavy hardcore, especially on the AI side, and it's worth calling out.
Getting back to the customers who are bridging that gap and getting out there, what are the metrics they should consider as they're deploying AI? What are success metrics? What does success look like? Can you share any insight into what they should be thinking about, and looking at how they're doing? >> Yeah. For training, it's all about time-to-solution. It's not the hardware that's the cost, it's the opportunity that AI can provide to your business, and the productivity of those data scientists who are developing the models, and who are not easy to come by. So what we hear from customers is they need a fast time-to-solution to allow people to prototype very quickly, to train a model to convergence, to get into production quickly, and of course, move on to the next one or continue to refine it. >> John Furrier: Often. >> So in training, it's time-to-solution. For inference, it's about your ability to deploy at scale. Often people have real-time requirements. They want to run within a certain amount of latency, in a certain amount of time. And typically, most companies don't have a single AI model. They have a collection of them they want to run for a single service or across multiple services. That's where you can aggregate some of your infrastructure. Leveraging the Triton Inference Server I mentioned before, you can actually run multiple models on a single GPU, saving costs, optimizing for efficiency, yet still meeting the requirements for latency and the real-time experience, so that our customers have a good interaction with the AI. >> Awesome. Great. Let's get into the customer examples. You guys have, obviously, great customers. Can you share some of the use case examples with customers, notable customers? >> Yeah. One great part about working at NVIDIA is, as a technology company, you get to engage with such amazing customers across many verticals. Some of the ones that are pretty exciting right now: Netflix is using the G4 instances to do video effects and animation content from anywhere in the world, in the cloud, as a cloud content creation platform. We work in the energy field. Siemens Energy is actually using AI combined with simulation to do predictive maintenance on their energy plants, preventing downtime and optimizing onsite inspection activities, which is saving a lot of money for the energy industry. We have worked with Oxford University. Oxford University actually has over 20 million artifacts and specimens and collections across its gardens and museums and libraries. They're actually using NVIDIA GPUs and Amazon to do enhanced image recognition to classify all these things, which would literally take years to go through manually, each of these artifacts. Using AI, we can quickly catalog all of them and connect them with their users. Great stories across graphics, across industries, across research; it's just so exciting to see what people are doing with our technology, together with Amazon. >> Ian, thank you so much for coming on theCUBE. I really appreciate it. A lot of great content there. We probably could go another hour. All the great stuff going on at NVIDIA. Any closing remarks you want to share, as we wrap this last minute up? >> You know, really what NVIDIA's about is accelerating cloud computing, whether it be AI, machine learning, graphics, or high-performance computing and simulation.
And AWS was one of the first with this, in the beginning, and they continue to bring out great instances to help connect the cloud and accelerated computing with all the different opportunities: the integrations with EC2, with SageMaker, with EKS, and ECS; the new instances with G5 and G5g. Very excited to see all the work that we're doing together. >> Ian Buck, general manager and vice president of Accelerated Computing. I mean, how can you not love that title? We want more power, more faster, come on. More computing. No one's going to complain about more computing. Ian, thanks for coming on. >> Thank you. >> Appreciate it. I'm John Furrier, host of theCUBE. You're watching theCUBE's coverage of AWS re:Invent 2021. Thanks for watching. (bright music)

Published Date : Nov 18 2021

SUMMARY :

Ian Buck, GM and VP of Accelerated Computing at NVIDIA, joins John Furrier at AWS re:Invent 2021 to talk about the trends driving AI and accelerated computing in the cloud: graph neural networks, NVIDIA's vertical SDK stacks, the new G5 and G5g instances, Arm and Graviton support, the Triton Inference Server, and customer stories from Netflix, Siemens Energy, and Oxford University.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
John Furrier | PERSON | 0.99+
Ian Buck | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
AWS | ORGANIZATION | 0.99+
Ian | PERSON | 0.99+
NVIDIA | ORGANIZATION | 0.99+
Oxford University | ORGANIZATION | 0.99+
James Hamilton | PERSON | 0.99+
2014 | DATE | 0.99+
Netflix | ORGANIZATION | 0.99+
Amazon.com | ORGANIZATION | 0.99+
G5g | COMMERCIAL_ITEM | 0.99+
Python | TITLE | 0.99+
late 2000s | DATE | 0.99+
Graviton | ORGANIZATION | 0.99+
Android | TITLE | 0.99+
One | QUANTITY | 0.99+
one | QUANTITY | 0.99+
Accelerated Computing | ORGANIZATION | 0.99+
first | QUANTITY | 0.99+
first time | QUANTITY | 0.99+
two | QUANTITY | 0.98+
2013 | DATE | 0.98+
A10G | COMMERCIAL_ITEM | 0.98+
both | QUANTITY | 0.98+
two fronts | QUANTITY | 0.98+
each | QUANTITY | 0.98+
single service | QUANTITY | 0.98+
PyTorch | TITLE | 0.98+
over 20 million artifacts | QUANTITY | 0.97+
single | QUANTITY | 0.97+
TensorFlow | TITLE | 0.95+
EC2 | TITLE | 0.94+
G5 instance | COMMERCIAL_ITEM | 0.94+
over 150 different SDKs | QUANTITY | 0.93+
SageMaker | TITLE | 0.93+
G5 | COMMERCIAL_ITEM | 0.93+
Arm | ORGANIZATION | 0.91+
first thing | QUANTITY | 0.91+
single GPU | QUANTITY | 0.9+
theCUBE | ORGANIZATION | 0.9+
about 65 new updates | QUANTITY | 0.89+
two new instances | QUANTITY | 0.89+
pandemic | EVENT | 0.88+
Triton | ORGANIZATION | 0.87+
Triton | TITLE | 0.84+
Invent | EVENT | 0.83+
two simple | QUANTITY | 0.8+

Cindi Howson, ThoughtSpot and Kent Graziano, Snowflake | CUBE Conversation, December 2020


 

>> Narrator: From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world. This is a CUBE conversation. >> Hi, everyone. Welcome to this CUBE conversation. I'm John Furrier here in the Palo Alto Studios. Yeah, during the pandemic, we're not in person. Usually we are, but we are doing remote interviews, and as a lead-up to ThoughtSpot Beyond 2020, a virtual event coming up, we've got two awesome visionaries here to have a conversation around data and the role of data: Cindi Howson, who's the Chief Data Strategy Officer at ThoughtSpot, and Kent Graziano, Chief Technical Evangelist at Snowflake, which has been a great success. Welcome to the program. Thanks for coming on. >> Thanks for having us, John. >> So Kent, >> Yeah, happy to be here. >> Dave Vellante, who's just a fanboy of Snowflake. I mean, he's just gushing over the success of the company. I see Frank Slootman, who you've known for years. Congratulations on your success. Great stuff. >> Yeah, thank you very much. >> Well, the topic I want to get into immediately is obviously data. You know, on the heels of the Amazon re:Invent conference, we're seeing the role of the data cloud, in the cloud and also on-premises; you're seeing both things going on, and companies are adopting this. Now it's a do-or-die situation for companies to either get on board with a full-on data strategy or not. Can you guys talk about how that move to the cloud is imperative and so important? >> Yeah, I mean, as you said, John, it's the do-or-die moment, and we've seen, even pre-pandemic, many organizations were in the process of modernizing their data and analytics by moving to the cloud, but COVID has really just accelerated that. The ones that innovated sooner here are performing better, and the ones that are still dragging their heels, the laggards, I am not convinced they will survive.
This is a critical moment of connection, the product-market-fit kind of thing, where they go, okay, I get it now. Cindi, when do they have that moment? The aha moment of, I see the problem, I've got to do this. >> Yeah, well, there's two things. The aha moment and, John, I have to preface this, if I may: you know, many people listening may not have met me or Kent until now. Kent and I go way back, both previously independent analysts, but we remain with this North Star of helping our customers unlock the value of data. So I don't want people to think, oh, we're pushing cloud because we work for these companies. Now, it really is a belief. You have to use this to innovate faster. So when did that aha come? It depends; for some people, it's only just now staring at them, and that's why there's been a lot of churn in leadership. But let's go back even a few years ago: you can take Walmart as an example; as they were maybe losing to Amazon, they went to digital, they went to cloud, and are now competing beautifully. So it happens at different paces. Capital One, of course, was earlier here. There are a lot of financial services organizations that really are moving too slowly to the cloud, and you see how well Capital One is doing versus some of the others that have moved too slow. >> Well, Kent, you guys go way back. You know, you've seen the old school, old guard, as Andy Jassy at Amazon calls it, but there is a real shift happening now, finally. It's not just the old-school data warehouse model anymore; there are new requirements and new benefits for being in the cloud that you don't get on-prem or with a data warehouse. You know, you've got a different kind of access to more scale, maybe another company with an API. So the idea of connecting in the cloud, cloud native, is completely different. Can you share your view on how that helps people understand the cloud better? >> Oh, yeah. Yeah, and I've certainly seen that. Like, I grew up in the on-prem data warehouse world, which is where Cindi and I met. And what I'm seeing now is the lines are being blurred between some of what we would have thought of as the traditional silos of data in the on-prem world. The data lake and the data warehouse are foremost in my mind; with the data cloud, that line's not really there anymore. It's now more about the workload and the use case than it is about, I'll say, the structure of the data or the location of the data. We're able to eliminate the data silos by getting them all up into a platform like Snowflake, and the form of the data is less important than it was. We can start with a very raw form and be doing data profiling and having data scientists look at it, and maybe even feeding a machine learning engine in the process. And then, as you discover the important bits in that data, maybe curate some of it, 'cause we do need some data governance, we need some data quality. And that goes more into what you would think of traditionally as a data warehouse type format, or a data mart format, for running and supporting dashboards. But we're now able to unify all this data and really get to this concept of having a single source of truth and be agile at the same time. That's one of the things that attracted me to Snowflake, out of my independent consulting world at the time. I was just so amazed at what we could do in the cloud with that power and elasticity, which was unheard of and unthinkable in the on-prem world, that we just can make so much more progress.
And so, you know, fewer constraints, faster time to value, all kinds of things like that that just were amazing to me. >> Okay. Kent, it's been too long since we've jointly met with customers. You used dashboard; that's a dirty word. We're trying to get rid of those. We'll say cloud flying. >> Well, that's a good point. I mean, let's talk about that: the dashboard is what people are comfortable with. That's what they're used to; it's kind of the first gen. But now, going beyond traditional analytics, this is where you start to see machine learning and AI become the value. And the one thing that's constant now is: okay, data's accessible. You get cloud scale, massive amounts of data. How fast can you put it to work? Sounds trivial, but it's not. How do you guys react to that comment? >> Yeah, and it's not trivial in its impact, but I would say it's become more trivial to make it happen, because you have that unlimited compute, or elastic compute; Snowflake separates the compute and storage. So you can do analytics that were just not possible in an on-premises world; on-premises discourages experimentation because of the high fixed costs to even get going. And with ThoughtSpot, the AI-driven insights let you find the anomalies and the correlations, without a data scientist, on all your data, at a granular level, you know, terabytes, millions of records within your Snowflake data warehouse. And I think it's also combining the different workloads that in the past used to be separate, right, Kent? They would take the data out and do it on the desktop, or in the data lake even, the data scientists anyway. >> Yeah, exactly. I mean, well, in the past the repositories themselves were even separate, right? You often had very different technologies, and I've worked with customers that would have data replicated across two massive data warehouses, one for loading, one for reporting. And then they'd be extracting that very same data into a Hadoop cluster to put it in the same place with the semi-structured data, so the data scientists could go at it. So they really had three copies of that same data, and the amount of engineering and synchronization required to make that work, so that everybody was sort of working off of the same data, was enormous. We've been able to now eliminate all of that with Snowflake, to put it all in one place, just once, and let everyone work on it, and really democratize the access to that data in one place. So whether it is, you know, machine learning and AI, being one of the really big use cases that's certainly growing now, and getting to it faster, you know, driving that time to value in those insights with products like ThoughtSpot, to be able to get in there and make it so much easier for professionals to look at that data and analyze that data and find those insights that they really need.
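[Editor's note: a small sketch of the compute/storage separation described above, using the Snowflake Python connector. The account, credentials, warehouse name, and table are placeholders; the point is that compute is created, resized, and suspended independently of the data it queries.]

    import snowflake.connector

    conn = snowflake.connector.connect(
        account="myorg-myaccount",  # placeholder credentials
        user="analyst",
        password="...",
    )
    cur = conn.cursor()

    # A warehouse is just named, elastic compute; storage lives elsewhere.
    cur.execute("""CREATE WAREHOUSE IF NOT EXISTS EXPERIMENT_WH
                   WAREHOUSE_SIZE = 'XSMALL' AUTO_SUSPEND = 60 AUTO_RESUME = TRUE""")
    cur.execute("USE WAREHOUSE EXPERIMENT_WH")

    # Scale up for one heavy query, then back down; no data moves.
    cur.execute("ALTER WAREHOUSE EXPERIMENT_WH SET WAREHOUSE_SIZE = 'XLARGE'")
    cur.execute("SELECT COUNT(*) FROM rides")  # hypothetical table
    print(cur.fetchone())
    cur.execute("ALTER WAREHOUSE EXPERIMENT_WH SET WAREHOUSE_SIZE = 'XSMALL'")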
>> Yeah. You know, that's a great point. You mentioned, you know, the old way of setting up a Hadoop cluster, and all the time, you know, we all know what happened there. I mean, there was more engineering going into setting up clusters than into getting the value out of the clusters, and then in comes Spark, and then in comes Amazon. Hello, you know, goodbye Hadoop. Right, so Cloudera certainly has shifted; they merged with Hortonworks. You know, they're going back into the cloud, smart, smart move. But the data world has changed. Obviously you guys are leaders in this new data-in-the-cloud phenomenon, with new business models, new value propositions. But I've got to ask you about kind of the old personnel roles that are out there. You talk about people, you know, these are people's jobs. Where's the DBA who ran the data warehouse and set up those clusters? So, you know, I hear what you're saying, Kent, but, like, the data administrators, do their jobs go away? So take me through the impact, because this is a big challenge: how to redeploy, and how to retrain or leverage, the existing personnel. >> Yeah, and I've been using the agile term refactor: we have to refactor the database administrator's job to be more of an architect or a platform builder. And we're talking more now about having, you know, data coaches, data storytellers. Cindi talks about that all the time. It's different skillsets, but folks that have been in the space for a while are very adaptable, and if they're data experts at some level, then, you know, it's just looking at it a little differently. And in reality, when I talk to DBAs, when you look at it and say, well, where do you really get the most joy out of your work? It's delivering the value. Nobody's overly excited about backup and recovery, right? That's not where they're getting their job satisfaction from; it's getting the business access to the data. And so now, with the advances in technology, we're able to give them that opportunity to really become, you know, data providers and to work in partnership with the business, to get the business access to the data they need, from new sources, different data types, but, you know, in a more timely manner, rather than having to spend 70% of their day working on really manual, mundane administration just to keep the platform up and running. And we've had customers tell us that they've seen, you know, a 50, 60, 70, 80% reduction or more in the amount of administration necessary, which means that their staff is actually more productive... >> And that's going to be a good shift. Cindi, take us through the shift, because, you know, one mega-trend that's happening, and you see chips coming out there with more horsepower, with built-in machine learning: you're seeing this kind of new layer of democratization for insights and storytelling and analytics, and then you've got this embedded model, and you guys do search embedded into all your activities. You've got three layers, almost a stack of data, of software, you know, built in, you know, easy to use and simple, and then completely forgotten by the user because it's built into some apps somewhere, right? So you're starting to see this change. How does that affect, like, who works on stuff? >> Yeah, so it does shift. You have to think about the analyst; we talk about the analyst of the future in a way similar to what Kent was saying with the DBAs trying to become data engineers. The analysts of the future really want to be these strategic business champions, and even a research report from TDWI talked about how most feel beaten down, they can't keep up with it, but 36% would say, if you freed up our time, we would become more strategic business advisors. So that's kind of the core analyst. Now, the embedded piece that you're talking about is really where data becomes a product, and it's the product managers that are embedding data in these applications. But this people change management is super hard, and in fact, Harvard Business Review said the lack of accounting for people change management is one of the top reasons why technology is not adopted by these frontline decision makers.
We can make it easy, consumer-grade, but if we're not looking at how we change these people's roles, it's still a tough hill to climb. >> Well, I've got to ask you both the real question that's kind of in the middle of the table here: you both have seen waves of innovation before, so what's going on now? And it's pretty obvious, it's playing out in the real world right now, it's on full display, as we see it with COVID and digital transformation. How do people do it? What's the playbook? How do you advise folks, 'cause you see both sides of the table; you've been there, and now you see the other side at Snowflake and ThoughtSpot. What's the mindset, what's the playbook? What do people do? How do they get going? >> Yeah. So start small, with the business outcome, with your biggest pain or your biggest opportunity. Learn, figure out how you're going to change the people, and then run fast, run faster than you ever have before. The rate of creative destruction has never been faster. >> Yeah. In the agile world they talk about failing fast, so exactly to Cindi's point, things are changing so rapidly, you don't have time to sit around and mull it over for very long. And so really adopting an agile mindset is very important to being successful today. And certainly with the pandemic, we've seen, you know, many organizations come to the top, and those were folks that were able to rapidly adapt. And in part that was their mindset: the willingness to adapt, not to sit around and overly complicate the issue, overly discuss the issue, too many committees, all of that, but really getting into that mindset of: what can we do today? What technology do we have at hand to take advantage of today to make a significant difference? And that's where, you know, at Snowflake we've certainly seen an increase in adoption from many of our customers, where they're actually, you know, using Snowflake more, they're creating new use cases, and they're able to use that flexibility and the agility of the platform to make significant business changes in a short period of time. But back to Cindi's point, you've got to have the right culture in place, right? And the right mindset in place to even see that as a possibility. >> You know, there are three things that make business go great: making things easy to use and simple, and providing value fast, is a really good formula, and you guys do that. Kent, congratulations on your success at Snowflake. I know Frank Slootman is going to be speaking at ThoughtSpot Beyond 2020. You guys have had great business success; your customers are voting with their wallets. ThoughtSpot, you guys have an innovative formula, doing very well as well with AI and built-in search and all the greatness; the new models are here. And so congratulations. Thanks for watching theCUBE. I'm John Furrier. To learn more about Snowflake and ThoughtSpot working together, check out Beyond 2020. It's a virtual event on December 9th and 10th, and you can register at thoughtspot.com/beyond2020; that's thoughtspot.com/beyond2020. I'm John Furrier from theCUBE; thanks for watching this CUBE conversation. (upbeat music)

Published Date : Dec 8 2020

SUMMARY :

Cindi Howson, Chief Data Strategy Officer at ThoughtSpot, and Kent Graziano, Chief Technical Evangelist at Snowflake, join John Furrier ahead of ThoughtSpot Beyond 2020 to discuss why moving data and analytics to the cloud has become a do-or-die imperative, how Snowflake's separation of compute and storage eliminates data silos, how roles like the DBA and the analyst are being refactored, and why people change management determines whether AI and analytics actually get adopted.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave Vellante | PERSON | 0.99+
Cindi Howson | PERSON | 0.99+
Frank Slootman | PERSON | 0.99+
John Furrier | PERSON | 0.99+
Walmart | ORGANIZATION | 0.99+
Andy Jassy | PERSON | 0.99+
TDWI | ORGANIZATION | 0.99+
John | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
70% | QUANTITY | 0.99+
Capital One | ORGANIZATION | 0.99+
50 | QUANTITY | 0.99+
ThoughtSpot | ORGANIZATION | 0.99+
60 | QUANTITY | 0.99+
70 | QUANTITY | 0.99+
Palo Alto | LOCATION | 0.99+
Kent Graziano | PERSON | 0.99+
Hortonworks | ORGANIZATION | 0.99+
Boston | LOCATION | 0.99+
Cindi | PERSON | 0.99+
one | QUANTITY | 0.99+
80% | QUANTITY | 0.99+
10th | DATE | 0.99+
five years | QUANTITY | 0.99+
36% | QUANTITY | 0.99+
December 2020 | DATE | 0.99+
Kent | PERSON | 0.99+
two things | QUANTITY | 0.99+
December 9th | DATE | 0.99+
one place | QUANTITY | 0.99+
both sides | QUANTITY | 0.99+
both | QUANTITY | 0.98+
first gen | QUANTITY | 0.98+
two awesome visionaries | QUANTITY | 0.98+
today | DATE | 0.98+
CUBE | ORGANIZATION | 0.98+
two massive data warehouses | QUANTITY | 0.97+
three layers | QUANTITY | 0.97+
Palo Alto Studios | LOCATION | 0.97+
Snowflake | EVENT | 0.96+
millions of records | QUANTITY | 0.96+
three copies | QUANTITY | 0.96+
first | QUANTITY | 0.96+
both things | QUANTITY | 0.95+
three things | QUANTITY | 0.94+
Snowflake | ORGANIZATION | 0.93+
theCUBE Studios | ORGANIZATION | 0.93+
thoughtspot.com/beyond2020 | OTHER | 0.93+
Cloudera | ORGANIZATION | 0.92+
few years ago | DATE | 0.91+
Hadoop | PERSON | 0.91+
single source | QUANTITY | 0.9+
Snowflake | TITLE | 0.9+
one thing | QUANTITY | 0.9+
pandemic | EVENT | 0.88+
once | QUANTITY | 0.85+

Matt Klein, Lyft | KubeCon 2017


 

>> Narrator: Live from Austin, Texas. It's theCUBE, covering KubeCon and CloudNativeCon 2017. Brought to you by Red Hat, the Linux Foundation, and theCUBE's ecosystem partners. >> Welcome back everyone, live here in Austin, Texas, for theCUBE's exclusive coverage of CloudNativeCon and KubeCon, the Kubernetes conference. I'm John Furrier, co-founder of SiliconANGLE, with my co-host Stu Miniman, our analyst. And next is Matt Klein, a software engineer at Lyft: ride-hailing service, car sharing, social network, great company; everyone knows, and everyone loves, Lyft. Thanks for coming on. >> Thanks very much for having me. >> All right, so you're a customer of all this technology. You guys built, and I think this is like the shiny use case of our generation: entrepreneurs and techies build their own stuff because they can't get product from the general market. You guys had large-scale demand for the service, you had to go out and build your own with open source and all those tools. You had a problem you had to solve, you built it, used some open source, and then gave it back to open source and became part of the community, and everybody wins; you donated it back. This is the future, this is what it's going to be like, great community work. What problem were you solving? Obviously Lyft, everyone knows it's hard: they see their car, lot of real time going on, lot of stuff happening >> Matt: Yeah, sure. >> magic's happening behind the scenes, you had to build that. Talk about the problem you solved. >> Well, I think, you know, when people look at Lyft, like you were saying, they look at the app and the car, and I think many people think that it's a relatively simple thing. Like, how hard could it be to bring up your app and say, I want a ride, and, you know, get that car from here to there? But it turns out that it's really complicated. There's a lot of real-time systems involved in actually finding what are all the cars that are near you, and what's the fastest route, all of that stuff. So, I think what people don't realize is that Lyft is a very large, real-time system that, at current scale, operates at millions of requests per second, and has a lot of different use cases around databases, and caching, you know, all those technologies. So, Lyft was built on open source, as you say, and, you know, Lyft grew from what I think most companies do, which is a very simple, monolithic stack. You know, it starts with a PHP application, we're a big user of MongoDB, and some load balancer, and then, you know-- >> John: That breaks (laughs) >> Well, well, no, but people do that because that's what's very quick to do. And I think what happened, like most companies that become very successful, is Lyft grew a lot, and like the few companies that can become very successful, they started to outgrow some of that basic software, or the basic pieces that they were actually using. So, as Lyft started to grow a lot, things just stopped working, so then we had to start fixing and building different things. >> Yeah, Matt, scale is one of those things that gets talked about a lot. But, I mean, Lyft, you know, really does operate at a significant scale. >> Matt: Yeah, sure. >> Maybe you can talk a little bit about, you know, >> Matt: Absolutely, yeah, and then what led to Envoy and why that happened. >> Yeah, sure. I mean, I think there's two different types of scale, and I think this is something that people don't talk about enough.
There's scale in terms of things that people talk about, in terms of data throughput or requests per second, or stuff like that. But there's also people scale, right? So, as organizations grow, we go from 10 developers to 50 developers to 100, where Lyft is now many hundreds of developers and we're continuing to grow, and what I think people don't talk about enough is the human scale. So, you know, we have a lot of people that are trying to edit code, and at a certain size, with that number of people, you can't all be editing the same code base. That's, I think, the biggest reason people start moving towards this microservice or service-oriented architecture: you start splitting that apart to get people-scale. People-scale usually comes with requests-per-second scale and data scale and that kind of stuff, but these problems come hand in hand, where as you grow the number of people, you start going into microservices, and then suddenly you have actual scale problems. The database is not working, or the network is not actually reliable. So, from the Envoy perspective, Envoy is an open source proxy we built at Lyft; it's now part of CNCF, and it's having tremendous uptake across the industry, which is fantastic. And the reason that we built Envoy is that what we're seeing now in the industry is people moving towards polyglot architectures, so they're moving towards architectures with many different applications and many different languages. And it used to be that you could use Java and you could have one particular library that would do all of your networking and service discovery and load balancing, and now you might have six different languages. So how, as an organization, do you actually deal with that? And what we decided to do was build an out-of-process proxy, which allows people to build a lot of functionality into one place, around load balancing, and service discovery, and rate limiting, and buffering, and all those kinds of things, and also, most importantly, observability: things like tracing and stats and logging. And that allowed us to actually understand what was going on in the network, so that when problems were happening, we could actually debug what was going on. And what we saw at Lyft, about three years ago, is we had started our microservices journey, but it had actually almost stopped, because what people found is they had started to build services because supposedly it was faster than the monolith, but then we would start having problems with tail latency and other things, and they didn't know how to debug it. So they didn't trust those services, and then at that point they'd say, not surprisingly, we're just going to go back and we're going to build it back into the monolith. So, we were almost in that situation where things were kind of in that split.
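[Editor's note: the simplest way to picture the out-of-process proxy Klein describes. The application below speaks plain HTTP to an Envoy sidecar on localhost and names the logical service it wants; service discovery, load balancing, retries, and stats all live in the proxy, not in this code. The listener port, route, and service name are illustrative only.]

    import requests

    # The app only knows: "ask my local sidecar for the 'locations' service".
    resp = requests.get(
        "http://127.0.0.1:9001/v1/locations/nearby",  # sidecar port is a placeholder
        headers={"Host": "locations"},                # logical service name, not a hostname
        timeout=0.25,
    )
    resp.raise_for_status()
    print(resp.json())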
>> So Matt, I have to think that's the natural path that led to service mesh, and Istio specifically, with Lyft, Google, and IBM all working on that. Talk a little bit more about what Istio is; it was really the buzz coming in, with service mesh. There are also some competing offerings out there, Conduit, a new one announced this week. Maybe give us the landscape, kind of where we are, and what you're seeing. >> So I think, with service mesh, it's incredible to look around this conference; I think there are 15 or more talks on service mesh, between all of the Buoyant talks on Linkerd and Conduit, and Istio and Envoy. It's super fantastic. I think the reason that service mesh is so compelling to people is that we have these problems where people want to build in five or six languages, they have some common problems around load balancing and other types of things, and this is a great solution for offloading some of those problems into a common place. So, the confusion that I see right now around the industry is that service mesh is really split into two pieces: the data plane, so the proxy, and the control plane. The proxy is the thing that actually moves the bytes, moves the requests, and the control plane is the thing that actually tells all the proxies what to do, tells them the topology, tells them all the configurations, all the settings. So the landscape right now is essentially that Envoy is a proxy, it's a data plane. Envoy has been built into a bunch of control planes: Istio is a control plane, and its reference proxy is Envoy, though other companies have shown that they can integrate with Istio. Linkerd has shown that, NGINX has shown that. Buoyant just came out with a new combined control-plane-plus-data-plane service mesh called Conduit; that was brand new a couple of days ago. And I think we're going to see other companies get in there, because this is a very popular paradigm, so having the competition is good. I think it's going to push everyone to be better. >> How do companies make sense of this? I mean, if I'm just a boring enterprise with complexity, legacy, you know, I have a lot of stuff, maybe not the kind of scale in terms of transactions per second, because they're not Lyft, but they still have a lot of stuff. They've got servers, they've got data centers, they've got stuff in the cloud, and they're trying to put this cloud native package in because the developer movement is clearly pushing the legacy guy, the old guard, into cloud. So how does your stuff translate into the mainstream? How would you categorize it? >> Well, what I counsel people is, and I think that's actually a problem that we have within the industry, that I think sometimes we push people towards complexity that they don't necessarily need yet. And I'm not saying that all of these cloud native technologies aren't great, right? I mean, people here are doing fantastic things. >> You know how to drive a car, so to speak; you don't need to know how the tech works. >> Right, and I advise companies and organizations to use the technology and the complexity that they need. So I think that service mesh and microservices and tracing and a lot of the stuff that's being talked about at this conference are very important if you have the scale to have a service-oriented microservice architecture. And, you know, some enterprises are segmented enough that they may not actually need a full microservice real-time architecture. So I think the thing to actually decide is, number one, do you need a microservice architecture? And it's okay if you don't; that's just fine. Take the complexity that you need. If you do need a microservice architecture, then I think you're going to have a set of common problems around things like networking, and databases, and those types of things, and then, yes, you are probably going to need to bring in more complicated technologies to actually deal with that. But the key takeaway is that as you bring on more complexity, the complexity is a snowballing effect. More complexity yields more complexity.
>> So Matt, might be a little bit out of bounds for what we're talking about, but when I think about autonomous vehicles, that's just going to put even more strain on the kind of the distributed natured systems, you know, things that have to have the edge, you know. Are we laying the groundwork at a conference like this? How's Lyft looking at this? >> For sure, and I mean, we're obviously starting to look into autonomous a lot, obviously Uber's doing that a fair amount, and if you actually start looking at the sheer amount of data that is generated by these cars when they're actually moving around, it's terabytes and terabytes of data, you start thinking through the complexity of ingesting that data from the cars into a cloud and actually analyzing it and doing things with it either offline or in real-time, it's pretty incredible. So, yes, I think that these are just more massive scale real-time systems that require more data, more hard drives, more networks, and as you manage more things with more people, it becomes more complicated for sure. >> What are you doing inside Lyft, your job. I mean obviously, you're involved in open source. Like, what are you coding specifically these days, what's the current assignment? >> Yeah, so I'm a software engineer at Lyft, I lead our networking team. Our networking team owns obviously all the stuff that we do with Envoy, we own our edge system, so basically how internet traffic comes into Lyft, all of our service discovery systems, rate limiting, auth between services. We're increasingly owning our GRPC communications, so how people define their APIs, moving from a more polling-based API to a more push-based API. So our team essentially owns the end-to-end pipe from all of our back-end services to the client, so that's APIs, analytics, stats, logging, >> So to the app >> Yeah, right, right, to the app, so, on the phone. So that's my job. I also help a lot with general kind of infrastructure architecture, so we're increasingly moving towards Kubernetes, so that's a big thing that we're doing at Lyft. Like many companies of Lyft's kind of age range, we started on VMs and AWS and we used SaltStack and you know, it's the standard story from companies that were probably six or eight years old. >> Classic dev ops. >> Right, and >> Gen One devops. >> And now we're trying to move into the, as you say, Gen Two world, which is pretty fantastic. So this is becoming, probably, the most applicable conference for us, because we're obviously doing a lot with service mesh, and we're leading the way with Envoy. But as we integrate with technologies like Istio and increasingly use Kubernetes, and all of the different related technologies, we are trying to kind of get rid of all of our bespoke stuff that many companies like Lyft had, and we're trying to get on that general train. >> I mean you guys, I mean this is going to be written in the history books, you look at this time in a generation, I mean this is going to define open source for a long, long time, because, I say Gen one kind of sounds pejorative but it's not. It's really, you need to build your own, you couldn't just buy Oracle database, because, you probably have some maybe Oracle in there, but like, you build your own. Facebook did it, you guys are doing it. Why, because you're badass, you had to. Otherwise you don't build customers. >> Right and I absolutely agree about that. 
I think we are in a very unique time right now, and I actually think that if you look out 10 years, and you look at some of the services that are coming online, and like Amazon just did Fargate, that whole container scheduling system, and Azure has one, and I think Google has one, but the idea there is that in 10 years' time, people are really going to be writing business logic, they're going to insert that business logic >> They may do a powerpoint slides. >> That would be nice. >> I mean it's easy to me, like powerpoint, it's so easy, that's, I'm not going to say that's coding, but that's the way it should be. >> I absolutely agree, and we'll keep moving towards that, but the way that's going to happen is, more and more plumbing if you will, will get built into these clouds, so that people don't have to worry about all this stuff. But we're in this intermediate time, where people are building these massive scale systems, and the pieces that they need is not necessarily there. >> I've been saying in theCUBE now for multiple events, all through this last year, kind of crystallized and we were talking about with Kelsey about this, Hightower, yesterday, craft is coming back to programming. So you've got software engineering, and you've got craftsmanship. And so, there's real software engineering being done, it's engineering. Application development is going to go back to the old school of real craft. I mean, Agile, all it did was create a treadmill of de-risking rapid build scale, by listening to data and constantly iterating, but it kind of took the craft out of it. >> I agree. >> But that turned into engineering. Now you have developers working on say business logic or just solving, building a healthcare app. That's just awesome software. Do you agree with this craft? >> I absolutely agree, and actually what we say about Envoy, so kind of the catchword buzz phrase of Envoy is to make the network transparent to applications. And I think most of what's happening in infrastructure right now is to get back to a time where application developers can focus on business logic, and not have to worry about how some of this plumbing actually works. And what you see around the industry right now, is it is just too painful for people to operate some of these large systems. And I think we're heading in the right direction, all of the trends are there, but it's going to take a lot more time to actually make that happen. >> I remember when I was graduating college in the 80s, sound old but, not to date myself, but the jobs were for software engineering. I mean that is what they called it, and now we're back to this devops brought it, cloud, the systems kind of engineering, really at a large scale, because you got to think about these things. >> Yeah, and I think what's also kind of interesting is that companies have moved toward this devops culture, or expecting developers to operate their systems, to be on call for them and I think that's fantastic, but what we're not doing as an industry is we're not actually teaching and helping people how to do this. So like we have this expectation that people know how to be on-call and know how to make dashboards, and know how to do all this work, but they don't learn it in school, and actually we come into organizations where we may not help them learn these skills. >> Every company has different cultures, that complicates things. 
>> I remember when I was graduating college in the 80s, that sounds old, but not to date myself, the jobs were for software engineering. I mean, that is what they called it, and now we're back to it. Devops brought it back; cloud brought back that systems kind of engineering, really at a large scale, because you have to think about these things.

>> Yeah, and I think what's also kind of interesting is that companies have moved toward this devops culture of expecting developers to operate their systems, to be on call for them, and I think that's fantastic. But what we're not doing as an industry is actually teaching and helping people how to do this. We have this expectation that people know how to be on call, know how to make dashboards, and know how to do all this work, but they don't learn it in school, and then they come into organizations where we may not help them learn these skills.

>> Every company has different cultures; that complicates things.

>> So I think, as an industry, we are also figuring out how to train people and how to help them actually do this in a way that makes sense.

>> Well, fascinating conversation, Matt. Congratulations on all your success. Obviously a big fan of Lyft; one of the board members gave a keynote, she's from Palo Alto, from Floodgate. Great investors, great fans of the company. Congratulations, great success story, and again, open source: this is the new playbook, community scale, contribution, innovation. TheCUBE's doing its share here, live in Austin, Texas, for KubeCon and CloudNativeCon, the Kubernetes conference. I'm John Furrier, for Stu Miniman. We'll be back with more after this short break. (futuristic music)

Published Date : Dec 7 2017


Patrick Chanezon, Docker | Open Source Summit 2017


 

(Upbeat Music)

>> Announcer: Live from Los Angeles, it's theCUBE, covering Open Source Summit North America 2017, brought to you by the Linux Foundation and Red Hat.

>> Hey, welcome back everyone, we're live here in Los Angeles, California for theCUBE's exclusive coverage of Open Source Summit in North America. I'm John Furrier, with my co-host Stu Miniman. Our next guest is Patrick Chanezon, who is a member of the technical staff at Docker, and also on the governing board of the Cloud Native Computing Foundation, also known as CNCF, which is the hottest part of the open-source community right now. It's very hot, it's very trendy, a lot of people are on the bandwagon, a lot of contribution going on. Welcome back to theCUBE. Great to see you.

>> Hey, thanks, John and Stu, it's very good to be back on theCUBE.

>> Docker's been just a great company to follow since the beginning, from the birth of Docker to the transformation from dotCloud to Docker. It's just a great team. We have a lot of respect for you guys. Congratulations. But the CNCF right now is the hottest thing; there are more platinum sponsors than, I think, maybe members. It seems to be very hot. The industry loves it, developers are going crazy about it. Why is CNCF so hot? What's your perspective on that?

>> What we're seeing right now is really the realization of the adoption of containers that we talked about two years ago. It was very early then, and people were starting to use Docker and just discovering containers. Today they're really putting them into production, and what we see at Docker with our customer base is that they are using them more and more to modernize traditional applications. So we see tremendous use of containers everywhere in enterprises, and the rise of CNCF is tied to that, I think. We're seeing more and more developers joining the bandwagon, more and more systems being built based on containers. And at Docker, we're playing a big role in that.

>> Patrick, for a couple of years the chant was Docker, Docker, Docker, and sometimes people say, "Kubernetes is where the hotness is." Well, underneath that, there's containers, and in a lot of those containers, Docker's involved. Maybe you can help us understand the nuance a little bit. As the Kubernetes wave has grown, sure, there was the Mesos, Docker Swarm, Kubernetes war, if you will, but what does this mean for Docker? What are you seeing from your customers? Give us the update on Docker itself. We'll probably need to get into the Moby stuff, too, as we get into the interview.

>> Sure, definitely. That's a big question, so let's start at the beginning. When enterprises adopt containers, what happens is that usually it starts with the developers, who are adopting containers with Docker. They download Docker for their Windows machine, or for their Mac, or on Linux, and they start modernizing their applications. What we see is more and more enterprise developers modernizing existing applications by Dockerizing them, and then the next step is that they want to put that into production. For that, you need a whole system. So at Docker, we have two offerings: Docker CE, and Docker EE, our enterprise version that has role-based access control and all that good stuff. There are lots of different components that you need in order to have a production container system, and Kubernetes, the orchestration engine, is one piece of that. At Docker, we have SwarmKit. But there are lots of other components and lots of different layers to that system.
So you have the infrastructure layer that you use to deploy that, inside the firewall or on different cloud providers, and there are many different solutions there. At Docker, we have one that's called InfraKit, which we use in our editions to deploy everywhere. Then on top of that, you need some version of Linux. At DockerCon in April, we released a project called LinuxKit, which helps you do that. On top of that, you need a container runtime. Traditionally, that's been Docker. Recently, we refactored the Docker codebase to extract a core runtime component called containerd, which we donated to CNCF. containerd is nearing 1.0, so it will be 1.0 pretty soon. Then, on top of that, you need an orchestration engine. Docker EE comes with its own orchestration based on Swarm, and Kubernetes is another orchestration engine that people like. Kubernetes, behind the scenes, is using Docker, and right now we are working very closely with the Kubernetes community to implement CRI-containerd. CRI is the Container Runtime Interface in Kubernetes that lets you plug in different engines, so you can plug containerd in the place of Docker there.
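To give a feel for containerd as a standalone runtime layer, here is a sketch using its Go client, modeled on the getting-started example containerd published for its 1.x releases. Package paths and options have shifted between versions, so treat the exact calls as indicative rather than definitive:

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to the containerd daemon over its unix socket.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// containerd is fully namespaced, so every call carries a namespace.
	ctx := namespaces.WithNamespace(context.Background(), "example")

	// Pull an image and unpack it into a snapshot.
	image, err := client.Pull(ctx, "docker.io/library/redis:alpine", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Create a container: metadata plus a snapshot and an OCI runtime spec.
	container, err := client.NewContainer(ctx, "redis-example",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("redis-example-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// A task is the running instance of the container.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Println("container started; pid:", task.Pid())
}
```

This is the layer an orchestrator such as Kubernetes (via CRI) or Swarm sits on top of: image pull, snapshot, spec, and task lifecycle, with no opinion about scheduling.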
>> Stu: There's a lot of pieces in here. We had a few interviews yesterday talking about the Open Container Initiative, or OCI, which really made sure we've got the 1.0 version of that done. On container format, it seems like we're in agreement; we're not fighting over that piece anymore. From the Kubernetes community, I heard loud and clear, they're like, we've got containerd, we've kind of got what we want, we're happy it's open-sourced, we're going. We were at DockerCon when you announced Moby, the open-source side, and it felt like we were still trying to figure out all those pieces. Give us the update on Moby. You're talking at the open source show, and you talk a little bit about CE and EE being the productized versions, but part of it is that what we used to think of as Docker is now Moby, the company Docker versus the project. You kind of teased those apart a little bit, right?

>> Yes, exactly. And actually, that's what I came here to the Open Source Summit to talk about, to give people an update on the Moby project. What we announced back in April was the launch of the Moby project, which was the end of a two-year refactoring of the Docker codebase into different components. All these components in the stack that I told you about, we teased them out from the Docker codebase so that it's a modular set of components that you can assemble together. Moby is three things. It's an open-source project where people can collaborate on container-based systems. It's also a tool that we're using to assemble our components into the upstream of the Docker products. And it's a set of lots of components, like containerd, LinuxKit, InfraKit, Notary, and all the projects I talked about. One other thing we've started doing since April is proposing to donate some of these container projects to CNCF. containerd is already part of CNCF now. Recently, this summer, we proposed InfraKit, and they think it's a little bit too early for donation, because they want to see other, different projects in there. Right now we're in the process of proposing Notary, so there's an active discussion there, and I hope that the vote will happen probably next week or something like that. Notary is the component that we're using for signing in Docker, and we think it could be used in lots of different cloud-native systems, so it really has its place in the CNCF.

>> So an identity component for container management, or what specifically is that going to address?

>> Notary is the piece that we're using in Docker Content Trust to make sure that you can trust the images that you've built. Signing images, being able to revoke signatures, all the kinds of features that our customers love in Docker EE.

>> John: It's kind of like Stu and me on Twitter: he's verified, I'm not. But this is important, because now this is a stamp of approval, if you will, that the community can look to.

>> Yeah, definitely. So it's something that we implement in Docker, and now people building other container systems will be able to use it. Moby saw a lot of traction for its different projects; some of them are going to CNCF, and some of them are growing by themselves. On the Docker side, we made some progress productizing all that with Docker CE and Docker EE. We had the 17.06 release of Docker EE recently, with lots of new role-based access controls for enterprises, who are adopting it essentially to modernize their traditional apps.
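The primitive underneath that image trust story is content addressing: a client only accepts bytes whose digest matches what signed metadata says it should be. Notary's real workflow (TUF roles, key rotation, revocation) is far richer, but a minimal sketch of the final integrity check looks like this; the layer bytes and digest here are invented for illustration:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// verify reports whether blob hashes to the digest that signed metadata
// vouched for. Real clients compare "sha256:<hex>" references; the prefix
// is omitted here to keep the sketch short.
func verify(blob []byte, expectedHex string) bool {
	sum := sha256.Sum256(blob)
	return hex.EncodeToString(sum[:]) == expectedHex
}

func main() {
	layer := []byte("example image layer bytes")

	// Pretend this digest arrived via signed Notary/TUF metadata.
	trusted := sha256.Sum256(layer)
	fmt.Println(verify(layer, hex.EncodeToString(trusted[:]))) // true

	// Tampered content fails the check.
	fmt.Println(verify([]byte("tampered bytes"), hex.EncodeToString(trusted[:]))) // false
}
```

Signing protects the digest; the digest protects the bytes. Revocation, which Patrick mentions, works by invalidating the signed metadata rather than the content itself.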
>> Take us through kind of a personal question. You were just at a board meeting with the CNCF. Did everyone show up, or were people calling in?

>> I think Alexis Richardson was the only one, maybe two people, on the phone.

>> John: Was Sam Ramji there?

>> Sam was not there either, but Epona was standing in for him. So the room was full, and to me it's really an impressive achievement, two years after we helped start the CNCF. The first meetings were 10, 15 people at Google deciding to create this foundation, and today there are maybe twenty or thirty people around the table. And everybody--

>> Even before that Google meeting, we were covering the Kubernetes movement early on, from your event. I think, out of DockerCon and some of the Linux Foundation events, the early momentum, we were there, Stu. Then it became the CNCF, and they decided, hey, let's get the Cloud Native Computing Foundation going. So it's interesting to me, seeing the growth from the beginning. And it's unique to have that opportunity to be on the front lines of an organically developing group. It wasn't really "build the table and they will come"; this was a realization.

>> It was a realization, and also a concerted effort to build something together, to show customers where container systems were going in terms of architecture--

>> What were the factors? I mean, Docker was a big driver; notably, you should get the credit for pioneering the space. But what were the drivers for this coalescing, this call to arms, if you will, this organic formation of CNCF? What were the key drivers, in your mind? Obviously, containers is one. What are the other ones?

>> Yeah, to me, containers is a big one, because when you start to design your system with containers in mind, you need to change lots of things: how you're building them, and how you're architecting things together. There were lots of questions about how you do load balancing in that kind of system, how you do monitoring, how you do tracing. The CNCF was assembled so that all these components have a place where we can show interoperability between them. So Docker is part of that, Mesos is part of that, as well as Kubernetes. There's a big interoperability effort happening in there. We had a report in the board meeting today about the new CI initiative that tests different CNCF projects together.

>> John: What CI?

>> Sorry, continuous integration.

>> John: Got it, yeah.

>> So there's the continuous integration--

>> John: Not converged infrastructure.

>> Oh, you're right, yeah.

>> We always get acronym-ed up. But Chris Aniszczyk was talking yesterday about the graduation path; we're still waiting to see something graduate from the process. What's going to graduate first? Any bets? What betting is going on? Do you guys actually make bets? Is there fantasy drafting going on?

>> I don't think that really matters; what matters is really adoption of the components.

>> Okay, so what's happening on the graduation scale? What's coming out of the woodwork? What's going to graduate first?

>> One thing I'm curious about is whether containerd will graduate, because it's kind of mature now; it's reaching 1.0, with the CRI work and soon integration in Docker, so it may be a good candidate for graduation. For the others, I don't know which ones would be first into the graduation process.

>> Well, we know it's a high bar, for sure.

>> Patrick, on the stuff that's getting mature, what about some of the roadmap there, from Docker and CNCF? Something like serverless on containers seems like it's going to be important. We've had so many interviews this week talking about where serverless and OpenFaaS and things like that go. So how does that all fit in? Can you give us a Docker and a CNCF view on that?

>> Let's talk about the CNCF view first. CNCF is working on lots of different areas where there needs to be more definition about what cloud native means: for storage, for example, with the CSI initiative, the Container Storage Interface; CNI, the Container Network Interface; and then there's the working group for CI, which is about integrating all these projects together. But the working group I'm most interested in is the serverless one. We have a Docker rep in the serverless working group, and there we're trying to define what a portable serverless stack looks like. And at Docker, we're naturally interested in this--

>> Of course. Serverless is a beautiful thing.

>> Most of these projects are running on top of Docker, so OpenFaaS, for people--

>> I've got to ask you, Patrick, because we love serverless. I have a love/hate relationship with the word serverless, because technically it's a beautiful thing, but there are servers involved. I'm old-school, so I look at it differently. The younger generation, they want infrastructure as code. This is a clear, obvious thing. It was once a dream, but now it's become a reality. What's your position on that? Where is it on the progress bar? How close are we to serverless?

>> I'd say there's initial adoption of serverless on the few stacks that exist out there today. You have the hosted services, the FaaS services, from Amazon, Microsoft, and Google. Where I'm more interested, and I think customers are kind of looking for this, is a portable way of doing that: for example, running that on top of the Docker platform, which is what projects like OpenFaaS are doing. Right now, I think we're really at the stage of discussions with CNCF about what a portable serverless layer would look like, so that you could focus on your code but be able to deploy on-prem, on top of Docker, or on different cloud providers. So that portability aspect, to me, is very important there. And I think it's important for customers as well. To me, also, I'm an old-timer as well; I used to pitch platform as a service at the beginning of it, Google App Engine, many years ago. So to me, it's kind of a feeling of déjà vu. We're kind of re-inventing that, but with containers and in a much more portable way.
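To make the "focus on your code" model Patrick describes concrete, here is a generic sketch of the portable FaaS-style idea: the developer writes one handler, and a small shim stands in for whatever platform (OpenFaaS, a hosted FaaS, or anything else) would actually route, scale, and run it. The handler shape and the shim are invented for illustration, not any specific framework's actual template.

```go
package main

import (
	"fmt"
	"net/http"
)

// Handle is the only thing the developer writes in this model:
// input in, output out, no servers or plumbing in sight.
func Handle(input string) string {
	if input == "" {
		input = "world"
	}
	return fmt.Sprintf("Hello, %s!", input)
}

func main() {
	// A minimal shim standing in for the platform's plumbing. In a
	// portable serverless stack, routing, scaling, and deployment would
	// all live below this line, and the developer would never see them.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, Handle(r.URL.Query().Get("name")))
	})
	http.ListenAndServe(":8080", nil)
}
```

Portability here means the same Handle function could run on-prem, on a Docker-based platform, or on a hosted FaaS, which is exactly the property the working group discussion above is after.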
>> The beautiful thing about being an old-timer is that we get to look back and, not to say to the young kids "get off my lawn," but we had to walk to school barefoot in the snow and build our own libraries. I was just talking to Eilene; she's like, "Oh, my low-level class was C and my high-level class was Python." I'm like, "Our low-level class was machine code, and high-level wasn't even C yet."

>> Yesterday, at the party, I was talking with one of the IBM engineers who's working on Linux and containers on mainframes, and we were talking about JCL, and that's the kind of feeling we got. We're getting higher up the stack, and I think for modern developers it really helps them--

>> It's a beautiful thing right now. Just think about the young guns that are coming up. This is a beautiful library of options now; 90% of the code is leverageable. That's unbelievable. It really frees the developer's creativity to go into the 10 to 20 percent of real intellectual property they can bring to the table, rather than into the structural engineering of the codebase.

>> I would add something: it's really about creating value, as opposed to building infrastructure. As we get up the stack, and serverless is an example of that, it's really about creating value for enterprises, and that's what these developers are focused on.

>> When you start dreaming in code, you know you're doing good. Patrick, thanks so much for coming on theCUBE, and congratulations on all the success with CNCF, and certainly Docker. You guys continue to impress and do a great job. I know there are some changes over there we're looking for: some cool stuff graduating out of CNCF, more Docker container goodness from you guys. Thanks for coming on theCUBE. We appreciate it. I'm John Furrier; we're live in Los Angeles, California, for Open Source Summit North America coverage with theCUBE. I'm John Furrier, with Stu Miniman, back with more after this short break.

Published Date : Sep 12 2017
