Murli Thirumale, Portworx & Satish Puranam, Ford | KubeCon + CloudNativeCon NA 2019
(upbeat music) >> Narrator: Live, from San Diego, California, it's theCUBE! Covering KubeCon and CloudNativeCon. Brought to you by RedHat, the Cloud Native Computing Foundation, and its ecosystem partners. >> Welcome back, this is theCUBE's fourth year of covering KubeCon and CloudNativeCon. This is the North America show here in San Diego, it's 2019, he is John Troyer, I am Stu Miniman, and happy to welcome to the program, first of all, I have Murli Thirumale, who is the co-founder and CEO of Portworx, and Murli, thank you so much for bringing one of your customers on the program, Satish Puranam, who is a Technical Specialist with Ford Motor Company. Gentlemen, thank you so much for joining us. >> Delighted to be here. >> All right, so Satish, we're going to start with you because, you know, the growth of this ecosystem has been phenomenal, there were End Users up on the mainstage, we've already had them, there's over, there's 129 now CNCF End User Participants there, but, you know, bring us in, Ford, you know, we were getting ready for this, we were talking, there's so much change going on, from, you know, of course, everybody talks about autonomous vehicles and whatnot, but, you know, technology has really embedded itself deeply into a company like Ford, so before we get into all of it, just bring us a little bit into your world, what's happening, what's changing, and, you know, what your team does. >> Sure, so like uh, Ford generally has been on, like, a transformation journey for about the last two years now, that includes, like, completely redoing our Data Centers, our Application Portfolio. As part of this modernization journey, we started our journey with Cloud Foundry, we have been a huge Cloud Foundry shop for some time. And then we also started dabbling with, like, Kubernetes and the associated technologies, primarily due to, like, looking for, like, data services, messaging services, a lot of the stateful things, right? Cloud Native and, like, Kubernetes, and I- Cloud Foundry, I am sorry, did great wonders for us for the stateless apps. So what do we do with, like, stateful things? And that's when we started dabbling with Kubernetes and things like that. >> Yeah, Satish, if I could, I want to step back one second here, and say, you know, you do a transformation, consolidation, moving from monoliths to microservices, what was the business driver here? Was it one day, some executive got up and said, you know, "hey, this sounds really cool, go do it", or was there a specific driver in the business that now your organization needs to respond to? >> I think the business driver is cost efficiency. Like, uh, there were, like, a lot of things that we would have not done, so there's a lot of technical debt we have to pay down, because of various fragmentation and various other things, so it's always about realizing business efficiencies, and most importantly, the speed at which we deliver services to our customers internally, so that was the main driving force for our engaging in this transformation journey for, like, about the last few years. >> Okay, Murli, we'd love to bring you into this conversation here. Obviously, agility is one of the things I hear most from customers as the driver of new things. Infrastructure, for the longest time, in many ways, was like a boat anchor that held us back. >> Murli: Yep. 
>> Especially you know, our friends in Networking and Storage, it is difficult to change and keep up with what's driving there, so bring us uh, bring us up to speed with Portworx and how you fit into Ford and more broadly. >> Yeah, just a quick introduction to Portworx, we've been around for about five years now, right from the early days of containers and Kubernetes, and you know, we have quite a few customers now in Production, we have about 130 customers, 50 of the, uh, the global 2k and so on, and many, almost all of those customers are in Production, deploying significant workloads. The interesting thing about Kubernetes in the last couple of years, especially, is that everybody recognizes it has won the war for orchestrating containers and applications, but the reality is, the customer still has to manage the whole stack, the stack of not just the app but the data itself underneath, and that's kind of the role of Portworx. Portworx is the platform for storage for Kubernetes, and we orchestrate all the underlying storage and the data applications. With that being said, I think one of the things that we've seen that Ford has kind of led the way in, and has been really amazing, is some of the many surprising things that people don't really know about Kubernetes, which have been happening now with customers like Ford for a while. One of them, for example, is just the use of Kubernetes in on-prem applications. Very few people really kind of, they think of Kubernetes as something that was born in the Cloud, and therefore has kind of really only mushroomed in the Cloud, and you know, one of the key things about Kubernetes is most of our customers are actually on-prem, and it, to me, is transforming the Data Center. The agility that Satish speaks about is something that you don't just need because you are operating in the Cloud, you need that for all of your on-prem applications too, and that's been one of the unique characteristics that we've seen from Ford. >> Yeah, and that's, I mean, you talked about your journey, Satish, you know, the Pivotal folks really talk a lot about transformation and agility, you know, no matter where your apps were sitting. I'm kind of curious in terms of the storage and the stateful- statefulness of the applications that you're working with now, you know, what kind of a, if I looked at the diagram, what kind of a set-up would there be? So there's a Portworx layer underneath and beside Kubernetes that's managing some of the storage and some of the replication? Is it then, is the data sitting in a, you know, on a SAN somewhere, is it sitting in the Cloud, I mean, can you kind of describe what a typical application would look like? >> With your typical application, yes, we draw storage, we've been drawing storage for the past several years from NetApp as the primary source of our data, and then on top of that we run some kind of storage overlays, we dabbled with quite a few technologies, including, uh, Rook, NetApp Trident, uh, Gluster. You know, it was a journey, a journey that ultimately led us to Portworx, we just didn't start with Portworx. But the toughest aspect has been the gravity that the storage brings along with it, and all the things that are, Cloud Native is great, but Cloud Native has state somewhere and that has to be managed someplace, and we said "Hey, can we do that with Kubernetes?" Right? 
So, I think we have done a- I won't say an outstanding job, but at least we've done a reasonably good job at actually at least wrapping our heads around it, and we have quite a few workloads in production that are actually stateful, whether they are database systems, uh, there are also, like, data messaging systems, and many such applications and all that stuff, so that has been something that we have been working on for the past few years on our platforms at least. >> Yeah, well I wonder if you could expand a little bit on kind of the application suite, you know, "What can we do? What can't we do?" Listening to the keynote this morning, I definitely heard that if you look at a multi-cluster environment, you know, you want to mirror and have the same things there. Well, I can't just magically have all the data everywhere, and data has gravity and the laws of physics do apply, so I can't just automatically replicate terabytes from here to the Cloud or back, so help us understand where we are. >> So, you know, one of the, uh, one of the things Satish told me yesterday, which I loved, was he said: "Stateful is almost easier than stateless now because of the fact that we have these extensions of Kubernetes." So, one of the things that's been very, very impactful is that Kubernetes now has these extensions for managing, you know, storage, networking and so on, and in fact the way they do that is through an API that is just an overlay, so we are an example of an overlay. And so think about it this way, if a customer, about 60 percent of our customers are building a platform as a service, in many cases they don't even know what applications are going to be in there, so over our customer base we see the same alphabet soup over and over and over again. Guess what it is: Postgres, Cassandra, all the databases, Redis, right? You know, all of the messaging queues, right? Things like Kafka and, uh, you know, streaming data, for example, Spark workloads. And so, one of the key things that is happening with customers, particularly on the enterprise side, like large enterprises, is they are using all kinds of applications and they're all stateful. I mean, there are very few enterprises that are not stateful, and they're all running on some kind of a storage substrate that has virtualized the underlying storage. So we run on top of the underlying hardware, but then we're enabled to kind of work with all of the orchestration that Kubernetes provides, but we're adding the orchestration of the data infrastructure as well as the storage itself. And I think that's one of the key things that's changed with Kubernetes in the last, I would say, two and a half years is, most people used to think of it as "in the cloud and stateless" but now it's "on-prem and stateful." >> So Satish, one of the things we've talked to customers about is their journey of modernizing their applications, it's, there's things that you might build brand-new and are great here but, you know, I'm sure you have thousands of applications and-- >> Satish: Absolutely. >> You know, going from the old way to a brand new thing, there's lots of different ways to get there. Some of it you might need to just-- Where are you with the journey of getting things onto this platform layer that we're talking about? And what will that journey look like for Ford? 
Net new apps, anything new that we're talking about writing, like Cloud Native, Twelve-Factor apps. But anything existing, data services, messaging services, what we affectionately call table-stakes services, right, which the Twelve-Factor apps rely on, we are targeting towards Kubernetes. The idea is, "are we there yet?" Probably no, like, we are getting there along with our partners to put it on platforms like Kubernetes, right? So we are also doing a lot of automation and orchestration on VMs itself. But the idea is heavier and heavier workloads are going to be landing on Kubernetes platforms, and there will be a lot of work in the upcoming years, particularly 2020, where we will be concentrating more on those things, and the continuing growth would be on Twelve-Factor, net new, could be in Cloud Foundry, could be in Kubernetes. Time will tell, but uh, that's the guiding philosophy, so to speak, but uh, there's a lot that we have to learn in this journey right now. >> Well I was kind of curious about that Satish, we've talked about an alphabet soup, we've talked about a lot of different projects, and certainly here at KubeCon, the thing about the Cloud Native Computing Foundation is that, not that they don't have opinions, but everybody has an opinion, there's lots of different components here, it's not one stack, it's a collection of things that could be put together in several different ways. So you've tried a bunch of different things with storage, I'm actually, I'm interested if there were kind of surprises or, you know, containerized activity is probably different than I/O activity, and storage I/O is probably different than in a virtual machine, the storage itself has some different assumptions built into it, so like, do you have any advice for people? I'm interested in the storage case but also just in, you know, you have to evaluate networking and security and compliance and a lot of different things. Like, how do you go about approaching this sort of evaluation, this trial, this journey, when you're facing an "alphabet soup" of options? >> I think uh, it all comes down to basic engineering, right? So, what I mean is, think about "what are your failure points?" Like, it could be servers failing, infrastructure, hardware failing, right? So, the basic tenet is that we try to introduce failure as early as possible, like, "what happens if you pull the wire?" and "what happens if a server failure happens?" The question that always comes back is, "is there a way I can compose the same infrastructure so that I can spread it across a couple of failure domains?" I think that was the whole idea when we started, like, "can we decompose the problem such that we can actually take advantage of primitives that are baked into Kubernetes?" The great thing with CSI, that we're just realizing (before that it was all flex drivers) is, how do you actually organize storage in the back end in a way that allows you to compose this thing on the front end using the Kubernetes primitives. I think that was the process we thought through. >> John: And CSI is a standard API, >> Correct. >> Yeah, storage API, yeah. >> Exactly. I mean that's what we are relying on, we're hoping that it's going to help us with things like, uh, moving compute, uh, to the storage rather than moving storage to the compute. That's one of the evolving threads of thinking that we're working with. 
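To make the primitives being described here a little more concrete, below is a minimal sketch of a CSI-backed StorageClass plus a PersistentVolumeClaim against it. The provisioner name, parameters, and sizes are illustrative placeholders rather than Ford's or any vendor's actual configuration; the point is that the application only asks for a class and a capacity, and spreading the data across failure domains is composed on the back end.

```yaml
# Hypothetical StorageClass backed by a CSI driver; driver name and parameters are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-replicated
provisioner: csi.example-vendor.com   # each storage vendor ships its own CSI driver
parameters:
  replicas: "2"                       # e.g. keep copies in two failure domains
allowVolumeExpansion: true
---
# The application side: claim storage through the standard Kubernetes API.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: orders-db-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: fast-replicated
  resources:
    requests:
      storage: 50Gi
```

Because the claim goes through the same CSI interface regardless of which driver sits behind it, the same manifest can be carried between clusters and backends, which is the portability argument made in the conversation.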
Portworx, we've been working with the community folks there and a couple of other areas. There's a lot to be done here, like we're still in the early days, I would say. >> Murli, want to make sure we get it out there, Portworx had some updates for this week, so what's the latest? >> Yeah, so, the updates actually relate exactly to what Satish was talking about, you know, the idea of, so, container storage has kind of been on its own journey, right? In the early days, that John remembers well, it was really providing storage persistence, making that data available everywhere. It then clearly moved to HA, having the high availability, say, within the cluster and so on. But the data life cycle for the application that's been containerized extends well beyond that, so we are making extensions to our own product that are kind of following that path. So, one of the things we launched a few months ago was disaster recovery, DR, which is very, very specific to containers, so, container-granular DR, so you can kind of, you know, take a snapshot, not just of the data, but of the application state as well as the Kubernetes pods, and back up and recover all three of them. At this KubeCon we're announcing two other things. One of them is backup, so our customers, as they make the journey through their app life cycle, inevitably they need to back up their data, and we have, again, container-granular backup, and we provide all of that, by the way, on existing storage. We're not asking anybody to change the hardware storage substrate underneath. The last thing we're introducing is storage capacity management, which is fully automated. You know, one of the characteristics of Kubernetes is that it's all about getting the person, getting the people, out of the picture, right? The world is going to be automated. Kubernetes is one of the ways people are doing that. And what we have provided is the ability to auto-resize volumes, auto-resize pools of storage, and add more nodes automatically based on policy, completely automated, so that again, these applications, you know, the characteristics of containerized workloads, they aren't predictable; they go up and down and they grow very fast sometimes, and so all of that management, so autopilot, uh, you know, backup, DR, have now been added in addition to persistence and HA. >> Alright, so before I let you both go, uh, want to talk about 2020? >> So soon. >> Satish, I want to give you a wish. You talked about all the things you'd do the next couple of years, if you could get one thing more out of this ecosystem to make your lives easier for you and your team, you know, what would that be? >> I think standardization on more of these interfaces. Kubernetes provides a great platform for everybody to interact equally. Uh, more things like CSI, CRI, stuff that's happening in the community. And more standardization will actually make my life, and things in enterprises, a lot easier. I would like to see that continue happening. The GPU space looks very interesting, um, so we'll see. That would be my wish at least. >> Alright so Murli, I'm not giving you a wish. You're going to tell me, what should we be looking for from Portworx in participation in, you know, in this community over the next year. 
>> I think one of the big changes that's happened, really, in the last couple of years, that is really kind of achieving a hockey stick, is that enterprises are recognizing that stateful apps really should be using Kubernetes and can use Kubernetes. So to me, what I predict is that I think Kubernetes is going to move from just managing applications to actually managing infrastructure like storage. And so, you know, my belief is that 2020 is the beginning of where Kubernetes becomes the control plane across the Data Center and Cloud. It's the new control plane. You know, what OpenStack was aspiring to be many years ago, and that it will be looking upwards to manage applications and downwards to manage infrastructure, and it's not just us who are doing that, folks like VMware with Project Pacific have kind of indicated that that's the direction that we see. So I think its role is now much more than just an app orchestrator, it's really going to be the new control plane for infrastructure and apps in the enterprise and in the Cloud. >> Murli, Satish, thank you so much for sharing all the updates. >> Thank you >> Pleasure to catch up with both of you >> Thanks. >> Northbound, Southbound, Multi Cloud, theCube is at all of these environments and all the shows. For John Troyer, I'm Stu Miniman, as always, thank you for watching theCube.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
John Troyer | PERSON | 0.99+ |
Satish Puranam | PERSON | 0.99+ |
John Trayer | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Murli Thirumali | PERSON | 0.99+ |
Ford | ORGANIZATION | 0.99+ |
Murli | PERSON | 0.99+ |
50 | QUANTITY | 0.99+ |
John | PERSON | 0.99+ |
2019 | DATE | 0.99+ |
Satish | PERSON | 0.99+ |
Portworx | ORGANIZATION | 0.99+ |
One | QUANTITY | 0.99+ |
Cloud Native Computing Foundation | ORGANIZATION | 0.99+ |
Ford Motor Company | ORGANIZATION | 0.99+ |
2020 | DATE | 0.99+ |
San Diego | LOCATION | 0.99+ |
San Diego, California | LOCATION | 0.99+ |
RedHat | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.99+ |
yesterday | DATE | 0.99+ |
129 | QUANTITY | 0.99+ |
two and a half years | QUANTITY | 0.99+ |
Project Pacific | ORGANIZATION | 0.99+ |
Kubernetes | TITLE | 0.99+ |
KubeCon | EVENT | 0.98+ |
Murli Thirumale | PERSON | 0.98+ |
North America | LOCATION | 0.98+ |
fourth year | QUANTITY | 0.98+ |
CloudNativeCon | EVENT | 0.98+ |
about 60 percent | QUANTITY | 0.98+ |
2k | QUANTITY | 0.98+ |
about 130 customers | QUANTITY | 0.97+ |
Cloud Native | TITLE | 0.97+ |
one | QUANTITY | 0.97+ |
Postgress | ORGANIZATION | 0.97+ |
one stack | QUANTITY | 0.97+ |
this week | DATE | 0.97+ |
next year | DATE | 0.96+ |
one second | QUANTITY | 0.96+ |
Twelve factor Apps | TITLE | 0.95+ |
Net New | ORGANIZATION | 0.95+ |
VMware | ORGANIZATION | 0.94+ |
Portworx | TITLE | 0.94+ |
this morning | DATE | 0.93+ |
CNCF | ORGANIZATION | 0.93+ |
Lew Tucker, Cisco | KubeCon 2018
>> Live from Seattle, Washington it's theCUBE covering KubeCon and CloudNativeCon, North America 2018. Brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. (upbeat music) >> Hey everyone, welcome back to theCUBE. Day two live coverage here in Seattle of the CNCF, KubeCon and CloudNativeCon. I'm John Furrier, host of theCUBE, with Stu Miniman here all week for three days, as over multiple years we've been covering KubeCon. We've been covering this community all the way back to the OpenStack days to now CloudNative and Kubernetes, the rise of Kubernetes, and KubeCon has been great. The Cloud Native Computing Foundation, and at the center of it has been an individual, a CUBE alumni that we've talked to many times, Lew Tucker, VP and CTO of Cloud Computing at Cisco Systems. Great to have Lew on, good to see you. >> Great to be back again. >> We got a great history of conversations, and every year we kind of have a pinch-me moment where it's like it's so awesome right now, the technology's coming together, now more than ever, the standardization, the maturation of Kubernetes and what's going on around it, is probably one of the most exciting trends. It's not just about Kubernetes, it's about what that's enabling, ecosystems, storage, networking and compute now working together, magically creating a lot of value. So, we've talked about it, what's the update from your perspective, how do you see it evolving now? >> I see it very much the same way, I had a short little keynote today, yesterday, and was talking about how I think we've entered this kind of golden age of software where, because of the number of projects that are now going into the CNCF, for example, and elsewhere, and GitHub repositories, we just have a major driving force, which is the accumulation of the software that's used now to power the cloud, power data centers, totally transforming infrastructure. We're no longer cabling; as I sort of say, cabling has now become code. >> Yeah. >> And that's all about the software, and it comes through the open source communities. >> We were talking before we came on camera, and we've had other conversations, about the historical waves of innovation. AI's been around for a while, you know, all these things have kind of been around, but now with cloud computing and the resources available in terms of compute power, storage, and networking now programmable, it's creating a lot of innovation, right? And this has been a tailwind for some and a headwind for others, companies that have transformed and understood that have been leveraging it. We've seen conversations from NetApp, Cisco, you guys have transformed, you turned it into a tailwind, for Cisco, because now all that magic can come in for the programmability on the networking side. >> Exactly right, yeah. We see AI as having a big impact across the board on all of these, we're big contributors also into Kubeflow, for example, because on top of Kubernetes, the biggest issue we're going to have in AI going forward is we don't have enough AI engineers. We don't have enough people who are trained in that. So we need to create these tools, and the services that we see coming out in the cloud now for AI are designed to make it easy to consume AI. You don't have to be an AI expert in order to use it, and that sort of thing is really exciting. 
>> How is the CloudNative environment changing IT investments? 'Cause again, in the old days I'd have to throw a machine at something, I've got to buy this, and it's siloed; you've now got horizontal capabilities, you've got the vertical specialization with machine learning and AI, as you just referenced. How is it changing investments? People now are looking at re-imagining their infrastructure, they're re-imagining how apps are built. How is Kubernetes, CloudNative, impacting IT investments? >> So we've found, for example, when we talk to our customers and everything else, they're all using multiple clouds. So I think what we're getting to see arise here now is this multi-cloud environment that we have. And so Cisco, with what we've been doing with our hybrid solutions for AWS and hybrid solutions that we're having with Google, is making it so that you can have the same environment within your data center as you have in the cloud, and then we connect the two, so that now the IT infrastructure really is looking like a cloud, and there's many clouds, multiple clouds in your own data center, in multiple service providers. That makes it easier for IT to really consume CloudNative technology. >> I wonder if you can drill us down a level from what we're talking- you talk about Kubeflow and machine learning; remember back to big data, it was like okay, well what do we have to do with the network? Well, I need some more buffering, but you know, where are we, what is just the base infrastructure layer, and where Kubernetes and this ecosystem just becomes the platform for all of the modern applications, and what has to be done differently, I wonder if you could help- >> Yeah, so one of the big challenges I think is this: how do we connect the different clouds together with your own data center. And that's why the hybrid solutions that Cisco's driving now are designed specifically to make that easy, because it's scary for IT organizations to say they're going to open up some part of their firewall to have connections coming in, and so we provide a solution that makes it easy for people. And that means that things such as Kubeflow, and things like that, they can be running, perhaps they might do some of their research in a hybrid- in a public cloud provider, such as AWS or Google. And then they want to run it now in production within their own data center, and they don't want to change a thing. And at the same time, we're seeing other capabilities. You want to access some service in the cloud as a part of your enterprise app.
So the same interfaces are there, the IT doesn't have to relearn things, they can actually get the advantage of that standardization. >> And that's key for operations and IT because that is the promise of cloud operations. Similar on both platforms, on premises and in the cloud. And the next question is, okay, from a networking perspective, we've had many conversations with Suzie Wee at Cisco around network programmability, or NetDevOps as you guys call it, which is kind of a play on DevOps. This is the future, because with multi-cloud the apps don't need to know about where to provision workloads, which cloud when, is it a better region over here, latency, network factors come in, you still got to move things around, point A to B, edge of the network for IoT. Talk about the importance of network programmability now more than ever with CloudNative, why it's so important. >> Well, first and foremost, it has to be driven by APIs. The old days of actually going out and having people configure network switches to make connectivity, or open up provisions and firewalls and things like that, that's behind us. Now we have all that because of programmability of the network; through what we've been doing with ACI and other technologies, we can make it so we can connect these clouds and maintain the security. We're also seeing other things such as Istio and edge-based computing and things like that come into play, where again, the ordinary developer doesn't have to learn all of the details of networking and security, but the operations people need it to be secure, need it to be able to be moved around, need to be able to have telemetry so they can tell what's going on. >> One of the things we've been talking about on theCUBE, Stu and I were riffing on this yesterday, but for a while now it's also trickled into the Silicon Valley conversations around some of the tech elite people, around architecture. Cloud architects are in high demand and there's two schools of thought. There's a persona around a systems architect, more of a systems view, an operating systems kind of view, that's cloud as an operating environment, serverless, advanced, these are the kind of concepts of a systems-oriented thinker. And then you have the application developer that looks like an app server kind of world. Those are all paradigms that we've lived through. >> Right. >> Now coming together in one: horizontally scaled cloud, that's a system, vertical specialization around the apps, and with a dev ops layer having these guys work together. Talk about this dynamic, your thoughts on it, how it shapes employee selection, people who lead projects. 'Cause the CTO and architect role's now more important, but the software side's just as important. >> Yeah, so I think one thing that's become very clear is that we need to make it easier for the domain experts in an application area to just take care of their part. And so that's why, like, one of the previous episodes we talked about here was about Istio, where we've actually separated out essentially the data plane, the transport of data around, with security, encryption, identity, and everything else, from the actual application code of the microservice. That makes it much easier because now the engineering teams are too large, you can't have everybody know everything anymore, like you say, we've got specialists in different areas. 
We need to be able to provide, then, underlying systems that connect these things, and that underlying system then has to be managed by your operations people. So we've got dev ops, where the application people are writing code that the operations people actually use, so that we can actually have this kind of uniform infrastructure that is maintainable. >> And security is super important and all that good stuff. >> Yeah, so Lew it's interesting, we've been watching so many of the pieces; we worked on OpenStack, it was really from the bottom up, building the infrastructure, and we've seen the dynamic the last two years, Kubernetes some, and serverless even more, coming from the top down. We want to get your thoughts on that, we've been digging in and trying to tease out some of the Knative pieces that are being discussed here, versus some of the functions things that are happening, especially in Amazon and Microsoft, I'd love to get your take. >> I think we're always seeing this progression in platforms for computing, and programming languages, and PaaS, which we've talked about years ago. All of these things are designed always to make it easier. So you're right, we've got, for example, Knative now really coming on as saying, can we standardize a way, specifically helping Kubernetes people move into this area. Like I've mentioned before with Kubeflow again, how can we start to standardize these pieces? The beauty of this is, the standardized pieces are coming out in open source. So everybody gets it, and that means it's deployable in your public clouds, it's deployable in your data center, and then through a lot of the hybrid technology that Cisco's working on, you can connect those together. But you're right, we're going to continue to see innovation, that's great, because we need that, we need that constantly. What we need to be able to do is make it easier to consume and then integrate into these systems. And that's where I think Kubernetes has a lot to do with how we make it easier. >> Final question on Cisco, then I want to go on a more personal note with you on your situation, which is news breaking here on theCUBE. Cisco has successfully transformed its direction, it's always been a great leader in networking, always a great business, billions and billions of dollars in revenue. Now with CloudNative and Kubernetes, the relationship I saw with Amazon, you got Google, you guys have taken that systems view in making things programmable. Explain the Cisco strategy from your perspective as a CTO and as a legend in the industry, for the people that know Cisco, know the old Cisco, what is the new Cisco? And how does Kubernetes and how does all this CloudNative fit into the new Cisco? >> I think the new Cisco really is focused now on where customers are taking their computing resources, and it is in this multi-cloud world where we're seeing it's not a fight anymore. You can't say I have a reason to keep things here in my data center, I'm never going to go to cloud, and other customers are saying I'm never going to have a data center; now everybody's saying we're probably going to have both. And Cisco as a networking company, this plays right into our strength, because what you have to be able to do is now connect those environments in a secure way, in a manageable way. And so this plays right into where Cisco's growth I think is going to be, it'll be in much more of these kinds of services that allow that to happen, and in the relationships and partnerships that we have with the major cloud providers. 
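As a rough illustration of the Istio point made above, moving encryption and identity into the mesh instead of into each microservice, here is a minimal sketch using today's Istio security API (which is newer than what existed at the time of this conversation); the namespace name is just a placeholder.

```yaml
# Enforce mutual TLS for every workload in a namespace, with no application code changes.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: payments   # hypothetical namespace
spec:
  mtls:
    mode: STRICT
```

The sidecar proxies handle certificates, encryption, and workload identity; the application containers keep speaking plain traffic to their local proxy, which is exactly the separation of concerns being described.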
>> So this basically, the decomposition of monolithic applications into sets of microservices, is connected by the network. >> Exactly right. >> This is the fundamental beauty of where you guys see that tailwind. >> Exactly. >> Awesome. Well Lew, you've been a legend in the industry, I've been following your career from the beginning. You've been- you have product that's in the Computer History Museum, you've done amazing work at Sun Microsystems, I mean just a great storied career, the work you've done at Cisco, you've been on theCUBE so many times, I don't know the number. You've really contributed to the industry, and this news now about your situation, share the news about what's happening with you. >> Well, I made announcements at our CNCF board and our OpenStack board meetings that I'm leaving Cisco, and so I'm having to withdraw from the board positions as well as Cloud Foundry, and that's sad in a way because I have relationships with those people, but in many ways I want to spend some time to really see where the future is again, because as you know, in my career I've changed several times. And I'm so looking forward to actually now going into sort of a new direction, which may be much more moving up the stack. I think there's very exciting things going on in AI, there's exciting things going on in genomics. There's a lot of activity going on, so we've been building this technology for a purpose, to allow us to have those kinds of things. Now I want to start focusing much more directly. >> And you're leaving Cisco on what date? >> Leaving Cisco beginning of January. >> Well congratulations, great work, and I think one of the trends I think this speaks to is I see a lot of computer scientists, a lot of people who have some DNA from the old ways like you do, and been there, and contributed at a seminal level, just some great contributions. Seeing computer science as an opportunity to solve problems. This is kind of a renaissance, from seasoned pioneers and young people coming together. This is a great opportunity, is that kind of what you're thinking, you're just going to attack the problem? >> There's 8000 people here, this show's sold out, and this is all developers, so people who have a background in computer science or are getting online and learning it themselves, this is an opportunity and the time to get in. >> You've been a great mentor to many, you've been a great contributor in the open source community, again, your contributions at the systems level, and you understand certainly what's going on with CloudNative, looking forward to following up, and congratulations. >> Yep, well I hope to be back again. >> Of course, you're VIP CUBE alumni. Lew Tucker, exciting news, Cisco's transformed. He's moving on to- taking on some big new challenges, thanks for coming on theCUBE, really appreciate it. Lew Tucker, Vice President and CTO, Cisco Systems, moving on to some new endeavors. Here in theCUBE we're covering the live coverage here at KubeCon and CloudNativeCon, I'm John Furrier with Stu Miniman, back with more day two interviews after this short break. (upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Cisco | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Seattle | LOCATION | 0.99+ |
Lew Tucker | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
yesterday | DATE | 0.99+ |
two | QUANTITY | 0.99+ |
CloudNative Computing Foundation | ORGANIZATION | 0.99+ |
Antico System Partners | ORGANIZATION | 0.99+ |
Sun Microsystems | ORGANIZATION | 0.99+ |
billions | QUANTITY | 0.99+ |
8000 people | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
Seattle, Washington | LOCATION | 0.99+ |
Silicon Valley | LOCATION | 0.99+ |
KubeCon | EVENT | 0.99+ |
CNCF | ORGANIZATION | 0.99+ |
One | QUANTITY | 0.99+ |
Suzie Wee | PERSON | 0.98+ |
CloudNative | ORGANIZATION | 0.98+ |
OpenStack | ORGANIZATION | 0.98+ |
today | DATE | 0.98+ |
Cisco Systems | ORGANIZATION | 0.98+ |
first | QUANTITY | 0.98+ |
three days | QUANTITY | 0.98+ |
CloudNativeCon | EVENT | 0.98+ |
two schools | QUANTITY | 0.98+ |
one | QUANTITY | 0.97+ |
both platforms | QUANTITY | 0.97+ |
one thing | QUANTITY | 0.97+ |
Kubernetes | TITLE | 0.97+ |
CUBE | ORGANIZATION | 0.95+ |
billions of dollars | QUANTITY | 0.94+ |
Knative | ORGANIZATION | 0.94+ |
CloudNative | TITLE | 0.93+ |
two interviews | QUANTITY | 0.92+ |
Kubernetes | ORGANIZATION | 0.91+ |
Stu | PERSON | 0.91+ |
Lew | PERSON | 0.91+ |
Net App | ORGANIZATION | 0.91+ |
OpenStack | TITLE | 0.89+ |
CNCF KubeCon | EVENT | 0.87+ |
Day two | QUANTITY | 0.86+ |
OpenStack | EVENT | 0.86+ |
Jonsi Stefansson & Anthony Lye, NetApp | KubeCon 2018
>> Live from Seattle, Washington, it's theCUBE, covering KubeCon and Cloud Native Con North America 2018. Brought to you by RedHat, the Cloud Native Computing Foundation and its ecosystem partners. >> Okay welcome back everyone, we're here live in Seattle for KubeCon and Cloud Native Con. I'm John Furrier, your host, Stu Miniman from Wikibon here. Next guests: Anthony Lye, who's the senior vice president and general manager of Cloud Data Services at NetApp, and Jonsi Stefansson, CTO and VP of Cloud Services. Great to have you guys on, great to see you again Anthony. >> As always, thank you. >> So first I want to get it out there, we talked a lot in the Kube lounge, just to reset. The value props of NetApp have significantly been enhanced with the cloud. What is that value proposition? What have you guys seen as the explosive headroom for value creation that you guys are enabling with NetApp and the cloud? >> You know, what I think NetApp has done over now probably five years is really pushed itself to embrace the cloud. To recognize that the cloud is a very important part of everybody's IT infrastructure, whether it's an extension of the existing IT infrastructure for things like DR or backup, or whether it's the primary platform for legacy workloads or, as we're all here to do, to discuss the refactoring and rebuilding of applications around microservices. I think NetApp chose, unlike all of the traditional storage vendors, to see the cloud as an opportunity, and I think it's helped the company and it's helped our customers to operate in what is, I think, by default now the end state for many companies, which is hybrid cloud. >> You guys also made some good moves early on with the cloud. We've documented it certainly on SiliconANGLE and theCUBE early on. And then as flash comes in for performance, now you've got compute, storage and networking all being optimized in the cloud, which creates for app developers an environment where it's programmable infrastructure, finally. I mean dev ops is happening, this is where services and the notion of compute have gone from standing something up in seconds on the cloud to, with functions, milliseconds. This is changing the dynamic of applications, but you've still got to store the data. Talk about, Jonsi, the impact of the services piece to the developer, storage, services, provisioning, all that it covers. >> We are taking, I mean, all of our services are running in all the hyperscalers, in Google and Azure and AWS and more, and even on premise. Our view is our role is always to find the best home for any workload at any given time, whether it's in public cloud or on premise. However, storage has always been sort of left aside, it's always been living in this proprietary chunk that is hard to move, and the weight of the data is actually quite heavy. So we actually want to use Kubernetes and microservices and persistent volume claims, by taking that data and making it very easily migratable and replicated between locations, between hyperscalers, and sort of adopt a true multi cloud strategy. With data, it's not only moving those workloads or applications, but the data is key, data is key. >> Sometimes, you know, you want to move the data to the compute and sometimes you want to move compute to the data. >> And that's been validated by Amazon's RDS announcement on VMware; Amazon announced Outposts on premises, and the number one thing was latency, workloads were not yet moving. This speaks exactly to what you guys have been doing and implementing, today, this is like real product. 
>> I think the reality of the world is, you know, while there is a ton of innovation that exists in public cloud, there are well documented use cases that struggle with a cloud-only environment. I think NetApp has chosen to make each one of those three potential persistent stores equal to one another. So whether that's in a traditional on premise, upgrading on premise environments to get better price-performance characteristics, embracing the public cloud, or combining public and private cloud. >> While it's not trivial, NetApp, at its core, always was software, so moving it from a hardware appliance, I mean, back in the day Network Appliance was the original name of the company, to a software defined solution, to being multi-cloud, you can kind of see that genesis, where it can go. A lot of times the tougher part is from the customer standpoint. You know, the traditional person that bought and managed this was a storage administrator, and getting them to understand cloud native applications and dev ops and all those things, those are pretty challenging moves, so how much of it is education? How much of it is new buying centers inside the company or new clients? Help us walk through that. >> Yeah, I would make two points in maybe answering you. So I think NetApp's history, actually 25 years ago, NetApp started off as selling into the developers who were running SUN workstations, who wanted shared everything, and NetApp actually, you know, went around IT and put those appliances into the developers. We built a SAN business, a very successful SAN business, with the IT people. Now you're absolutely right, the people around here fall into the, sort of, the modern day dev ops characters. What Google calls the SREs, the Site Reliability Engineers. And they're a new breed, they're young, they're doing more and more CI/CD. Storage is an integral part of what they do, but maybe not a primary part. They expect storage to work. We are really lucky, you know, a little company called Microsoft and another little company called Google sell our stuff, so we get introduced into all of those cloud first, cloud only sort of use cases. Not just refactoring of the primary, but building. So we're actually, in many cases now, very relevant to those people, and we've been fortunate enough to leverage the big public clouds together. >> So you have a relationship with AWS, Google and Microsoft, Microsoft and Google, which you've just mentioned. You mentioned SRE, Site Reliability Engineer, this is a new persona that's clearly emerging and it has a focus around operations; now IT operations has been around for a long time, dev is changing too, but this is, if they sell your stuff, their customers need to operate at scale. This is a big point, can you elaborate on the importance of this and what you guys are doing specifically to help that. >> So the Site Reliability Engineer, he is not doing operations. He is actually in charge of running the workload or the development or the application or the product that comes from development. They have to abide by specific rules that are actually set by the SRE. And to your point, because you were talking about different selling motions and not selling into the storage admin or not selling to traditional IT. 
This is actually what has been really surprising and showcases the power of Kubernetes and how widely adopted it has been, both on premise and in the public cloud, because customers are actually coming to us and saying, "Hey, we had no idea NetApp was actually doing all of this in the public cloud. We had no idea that you had your own Kubernetes services that actually help solve one of the biggest problems, which is persistent volume claims and replication of data." So it's actually coming, and you sort of see how important CNCF is, because they're actually educating the market and educating the enterprise space just as well as the new up-and-coming development teams like I've traditionally come from. So I'm actually seeing that it's easier than I would have sort of thought in the beginning. So they're actually becoming more educated about microservices, more educated about how to run them; actually, almost everybody in any company that I go into now, they have the SRE playbook somewhere in their meeting room, and everybody is sort of getting educated on how they need to, sort of, elevate themselves from being traditional system administrators into that SRE or dev ops role. >> And it's also a cultural thing too, they have to develop not just the playbook, but have some experience in economies of scale, managing it, and certainly it's a tailwind for you guys, storage, because, again, there's also a lot of coding involved, they need a pool of resources, storage being one of them. But the other thing that's interesting, those are single clouds, Amazon, Google; multi cloud is really where the action is, right? So multi cloud to me is just, to me, a modern version of multi vendor, which basically is about choice. Choice is critical, but having choice around the app, it becomes the value creator. So if you guys can scale with the app development environments, that seems to be a sweet spot. How are you guys talking about that particular point, because this becomes an under-the-covers, a new kind of operations, a new kind of scale, pushing code, not just, you know, stacking and racking boxes but, like, really making things, patching security things, or getting ahead of security things, so doing things in a really, really automated way. >> Yeah, I mean, I think the one thing I'm most proud of in my time at NetApp, and what the team does and what the team continues to do, is we took a very, very, I think, deliberate perspective that we would deliver storage, but we would do it in a very unique way. My background was from SaaS, I spent my entire career building applications, and when you build an application, you run the application, there is nothing you give the customer and say, "Here, administer it." When you look at a lot of the infrastructure services, they make the customer do a lot of work. So what we did at NetApp was we decided that we ourselves would almost create like an always-available protocol, that people could just ask for it and it would be there. There was no concept of setting it up or patching it or upgrading it. And with that, I think we have set a bar now on the public clouds that, I think, even the public clouds themselves have not done, giving those developers the ability to ask for storage through an API, and all I need to do is ask for capacity and throughput. Nothing else, and that's something where a developer is like, "So now I don't even have to ask anybody with storage skills. I can tell my application to ask for its own storage." 
>> It's interesting, you're living in a new world where you need the scale of a system but the functionality of, like, an app server. I feel like we're living in the app server days, where that middle ground of app development was the key focus; you've got to have both now. You need scalable systems but really application performance. >> And then you add an additional layer, because now everybody wants to be able to use the same deployment script, the same configuration management system, Terraform, whatever they're actually using, to deploy it on premise or in a public cloud, but it needs to be done in a unified manner. This is why it's so important to be upstream compatible, and there's a lot of companies out there that are actually destroying that model and not following the true cloud concept. >> Yes, give them a slap on the wrist, get in line, fix it! >> If you are going to play in this space with the CNCF and with Kubernetes, you better play by the rules and do the open standards. And so you're actually compatible no matter where your workload resides. >> We've been monitoring how storage is maturing in this whole cloud native Kubernetes ecosystem here. A year ago there were a lot of backroom arguments over what were the right architectures, a few sub-projects working through here; it actually blew me away in the keynote this morning to hear that 40% of all applications that are deployed in Kubernetes are stateful. So where are we? What's working? What's good for customers? And what do we still need to work on to kind of solidify the storage data piece of this? >> I think it's interesting, 'cause I think we, sort of, ourselves now consider NetApp to be a data company. Storage is an enabler, but what's interesting, everyone talks about their SaaS strategy, their PaaS strategy, their IaaS strategy. I always ask people, "What's your data strategy?" And that's something I think the CNCF and Kubernetes themselves recognize, that they've done a lot of really great things for compute around the microservices themselves, but the storage piece has always been something of a challenge. And we set about solving that problem; we have an open source project called Trident, that essentially enables people to make persistent volume claims, and if the container dies, they can essentially start a new container and pick up the storage exactly where they left off. So we really believe that stateful is an ever-increasing percentage of the overall application model. Databases are important things, people need them. >> I would agree with that, and that's developing too, it's early on. All right, so I want to ask you guys a question, kind of outside the box. Multi cloud certainly is part of a hybrid, what they call a hybrid today, it's really a choice; multi cloud will be a future reality, no matter what anyone says, I believe that. How is multi cloud changing IT investments? Business investments, technical investments or both, what are your guys' thoughts on how multi cloud is driving and changing IT investments? >> Well, I actually think it offers you the opportunity to have, like, placement policy algorithms that fit your workload at any given time. For example, if this particular application is latency sensitive, and I created an application that all of a sudden became really popular in Mexico, then I should be able to see which one of the hyperscalers actually has a presence in Mexico City, deploy it there. 
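To picture what "pick up the storage exactly where they left off" means in practice, here is a minimal sketch of a stateful workload using a volume claim template; the database, storage class, and sizes are placeholders rather than anyone's actual deployment. If the pod dies, Kubernetes starts a replacement that reattaches the same persistent volume.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres            # hypothetical single-instance database
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:11
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:      # each replica gets its own PersistentVolumeClaim
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: standard-block   # placeholder; any dynamic provisioner, Trident included, works here
      resources:
        requests:
          storage: 20Gi
```

The claim outlives any individual container, which is the mechanism behind the "stateful is getting easier" argument in this conversation.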
If I'm under-utilizing my private cloud and I have a lot of space on it, and there are no specific requirements, it gives you that flexibility to, like I said, always find the best home for your workload at any given time. >> Dynamic, policy-based stuff? >> Yeah, precisely. And it allows you also, I mean, you can choose to do it whether it's based on workload requirements, or you can start doing it on the most cost-effective route, I mean, least-cost routing. So it actually impacts both from a technical and a business sense, in my opinion. >> I think, you know, you cannot help but get excited every day with what one cloud delivers over another cloud, and we're seeing something not unlike the arms race, you know, Google does this, then Amazon does this, then Microsoft does this. As developers we're very keen to take advantage of all these capabilities, and we want to, in many cases, let the application itself make the decision. >> So yeah, Amazon's got there, everyone's catching up. Competition's good. All right, final question. Predictions for multi cloud in 2019. What's going to happen? Is there going to be a loud bang? Is there going to be a crash? Is it going to be fruit on the trees? What's the state of the multi cloud predictions for 2019? >> Well, I actually believe it's going to become a standard. Nobody should be locked into any region or any one provider, I don't even care if it's on premise or NetApp specific, you should be able to... I mean, I think it's just going to become standard. Everybody has to have a multi cloud strategy, and you can see that, like the IDC report that 86% of Fortune 500 companies are adopting multi cloud. And I think I'm actually quite fed up with this hybrid cloud stuff, because, in my opinion, on premise is just the fourth or the fifth hyperscaler and should be treated as such. So if you actually have that true cloud concept, you should be able to deploy using the same script, the same APIs, to deploy it everywhere. >> As I said on theCUBE, the data center and on-prem, they're just an edge, a big edge. It's an operating model. >> My prediction? Your prediction. >> 2019 is the year of Istio. I think we've become enamored with Kubernetes, I think what Istio brings significantly advances Kubernetes, and we've barely scratched the surface, I think, with the service mesh and all of the enhancements and all the contributions that will go into that. I think, you know, that 2019 will probably see as many vendors here next year with Istio credentials and Istio capabilities as we see today with Kubernetes. >> Anthony, Jonsi, thanks for coming on, great insights, smart commentary, appreciate it. We should get in the studio and dig into this a little bit deeper. Really a great example of an incumbent, large company, NetApp, really getting a tailwind from the cloud, good smart bets you guys made, programmable infrastructure, dynamic policy routing, all kinds of under-the-covers goodness from smart cloud deployments. This is where software drives the data. >> Yep, data is the new oil, that's what they say, right? If you don't have a data set you're not very competitive. >> Thanks for coming on, I appreciate it. More Kube coverage here, getting all the breakdown here, the impact of cloud computing at scale, the role of data, software, all happening here at the CNCF. This is KubeCon, I'm John Furrier and Stu Miniman, thanks for watching. More live coverage after this short break.
Yaron Haviv, Iguazio | theCUBE NYC 2018
Live from New York, it's theCUBE! Covering theCUBE New York City 2018. Brought to you by SiliconANGLE Media and its ecosystem partners. >> Hey, welcome back, and we're live in theCUBE in New York City. It's our 2nd day of two days of coverage, CUBE NYC. The hashtag CUBENYC. Formerly Big Data NYC, renamed because it's about big data, it's about the server, it's about Kubernetes, multi-cloud, data. It's all about data, and that's the fundamental change in the industry. Our next guest is Yaron Haviv, who's the CTO of Iguazio, key alumni, always coming out with some good commentary, smart analysis. Kind of a guest host as well as an industry participant supplier. Welcome back to theCUBE. Good to see you. >> Thank you John. >> Love having you on theCUBE because you always bring some good insight and we appreciate that. Thank you so much. First, before we get into some of the comments, because I really want to delve into comments that David Richards said a few years ago, CEO of WANdisco. He said, "Cloud's going to kill Hadoop." And people were looking at him like, "Oh my God, who is this heretic? He's crazy. What is he talking about?" But you might not need Hadoop, if you can run serverless Spark, TensorFlow... You talk about this off camera. Is Hadoop going to be the OpenStack of the big data world? >> I don't think cloud necessarily killed Hadoop, although it is working on that, you know, because you go to Amazon and, you know, you can consume a bunch of services and you don't really need to think about Hadoop. I think cloud-native services are starting to kill Hadoop, 'cause Hadoop is three layers, you know: it's a file system, HDFS, and then you have server scheduling, YARN, then you have applications starting with MapReduce, and then you evolve into things like Spark. Okay, so, file system I don't really need in the cloud. I use S3, I can use a database as a service, as you know, a pretty efficient way of storing data. For scheduling, Kubernetes is a much more generic way of scheduling workloads, and not confined to Spark and specific workloads. I can run with TensorFlow, I can run with data science tools, etc., just containerized. So essentially, why would I need Hadoop? If I can take the traditional tools people are now evolving in and using, like Jupyter Notebooks, Spark, TensorFlow, you know, those packages, with Kubernetes on top of a database as a service and some object store, I have a much easier stack to work with. And I could mobilize that whether it's in the cloud, you know, on different vendors. >> Scale is important too. How do you scale it? >> Of course, you have independent scaling between data and computation, unlike Hadoop. So I can just go to Google and use BigQuery, or use, you know, DynamoDB on Amazon, or Redshift, or whatever, and automatically scale it down and then, you know >> That's a unique position, so essentially, Hadoop versus Kubernetes is a top-line story. And wouldn't that be ironic for Google, because Google essentially created MapReduce and Cloudera ran with it and went public, but we're talking about the 2008 timeframe, 2009 timeframe, back when ventures with cloud were just emerging in the mainstream. So wouldn't it be ironic if Kubernetes, which is being driven by Google, ends up taking over Hadoop? In terms of running things on Kubernetes and cloud-native, vis-à-vis on-premise with Hadoop. >> People tend to give this credit to Google, but essentially Yahoo started Hadoop.
Google started the technology, and a couple of years after Hadoop started, Google essentially moved to a different architecture, something called Percolator. So Google's not too associated with Hadoop. They haven't really been using this approach for a long time. >> Well, they wrote the MapReduce paper, and the internal conversations we report on theCUBE about Google was, they just let that go. And Yahoo grabbed it. (cross-conversation) >> The companies that had the most experience were the first to leave. And I think, in many respects, that's what you're saying. As the marketplace realizes the outcomes that Hadoop is associated with, they will find other ways of achieving those outcomes. It might be more adept. >> There's also a fundamental shift in the consumption, where Hadoop was about ranking pages in a batch form. You know, just collecting logs and ranking pages, okay. The challenges that people have today revolve around applying AI to business applications. It needs to be a lot more concurrent, transactional, real-time-ish, you know? It's nothing to do with Hadoop, okay? So that's why you'll see more and more workloads mobilizing into different serverless functions, into pre-canned services, etc. And Kubernetes is playing a good role here in providing the transport for migrating workloads across cloud providers, because I can use GKE, the Google Kubernetes Engine, or Amazon Kubernetes, or Azure Kubernetes, and I could write a similar application and deploy it on any cloud, or on-prem on my own private cluster. It makes the infrastructure agnostic, really application focused. >> Question about Kubernetes we heard on theCUBE earlier, the VP at BlueData said that the Kubernetes ecosystem and community needs to do a better job with stateful; they nailed stateless, but stateful application support is something that they need help on. Do you agree with that comment, and then if so, what alternatives do you have for customers who care about state? >> They should use our product (laughing) >> (mumbling) Is Kubernetes struggling there? And if so, talk about your product >> So, I think the challenge is around the fact that there are many solutions in that space. I think that they are attacking it from a different approach. Many of them are essentially providing some block storage to different containers, in a really cloud 1.0 way. What you want to be able to do is have multiple containers access the same data. That means either sharing through file systems, through objects, or through databases, because one container is generating, for example, ingestion or __________. Another container is manipulating that same data. A third container may look for something in the data, and generate a trigger or an action. So you need shared access to data from those containers. >> The rest of the data synchronizes all three of those things. >> Yes, because the data is the form of state. The form of state cannot be associated with the same container, which is why I am very active in CNCF, in those committees, and you have all the storage guys in the committees, and they think block storage is just the solution. 'Cause they still think like virtual machines, okay? But the general idea is that if you think about Kubernetes, it's like the new OS, where you have many processes, they're just scattered around. In an OS, the way for us to share state between processes is either through files or through databases, in those forms. And that's really what >> Threads and databases as a positive engagement.
>> So essentially I gave, maybe two years ago, a session at KubeCon in Europe about what we're doing on storing state. It's really high-performance access from those container processes to our database, in the form of objects, files, streams, or time series data, etc. And then essentially, all those workloads just mount on top of it and they can all share state. We can even control the access for each >> Do you think you nailed the state problem? >> Yes, by the way, we have a managed service. Anyone could go today to our cloud, to our website, that's in our cloud. It gets its own Kubernetes cluster, provisioned within less than 10 minutes, five to 10 minutes. With all of those services pre-integrated with Spark, Presto, ______________, real-time, these serverless functions. All that pre-configured on its own. I figured all of these- >> 100% compatible with Kubernetes, it's a good investment >> Well, we're just expanding it to the Kubernetes providers; now it's working on Amazon Kubernetes, EKS I think, and we're working on AKS and GKE. We partner with Azure and Google. And we're also building an edge solution that is essentially exactly the same stack. It can run on an edge appliance in a factory. You can essentially mobilize data and functions back and forth. So you can go and develop your workloads, your application, in the cloud, test it under simulation, push a single button and teleport the artifacts into the edge factory. >> So is it like a real-time Kubernetes? >> Yes, it's a real-time Kubernetes. >> If you _______like the things we're doing, it's all real-time. >> Talk about real-time in the database world, because you mentioned time-series databases. You've got object store versus block. Talk about time series. You're talking about data that is very relevant in the moment. And also understanding time series data. And then, it's important post-event, if you will, meaning how do you store it? Do you care? I mean, it's important to manage the time series. At the same time, it might not be as valuable as other data, or valuable at certain points in time, which changes its relationship to how it's stored and how it's used. Talk about the dynamics of time series. >> We figured out in the last six or 12 months that real-time is about time series. Everything you think about real-time sensor data, even video, is a time series of frames, okay. And what everyone ends up with is just huge amounts of time series. They want to cross-correlate it, because, for example, you think about stock tickers, you know, the stock has an impact from news feeds or Twitter feeds, or of a company or a segment. So essentially, what they need to do is something called multivariate analysis of multiple time series to be able to extract some meaning, and then decide if you want to sell or buy a stock, as an application example. And there is a huge gap in the solutions in that market, because most of the time series databases were designed as operational databases, you know, things that monitor apps. Nothing that ingests millions of data points per second, and cross-correlates and runs real-time AI analytics. So we've essentially extended, because we have a programmable database essentially under the hood. We've extended it to support time series data with about a 50 to 1 compression ratio, compared to some other solutions. You know, we've been working with a customer, we've done sizing; they told us they need half a petabyte.
After a small sizing exercise, it was about 10 to 20 terabytes of storage for the same data they stored in Cassandra at 500 terabytes. Now, huge ingestion rates, and, what's very important, we can do it in-flight with all those cross-correlations, so that's something that's working very well for us. >> This could help on smart mobility. Connectivity, 5G comes on, certainly. Intelligent edge. >> So the customers we have, the use cases that we're applied to right now are in financial services, two or three main applications. One is tick data and analytics: everyone wants to be smarter, learning how to buy and sell stocks or manage risk. The second one is infrastructure monitoring, critical infrastructure monitoring, SLA monitoring: being able to monitor network devices, latencies, applications, you know, transaction rates, and to be able to predict potential failures or escalations. We have similar applications; we have about three telco customers using it for real-time time-series analytics on metric data, cybersecurity attacks, congestion avoidance, SLA management. And also automotive: fleet management, vehicle linking; they are also essentially feeding huge data sets into time-series analytics. They're running cross-correlation and AI logic, so now they can generate triggers. Now, map that to Hadoop. What does Hadoop have to do with those kinds of applications? It cannot ingest huge amounts of data, it cannot react in real-time, it doesn't store time series efficiently. >> Hapoop (laughing) >> You said that. >> Yeah. That's good. >> One, I know we don't have a lot of time left. We're running out of time, but I want to make sure we get this out here. How are you engaging with customers? You guys have got great technical support. We can vouch for the tech chops that you guys have. We've seen the solution. If it's compatible with Kubernetes, certainly this is an alternative to have really great analytical infrastructure. Cloud native, the goodness you're building. You do POCs, they go to your website, and how do you engage, how do you get deals? How do people work with you? >> So because now we have a cloud service, we also engage through the cloud. Mainly, we're going after customers and leads, or from webinars and activities on the internet, and we sort of follow up with those customers, we know >> Direct sales? >> Direct sales, but through a lead generation mechanism. Marketplace activity, Amazon, Azure, >> Partnerships with Azure and Google now. And Azure joint selling activities. They can actually resell and get compensated. Our solution is an edge for Azure. Working on a similar solution for Google. Very focused on retailers. That's the current market focus, since, you think about stores, a single supermarket will have more than 1,000 cameras. Okay, just because they're monitoring shelves in real-time, think about an Amazon Go kind of replication. Real-time inventory management. You cannot push 1,000 camera feeds into the cloud in order to analyze them and then decide on inventory levels and proactive action, so those are the kinds of applications. >> So bigger deals, you've had some big deals. >> Yes, we're really not a Raspberry Pi kind of solution. That's where the bigger customers >> Got it. Yaron, thank you so much. The CTO of Iguazio, check him out. It's actually been great commentary. The Hadoop versus Kubernetes narrative. Love to explore that further with you. Stay with us for more coverage after this short break. We're live in day 2 of CUBE NYC. Part Strata, Hadoop Strata, Hadoop World.
CUBE Hadoop World, whatever you want to call it. It's all because of the data. We'll bring it to ya. Stay with us for more after this short break. (upbeat music)
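Two short sketches may help ground the argument in this segment. First, the "Hadoop-less" stack Yaron describes — object storage plus Kubernetes scheduling plus Spark — looks roughly like this from the application side. It is an illustration rather than Iguazio's stack: the API server address, container image, bucket names, and Spark version are made up, and reading s3a:// paths also assumes the Hadoop AWS connector and credentials are configured.

```python
# Schematic: Spark executors scheduled by Kubernetes, data read from an object
# store instead of HDFS. All endpoints, images, and paths are hypothetical.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .master("k8s://https://kube-apiserver.example.com:6443")
    .appName("hadoopless-etl")
    .config("spark.kubernetes.container.image", "registry.example.com/spark:3.4")
    .config("spark.executor.instances", "4")
    .config("spark.hadoop.fs.s3a.endpoint", "s3.example.com")
    .getOrCreate()
)

df = spark.read.parquet("s3a://events/2018/09/")          # object store, not HDFS
df.groupBy("page").count().write.parquet("s3a://reports/page_counts/")
```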
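Second, the multivariate time-series idea — cross-correlating, say, tick prices with a sentiment feed to see which one leads — can be shown with a toy pandas example on synthetic data. This is not Iguazio's API, just the shape of the analysis.

```python
# Toy cross-correlation of two synthetic time series: a sentiment score and a
# price that loosely follows it with a 5-minute lag. The lag with the highest
# correlation hints at how far one series leads the other (expected: ~5).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
idx = pd.date_range("2018-09-12 09:30", periods=390, freq="min")

sentiment = pd.Series(rng.normal(0, 1, len(idx)), index=idx).cumsum()
price = 100 + 0.2 * sentiment.shift(5).fillna(0) + rng.normal(0, 0.5, len(idx))

lags = range(-15, 16)
xcorr = {lag: price.corr(sentiment.shift(lag)) for lag in lags}
best_lag = max(xcorr, key=xcorr.get)
print(f"best lag: {best_lag} min, correlation: {xcorr[best_lag]:.2f}")
```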
Stephan Fabel, Canonical | OpenStack Summit 2018
(upbeat music) >> Announcer: Live from Vancouver, Canada. It's theCUBE, covering OpenStack Summit, North America, 2018. Brought to you by Red Hat, the OpenStack Foundation, and its ecosystem partners. >> Welcome back to theCUBE's coverage of OpenStack Summit 2018 in Vancouver. I'm Stu Miniman with cohost of the week, John Troyer. Happy to welcome back to the program Stephan Fabel, who is the Director of Ubuntu product and development at Canonical. Great to see you. >> Yeah, great to be here, thank you for having me. Alright, so, boy, there's so much going on at this show. We've been talking about doing more things and in more places, is the theme that the OpenStack Foundation put into place, and we had a great conversation with Mark Shuttleworth, and we're going to dig in a little bit deeper in some of the areas with you. >> Stephan: Okay, absolutely. >> So we have theCUBE, and we're going to get into all of the Kubernetes, Kubeflow, and all those other things that we'll mispronounce as we go. >> Stephan: Yes, yes, absolutely. >> What's your impression of the show first of all? >> Well, I think that it's really, you know, there's a consolidation going on, right? I mean, we really have the people who are serious about open infrastructure here, serious about OpenStack. They're serious about Kubernetes. They want to implement, and they want to implement at a speed that fits the agility of their business. They want to really move quickly with the upstream release. I think the time for enterprise hardening delays and inertia there is over. I think people are really looking at the core of OpenStack, that's mature, it's stable, it's time for us to kind of move, get going, get success early, get it soon, then grow. I think most of the enterprises, most of the customers we talk to, adopt that notion. >> One of the things that sometimes helps is help us lay out the stack a little bit here, because we actually commented that some of the base infrastructure pieces we're not talking as much about because they're kind of mature, but OpenStack is very much at the infrastructure level, your compute, storage, and network need to understand. But then when we start doing things like Kubernetes as well, I can either do or, or on top of, and things like that, so give us your view as to what'd you put, what Canonical's seeing, and what customers-- how you lay out that stack? >> I think you're right, I think there's a little bit of path-finding here that needs to be done on the Kubernetes side, but ultimately, I think it's going to really converge around OpenStack being operator-centric and operator-friendly, working and operating the infrastructure, scaling that out in a meaningful manner, providing multitenancy to all the different departments. Having Kubernetes be developer-centric really helps to on-board and accelerate the workload adoption of the next-gen initiatives, right? So, what we see is absolutely a use case for Kubernetes and OpenStack to work perfectly well together, be an extension of each other, possibly also sit next to each other without being too encumbering there. But I think that ultimately having something like Kubernetes' container-based developer APIs that are providing that orchestration layer is the next thing, and they run just perfectly fine on Canonical OpenStack. >> Yeah, there certainly has been a lot of talk about that here at the show. Let's see, let's go a level above that, things we run on Kubernetes, I wanted to talk a little bit about ML and AI and Kubeflow.
It seems like we're, I'd almost say that we're, this is like, if we were a movie, we're in a sequel like AI-5; this time, it's real. I really do see real enterprise applications incorporating these technologies into the workflow for what otherwise might be kind of boring, you know, line of business; can you talk a little bit about where we are in this evolution? >> You mean, John, only since we've been talking about it since the mid-1800s, so yeah. >> I was just about to point that out, I mean, AI's not new, right? We've seen it for about 60 years. It's been around for quite some time. I think that there is an unprecedented amount of sponsorship of new startups in this area, in this space, and there's a reason why this is heating up. I think the reason why ultimately it's there is because we're talking about a scale that's unprecedented, right? We thought the biggest problem we had with devices was going to be the IP addresses running out, and it turns out, that's not true at all, right? At a certain scale, and at a certain distributed nature of your rollout, you're going to have to deal with just such complexity and interaction between the underlying, the under-cloud, the over-cloud, the infrastructure, the developers. How do I roll this out? If I spin up 1,000 VMs over here, why am I experiencing dropped calls over there? It's those types of things that need to be cross-correlated. They need to be identified, they need to be worked out, so there's a whole operator angle just to be able to cope with that whole scenario. I think there's projects that are out there that are trying to ultimately address that, for example, Acumos (mumbles). Then, there is, of course, the new applications, right? Smart cities, connected cars, all those car manufacturers who are, right now, faced with the problem: how do I deal with mobile, distributed inference rollout on the edge while still capturing the data continually, train my model, update, then again, distribute out to the edge to get a better experience? How do I catch up to some of the market leaders here that are out there? As the established car manufacturers come and catch up, put more and more miles autonomously on the asphalt, we're going to basically have to deal with a whole lot more productization of machine-learning applications that just have to be managed at scale. And so we believe, and we're in good company in that belief, that for managing large applications at scale, containers and Kubernetes are a great way to do that, right? They did that for web apps. They did that for the next generation of applications. This is one example where, with the right operators in mind, the right CRDs, the right frameworks on top of Kubernetes managed correctly, you are actually in a great position to just go to market with that. >> I wonder if you might have a customer example that might go to walk us through kind of where they are in this discussion; we talk to many companies, you know, the whole IoT, even, pieces were early in this. So what's actually real today, how much is planning, is this years we're talking before some of these really come to fruition?
>> So yeah, I can't name a customer, but I can say that every single car manufacturer we're talking to is absolutely interested in solving the operational problem of running machine-learning frameworks as a service, making sure those are up and running and up to speed at any given point in time, spinning them up in a multitenant fashion, making sure that the GPU enablement is actually done properly at all layers of the virtualization. These are real operational challenges that they're facing today, and they're looking to solve them with us. Pick a large car manufacturer you want. >> John: Nice. We're getting down to something that I can type on my own keyboard then, and go to GitHub, right? That's one of the places to go, and one way to run the TensorFlow machine-learning framework on Kubernetes is Kubeflow; you talked about that a little bit yesterday on stage, you want to talk about that maybe? >> Oh, absolutely, yes. That's the core of our current strategy right now. We're looking at Kubeflow as one of the key enablers of machine-learning frameworks as a service on top of Kubernetes, and I think they're a great example because they can really show how that as-a-service can be implemented on top of a virtualization platform, whether that be KVM, pure KVM, on bare metal, on OpenStack, and actually provide machine-learning frameworks such as TensorFlow, PyTorch, Seldon Core. You have all those frameworks being supported, and then basically start mixing and matching. I think ultimately it's so interesting to us because the data scientists are really not the ones that are expected to manage all this, right? Yet they are the core of having to interact with it. In the next generation of the workloads, we're talking to PhDs and data scientists that have no interest whatsoever in understanding how all of this works on the back end, right? They just want to know, this is where I'm going to submit my artifact that I'm creating, this is how it works in general. Companies pay them a lot of money to do just that, and to just do the model, because that's where, until the right model is found, that is exactly where the value is. >> So Stephan, does Canonical go talk to the data scientists, or is there a class of operators who are facilitating the data scientists? >> Yes, we talk to the data scientists to understand their problems, we talk to the operators to understand their problems, and then we work with partners such as Google to try and find solutions to that. >> Great, what kind of conversations are you having here at the show? I can't imagine there's too many of those, great to hear if there are, but where are they? I think everybody here knows containers, very few know Kubernetes, and how far up the stack of building new stuff are they? >> You'd be surprised, I mean, we put this out there, and so far, I want to say the majority of the customer conversations we've had took an AI turn and said, this is what we're trying to do next year, this is what we're trying to do later in the year, this is what we're currently struggling with. So glad you have an approach, because otherwise, we would spend a ton of time thinking about this, a ton of time trying to solve this in our own way that then gets us stuck in some deep end that we don't want to be in. So, help us understand this, help us pave the way. >> John: Nice, nice. I don't want to leave without talking also about MicroK8s, that's a Kubernetes snap, a simple download. Can we talk a little bit about that? >> Yeah, glad to.
This was an idea that we conceived that came out of this notion of, alright, well if I do have, talking to a data scientist, if I do have a data scientist, where does he start? >> Stu: Does Kubernetes have a learning curve today? >> It does, yeah, it does. So here's the thing, as a developer, what options do you have right when you get started? You can either go out and get a cluster stood up on one of the public clouds, but what if you're on the plane, right? You don't have a connection, you want to work on your local laptop. Possibly, that laptop also has a GPU, and you're a data scientist and you want to try this out because you know you're going to submit this training job now to a (mumbles) that runs on-prem behind the firewall with a limited training set, right? This is the situation we're talking about. So ultimately, the motivation for creating MicroK8s was we want to make this very, very equivalent. Now you can deploy Kubeflow on top of MicroK8s today, and it'll run just fine. You get your TensorBoard, you have your Jupyter notebook, and you can do your work, and you can do it in a fashion that will then be compatible with your on-prem and public cloud machine-learning frameworks. So that was the original motivation for why we went down this road, but then we noticed, you know what, this is actually a wider need. People are thinking about local Kubernetes in many different ways. There are a couple of solutions out there. They tend to be cumbersome, or more cumbersome than developers would like. So we actually said, you know, maybe we should turn this into a more general purpose solution. So hence, MicroK8s. It installs as a snap on your machine, you kick that off, you have the Kubernetes API in under 30 seconds, or a little longer if your download speed plays a factor here; you enable DNS and you're good to go. >> Stephan, I just want to give you the opportunity, is there anything in the Queens release that your customers have been specifically waiting for, or any other product announcements before we wrap? >> Sure, we're very excited about the Queens release. We think the Queens release is one of the great examples of the maturity of the code base and really the nod towards the operator, and that, I think, was the big challenge back in the olden days of OpenStack, where it took a long time for the operators to be heard and to establish that conversation. We'd like to say, and to see, that OpenStack Queens has matured in that respect, and we like things like Octavia. We're very excited about (mumbles) as a service taking on its own life and being treated as a first-class citizen. I think that it was a great decision of the community to get on that road. We're supporting it as part of our distribution. >> Alright, well, appreciate the update. Really fascinating to hear about all, you know, everybody's thinking about it and really starting to move on all the ML and AI stuff. Alright, for John Troyer, I'm Stu Miniman. Lots more coverage here from OpenStack Summit 2018 in Vancouver. Thanks for watching theCUBE. (upbeat music)
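As a rough sketch of the "submit a training job and let the platform worry about the rest" workflow Stephan describes, here is what handing a TensorFlow job to a cluster running Kubeflow's TFJob operator can look like from Python. This is not Canonical's tooling; the API version, namespace, container image, and GPU request are assumptions that vary between Kubeflow releases.

```python
# Hedged sketch: create a TFJob custom resource on a cluster where the Kubeflow
# TFJob operator is already installed. Works the same against MicroK8s or a
# public cloud Kubernetes, since it is just the Kubernetes API underneath.
from kubernetes import client, config

config.load_kube_config()

tfjob = {
    "apiVersion": "kubeflow.org/v1",   # older Kubeflow releases used v1alpha2 / v1beta1
    "kind": "TFJob",
    "metadata": {"name": "mnist-train", "namespace": "kubeflow"},
    "spec": {
        "tfReplicaSpecs": {
            "Worker": {
                "replicas": 2,
                "template": {
                    "spec": {
                        "containers": [{
                            "name": "tensorflow",  # required container name for TFJob
                            "image": "registry.example.com/mnist:latest",  # hypothetical image
                            "args": ["python", "/opt/train.py", "--epochs", "5"],
                            "resources": {"limits": {"nvidia.com/gpu": 1}},
                        }]
                    }
                },
            }
        }
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="kubeflow.org", version="v1", namespace="kubeflow",
    plural="tfjobs", body=tfjob,
)
```

The data scientist only supplies the training image and arguments; where the GPUs live and how the pods are scheduled stays the operator's problem, which is the division of labor discussed in the interview.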
Mark Shuttleworth, Canonical | OpenStack Summit 2018
(soft electronic music) >> Announcer: Live from Vancouver, Canada, it's theCUBE. Covering OpenStack Summit North America 2018. Brought to you by Red Hat, the OpenStack Foundation, and its ecosystem partners. >> Welcome back, I'm Stu Miniman here with my cohost John Troyer, and you're watching theCUBE's exclusive coverage of OpenStack Summit 2018 in Vancouver. Happy to welcome you back to the program, off the keynote stage this morning, Mark Shuttleworth, the founder of Canonical. Thank you so much for joining us. >> Stu, thanks for the invitation. >> Alright, so you've been involved in this OpenStack stuff for quite a bit. >> Right, since the beginning. >> I remember three years ago we were down in the other hall talking about the maturity of the platform. I think three years ago, it was like this container thing was kind of new and the basic infrastructure stuff was starting to get, in a nice term, boring. Because that meant we could go about business and be on the buzz of there's this cool new thing and we're going to kill Amazon, kill VMware, whatever other misconceived notions people had. So bring us forward to where we are, 2018, what you're hearing from customers as you look at OpenStack and this community. >> Well, I think you pretty much called it. OpenStack very much now is about solving a real business problem, which is the automation of the data center and the cost parity of private data centers with public data centers. So I think we're at a time now where people understand the public cloud is a really good thing. It's great that you have these giant companies dueling it out to deliver better quality infrastructure at a better price. But then at the same time, having your own private infrastructure that runs cost-effectively is important. And OpenStack really is the only approach to that that exists today. And it's important to us that the conversation is increasingly about what we think really matters, which is the economics of owning it, the economics of running it, and how people can essentially keep that in line with what they get from the public cloud providers. >> Yeah, one of the barometers I use for vendors these days is, in this multi-cloud world, where do you sit? Do you play with the HyperScalers? Are you a public cloud denier? Or, like most people, you're somewhere in-between. In your keynote this morning, you were talking a bit about all of the HyperScalers that use your products as well as-- >> Ubuntu is at the heart of all of the major public cloud operations at multiple levels. So we see them as great drivers of innovation, great drivers of exposure of Ubuntu into the enterprise. We're still, by far, the number one platform used in public cloud by enterprises. It's hard to argue that public cloud is just test and dev now. It really, really isn't, and so most of that is still Ubuntu. And now we're seeing that pendulum swing, all of those best practices, that consumption of Ubuntu, that understanding of what a leaner, meaner Enterprise Linux looks like. Bringing that back to the data center is exciting. For us, it's an opportunity to help enterprises rethink the data center to make it fully automated from the ground up. OpenStack is part of that, Kubernetes is part of that, and now the cherry on top is really AI, where people understand they have to be able to do it on public cloud, on private infrastructure and at the Edge. >> Mark, I wanted to talk about open source. Marketing open source, for a minute.
We are obviously here, we're part of an open source community. Open source, de facto, has won the cloud technology stack wars. So there's one way of selling OpenStack where you pound on open a lot. >> I'm always a bit nervous about projects that put open in the name. It sounds like they're sort of trying to gloss over something or wash over something or prove a point. They shouldn't have to. >> There's the one about the philosophy of open source, which certainly has to stay there, right. Because that's what drove the innovation, but I was kind of impressed that on the stage today, you talked about the benefits. You didn't just say, well, it's open. You said, well, we're facilitating these benefits. Speed to market, cost, et cetera. Can you talk about your approach, Canonical's approach, to talking about this open source product in terms of its benefits? >> Sure, look, open source is a license. Under that license, there's room for a huge spectrum of interests and opinions and approaches. And I'd say that I certainly see an enormous amount of value in what I would call the passion-based open source story. Now, OpenStack is not that. It's too big, too complicated, to be one person's deep passion. It really isn't. But there's still a ton of innovation that happens in our world, across the full spectrum of what we see with open source, which is really experts trying to do something beautiful and elegant. And I still think that's really important in open source. You also have a new kind of dimension, which is almost like industrial trench warfare with open source. Which is huge organizations leveraging, effectively, their ability to get something widespread, widely adopted, quickly and efficiently by essentially publishing it as open source. And often, people get confused between these two ends of the spectrum. There's a bunch in between. What I like about OpenStack is that I think it's over the industrial trench warfare phase. You know, you just don't see a ton of people showing up here to throw parties and prove to everyone how cool they are. They've moved on to other open source projects. The people who are here are people who essentially have the real problem of, I want to automate my data center, I want to have, essentially, a cloud that runs cost-effectively in my data center that I can use as part of a multi-cloud strategy. And so now I think we're into that sort of more mature place with OpenStack. We're not either sort of artisan or craftsman oriented, nor are we a guns-blazing brand oriented. It's kind of now just solving the problems. >> Mark, there's still some nay-sayers out in the marketplace. Either they say that this never matured, there's a certain analyst firm that put out a report a couple of months ago that kind of denigrated what's happening here. And then there's others that, as you said, are off chasing that next big wave of open source. What are you hearing from your customers? You've got a good footprint around the globe. >> So that report is nonsense, for a start. They're always wrong, right. If they're hyping something, they're wrong, and if they're dissing something then they're usually wrong too. >> Stu: They have a cycle for that, I believe. (chuckling) >> Exactly. Selling gold at the barroom. Here's how I see it. I think that enterprises have a real problem, which is how do they create private cloud infrastructure. OpenStack had a real problem in that it had too many opinions, too many promises. Essentially a governing structure, not a leadership structure.
Our position on this has always been focus on the stuff that is really necessary. There was a ton of nonsense in OpenStack and that stuff is all failing. And so what? It was never essential to the mission. The mission is stand up a data center in an automated way, provide it, essentially, as resources, as a service to everybody who you think is authorized to be there, effectively. Segment and operate that efficiently. There's only a small part of OpenStack that was ever really focused on that. That's the stuff that's succeeding, that's the stuff we deliver. That's the stuff, we think very carefully about how to automate it so that, essentially, anybody can consume it at reasonable prices. Now, we have learned that it's better for us to do the operations almost. It's better for us actually to take it to people as a solution, say look, explain your requirements to us then let us architect that cloud with you then let us build that cloud then let us operate that cloud. Until it's all stable and the economics are good, then you can take over. I think what we have seen is that you ask every single different company to build OpenStack, they will make a bunch of mistakes and then they'll say OpenStack is the problem. OpenStack's not the problem. Because we do it again and again and again, because we do it in many different data centers, because we do it with many different industries, we're able to essentially put it on rails. When you consume OpenStack that way it's super cheap. These aren't my numbers, analysts have studied the costs of public infrastructure, the cost of the established, incumbent enterprise, virtualization solutions and so on. And they found that when you consume OpenStack from Canonical it is much, much cheaper than any of your other options in your own private data center. And I think that's a success that OpenStack should be proud of. >> Alright, you've always done a good job at poking at some of the discussions happening in the industry. I wouldn't say I was surprised but you were highlighting AI as something that was showing a lot of promise. People have been a little hot and cold depending on what part of the market you're at. Tell us about AI and I'd love to hear your thoughts in general. Kubernetes, Serverless, and ask you to talk about some of those new trends that are out there. >> Sure, the big problem with data science was always finding the right person to ask the right question. So you could get all the data in the world in a data lake but now you have to hire somebody who instinctively has to ask the right question that you can test out of that data. And that's a really hard problem. What machine learning does is kind of inverts the problem. It says, well, why don't we put all that data through a pattern matching system and then we'll end up with something that reflects the underlying patterns, even if we don't know what they are. Now, we can essentially say if you saw this, what would you expect? And that turns out to be a very powerful way to deal with huge amounts of data that, previously, you had to kind of have this magical intuition to kind of get to the bottom of. So I think machine learning is real, it's valuable in almost every industry, and the challenges now are really about standardizing underlying operations so that the people who focus on the business problems can, essentially, use them. So that's really what I wanted to show today is us working with, in that case it was Google, but you can generalize that. 
To standardize the experience for an institution who wants to hire developers, have them effectively build machine-driven models if they can then put those into production. There's a bunch of stuff I didn't show that's interesting. For example, you really want to take the learnings from machine-learning and you want to put those at the Edge. You want to react to what's happening as close to where it's happening as possible. So there's a bunch of stuff that we're working on with various companies. It's all about taking that AI outcome right to the Edge, to IOT, to Edge Cloud but we don't have time to get in to all of that today. >> Yeah, and Ubuntu is at the Edge, on the mobile platform. >> So we're in a great position that we're on the Cloud. Now you see what we're doing in the data center for enterprises, effectively recrafting the data center has a much leaner, more automated machine. Really driving down the cost of the data center. And yes, we're on the higher-end things. We're never going to be on the LightBulb. We're a full general-purpose operating system. But you can run Ubuntu on a $10 board now and that means that people are taking it everywhere. Amazon, for example, put Ubuntu on the DeepLens so that's a great example of AI at the edge. It's super exciting. >> So the Kubernetes, Serverless-type applications, what are your thinkings around there? >> Serverless is a lovely way to think about the flow of code in a distributed system. It's a really nice way to solve certain problems. What we haven't yet seen is we haven't seen a Serverless framework that you can port. We've seen great Serverless experiences being built inside the various public clouds but there's nothing consistent about them. Everything that you invest in a particular place is very useful there but you can't imagine taking that anywhere else. I think that's fine. >> Stu: Today's primarily Lando. >> And I think the other clouds have done a credible job of getting there quickly. But kudos to Amazon for kind of pioneering that. I do think we'll see generalized Serverless, it just doesn't exist at the moment and as soon as it does we'll be itching to get it into people's hands. >> Okay, yeah? >> Well, I just wanted to pull out something that you had said in case people miss it, you talked about managed OpenStack. And that, I think, managed Kubernetes has been a trend over the last year. Managed OpenStack now. Has been trans-- >> With these complex pieces of infrastructure, you could easily drown in learning it all and if you're only ever going to do one, maybe it makes sense to have somebody else do it for a while. You can always take it over later. So we're unusual in that we will essentially standup something complex like an OpenStack or a Kubernetes, operate it as long as people want and then train them to take over. So we're not exclusively managed and we're not exclusively arms-length. We're happy to start the one way and then hand over. >> I think that's an important development, though, that's been developing as the systems get more complicated. One UNIX admin needs a whole new skill set or broader skill set now that we're orchestrating a whole cloud so that's, I think that's great. And that's interesting. Anything else you're looking forward to, in terms of operation models. I guess we've said, Ubuntu everywhere from the edge to the center and now managed, as well. Anything else we're looking at in terms of operators should be looking at? 
>> Well, I think it just is going to stay sort of murky for a while simply because each different group inside a large institution has a boundary of their authority and to them, that's the edge. (chuckling) And so the term is heavily overloaded. But I would say, ultimately, there are a couple of underlying problems that have to be solved and if you look at the reference architectures that the various large institutions are putting out, they all show you how they're trying to attack these patterns using Ubuntu. One is physical provisioning. The one thing that's true with every Edge deployment is there are no humans there. So you can't kind of Band-Aid over the idea that when something breaks you need to completely be able to reset it from the ground up. So MAAS, Middle as a Service, shows up in the reference architectures from AT&T and from SoftBank and from Dorich Telecom and a bunch of others because it solves their problem. It's the smallest piece of software you can use to take one server or 10 servers or 100 servers and just reflash them with Windows or CentOS or whatever you need. That's one thing. The other thing that I think is consistently true in all these different H-Cloud permutations or combinations is that overhead's really toxic. If you need three nodes of overhead for a hundred node OpenStack, it's 3%. For a thousand node OpenStack, it's .3%. It's nothing, you won't notice it. If you need three nodes of OpenStack for a nine node Edge Cloud, well then that's 30% of your infrastructure costs. So really thinking through how to get the overhead down is kind of a key for us. And all the projects with telcos in particular that we're working, that's really what we bring is that underlying understanding and some of those really lightweight tools to solve those problems. On top of that, they're all different, right. Kubenetes here, Lixti there, OpenStack on the next one. AI everywhere. But those two problems, I think, are the consistent things we see as a pattern in the Edge. >> Alright, so Mark, last question I have for you. Company update. So last year we talked a little bit about focusing, where the company's going, talked a bit about the business model and you said to me, "Developers should never have to pay for anything." It's the governance people and everything like that. Give us the company update, everything from rumors from hey, maybe you're IPO-ing to what's happening, what can you share? >> Right, so the twin areas of focus, IOT and cloud infrastructure. IOT continues to be an area of R and D for us so we're still essentially underwriting an IOT investment. I'm very excited about that. I think it's the right thing to be doing at the moment. I think IOT is the next wave, effectively, and we're in a special position. We really can get down, both economically and operationally, into that sort of small itch kind of scenario. Cloud, for us, is a growth story. I talked a little bit about taking Ubuntu and Canonical into the finance sector. In one year, we closed deals with 20% of the top 20 banks in the world to build Ubuntu base and open infrastructure. That's a huge shift from the traditional dependence exclusively on VMware Red Hat. Now, suddenly, Ubuntu's in there, Canonical's in there. I think everybody understands that telcos really love Ubuntu and so that continues to grow for us. Commercially, we're expanding both in Emir and here in the Americas. I won't talk more about our corporate plans other than to say I see no reason for us to scramble to cover any other areas. 
I think cloud infrastructure and IOT is plenty for one company. For me, it's a privilege to combine that kind of business with what happens in the Ubuntu community. I'm still very passionate about the fact that we enable people to consume free software and innovate. And we do that without any friction. We don't have an enterprise version of Ubuntu. We don't need an enterprise version of Ubuntu, the whole thing's enterprise. Even if you're a one-person startup. >> Mark Shuttleworth, always a pleasure to catch up. Thank you so much for joining us. >> Mark: Thank you, Stu. >> For John Troyer, I'm Stu Miniman. Back with lots more coverage here from OpenStack Summit 2018 in Vancouver. Thanks for watching theCUBE. (soft electronic music)
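A quick sanity check on the overhead arithmetic Mark walks through above: a fixed three-node control plane measured against the nodes doing useful work. The nine-node case comes out a little above the "roughly 30%" quoted in the conversation.

```python
# Control-plane overhead as a share of the nodes doing useful work,
# using the three-node figure from the conversation.
control_plane_nodes = 3
for workload_nodes in (1000, 100, 9):
    share = control_plane_nodes / workload_nodes
    print(f"{workload_nodes:>4}-node cloud: overhead ~ {share:.1%}")
# 1000-node cloud: overhead ~ 0.3%
#  100-node cloud: overhead ~ 3.0%
#    9-node cloud: overhead ~ 33.3%
```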