

Jay Marshall, Neural Magic | AWS Startup Showcase S3E1


 

(upbeat music) >> Hello, everyone, and welcome to theCUBE's presentation of the "AWS Startup Showcase." This is season three, episode one. The focus of this episode is AI/ML: Top Startups Building Foundational Models, Infrastructure, and AI. It's great topics, super-relevant, and it's part of our ongoing coverage of startups in the AWS ecosystem. I'm your host, John Furrier, with theCUBE. Today, we're excited to be joined by Jay Marshall, VP of Business Development at Neural Magic. Jay, thanks for coming on theCUBE. >> Hey, John, thanks so much. Thanks for having us. >> We had a great CUBE conversation with you guys. This is very much about the company focuses. It's a feature presentation for the "Startup Showcase," and the machine learning at scale is the topic, but in general, it's more, (laughs) and we should call it "Machine Learning and AI: How to Get Started," because everybody is retooling their business. Companies that aren't retooling their business right now with AI first will be out of business, in my opinion. You're seeing massive shift. This is really truly the beginning of the next-gen machine learning AI trend. It's really seeing ChatGPT. Everyone sees that. That went mainstream. But this is just the beginning. This is scratching the surface of this next-generation AI with machine learning powering it, and with all the goodness of cloud, cloud scale, and how horizontally scalable it is. The resources are there. You got the Edge. Everything's perfect for AI 'cause data infrastructure's exploding in value. AI is just the applications. This is a super topic, so what do you guys see in this general area of opportunities right now in the headlines? And I'm sure you guys' phone must be ringing off the hook, metaphorically speaking, or emails and meetings and Zooms. What's going on over there at Neural Magic? >> No, absolutely, and you pretty much nailed most of it. I think that, you know, my background, we've seen for the last 20-plus years. Even just getting enterprise applications kind of built and delivered at scale, obviously, amazing things with AWS and the cloud to help accelerate that. And we just kind of figured out in the last five or so years how to do that productively and efficiently, kind of from an operations perspective. Got development and operations teams. We even came up with DevOps, right? But now, we kind of have this new kind of persona and new workload that developers have to talk to, and then it has to be deployed on those ITOps solutions. And so you pretty much nailed it. Folks are saying, "Well, how do I do this?" These big, generational models or foundational models, as we're calling them, they're great, but enterprises want to do that with their data, on their infrastructure, at scale, at the edge. So for us, yeah, we're helping enterprises accelerate that through optimizing models and then delivering them at scale in a more cost-effective fashion. >> Yeah, and I think one of the things, the benefits of OpenAI we saw, was not only is it open source, then you got also other models that are more proprietary, is that it shows the world that this is really happening, right? It's a whole nother level, and there's also new landscape kind of maps coming out. You got the generative AI, and you got the foundational models, large LLMs. Where do you guys fit into the landscape? Because you guys are in the middle of this. How do you talk to customers when they say, "I'm going down this road. I need help. I'm going to stand this up." 
This new AI infrastructure and applications, where do you guys fit in the landscape? >> Right, and really, the answer is both. I think today, when it comes to a lot of what for some folks would still be considered kind of cutting edge around computer vision and natural language processing, a lot of our optimization tools and our runtime are based around most of the common computer vision and natural language processing models. So your YOLOs, your BERTs, you know, your DistilBERTs and what have you, so we work to help optimize those, again, who've gotten great performance and great value for customers trying to get those into production. But when you get into the LLMs, and you mentioned some of the open source components there, our research teams have kind of been right in the trenches with those. So kind of the GPT open source equivalent being OPT, being able to actually take, you know, a multi-$100 billion parameter model and sparsify that or optimize that down, shaving away a ton of parameters, and being able to run it on smaller infrastructure. So I think the evolution here, you know, all this stuff came out in the last six months in terms of being turned loose into the wild, but we're staying in the trenches with folks so that we can help optimize those as well and not require, again, the heavy compute, the heavy cost, the heavy power consumption as those models evolve as well. So we're staying right in with everybody while they're being built, but trying to get folks into production today with things that help with business value today. >> Jay, I really appreciate you coming on theCUBE, and before we came on camera, you said you just were on a customer call. I know you got a lot of activity. What specific things are you helping enterprises solve? What kind of problems? Take us through the spectrum from the beginning, people jumping in the deep end of the pool, some people kind of coming in, starting out slow. What are the scale? Can you scope the kind of use cases and problems that are emerging that people are calling you for? >> Absolutely, so I think if I break it down to kind of, like, your startup, or I maybe call 'em AI native to kind of steal from cloud native years ago, that group, it's pretty much, you know, part and parcel for how that group already runs. So if you have a data science team and an ML engineering team, you're building models, you're training models, you're deploying models. You're seeing firsthand the expense of starting to try to do that at scale. So it's really just a pure operational efficiency play. They kind of speak natively to our tools, which we're doing in the open source. So it's really helping, again, with the optimization of the models they've built, and then, again, giving them an alternative to expensive proprietary hardware accelerators to have to run them. Now, on the enterprise side, it varies, right? You have some kind of AI native folks there that already have these teams, but you also have kind of, like, AI curious, right? Like, they want to do it, but they don't really know where to start, and so for there, we actually have an open source toolkit that can help you get into this optimization, and then again, that runtime, that inferencing runtime, purpose-built for CPUs. It allows you to not have to worry, again, about do I have a hardware accelerator available? How do I integrate that into my application stack? 
If I don't already know how to build this into my infrastructure, does my ITOps teams, do they know how to do this, and what does that runway look like? How do I cost for this? How do I plan for this? When it's just x86 compute, we've been doing that for a while, right? So it obviously still requires more, but at least it's a little bit more predictable. >> It's funny you mentioned AI native. You know, born in the cloud was a phrase that was out there. Now, you have startups that are born in AI companies. So I think you have this kind of cloud kind of vibe going on. You have lift and shift was a big discussion. Then you had cloud native, kind of in the cloud, kind of making it all work. Is there a existing set of things? People will throw on this hat, and then what's the difference between AI native and kind of providing it to existing stuff? 'Cause we're a lot of people take some of these tools and apply it to either existing stuff almost, and it's not really a lift and shift, but it's kind of like bolting on AI to something else, and then starting with AI first or native AI. >> Absolutely. It's a- >> How would you- >> It's a great question. I think that probably, where I'd probably pull back to kind of allow kind of retail-type scenarios where, you know, for five, seven, nine years or more even, a lot of these folks already have data science teams, you know? I mean, they've been doing this for quite some time. The difference is the introduction of these neural networks and deep learning, right? Those kinds of models are just a little bit of a paradigm shift. So, you know, I obviously was trying to be fun with the term AI native, but I think it's more folks that kind of came up in that neural network world, so it's a little bit more second nature, whereas I think for maybe some traditional data scientists starting to get into neural networks, you have the complexity there and the training overhead, and a lot of the aspects of getting a model finely tuned and hyperparameterization and all of these aspects of it. It just adds a layer of complexity that they're just not as used to dealing with. And so our goal is to help make that easy, and then of course, make it easier to run anywhere that you have just kind of standard infrastructure. >> Well, the other point I'd bring out, and I'd love to get your reaction to, is not only is that a neural network team, people who have been focused on that, but also, if you look at some of the DataOps lately, AIOps markets, a lot of data engineering, a lot of scale, folks who have been kind of, like, in that data tsunami cloud world are seeing, they kind of been in this, right? They're, like, been experiencing that. >> No doubt. I think it's funny the data lake concept, right? And you got data oceans now. Like, the metaphors just keep growing on us, but where it is valuable in terms of trying to shift the mindset, I've always kind of been a fan of some of the naming shift. I know with AWS, they always talk about purpose-built databases. And I always liked that because, you know, you don't have one database that can do everything. Even ones that say they can, like, you still have to do implementation detail differences. So sitting back and saying, "What is my use case, and then which database will I use it for?" I think it's kind of similar here. 
And when you're building those data teams, if you don't have folks that are doing data engineering, kind of that data harvesting, free processing, you got to do all that before a model's even going to care about it. So yeah, it's definitely a central piece of this as well, and again, whether or not you're going to be AI negative as you're making your way to kind of, you know, on that journey, you know, data's definitely a huge component of it. >> Yeah, you would have loved our Supercloud event we had. Talk about naming and, you know, around data meshes was talked about a lot. You're starting to see the control plane layers of data. I think that was the beginning of what I saw as that data infrastructure shift, to be horizontally scalable. So I have to ask you, with Neural Magic, when your customers and the people that are prospects for you guys, they're probably asking a lot of questions because I think the general thing that we see is, "How do I get started? Which GPU do I use?" I mean, there's a lot of things that are kind of, I won't say technical or targeted towards people who are living in that world, but, like, as the mainstream enterprises come in, they're going to need a playbook. What do you guys see, what do you guys offer your clients when they come in, and what do you recommend? >> Absolutely, and I think where we hook in specifically tends to be on the training side. So again, I've built a model. Now, I want to really optimize that model. And then on the runtime side when you want to deploy it, you know, we run that optimized model. And so that's where we're able to provide. We even have a labs offering in terms of being able to pair up our engineering teams with a customer's engineering teams, and we can actually help with most of that pipeline. So even if it is something where you have a dataset and you want some help in picking a model, you want some help training it, you want some help deploying that, we can actually help there as well. You know, there's also a great partner ecosystem out there, like a lot of folks even in the "Startup Showcase" here, that extend beyond into kind of your earlier comment around data engineering or downstream ITOps or the all-up MLOps umbrella. So we can absolutely engage with our labs, and then, of course, you know, again, partners, which are always kind of key to this. So you are spot on. I think what's happened with the kind of this, they talk about a hockey stick. This is almost like a flat wall now with the rate of innovation right now in this space. And so we do have a lot of folks wanting to go straight from curious to native. And so that's definitely where the partner ecosystem comes in so hard 'cause there just isn't anybody or any teams out there that, I literally do from, "Here's my blank database, and I want an API that does all the stuff," right? Like, that's a big chunk, but we can definitely help with the model to delivery piece. >> Well, you guys are obviously a featured company in this space. Talk about the expertise. A lot of companies are like, I won't say faking it till they make it. You can't really fake security. You can't really fake AI, right? So there's going to be a learning curve. They'll be a few startups who'll come out of the gate early. You guys are one of 'em. Talk about what you guys have as expertise as a company, why you're successful, and what problems do you solve for customers? >> No, appreciate that. Yeah, we actually, we love to tell the story of our founder, Nir Shavit. 
So he's a 20-year professor at MIT. Actually, he was doing a lot of work on kind of multicore processing before there were even physical multicores, and actually even did a stint in computational neurobiology in the 2010s, and the impetus for this whole technology, has a great talk on YouTube about it, where he talks about the fact that his work there, he kind of realized that the way neural networks encode and how they're executed by kind of ramming data layer by layer through these kind of HPC-style platforms, actually was not analogous to how the human brain actually works. So we're on one side, we're building neural networks, and we're trying to emulate neurons. We're not really executing them that way. So our team, which one of the co-founders, also an ex-MIT, that was kind of the birth of why can't we leverage this super-performance CPU platform, which has those really fat, fast caches attached to each core, and actually start to find a way to break that model down in a way that I can execute things in parallel, not having to do them sequentially? So it is a lot of amazing, like, talks and stuff that show kind of the magic, if you will, a part of the pun of Neural Magic, but that's kind of the foundational layer of all the engineering that we do here. And in terms of how we're able to bring it to reality for customers, I'll give one customer quote where it's a large retailer, and it's a people-counting application. So a very common application. And that customer's actually been able to show literally double the amount of cameras being run with the same amount of compute. So for a one-to-one perspective, two-to-one, business leaders usually like that math, right? So we're able to show pure cost savings, but even performance-wise, you know, we have some of the common models like your ResNets and your YOLOs, where we can actually even perform better than hardware-accelerated solutions. So we're trying to do, I need to just dumb it down to better, faster, cheaper, but from a commodity perspective, that's where we're accelerating. >> That's not a bad business model. Make things easier to use, faster, and reduce the steps it takes to do stuff. So, you know, that's always going to be a good market. Now, you guys have DeepSparse, which we've talked about on our CUBE conversation prior to this interview, delivers ML models through the software so the hardware allows for a decoupling, right? >> Yep. >> Which is going to drive probably a cost advantage. Also, it's also probably from a deployment standpoint it must be easier. Can you share the benefits? Is it a cost side? Is it more of a deployment? What are the benefits of the DeepSparse when you guys decouple the software from the hardware on the ML models? >> No you actually, you hit 'em both 'cause that really is primarily the value. Because ultimately, again, we're so early. And I came from this world in a prior life where I'm doing Java development, WebSphere, WebLogic, Tomcat open source, right? When we were trying to do innovation, we had innovation buckets, 'cause everybody wanted to be on the web and have their app and a browser, right? We got all the money we needed to build something and show, hey, look at the thing on the web, right? But when you had to get in production, that was the challenge. So to what you're speaking to here, in this situation, we're able to show we're just a Python package. 
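As an aside for readers who want to see what "just a Python package" looks like in practice, here is a minimal sketch of pulling one of the sparsified computer-vision models Jay mentions (a ResNet, say) and running it on a plain CPU. It assumes the open-source deepsparse package and its Pipeline interface; the SparseZoo stub and the image path are placeholders to verify against Neural Magic's documentation, not values taken from the interview.

```python
# Minimal, illustrative sketch -- assumes `pip install deepsparse` and the
# Pipeline interface; the SparseZoo stub below is a placeholder to swap for
# a real stub from sparsezoo.neuralmagic.com.
from deepsparse import Pipeline

# Hypothetical stub for a pruned + quantized ResNet-50 classifier.
MODEL_STUB = "zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned95_quant-none"

# Compile the sparse model for the local CPU -- no GPU or accelerator needed.
classifier = Pipeline.create(task="image_classification", model_path=MODEL_STUB)

# Classify a single camera frame (path is a placeholder).
prediction = classifier(images=["frame_0001.jpg"])
print(prediction)
```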
So whether you just install it on the operating system itself, or we also have a containerized version you can drop on any container orchestration platform, so ECS or EKS on AWS. And so you get all the auto-scaling features. So when you think about that kind of a world where you have everything from real-time inferencing to kind of after hours batch processing inferencing, the fact that you can auto scale that hardware up and down and it's CPU based, so you're paying by the minute instead of maybe paying by the hour at a lower cost shelf, it does everything from pure cost to, again, I can have my standard IT team say, "Hey, here's the Kubernetes in the container," and it just runs on the infrastructure we're already managing. So yeah, operational, cost and again, and many times even performance. (audio warbles) CPUs if I want to. >> Yeah, so that's easier on the deployment too. And you don't have this kind of, you know, blank check kind of situation where you don't know what's on the backend on the cost side. >> Exactly. >> And you control the actual hardware and you can manage that supply chain. >> And keep in mind, exactly. Because the other thing that sometimes gets lost in the conversation, depending on where a customer is, some of these workloads, like, you know, you and I remember a world where even like the roundtrip to the cloud and back was a problem for folks, right? We're used to extremely low latency. And some of these workloads absolutely also adhere to that. But there's some workloads where the latency isn't as important. And we actually even provide the tuning. Now, if we're giving you five milliseconds of latency and you don't need that, you can tune that back. So less CPU, lower cost. Now, throughput and other things come into play. But that's the kind of configurability and flexibility we give for operations. >> All right, so why should I call you if I'm a customer or prospect Neural Magic, what problem do I have or when do I know I need you guys? When do I call you in and what does my environment look like? When do I know? What are some of the signals that would tell me that I need Neural Magic? >> No, absolutely. So I think in general, any neural network, you know, the process I mentioned before called sparcification, it's, you know, an optimization process that we specialize in. Any neural network, you know, can be sparcified. So I think if it's a deep-learning neural network type model. If you're trying to get AI into production, you have cost concerns even performance-wise. I certainly hate to be too generic and say, "Hey, we'll talk to everybody." But really in this world right now, if it's a neural network, it's something where you're trying to get into production, you know, we are definitely offering, you know, kind of an at-scale performant deployable solution for deep learning models. >> So neural network you would define as what? Just devices that are connected that need to know about each other? What's the state-of-the-art current definition of neural network for customers that may think they have a neural network or might not know they have a neural network architecture? What is that definition for neural network? >> That's a great question. So basically, machine learning models that fall under this kind of category, you hear about transformers a lot, or I mentioned about YOLO, the YOLO family of computer vision models, or natural language processing models like BERT. 
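Circling back to the latency point Jay makes above (if you're getting five milliseconds and don't need it, you can trade some of it back for lower cost), here is a rough, generic sketch of how a team might measure that trade-off. The `predict` function is a dummy stand-in for any model callable, such as a DeepSparse pipeline; nothing in it is specific to Neural Magic's APIs.

```python
# Generic latency-measurement sketch; swap `predict` for a real inference
# callable (e.g. a DeepSparse pipeline) to see whether you have headroom to
# run on fewer cores or smaller instances.
import statistics
import time

def predict(payload):
    # Dummy stand-in for a real model call; pretend inference takes ~3 ms.
    time.sleep(0.003)
    return {"label": "person", "score": 0.97}

def measure_latency(fn, payload, runs=200):
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(payload)
        samples.append((time.perf_counter() - start) * 1000.0)  # milliseconds
    return statistics.median(samples), max(samples)

median_ms, worst_ms = measure_latency(predict, {"image": "frame_0001.jpg"})
print(f"median latency: {median_ms:.2f} ms, worst: {worst_ms:.2f} ms")
# If the median sits well under your SLA, shedding CPU (and cost) may be safe.
```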
If you have a data science team or even developers, some even regular, I used to call myself a nine to five developer 'cause I worked in the enterprise, right? So like, hey, we found a new open source framework, you know, I used to use Spring back in the day and I had to go figure it out. There's developers that are pulling these models down and they're figuring out how to get 'em into production, okay? So I think all of those kinds of situations, you know, if it's a machine learning model of the deep learning variety that's, you know, really specifically where we shine. >> Okay, so let me pretend I'm a customer for a minute. I have all these videos, like all these transcripts, I have all these people that we've interviewed, CUBE alumnis, and I say to my team, "Let's AI-ify, sparcify theCUBE." >> Yep. >> What do I do? I mean, do I just like, my developers got to get involved and they're going to be like, "Well, how do I upload it to the cloud? Do I use a GPU?" So there's a thought process. And I think a lot of companies are going through that example of let's get on this AI, how can it help our business? >> Absolutely. >> What does that progression look like? Take me through that example. I mean, I made up theCUBE example up, but we do have a lot of data. We have large data models and we have people and connect to the internet and so we kind of seem like there's a neural network. I think every company might have a neural network in place. >> Well, and I was going to say, I think in general, you all probably do represent even the standard enterprise more than most. 'Cause even the enterprise is going to have a ton of video content, a ton of text content. So I think it's a great example. So I think that that kind of sea or I'll even go ahead and use that term data lake again, of data that you have, you're probably going to want to be setting up kind of machine learning pipelines that are going to be doing all of the pre-processing from kind of the raw data to kind of prepare it into the format that say a YOLO would actually use or let's say BERT for natural language processing. So you have all these transcripts, right? So we would do a pre-processing path where we would create that into the file format that BERT, the machine learning model would know how to train off of. So that's kind of all the pre-processing steps. And then for training itself, we actually enable what's called sparse transfer learning. So that's transfer learning is a very popular method of doing training with existing models. So we would be able to retrain that BERT model with your transcript data that we have now done the pre-processing with to get it into the proper format. And now we have a BERT natural language processing model that's been trained on your data. And now we can deploy that onto DeepSparse runtime so that now you can ask that model whatever questions, or I should say pass, you're not going to ask it those kinds of questions ChatGPT, although we can do that too. But you're going to pass text through the BERT model and it's going to give you answers back. It could be things like sentiment analysis or text classification. You just call the model, and now when you pass text through it, you get the answers better, faster or cheaper. I'll use that reference again. >> Okay, we can create a CUBE bot to give us questions on the fly from the the AI bot, you know, from our previous guests. >> Well, and I will tell you using that as an example. 
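To make the "pass text through the BERT model" step above concrete, here is a hedged sketch of running text classification over a few transcript snippets with the DeepSparse pipeline interface. The SparseZoo stub is a placeholder; in the workflow Jay describes, the sparse-transfer-learning step would produce your own fine-tuned model directory to point at instead, and the task name assumes DeepSparse mirrors the familiar Hugging Face-style naming.

```python
# Hedged sketch: text classification on transcript snippets with DeepSparse.
# The stub below is a placeholder; in practice, point `model_path` at your
# own sparse-transfer-trained model.
from deepsparse import Pipeline

MODEL_STUB = "zoo:nlp/sentiment_analysis/obert-base/pytorch/huggingface/sst2/pruned90_quant-none"  # placeholder

classifier = Pipeline.create(task="text_classification", model_path=MODEL_STUB)

transcript_snippets = [
    "This went mainstream, but it is just the beginning.",
    "Companies that aren't retooling with AI will be out of business.",
]

results = classifier(transcript_snippets)
print(results)  # predicted labels and scores for each snippet
```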
So I had mentioned OPT before, kind of the open source version of ChatGPT. So, you know, typically that requires multiple GPUs to run. So our research team, I may have mentioned earlier, we've been able to sparcify that over 50% already and run it on only a single GPU. And so in that situation, you could train OPT with that corpus of data and do exactly what you say. Actually we could use Alexa, we could use Alexa to actually respond back with voice. How about that? We'll do an API call and we'll actually have an interactive Alexa-enabled bot. >> Okay, we're going to be a customer, let's put it on the list. But this is a great example of what you guys call software delivered AI, a topic we chatted about on theCUBE conversation. This really means this is a developer opportunity. This really is the convergence of the data growth, the restructuring, how data is going to be horizontally scalable, meets developers. So this is an AI developer model going on right now, which is kind of unique. >> It is, John, I will tell you what's interesting. And again, folks don't always think of it this way, you know, the AI magical goodness is now getting pushed in the middle where the developers and IT are operating. And so it again, that paradigm, although for some folks seem obvious, again, if you've been around for 20 years, that whole all that plumbing is a thing, right? And so what we basically help with is when you deploy the DeepSparse runtime, we have a very rich API footprint. And so the developers can call the API, ITOps can run it, or to your point, it's developer friendly enough that you could actually deploy our off-the-shelf models. We have something called the SparseZoo where we actually publish pre-optimized or pre-sparcified models. And so developers could literally grab those right off the shelf with the training they've already had and just put 'em right into their applications and deploy them as containers. So yeah, we enable that for sure as well. >> It's interesting, DevOps was infrastructure as code and we had a last season, a series on data as code, which we kind of coined. This is data as code. This is a whole nother level of opportunity where developers just want to have programmable data and apps with AI. This is a whole new- >> Absolutely. >> Well, absolutely great, great stuff. Our news team at SiliconANGLE and theCUBE said you guys had a little bit of a launch announcement you wanted to make here on the "AWS Startup Showcase." So Jay, you have something that you want to launch here? >> Yes, and thank you John for teeing me up. So I'm going to try to put this in like, you know, the vein of like an AWS, like main stage keynote launch, okay? So we're going to try this out. So, you know, a lot of our product has obviously been built on top of x86. I've been sharing that the past 15 minutes or so. And with that, you know, we're seeing a lot of acceleration for folks wanting to run on commodity infrastructure. But we've had customers and prospects and partners tell us that, you know, ARM and all of its kind of variance are very compelling, both cost performance-wise and also obviously with Edge. And wanted to know if there was anything we could do from a runtime perspective with ARM. And so we got the work and, you know, it's a hard problem to solve 'cause the instructions set for ARM is very different than the instruction set for x86, and our deep tensor column technology has to be able to work with that lower level instruction spec. 
But the engineering team's been working really hard at it, and we are happy to announce here at the "AWS Startup Showcase" that the DeepSparse inference runtime now has support for AWS Graviton instances. So it's no longer just x86, it is also ARM, and that obviously also opens up the door to Edge and further out the stack, so that optimize-once, run-anywhere story we're now going to open up. It is an early access, so if you go to neuralmagic.com/graviton, you can sign up for early access, but we're excited to now get into the ARM side of the fence as well on top of Graviton. >> That's awesome. Our news team is going to jump on that news. We'll get it right up. We get a little scoop here on the "Startup Showcase." Jay Marshall, great job. That really highlights the flexibility that you guys have when you decouple the software from the hardware. And again, we're seeing open source driving a lot more in AIOps now with machine learning and AI. So to me, that makes a lot of sense. And congratulations on that announcement. Final minute or so we have left, give a summary of what you guys are all about. Put a plug in for the company, what you guys are looking to do. I'm sure you're probably hiring like crazy. Take the last few minutes to give a plug for the company and give a summary. >> No, I appreciate that so much. So yeah, join us at neuralmagic.com. Part of what we didn't spend a lot of time on here is our optimization tools; we are doing all of that in the open source. It's called SparseML, and I mentioned SparseZoo briefly. So we really want the data science community and ML engineering community to join us out there. And again, the DeepSparse runtime is actually free to use for trial purposes and for personal use. So you can actually run all this on your own laptop or on an AWS instance of your choice. We are now live in the AWS Marketplace. So push button, deploy, come try us out, and reach out to us on neuralmagic.com. And again, sign up for the Graviton early access. >> All right, Jay Marshall, Vice President of Business Development at Neural Magic here, talking about performant, cost-effective machine learning at scale. This is season three, episode one, focusing on foundational models as far as building data infrastructure and AI, AI native. I'm John Furrier with theCUBE. Thanks for watching. (bright upbeat music)
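For readers who want to connect the dots between the runtime and the ITOps side of the conversation, here is a rough sketch of the kind of thin HTTP wrapper a team might containerize and hand to their Kubernetes or ECS/EKS pipeline, on x86 or on the Graviton (ARM) instances announced above. The model-loading function is a placeholder, and Flask is used only to keep the example self-contained; it is not the specific API surface Neural Magic ships.

```python
# Thin HTTP wrapper around an inference callable, suitable for packaging into
# a container image. The loader below is a placeholder for a real model,
# e.g. a sparsified pipeline pulled from the SparseZoo.
from flask import Flask, jsonify, request

app = Flask(__name__)

def load_model():
    # Placeholder: return a real inference callable here.
    def predict(text):
        return {"label": "positive", "score": 0.91}  # dummy result
    return predict

model = load_model()

@app.route("/predict", methods=["POST"])
def predict_route():
    payload = request.get_json(force=True)
    return jsonify(model(payload.get("text", "")))

if __name__ == "__main__":
    # In a container you would typically run this behind gunicorn/uvicorn.
    app.run(host="0.0.0.0", port=8080)
```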

Published Date : Mar 9 2023


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Jay | PERSON | 0.99+
Jay Marshall | PERSON | 0.99+
John Furrier | PERSON | 0.99+
John | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
five | QUANTITY | 0.99+
Nir Shavit | PERSON | 0.99+
20-year | QUANTITY | 0.99+
Alexa | TITLE | 0.99+
2010s | DATE | 0.99+
seven | QUANTITY | 0.99+
Python | TITLE | 0.99+
MIT | ORGANIZATION | 0.99+
each core | QUANTITY | 0.99+
Neural Magic | ORGANIZATION | 0.99+
Java | TITLE | 0.99+
YouTube | ORGANIZATION | 0.99+
Today | DATE | 0.99+
nine years | QUANTITY | 0.98+
both | QUANTITY | 0.98+
BERT | TITLE | 0.98+
theCUBE | ORGANIZATION | 0.98+
ChatGPT | TITLE | 0.98+
20 years | QUANTITY | 0.98+
over 50% | QUANTITY | 0.97+
second nature | QUANTITY | 0.96+
today | DATE | 0.96+
ARM | ORGANIZATION | 0.96+
one | QUANTITY | 0.95+
DeepSparse | TITLE | 0.94+
neuralmagic.com/graviton | OTHER | 0.94+
SiliconANGLE | ORGANIZATION | 0.94+
WebSphere | TITLE | 0.94+
nine | QUANTITY | 0.94+
first | QUANTITY | 0.93+
Startup Showcase | EVENT | 0.93+
five milliseconds | QUANTITY | 0.92+
AWS Startup Showcase | EVENT | 0.91+
two | QUANTITY | 0.9+
YOLO | ORGANIZATION | 0.89+
CUBE | ORGANIZATION | 0.88+
OPT | TITLE | 0.88+
last six months | DATE | 0.88+
season three | QUANTITY | 0.86+
double | QUANTITY | 0.86+
one customer | QUANTITY | 0.86+
Supercloud | EVENT | 0.86+
one side | QUANTITY | 0.85+
Vice | PERSON | 0.85+
x86 | OTHER | 0.83+
AI/ML: Top Startups Building Foundational Models | TITLE | 0.82+
ECS | TITLE | 0.81+
$100 billion | QUANTITY | 0.81+
DevOps | TITLE | 0.81+
WebLogic | TITLE | 0.8+
EKS | TITLE | 0.8+
a minute | QUANTITY | 0.8+
neuralmagic.com | OTHER | 0.79+

Ravi Mayuram, Couchbase | Couchbase Application Modernization


 

>> Modernizing applications can be a complicated situation. For many folks, it's useful to have some best practices and tangible steps that can remove friction and yield some quick wins. We're now joined by Couchbase CTO Ravi Mayuram, who will cover how organizations can approach application modernization, what role the cloud plays, and what you need to know about building a business case. Ravi, welcome back to theCUBE. Good to see you again. >> Very good to see you. Thanks for having me, Dave. >> Yes, our pleasure. According to a recent Couchbase digital transformation survey that you guys ran, with about 650 respondents, CIOs, CTOs, et cetera, the inertia of legacy technology held back 82% of enterprises from modernizing their portfolios in 2021, according to the respondents. So I want to talk about the what and the why of modernization. Ravi, what does application modernization mean to you, and why is it top of mind for organizations? >> Yeah, I think there have been multiple forces at work here for a while, and they have all come to a tipping point with the pandemic. It's a combination of factors. The legacy technologies were built for a different generation of applications, so it's a generational shift that we are undergoing. Part of it is the consumption model, which is all cloud-based, pay-as-you-go kind of stuff. The other is that edge is in the middle of a lot of these conversations, together with the velocity and variety of data that you have to consume and the results that you need to produce. These were all not what the infrastructure of old, on which the applications were built, was designed for. So the infrastructure, the substrate, requires modernization in order for the businesses to transform themselves. That's what's going on. >> We call it digital transformation from a technology perspective, but it's businesses that are transforming their business models in front of our eyes. You know, we have seen the media go from set-top boxes to streaming everywhere, and it's like that for every business: e-commerce has changed the way we do any business, gaming has changed, the banking industry, healthcare, everything is changing. The fundamental movement, if you could put it that way, is to reach the consumer directly and disintermediate the intermediaries. And in that process, the technologies that we had used to build the previous generation of applications no longer scale, are no longer nimble enough, and no longer cater to the needs of modern data and the infrastructure on which we are standing up these applications. So that's what's driving the modernization effort. And in that, you know, we started saying a few years ago that data is the new oil, so data plays a very critical role here: the data silos and infrastructure that enterprises have are what's holding them back. This whole effort is about modernizing that infrastructure through the modern means of cloud computing and modern serverless architectures and microservices, and the edge and AI play an important role in this. >> So we're going to hear later from Amdocs about their modernization and where Couchbase helps and fits, but I'd love to hear your perspective as to how Couchbase helps organizations modernize. >> Right.
I think one of the fundamental things that has happened is that in the last 30 to 40-odd years, the data infrastructure has become a sprawl. We had built multiple systems, relational databases, caches, search systems, analytical systems, all requiring us to move the data from one system to the other in order to get the value from them. This is basically what we call data sprawl, or database sprawl. And this leads to so many downstream effects, all the way from data not being available at the time when the customer is engaged, to data governance and security issues, because the threat surface area is wide. And now you're putting all this infrastructure on the modern cloud computing paradigm and the costs are ballooning, because when you deploy those older infrastructures on the cloud, it adds to the complexity of this sprawl on top of the cost. So a system like Couchbase is what simplifies this sprawl for our customers. It is built for the modern requirements of scale, performance, low latency, and flexibility, so you don't have to go through that whole cycle whenever you have a change in your application that touches your data; with the older systems, that takes a huge toll in upgrades and lifecycle, with teams having to carry pagers. That doesn't work anymore in these days of five-nines uptime and 24/7/365 availability of your services. So that is the area where Couchbase helps our customers modernize their data infrastructure. It fuses the multiple technologies that were spread across into one platform. So it gives a simpler programming paradigm and one way to scale, manage, administer, patch, and upgrade. All that mechanism is not just thought through and automated, it is also centralized, and at the end of the day that simplifies the total task of managing, because the volume of data that you have to manage now is three to four orders of magnitude more than what it was just a few years ago. So containing the sprawl, agility of development, and simplicity of deployment and management are some of the key capabilities that enterprises look to us to solve for. And in that, bringing it all the way from cloud to multi-cloud to edge is how this strategy evolves for enterprises. >> So square this circle for me, because in the panel we just had, there's a lot of agreement with what you just said: lift and shift of legacy platforms doesn't work. It might work for the cloud vendor to get the data in the cloud, but it generally doesn't work for the customer. And you mentioned sprawl; we talked about this in the panel, about how data by its very nature is distributed. We talked about data mesh. There's a lot of skepticism around data mesh, but that's cool. And you mentioned edge, so yes, I'm interested in the cloud's role here. Is the idea that you're actually putting all this stuff in one place? How does that fit with the edge? Maybe you could help us understand your thinking on that and where the cloud fits. >> Yes. It's about centralizing data up to a point and then decentralizing it; the magic is in how you actually enable that. For example, your traffic signal, your car, or a cruise ship, each one is an edge, and they all generate petabytes of data. You can consume that, but if you're going to stream all this data to a centralized place like a cloud, most of that data is not something you're going to store forever. It is topical, and that information is required at the edge. You should synthesize that information, take the signal from it, and discard the noise. So that's where the edge comes in. Typically the edge is not just some personal device, or an IoT sensor sending data; that is one element of the edge, but the edge is about decentralizing the cloud, so to say. You don't want all your data sitting centralized in the cloud someplace behind five firewalls, where all the latency comes into play when your application tries to reach it. That's what you want to decentralize, so the data is available as close as possible to where it engages with the consumer of it. So in that is the decentralization strategy, where you can have multiple topologies, a tree, a mesh, however you choose, so that you get the data closest. It could be a mobile device. It could be a smaller deployment of a server. It could be a personal electronic device like a watch, or it could be all the way in the IoT gateway. These are the various decentralizations of the data that have to happen. So it's about moving the data fastest. It's almost like a CDN for data; for those unfamiliar, CDN stands for content delivery network, which is how we used to move static content in the good old days. That's what made our webpages faster. Now we can actually move live data that much faster by using replication technology. So when you move the data towards the edge, what you're trying to do is bring that data closer to the compute where it's actually happening, as opposed to keeping the data centralized someplace back in the cloud on a server while all your application logic sits on the device or on the edge. Otherwise you're constantly shoveling the data from the cloud to the edge and from the edge to the cloud at the time of compute, as opposed to having it available at the time of consumption of the data. That's where the paradigm shift is actually happening. And this is not just about better user experience. It's also about backend networking and other costs that you gain from not having to repeatedly shovel data back and forth. So that's the strategy that enterprises are adopting. Now, this has become, so to say, a core part of the architecture of modernization, in terms of where everybody can see this has to move to, and our edge and mobile product also plays a role in that; it's one of the other aspects that customers look to us for. >> So it's a balance, and Couchbase can play in both places. A lot of the data, if I heard you correctly, at the edge is ephemeral, but if I want to do, you know, AI inferencing in real time, I gotta do it at the edge.
I can't send it back to the cloud and, and, and do the modeling, you know, post-proces, that's not gonna work. All right, let's talk about the business case, you know, we've, we we've hit on the what and the why, but, you know, how does it get paid for companies sometimes struggle to plan for and budget appropriately for their outcomes? Yes. What do customers need to know about how do they get this past the CFO's office for, in the other business decision makers? >>I think there is an opportunity cost, uh, with the sort of lack of modernization, uh, if, uh, people are doing their classic sort of, so to say it style budgeting, uh, then it will just look like we have to modernize, uh, you know, some older infrastructure. It's not about that. It's about modernizing or making your business relevant, uh, to, uh, to the consumers, because the way consumers, uh, go about consuming your services now is very different from the way you had originally imagined and built for. And in that lies the, the, the transformation, uh, not to see this as a, it, uh, just as an it infrastructure modernization, but more from the standpoint of business transformation and, uh, the tooling that is required for this business transformation to be successful. So it requires the involvement of, um, not leaving it to just, you know, uh, uh, it oriented sort of, uh, uh, thinking of modernizing, but from the standpoint of looking at the, the, the business and what are the transformations that they need to, if they don't keep up with the Jones, they, in this digital divide, they may find themselves in the sort of either the wrong side or in the chasm. >>So I think that mindset, uh, that I was, uh, sort of in addition to sort of, uh, it pushing for this, uh, it's got to have a C-suite, uh, sponsorship understanding and, uh, sort of champion of this, then those initiatives will succeed because, uh, it's not just the technology transformation. It is accompanied by business and sort of, so to say cultural transformation inside the enterprise. >>Yeah. And it's interesting in the survey, it was very much it, you know, survey, I get that and, and the, it pros, the CIOs, et cetera, felt that, that, that, that the it organization was largely responsible for the digital strategy. And I think that was largely a function of, we just came out of the, the pandemic or Hopely coming out of the pandemic. And so they had all these tactical needs, but now you're saying step back, align with the business, make sure the C suite's involved, and that's gonna reduce the friction of, of getting this stuff paid for. >>Correct. And, you know, the, uh, this observation was also there. If you, I must have noticed that, you know, many, uh, of these sort of transf strategies, if you just leave it to like an it thing, they end up being reactive. Uh, but the proactive strategies are the one that actually, uh, succeed because they understand that this is a sort of enterprise transformation. It could be disruptive. Uh, it is what is required for the enterprise to get to the, uh, to the next level, uh, or to be, uh, in this, to be relevant in this sort of modern economy, if you would. So I think that is what, uh, what people are reacting to is the fact that this pandemic has pushed people to modernize quickly. 
And that may have happened as a reaction to the reality of the situation, but more and more, uh, uh, even among these strategies and more and more initiatives that people are taking, they may have sort of a longer term sort of thinking in this, uh, that requires the, uh, definitely without it's not gonna succeed and they're gonna be in the middle and they'll be, uh, in the forefront of many technology decisions that we have to make, but having a, a C-suite level sponsorship. >>In addition to that, with the impetus of what is the business transformation, this is actually going to achieve, um, those you will see will succeed a lot more because otherwise you, we see that, you know, good, good number of what 80% of these projects fail or, or, or they suffer delays or scale back or never get started, uh, because, you know, uh, the understanding of what is the business value of it is perhaps not, not clearly articulated instead, it just becomes a, a technology modernization conversation without that company benefit. >>Yeah. Got it. Okay. Uh, you guys recently announced some updates to your platform. Can you run us through the, the highlights, you know, what the customers get and, and how it relates to this conversation modernizing application strategies? >>Yes. So, uh, well, we will be, uh, releasing our couch base server 7.1. And, uh, that is what will be the sort of underneath platform for our, the couch base, uh, Capella, which is the, our DBA both, uh, have exciting innovations, um, that we would be putting out. Uh, let me just run through a few things, uh, on the, uh, uh, couch based server seven one, because there are some, uh, amazing, uh, capabilities we have introduced there. We are really excited about the opportunities. This brings couch based into play. Uh, first is we have a, uh, a brand new storage engine that we put in there, which, uh, significant significantly, uh, reduces the, uh, the cost of running couch base. Uh, with this capability, we can actually consume lot less memory and that's, that is like a 10 X improvement on this one. So from that standpoint, we are 10 X more efficient in terms of resource consumption, the expensive memory oriented resource consumption. >>This now allows couch based to sort of not just cater to those high performance, um, you know, hyperscale scenarios that we are known for, but also the more, the classic BIS oriented, uh, applications, which are not that performance sensitive, but they're more cost sensitive. So that's a huge, uh, step forward for couch base because there are a lot more, uh, opportunities where sort of, we become, uh, that much more, uh, cost efficient for enterprises to run. And this is something that, uh, many enterprises have asked for, and we know, uh, many more use cases where we would be more relevant with that innovation. And this has been a, a sort of a long journey building storage engines is, uh, you know, uh, is a very difficult Endover. And we took that on knowing that, uh, what we can achieve here would be a game changer, uh, for couch base. >>And in terms of how, uh, uh, the consolidation of multiple things that you can do in our platform just got this sort of boost of being able to do a lot more with lot less resources. In addition to that, we have done enhancements to our analytics service, uh, with, uh, the work that we have done there. 
Uh, it, it can sort of do a lot more, um, uh, availability, uh, of the, of, of the analytics service, uh, which, uh, will strengthens the analytics side of the product, which now allows you to run analysis O on J O uh, straight up without requiring the operational side of the, uh, the database. So you can just simply do, uh, straight off analytics stuff, because it, it, it can now, uh, give you the higher availability and disaster recovery that you would want if you're gonna depend on these, uh, systems with that, we are done over some, uh, real good work with Tableau integration, which makes it easy to visualize this, um, uh, uh, and, uh, one other important capability we introduce here is the, um, on, in the entire platform is what we call as user defined functions. >>This now allows us to write custom logic and Java script in the server couch based server. This is, this helps you write procedural logic in the middle of, uh, SQL queries, which is a humongous capability that, you know, and the classical systems process. Now, with that, we have closed the gap. If you know, how to program to sort of classical operational systems, pretty much, you have one to one equivalence of that, uh, in couch. So if you come from the good relational world, uh, it would be very easy breeze for you to understand how to program in this modern, no SQL systems, which both supports, um, uh, SQL as well as the classic asset transaction capabilities. And last, uh, we expanded the support two arm processors, and typically, uh, arm processes, at least save you quarter of, uh, your budget because of it being that much more, uh, uh, cost efficient in terms of, uh, its operational and power capabilities. >>So with that net net, uh, couch based server becomes a lot more, um, uh, cost efficient. And at the same time, it also in one, well becomes that database server, which can both handle your in memory, uh, capabilities that, that speed and hyperscale, as well as, uh, the classical use cases of being, uh, disk, uh, disoriented, uh, classical relational database use cases. Nice. So that, that, that rounds out our offering, it's been a long journey for us to get here from being the high performance, uh, low latency system to, uh, the classical database use case >>Assessment. Yeah. I mean, that's great. You got, you got memory optimization, you mentioned the, the, the, the arm base. Now you're on that curve, which is great software companies love when you get cheaper, faster hardware, uh, you making it easy to speak the language of, you know, traditional stuff. So that's awesome. Um, you and I, you mentioned, uh, Capella, you and I talked about, yes, at couch base connects Capella. You've been moving hard with your DBA strategy, how's it going? And then beyond these announcements, what's what should we look for from couch base? >>You know, uh, our fundamental, uh, mission is to make the developer experience, um, that much more easier, that much, uh, to move all the frictions that, that has existed for developers to adopt couch base. And, uh, the Capella strategy is to leverage the cloud. So you have number one, the ease of development, just bring your browsers, start to learn, develop even simple sample applications and deploy them from there. 
You can scale, and you can have production level deployments, that whole journey of a developer, along with the ability to sort of have your a, you know, metered billing and pay as you go, uh, uh, pricing, uh, so that it becomes easier for developers to sort of consume this and, uh, show the value of what they can build here. That is our, um, sort of journey of bringing it closer, uh, to our developers and make it simpler for them to sort of, uh, get started and build the, the mission critical applications that they have trusted to build on couch base, to become that much more simpler, faster, and easier for them. So that's the journey. So that's the kind of announcements you will see coming out in Capella. And for that this, this seven one server is, is the platform on which we, we are sort of adding those capabilities to make a Capella that much easier for developers to adopt >>Outstanding. You've been busy and it looks like you've got a lot of value. Yes. All right, we're gonna have to leave it there. Robbie, up next, we bring on the customer perspective with Amdocs. They've got a real world example of a modernization journey that they go through. They had to modernize legacy Oracle WebLogic infrastructure with a microservices architecture, and of course, couch base, keep it right there. You're watching the cube.

Published Date : May 19 2022


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Ravi Mayuram | PERSON | 0.99+
Dave | PERSON | 0.99+
2021 | DATE | 0.99+
80% | QUANTITY | 0.99+
Ravi | PERSON | 0.99+
82% | QUANTITY | 0.99+
10 X | QUANTITY | 0.99+
Robbie | PERSON | 0.99+
five | QUANTITY | 0.99+
two arm | QUANTITY | 0.99+
Tableau | TITLE | 0.99+
both | QUANTITY | 0.99+
Capella | ORGANIZATION | 0.98+
one platform | QUANTITY | 0.98+
pandemic | EVENT | 0.98+
Java | TITLE | 0.98+
Amdocs | ORGANIZATION | 0.98+
three | QUANTITY | 0.97+
both places | QUANTITY | 0.97+
24 | QUANTITY | 0.97+
SQL | TITLE | 0.96+
365 | QUANTITY | 0.96+
first | QUANTITY | 0.96+
one way | QUANTITY | 0.96+
five firewalls | QUANTITY | 0.95+
each one | QUANTITY | 0.95+
one place | QUANTITY | 0.94+
one element | QUANTITY | 0.94+
CA | LOCATION | 0.93+
nine up | QUANTITY | 0.92+
few years ago | DATE | 0.92+
one | QUANTITY | 0.92+
seven one server | QUANTITY | 0.9+
one system | QUANTITY | 0.9+
650 respondents | QUANTITY | 0.85+
Jones | PERSON | 0.84+
four orders | QUANTITY | 0.82+
Couchbase | ORGANIZATION | 0.81+
40 odd years | QUANTITY | 0.68+
Oracle WebLogic | ORGANIZATION | 0.63+
last 30 | DATE | 0.62+
Capella | LOCATION | 0.54+
7 | QUANTITY | 0.5+
server seven | QUANTITY | 0.48+
Couchbase | TITLE | 0.41+

Sheng Liang, Rancher Labs | KubeCon + CloudNativeCon 2019


 

>> Announcer: Live from San Diego, California, it's theCUBE covering KubeCon and CloudNativeCon. Brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. >> Stu: Welcome back to theCUBE, I'm Stu Miniman. My cohost for three days of coverage is John Troyer. We're here at KubeCon CloudNativeCon in San Diego, over 12,000 in attendance, and happy to welcome back a CUBE alumni and veteran of generations of the stacks that we've seen come together and change over time, Sheng Liang, who is the co-founder and CEO of Rancher Labs. Thanks so much, great to see you. >> Sheng: Thank you Stuart, very glad to be here. >> All right, so you know, Kubernetes, flash in the pan, nobody's all that excited about it. I mean, we've seen all these things come and go over the years, Sheng. No, but seriously, the excitement is palpable. Every year, you know, so many more people, so many more projects, so much more going on. Help set the stage for us, as to what you see and the importance today of kind of CloudNative in general and, you know, this ecosystem specifically.
So one of the roles that Rancher is able to play very well is that we're really able to work with the community, take the latest and greatest open source technology, and actually develop open source products on top of this, and make that technology useful and consumable for the Enterprise at large. So the way we see it, to make Kubernetes work we really need to solve problems at three levels. At the lowest level, the industry needs a lot of compliant and compatible certified Kubernetes distros and services. So that's table stakes now. Rancher is a leader in providing CNCF-certified Kubernetes distros. We actually provide two of them. One of them is called RKE, Rancher Kubernetes Engine, something we've been doing for years. It's really one of the easiest to use and most widely deployed Kubernetes distributions. But we don't force our customers to only use our Kubernetes distribution. Rancher customers can use whatever CNCF-certified Kubernetes distribution or Kubernetes services they want. So a lot of our customers use RKE (Rancher Kubernetes Engine), but when they go to the cloud, they also use cloud-hosted Kubernetes services like GKE and EKS. There are really a lot of advantages in using those, because cloud providers will help you run these Kubernetes clusters for free. And in many cases they even throw in the infrastructure it takes to run the Kubernetes masters and etcd databases for free. If you're in the cloud, there's really no reason not to be using these Kubernetes services. Now there's one area where Rancher ended up innovating in Kubernetes distros, despite having these data center-focused and cloud-focused Kubernetes distros and services. And that is one of our two big announcements today, and it's called K3S. K3S is a great open source project. It's probably one of the most exciting open source projects in the Kubernetes ecosystem today. And what we did with K3S is we took Kubernetes that's been proven in the data center and cloud, and we brought it everywhere. So with K3S you can run Kubernetes on a Raspberry Pi. You can run Kubernetes in a surveillance camera. You can run Kubernetes in an ATM machine. You know, we have customers trying to run Kubernetes on a factory floor. So it really helps us realize our vision of Kubernetes as a new Linux, and you run it everywhere. >> Well that's great, 'cause you talk about that simplicity that we need, and if you start talking about Edge deployment, I don't have the people, I don't have the skillset, and a lot of times I don't have the gear to run that. So you know, help connect the dots as to, you know, what led Rancher to do the K3S piece of it, and you know, what did we take out? Or what's the difference between K8S and K3S? >> That's a great question, you know. Even the name "K3S" is actually somewhat of a wordplay on K8S. You know, we kind of cut half of the 8 away and you're left with 3. It really happened with some of the early traction we were seeing with some customers. I remember, in retrospect it wasn't really that long ago, it was like the middle of last year, we saw a blog coming out of Chick-fil-A, and a group of technical enthusiasts were experimenting with actually running Kubernetes on, like, Intel NUC servers. You know, they were talking about potentially running three of those servers in every one of their stores, and at the time they were using RKE, Rancher Kubernetes Engine, to do that. And they ran into a lot of issues.
I mean, to be honest, if you think about running Kubernetes in the cloud or in the data center, these servers have a lot of resources and you also have dedicated operations teams. You have an SRE to manage them, right? But when you really bring it out into branch offices and Edge computing locations, now all of a sudden, number one, the software has to take a lot less resource, but also you don't really have SREs monitoring them every day anymore. So this Kubernetes distro really has to be zero touch, and it has to run just like, you know, an embedded Windows or Linux server. And that's what K3S was able to accomplish. We were able to really take away a lot of the baggage that came with having all the drivers that were necessary to run Kubernetes in the cloud, and we were also able to dramatically simplify what it takes to actually start Kubernetes and operate it. >> So unsolicited, I was doing an event right before this one and I asked some people what they were looking forward to here at KubeCon. And independently, two different people said, "The thing I'm most excited about is K3S." And I think it's because it's the right slice through Kubernetes. I can run it in my lab. I can run it on my laptop. I can run it on a stack of Raspberry Pis or NUCs, but I could also run it in production, you know, I can scale it up. >> Stu: Yeah. >> John: And in fact they both got a twinkle in their eye and said, well, what if this is the future of Kubernetes, like you could take this and you could run it, you know? They were very excited about it. >> Absolutely! I mean, you know, I really think, as a company, we survive and thrive by delivering the kind of innovation that pushes the market forward, right? I mean, otherwise people are not going to look at Rancher and say you guys are the originators of Kubernetes technology. So we're very happy to be able to come up with technologies like K3S that effectively greatly broaden the addressable market for everyone. Imagine you were a security vendor, and before, all you really got to do is solve security problems. Or if you were a monitoring vendor, you were able to solve monitoring problems for a data center and in the cloud. Now with K3S you end up getting to solve the same problems on the Edge and in branch offices. So that's why so many people are so excited about it. >> All right, so Sheng, you said K3S is one of the announcements this week, what's the rest of the news? >> Yeah, so K3S, RKE, and all the GKE, AKS, EKS, they're really the fundamental layer of Kubernetes everywhere. Then on top of that, one of the biggest pieces of innovation that Rancher Labs created is the idea of multi-cluster management. A few years ago it was pretty much a revolutionary concept. Now it's widely understood. Of course an organization is not going to have just one cluster; they're going to have many clusters. So Rancher is the industry leader for doing multi-cluster management. And these clusters could span clouds, could span data centers, now all the way out to branch offices and the Edge. So we're exhibiting Rancher on the show floor. Everyone, most people I've met here, they know Rancher because of that flagship product.
Now our second announcement though is yet another level above Rancher, so what we've seen is in order to really Kubernetes to achieve the next level of adoption in the Enterprise we're seeing you know some of the development teams and especially the less skilled dev ops teams, they're kind of struggling with the learning curve of Kubernetes and also some of the associated technologies around service mesh around Knative, around, you know, CICD, so we created a project called Rio, as in Rio de Janeiro the city. And the nice thing about Rio is it packaged together all these Cloud Native technologies and then we created very easy to use, very simple to understand user experience for developers and dev ops teams. So they no longer have to start with the training course on Kubernetes, on Istio, on Knative, on Tekton, just to get productive. They can pretty much get productive on day one. So that Rio project has hit a very important milestone today, we shipped the beta release for it and we're exhibiting it at the booth as well. >> Well that's great. You know, the beta release of Rio, pulling together a lot of these projects. Can you talk about some folks that, early adopters that have been using them or some folks that have been working with the project? >> Sheng: Yeah absolutely. So I talk about some of the early adoption we're seeing for both K3S and Rio. Uh, what we see the, first of all just the market reception of K3S, as you said, has been tremendous. Couple of even mentioned to you guys today in your earlier interviews. And it is primarily coming from customers who want to run Kubernetes in places you probably haven't quite anticipated before, so I kind of give you two examples. One is actually appliance manufacture. So if you think they used to ship appliances, then you can imagine these appliances come with Linux and they would image their appliance with an OS image with their applications. But what's happening is these applications are becoming so sophisticated they're now talking about running the entire data analytics stack and AI software. So it actually takes Kubernetes not necessarily, because it's one server in a situation of appliance. Kubernetes is not really managing a cluster, but it's managing all the application components and microservices. So they ended up bundling up K3S into their appliance. This is one example. Another example is actually an ISV, that's a very interesting use case as well. So uh, they ship a micro service based application software stack and again their software involves a lot of different complicated components. And they decided to replatform their software on Kubernetes. We've all heard a lot of that! But in their case they have to also ship, they don't just run the software themselves, they have to ship the software to the end users. And most of their end users are not familiar with Kubernetes yet, right? And they don't really want to say, to install our software you go provision the Kubernetes cluster and then you operate it from now on. So what they did is they took K3S and bundled into their application as if it were an application server, almost like a modern day WebLogic and WebSphere, then they shipped the whole thing to their customers. So I thought both of these use cases are really interesting. It really elevates the reach of Kubernetes from just being almost like a cloud platform in the old days to now being an application server. And then I'll also quickly talk about Rio. 
A lot of the interest in Rio is really around DevOps teams. I mean, we did a survey early on and we found out that a lot of our customers deploy Kubernetes as services, but they end up building a custom experience on top of their Kubernetes deployment, just so that most of their internal users wouldn't have to take a course on Kubernetes to start using it. So they can just tell the system, this is where my source code is, and then everything from that point on will be automated. So now with Rio they wouldn't have to do that anymore. Effectively, Rio is that direct source-to-URL type of one-step process. And they are able to adopt Rio for that purpose. >> So Sheng, I want to go back to when we started this conversation. You said, you know, the ecosystem is growing. Not only are there so many vendors here, there are 129 end users who are members of the CNCF. The theme we've been talking about is that, you know, it's ready for production and people are all embracing it. But to get the vast majority of people, simplicity really needs to come front and center, I think. K3S really punctuates that. What else do we need to do as an ecosystem? You know, Rancher is looking to take a leadership position and help drive this, but what else do you want to see from your peers, the community, overall, to help drive this to the promise that it could deliver? >> We really see the adoption of Kubernetes going through, I mean, we see most organizations go through, this three-step journey. The first step is you've got to install and operate Kubernetes. You know, day one, day two. And I think we've got it down. With K3S it becomes so easy. With GKE it becomes one API call or one simple UI interaction. And the CNCF has really stepped up and created a great, you know, compliance certification program, right? So we're not seeing the kind of fragmentation that we saw with some of the other technologies. This is fantastic. Then the second step we see, which a lot of our customers are going through now, is that now you have all these Kubernetes clusters coming from different clouds, different infrastructure, potentially on the Edge. You have a management problem. Now, all of a sudden, because we made Kubernetes clusters so easy to obtain, you can potentially have sprawl. If you are not careful you might leave them misconfigured. That could expose a security issue. So really it takes Rancher, it takes our ecosystem partners, like Twistlock, like Aqua, CI/CD partners like CloudBees, GitLab. Everyone really needs to come together to solve that management problem. So not only do you build this Kubernetes infrastructure, but then you actually get a lot of users, and they can use the cluster securely and reliably. Then I think the third step, where I think a lot of work still remains, is we really want to focus on growing the footprint of workload, of enterprise workload, in the enterprise. So there the work is honestly just getting started. If you walk into any enterprise, you know, what percentage of their total workload is running on Kubernetes today? I mean, outside of Google and Uber, that percentage is probably very small, right? It's probably in the minority, maybe even a single-digit percentage. So we really need to do a lot of work. You know, Rancher created this project called Longhorn, and we also work with a lot of our ecosystem partners in the persistent storage area, like Portworx, StorageOS, OpenEBS.
A lot of us really need to come together and solve this problem of running persistent workloads. I mean, there was also a lot of talk about it at the keynote this morning; I was very encouraged to hear that. That could easily double or triple the amount of workload that could be onboarded onto Kubernetes. And even experiences like Rio, you know, make it even simpler, more accessible. That is really in the DNA of Rancher. Rancher wouldn't be surviving and thriving without our insight into how to make our technology consumable and widely adopted. So a lot of the work we're doing is really to drive the adoption of Kubernetes in the enterprise beyond, you know, the current state. I really don't see why, in the future, Kubernetes wouldn't be as widely used as, say, AWS or vSphere. That would be my bar for success. Hopefully in a few years we can be talking about that. >> All right, that is a high bar, Sheng. We look forward to more conversations with you going forward. Congratulations on the announcement. Great buzz on K3S, and yeah, thanks so much for joining us. >> Thank you very much. >> For John Troyer, I'm Stu Miniman, back with lots more coverage here from KubeCon CloudNativeCon 2019 in San Diego. You're watching theCUBE. [Upbeat music]
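One point worth underlining from the K3S discussion above is that, small as it is, K3S still exposes the standard Kubernetes API, so ordinary tooling works against it unchanged. Here is a minimal sketch, assuming the official kubernetes Python client and K3S's usual default kubeconfig path (commonly /etc/rancher/k3s/k3s.yaml, though installs vary):

```python
# Sketch: a K3S cluster answers the same Kubernetes API as any other distro.
# The kubeconfig path below is the common K3S default and is an assumption;
# adjust it (or merge it into ~/.kube/config) for your install.
from kubernetes import client, config

config.load_kube_config(config_file="/etc/rancher/k3s/k3s.yaml")
core = client.CoreV1Api()

# List nodes: on a Raspberry Pi or NUC cluster this is the same call you would
# make against RKE, GKE, or EKS.
for node in core.list_node().items:
    print(node.metadata.name, node.status.node_info.kubelet_version)

# And pods scheduled by K3S look exactly like pods anywhere else.
for pod in core.list_pod_for_all_namespaces(limit=10).items:
    print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")
```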

Published Date : Nov 19 2019


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
John | PERSON | 0.99+
John Troyer | PERSON | 0.99+
Stu Miniman | PERSON | 0.99+
Stuart | PERSON | 0.99+
Google | ORGANIZATION | 0.99+
Uber | ORGANIZATION | 0.99+
CloudNative Computing Foundation | ORGANIZATION | 0.99+
Rio de Janeiro | LOCATION | 0.99+
Shang | PERSON | 0.99+
Rancher Labs | ORGANIZATION | 0.99+
Sheng Liang | PERSON | 0.99+
129 end users | QUANTITY | 0.99+
fifty people | QUANTITY | 0.99+
San Diego, California | LOCATION | 0.99+
Rancher | ORGANIZATION | 0.99+
San Diego | LOCATION | 0.99+
second step | QUANTITY | 0.99+
Sheng | PERSON | 0.99+
both | QUANTITY | 0.99+
third step | QUANTITY | 0.99+
one | QUANTITY | 0.99+
two examples | QUANTITY | 0.99+
Stu | PERSON | 0.99+
KubeCon | EVENT | 0.99+
second announcement | QUANTITY | 0.99+
RedHat | ORGANIZATION | 0.99+
GitLab | ORGANIZATION | 0.99+
Kubernetes | TITLE | 0.99+
CUBE | ORGANIZATION | 0.99+
CNCF | ORGANIZATION | 0.99+
AWS | ORGANIZATION | 0.98+
first step | QUANTITY | 0.98+
Intel | ORGANIZATION | 0.98+
three days | QUANTITY | 0.98+
today | DATE | 0.98+
CloudBees | ORGANIZATION | 0.98+
three | QUANTITY | 0.98+
one server | QUANTITY | 0.98+
One | QUANTITY | 0.98+
one cluster | QUANTITY | 0.98+
two different people | QUANTITY | 0.98+
Rio | ORGANIZATION | 0.98+
two big announcements | QUANTITY | 0.97+
this week | DATE | 0.97+
K3S | TITLE | 0.97+
CloudNativeCon | EVENT | 0.97+
one example | QUANTITY | 0.97+
Linux | TITLE | 0.96+
WebLogic | TITLE | 0.96+
WebSphere | TITLE | 0.96+
over 12,000 | QUANTITY | 0.96+
GKE | ORGANIZATION | 0.96+
K8S | COMMERCIAL_ITEM | 0.96+

Larry Socher, Accenture & Ajay Patel, VMware | Accenture Cloud Innovation Day 2019


 

(bright music) >> Hey welcome back, everybody. Jeff Frick here with theCUBE We are high atop San Francisco in the Sales Force Tower in the new Accenture offices, it's really beautiful and as part of that, they have their San Francisco Innovation Hubs. So it's five floors of maker's labs, and 3D printing, and all kinds of test facilities and best practices, innovation theater, and this studio which is really fun to be at. So we're talking about hybrid cloud and the development of cloud and multi-cloud and continuing on this path. Not only are customers on this path, but everyone is kind of on this path as things kind of evolve and transform. We are excited to have a couple of experts in the field we've got Larry Socher, he's the Global Managing Director of Intelligent Cloud Infrastructure Services growth and strategy at Accenture. Larry, great to see you again. >> Great to be here, Jeff. And Ajay Patel, he's the Senior Vice President and General Manager at Cloud Provider Software Business Unit at VMWare and a theCUBE alumni as well. >> Excited to be here, thank you for inviting me. >> So, first off, how do you like the digs up here? >> Beautiful place, and the fact we're part of the innovation team, thank you for that. >> So let's just dive into it. So a lot of crazy stuff happening in the marketplace. Lot of conversations about hybrid cloud, multi-cloud, different cloud, public cloud, movement of back and forth from cloud. Just want to get your perspective today. You guys have been in the middle of this for a while. Where are we in this kind of evolution? Everybody's still kind of feeling themselves out, is it, we're kind of past the first inning so now things are settling down? How do you kind of view the evolution of this market? >> Great question and I think Pat does a really nice job of defining the two definitions. What's hybrid versus multi? And simply put, we look at hybrid as when you have consistent infrastructure. It's the same infrastructure regardless of location. Multi is when you have disparate infrastructure, but are using them in a collective. So just from a from a level setting perspective, the taxonomy is starting to get standardized. Industry is starting to recognize hybrid is the reality. It's not a step in the long journey. It is an operating model that going to exist for a long time. So it's not about location. It's about how do you operate in a multi-cloud and a hybrid cloud world. And together at Accenture VMware have a unique opportunity. Also, the technology provider, Accenture, as a top leader in helping customers figure out where best to land their workload in this hybrid, multi-cloud world. Because workloads are driving decisions. >> Jeff: Right. >> We are going to be in this hybrid, multi-cloud world for many years to come. >> Do I need another layer of abstraction? 'Cause I probably have some stuff that's in hybrid and I probably have some stuff in multi, right? 'Cause those are probably not mutually exclusive, either. >> We talked a lot about this, Larry and I were chatting as well about this. And the reality is the reason you choose a specific cloud, is for those native differentiator capability. So abstraction should be just enough so you can make workloads portable. To be able to use the capability as natively as possible. 
And the fact that we now at VMware have native VMware running on every major hyperscaler and on prem gives you that flexibility you want of not having to abstract away the goodness of the cloud, while having a common and consistent infrastructure and tapping into the innovations that the public cloud brings. So it is the evolution of what we've been doing together from a private cloud perspective, to extend that beyond the data center and really make it an operating model that's independent of location. >> Right, so Larry, I'm curious about your perspective when you work with customers. How do you help them frame this? I mean, I always feel so sorry for corporate CIOs. I mean, they've got security going on like crazy, they've got GDPR now, I think, right? The California regs that'll probably go national. They have so many things to be worried about. They've got to keep up on the latest technology, what's happening in containers. I thought it was Docker, now you tell me it's Kubernetes. It's really tough. So how do you help them kind of put a wrapper around it? >> It's got to start with the application. I mean, you look at cloud, you look at infrastructure more broadly, it's there to serve the applications, and it's the applications that really drive business value. So I think the starting point has to be application-led. So we start off, we have our intelligent engineering guys, our platform guys, who really come in and do an application modernization strategy. So they'll do an assessment. You know, most of our clients, given their scale and complexity, usually have from 500 to 20,000 applications. You know, very large estates. And you've got to start to figure out, okay, what are my current applications? A lot of times they'll use the six Rs methodology and they say, hey, okay, what is it? I'm going to retire this, I no longer need it, it no longer has business value. Or I'm going to replace this with SaaS; I move it to Salesforce, for example, or ServiceNow, etcetera. Then they're going to start to look at their workloads and say, okay, hey, do I need to re-factor or re-platform this? Or re-host it? And one of the things, obviously, VMware has done a fantastic job of is allowing you to re-host it using their software-defined data center, you know, in the hyperscaler's environment. >> We call it just, you know, migrate and then modernize. >> Yeah, exactly. But the modernize can't be missed. I think that's where a lot of times we see clients kind of get in the trap: hey, I'm just going to migrate and then figure it out. You need to start to have a modernization strategy, 'cause that's ultimately going to dictate your multi and your hybrid cloud approach: how those apps evolve and, you know, the dispositions of those apps, to figure out do they get replaced, what data sets need to be adjacent to each other. >> Right, so Ajay, you know, we were there when Pat was with Andy talking about VMware on AWS. And then, you know, Sanjay is showing up at everybody else's conference. He's at Google Cloud talking about VMware on Google Cloud. I'm sure there was a Microsoft show I probably missed; you guys were probably there, too. You know, it's kind of interesting, right, from the outside looking in. You guys are not a public cloud, per se, and yet you've come up with this great strategy to give customers the options to adopt VMware in a public cloud, and now we're seeing where even the public cloud providers are saying, "Here, stick this box in your data center".
It's like this little piece of our cloud floating around in your data center. So talk about the evolution of the strategy and kind of what you guys are thinking about, 'cause you know you are clearly in a leadership position, making a lot of interesting acquisitions. How do you guys see this evolving, and how are you placing your bets? >> You know, Pat has been always consistent about this and any strategy. Whether it's any cloud or any device. Any workload, if you will, or application. And as we started to think about it, one of the big things we focused on was meeting the customer where they are in their journey. Depending on the customer, they may simply be trying to figure out how to get out of a data center, all the way to how to drive an individual transformation effort. And a partner like Accenture, who has the breadth and depth and sometimes the vertical expertise and the insight, that's what customers are looking for. Help me figure out my journey: first tell me where I'm at, where am I going, and how do I make that happen. And what we've done in a clever way, in many ways, is we've created the market. We've demonstrated that VMware is the only consistent infrastructure that you can bet on and leverage the benefits of the private or public cloud. And I often say hybrid's a two-way street now, which is, they are bringing more and more hybrid cloud services on prem. And where is the on-prem? It's now the edge. I was talking to the Accenture folks and they were saying the metro edge, right? So you're starting to see the workloads shift. And I think you said almost 40-plus percent of future workloads are now going to be in the central cloud. >> Yeah, and actually there's an interesting stat out there. By 2022, seventy percent of data will be produced and processed outside the cloud. So I mean, the edge is about to, as we are on the tipping point of IoT finally taking off beyond smart meters, we're going to see a huge amount of data proliferate out there. So the lines between public and private have become so blurry. You've got Outposts; you look at Anthos, Azure Stack. And that's where I think VMware's strategy is coming to fruition. You know, they've-- >> Sometimes it's great when you have a point of view and you stick with it against the conventional wisdom. And then all of a sudden everyone is following the herd and you are like, "This is great". >> By the way, Ajay hit on a point about the verticalization. Every one of our clients, different industries, have very different paths there. And to the point of meeting the customer where they are on their journey. I mean, if you talk to a pharmaceutical, you know, GXP compliance, big private cloud, starting to dip their toes into public. You go to Mians and they've been very aggressive in public. >> Or in manufacturing with Edge Cloud. >> Exactly. >> So it really varies by industry. >> And that's a very interesting area. Like if you look at all the OT environments of the manufacturers, we start to see a lot of end-of-life environments. So what's that next generation of control systems going to run on? >> So that's interesting on the edge, because you've brought up networking a couple of times while we've been talking as a potential gate, right, one of the gates that's still there, but we're seeing more and more.
We were at a cool event at the Churchill Club, where they had Xilinx, Micron, and Arm talking about shifting more of the compute and storage onto these edge devices to accommodate what you said: how much of that stuff can you do at the edge versus putting it in the cloud? But what I think is interesting is, how are you going to manage that? There is a whole different level of management complexity when now you've got this different level of distributed computing. >> And security. >> And security. Times many, many thousands of these devices all over the place. >> You might have heard recent announcements from VMware around the Carbon Black acquisition. >> Yeah. >> That, combined with our Workspace ONE and Pulse IoT, means we are now giving you the management framework, whether it's for people, for things, or for devices. And that consistent security on the client, tied with our network security with NSX, all the way to the data center security. We're starting to look at what we call intrinsic security: how do we bake security into the platform and start solving these things end to end? And have our partner, Accenture, help design these next-generation application architectures, all distributed by design. Where do you put a fence? You could put a fence around your data center, but your app is using ServiceNow and other SaaS services. So how do you set up an application boundary, and the security model around that? So it's really interesting times. >> You hear a lot about our partnership around the software-defined data center, around networking, with VeloCloud and NSX. But we've actually been spending a lot of time with the IoT team, and a lot of our vision aligns. We've actually been looking at, and they've been working with, similar edge technology with Liota, where ultimately the edge computing for IoT is going to have to be containerized, because you're going to need multiple middleware stacks supporting different vertical applications. We were actually working with one mine where we started off doing video analytics for predictive maintenance on tires for tractors, which are really expensive, the shovels, et cetera. We started off pushing the data stream, the video stream, up into Azure, but the network became a bottleneck. We couldn't move the video, so we've got to process it there. They're now looking into autonomous vehicles, which need eight megabits of low-latency bandwidth sitting at the edge. Those two applications will need to co-exist, and while we may have Azure Edge running in a container doing the video analytics, if Caterpillar chooses Greengrass or Jasper, that's going to have to co-exist. So you're going to see the whole containerization that we are starting to see in the data center push out there. And the other side, Pulse, the management of the Edge, is going to be very difficult. >> I think it's the whole new frontier. >> Yeah, absolutely. >> And that's moving forward with 5G, and telcos trying to provide value-added services. So what does that mean from an infrastructure perspective? >> Right, right. >> When do you stay on the 5G radio network versus jumping on a back line? When do you move data versus process it on the edge? Those are all business decisions that need to be built into some framework. >> Right, right. >> So you guys are going, we could go and go and go. But I want to follow up on your segue on containers, 'cause containers are such an important part of this story and an enabler to this story. And you guys made an aggressive move with Heptio.
We've had Craig McLuckie on when he was still at Google, and Dan, great guys. But it's kind of funny, right? 'Cause three years ago, everyone was going to DockerCon, right? That was, like, of all the shows, the hot show. Now Docker's kind of faded and Kubernetes is really taking off. Why, for people that aren't familiar with Kubernetes, they probably hear it at cocktail parties if they live in the Bay Area, why are containers such an important enabler, and what's so special about Kubernetes specifically? >> Do you want to go on the general one, or? >> Why don't you start off? >> I brought my product stuff for sure. >> If you look at the world, it's getting much more dynamic, particularly as you start to get more digitally decoupled applications. We've come from a world where a virtual machine might have been up for months or years, to all of a sudden you have containers that are much more dynamic, able to scale quickly, and then they need to be orchestrated. And that's essentially what Kubernetes does, is really start to orchestrate that. And as we get more distributed workloads, you need to coordinate them. You need to be able to scale up as you need for performance, etcetera. So Kubernetes is an incredible technology that allows you really to optimize the placement of that. So just like the virtual machine changed how we compute, containers now give us something much more flexible and portable; you can run on any infrastructure, at any location, closer to the data, etcetera. >> I think the bold move we made is, we finally, after working with customers and partners like Accenture, have a very comprehensive strategy. We announced Project Tanzu at our last VMworld. And Project Tanzu really focused on three aspects of containers. How do you build applications, which is what Pivotal and the acquisition of Pivotal was driven around. How do we run these on a robust enterprise-class runtime? And what if you could take every vSphere ESX out there and make it a container platform? Now we have half a million customers, 70 million VMs. All of a sudden, that runtime we are container-enabling with Project Pacific. So vSphere 7 becomes a common place for running containers and VMs. So that debate of VMs or containers? Done, gone. One place to just spin up containers and resources. And then the more important part is, how do I manage this? As you have said, it's becoming more of a platform, not just an orchestration technology, but a platform for how do I manage applications, where do I deploy them, where does it make more sense. I've decoupled my application needs from the resources, and Kubernetes is becoming that platform that allows me to do that portably. I'm the Java WebLogic guy, right? So this is like distributed WebLogic Java on steroids, running across clouds. So pretty exciting for a middleware guy; this is the next-generation middleware. >> And to what you just said, that's the enabling infrastructure that will allow it to roll into future things like edge devices. >> Absolutely. >> You can manage an Edge client. You can literally-- >> The edge, yeah. 'Cause now you've got that connection. >> It's in the fabric that you are going to be able to connect. And networking becomes a key part. >> And one of the key things, and this is going to be the hard part, is optimization. So how do we optimize across particularly performance, but even cost? >> And security, rewiring security and availability.
>> So still I think my all time favorite business book is Clayton Christensen, "Innovator's Dilemma". One of the most important lessons in that book is what are you optimizing for? And by rule, you can't optimize for everything equally. You have to rank order. But what I find really interesting in this conversation and where we're going and the complexity of the size of the data, the complexity of what am I optimizing for now just begs for plight AI. This is not a people problem to solve. This is AI moving fast. >> Smart infrastructure going to adapt. >> Right, so as you look at that opportunity to now apply AI over the top of this thing, opens up tremendous opportunity. >> Absolutely, I mean standardized infrastructure allows you, sorry, allows you to get more metrics. It allows you to build models to optimize infrastructure over time. >> And humans just can't get their head around it. I mean because you do have to optimize across multiple dimensions as performance, as cost. But then that performance is compute, it's the network. In fact the network's always going to be the bottleneck. So you look at it, even with 5G which is an order magnitude more band width, the network will still lag. You go back to Moore's Law, right? It's a, even though it's extended to 24 months, price performance doubles, so the amount of data potentially can exponentially grow our networks don't keep pace. So that optimization is constantly going to have to be tuned as we get even with increases in network we're going to have to keep balancing that. >> Right, but it's also the business optimization beyond the infrastructure optimization. For instance, if you are running a big power generation field of a bunch of turbines, right, you may want to optimize for maintenance 'cause things are running in some steady state but maybe there's an oil crisis or this or that, suddenly the price rises and you are like, forget the maintenance right now, we've got a revenue opportunity that we want to tweak. >> You just talked about which is in a dynamic industry. How do I real time change the behavior? And more and more policy driven, where the infrastructure is smart enough to react, based on the policy change you made. That's the world we want to get to and we are far away from that right now. >> I mean ultimately I think the Kubernetes controller gets an AI overlay and then operators of the future are tuning the AI engines that optimize it. >> Right, right. And then we run into the whole thing which we talked about many times in this building with Dr. Rumman Chowdhury from Accenture. Then you got the whole ethics overlay on top of the business and the optimization and everything else. That's a whole different conversation for another day. So, before we wrap I just want to give you kind of last thoughts. As you know customers are in all different stages of their journey. Hopefully, most of them are at least off the first square I would imagine on the monopoly board. What does, you know, kind of just top level things that you would tell people that they really need just to keep always at the top as they're starting to make these considerations? Starting to make these investments? Starting to move workloads around that they should always have at the top of their mind? >> For me it's very simple. It's really about focus on the business outcome. Leverage the best resource for the right need. And design architectures that are flexible that give you choice, you're not locked in. 
And look for strategic partners, whether it's technology partners or services partners that allow you to guide. Because if complexity is too high, the number of choices are too high, you need someone who has the breadth and depth to give you that platform which you can operate on. So we want to be the ubiquitous platform from a software perspective. Accenture wants to be that single partner who can help them guide on the journey. So, I think that would be my ask is start thinking about who are your strategic partners? What is your architecture and the choices you're making that give you the flexibility to evolve. Because this is a dynamic market. Once you make decisions today, may not be the ones you need in six months even. >> And that dynanicism is accelerating. If you look at it, I mean, we've all seen change in the industry, of decades in the industry. But the rate of change now, the pace, things are moving so quickly. >> And we need to respond to competitive or business oriented industry. Or any regulations. You have to be prepared for that. >> Well gentleman, thanks for taking a few minutes and great conversation. Clearly you're in a very good space 'cause it's not getting any less complicated any time soon. >> Well, thank you again. And thank you. >> All right, thanks. >> Thanks. >> Larry and Ajay, I'm Jeff, you're watching theCUBE. We are top of San Francisco in the Sales Force Tower at the Accenture Innovation Hub. Thanks for watching. We'll see you next time.
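The closing exchange in this segment, about a future where an AI overlay tunes the Kubernetes controller, is still aspirational, but the control surface such an optimizer would drive already exists: replica counts and placement are just API objects that any script or controller can patch. A minimal sketch with the kubernetes Python client; the deployment name, namespace, and target replica count are hypothetical stand-ins for whatever a cost- or performance-driven optimizer would compute:

```python
# Sketch: programmatically adjusting a deployment's scale, i.e. the mechanic an
# optimization loop (human- or AI-driven) would use. Names and the target value
# are illustrative, not from the interview.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running in-cluster
apps = client.AppsV1Api()

name, namespace = "checkout", "prod"
deployment = apps.read_namespaced_deployment(name, namespace)
current = deployment.spec.replicas
desired = 5  # in practice, derived from performance and cost signals

if current != desired:
    apps.patch_namespaced_deployment_scale(
        name, namespace, {"spec": {"replicas": desired}}
    )
    print(f"scaled {namespace}/{name}: {current} -> {desired}")
else:
    print(f"{namespace}/{name} already at {desired} replicas")
```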

Published Date : Sep 12 2019


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Ajay Patel | PERSON | 0.99+
Ajay | PERSON | 0.99+
Jeff | PERSON | 0.99+
Larry | PERSON | 0.99+
Sanjay | PERSON | 0.99+
Larry Socher | PERSON | 0.99+
Jeff Frick | PERSON | 0.99+
Andy | PERSON | 0.99+
Pat | PERSON | 0.99+
Accenture | ORGANIZATION | 0.99+
AWS | ORGANIZATION | 0.99+
San Francisco | LOCATION | 0.99+
seventy percent | QUANTITY | 0.99+
VMWare | ORGANIZATION | 0.99+
Craig McLuckie | PERSON | 0.99+
24 months | QUANTITY | 0.99+
VMware | ORGANIZATION | 0.99+
Google | ORGANIZATION | 0.99+
Clayton Christensen | PERSON | 0.99+
Innovator's Dilemma | TITLE | 0.99+
500 | QUANTITY | 0.99+
GXP | ORGANIZATION | 0.99+
two applications | QUANTITY | 0.99+
Rumman Chowdhury | PERSON | 0.99+
six months | QUANTITY | 0.99+
two definitions | QUANTITY | 0.99+
NSX | ORGANIZATION | 0.99+
five floors | QUANTITY | 0.99+
three years ago | DATE | 0.98+
GDPR | TITLE | 0.98+
Weblogic | ORGANIZATION | 0.98+
theCUBE | ORGANIZATION | 0.98+
One | QUANTITY | 0.98+
Sales Force Tower | LOCATION | 0.98+
Microsoft | ORGANIZATION | 0.98+
two-way | QUANTITY | 0.98+
2022 | DATE | 0.98+
Project Tanzu | ORGANIZATION | 0.98+
first | QUANTITY | 0.98+
70 million VM | QUANTITY | 0.97+
Dan | PERSON | 0.97+
Kubernetes | TITLE | 0.97+
eight megabits | QUANTITY | 0.97+
one | QUANTITY | 0.97+
20,000 applications | QUANTITY | 0.97+
Pivotal | ORGANIZATION | 0.96+
Azure | TITLE | 0.96+
single partner | QUANTITY | 0.96+
almost 40 plus percent | QUANTITY | 0.96+
Cloud Provider Software Business Unit | ORGANIZATION | 0.96+
Caterpillar | ORGANIZATION | 0.96+
first square | QUANTITY | 0.96+
half a million customers | QUANTITY | 0.95+
today | DATE | 0.95+
Accenture VMware | ORGANIZATION | 0.94+
Mians | ORGANIZATION | 0.94+
Docker | TITLE | 0.94+
DockerCon | EVENT | 0.94+
Azure Edge | TITLE | 0.93+
Anjay | PERSON | 0.93+
thousands | QUANTITY | 0.93+
Java | TITLE | 0.93+
Project Pacific | ORGANIZATION | 0.93+
vSphere ESX | TITLE | 0.92+
vSphere 7 | TITLE | 0.91+
Dr. | PERSON | 0.91+
Accenture Innovation Hub | LOCATION | 0.91+

Ajay Patel, VMware & Peter FitzGibbon, Rackspace | VMworld 2019


 

>> Announcer: Live, from San Francisco celebrating 10 years of high-tech coverage it's theCUBE. Covering VMworld 2019. Brought to you by VMware and its ecosystem partners. >> Welcome back, this is theCUBE two stages, three days of coverage, our tenth year here at the VMworld show. I'm Stu Miniman and my co-host for this segment is Bobby Allan. And welcome back, two of our CUBE alumni. >> How are you? >> As I said back in 2010 we didn't even know what a CUBE alumni was. People were trying to figure out what we're doing but now we have thousands of them and both of these gentlemen have been on the program, a few times. >> Thanks for having us back. >> You're welcome. So, first, over we have Ajay Patel, who I believe was doing another filming evening with our crew-- >> Absolutely >> Earlier today. >> The Accenture Innovation Center. >> Ah, excellent. Beautiful building Accenture has here in San Francisco. >> Ajay: Beautiful (mumbles) >> One of the other benefits of being back in San Francisco is we brought in people and it's really easy to get in and out and do other things in the Valley. But Ajay is the senior vice president and general manager of the cloud provider software business unit inside VMware. And one of his partners is Rackspace. We have Peter FitzGibbon who is the vice president of Product Alliances, with for mentioned Rackspace. >> Yeah, super to be back in San Francisco. It's a great change from Vegas. >> Yeah, you know, there is some debate in the community of course it's a little more expensive here in San Francisco and there are other logistic challenges. We're excited to be back here and yeah, really excited to be talking with both of you. Peter, let's start, you know Rackspace has had a long, long partnership with VMware. When I remember back to like VMware Environments Hosted it's like, Rackspace was the one with the lion's share in that market. And, you know, Rackspace has gone through a lot of changes in the last 10 years that we've been doing this coverage. When I think about multi cloud, all of these environments you've got a nice perspective on this and lots of customers you've worked with. So, give us the update on what you're hearing from customers and your relationship with VMware. >> Yeah, so, 20-year history with VMware that we're very proud of. I would say it's almost being re-birthed in the last two years though. Two years ago, we were one of the first VMware Cloud Verified partners. We launched our VMware Cloud VMware Cloud Foundation Private Cloud. We added that about six months later in customer data centers. We're now one of the major partners of VMware Cloud AWS >> Ajay: VMware Cloud AWS yep. >> And that's one of the areas that we're continuing to expand upon. We announced some new services this week, specifically around VMware Cloud AWS or support of HDX, both for migrations for ongoing support as well as a number of, what we call Rackspace service blocks. Which are additional manage services that we are applying, specifically for VMware Cloud and AWS. So, exciting times at Rackspace and VMware continues to be a look, a major part of our portfolio. >> Ajay: And thank you for all the support, Peter. >> Yeah, so Ajay, bring us up to speed of what's happening in your space you know, a lot of attention gets paid, you know Every time, you know, I saw Sanjay Poon, up on stage at the Goolge clould event, and of course the AWS partnership has been one of the biggest stories in all of tech, for the last couple of years. 
And that's been extending to, you know first it was like, wait, you know Rackspace has data centers and many of your other partners have data centers, but how did these all, play together and how does the VMware software pull them all together. >> So Stu, I think, you and I have been talking about this world of hybrid multi and we've been arguing, whether it's just a transitionary stage, or here to stay. Hopefully that debate's over, right? Hybrid's a new reality, multi cloud's a new reality and we talk about these hyper scales but you know, Rackspace and many of my VCP partners they've been longstanding in this journey with us. I don't know if you caught Pat's keynote? We demonstrated, that we have over 10 000 data centers through our VCPP network and Rackspace being one of our top 10 partners. So you start, to start seeing this mix of VMware everywhere. Whether it's trough our service provider cloud the customer manage cloud or even a hyper scale VMware cloud. You now have the ubiquitous VMware infrastructure to play with. >> At some point it's just cloud. (chattering) >> That is a great point, when I talk to customers most of them, they have a cloud strategy it's usually not a hybrid or a multi or all these things. Here's the nuance I want to, you know, ask for a second then I definitely want Bobby to jump in with what he's been talking to customers about. You know, hybrid cloud is a reality because customers have their own data centers and they have public cloud. The ideal of multi cloud, customers have multiple clouds, but, you know, one of the definitions I put out there is, multi cloud exists when the multi cloud solution is more valuable than the sum of the pieces. And I'm not sure that we're quite there yet. I think we're starting to move down that path. But what are you both seeing? And does that resonate with what you see today? >> Yeah like, all of our customers have workloads in multiple locations and trying to provide the assessments of where to put the right workloads at the right time is one of the key values that we hold dear. And before we ever talk about where we're going to but a workload we assess whether, what our clients environments is and determine, maybe this is an AWS workload maybe this is a WMS workload maybe this workload really belongs in the data center for, due to laws of the lands laws of gravity and physics. >> And I think, what's happening, really is any application, typically choosing a platform or the cloud service that's driving the decision. Collectively what ends up happening because of that, you are in multiple clouds. So, I think what's it's a result of the reality that applications are driving location and platform choices and the way to drive consistency is trying to pick a few common things whether it's kubernetes as a platform or VMware, right? Those are a way to, kind of, unify these desperate choices that are made individually. That are collectively making each of our customers multi cloud, right? >> Ajay, I want to piggyback on that because you talked about the applications driving a lot of the choices, when applications teams in my experience are, kind of, making the choices they don't care about a centralized strategy and obviously, this very powerful partnership can support multiple places and ways around your workloads. How do you lead the witness, a little bit towards simplification and just because you can do it doesn't mean you should do it. 
>> Yes, so I think what's happening from our perspective is depending on which side of the IT house you're at if you're part of the core IT that's running and maintaining mission critical systems you're really looking for something that's reliable, performance scalable, secure. And you, maybe, looking at a hardware refresher looking at your data center strategy and you're looking to migrate that workload. You're not really looking to re-change the app just because it's cool. >> Bobby: Right. >> If you're part of digital transformation effort you're looking to say, okay how do I get something out there quickly? >> Bobby: Right. >> How do I integrate on the average my data and application assets while leveraging cloud services? >> Bobby: Right. So, we're seeing this tension in some ways where the, kind of, net new is really pushing the envelope of cloud with self service elasticity, new capability while as the old guard is like I got to keep my running business, running keep it secure. And how do you bridge these two worlds and bring them together? We call it DevOps and, you know, ITA and the traditional, kind of new developer. Reality is, you're trying to bring the two worlds on a common platform. Whether it's VM's or containers and so the exciting part for us is, how do we unify? How do we deliver this experience and give them the choice, where it makes more sense. And blur the lines between public and private. Those are just locations and makes more sense for your customer or your application that you can drive. >> Bobby: Right, excellent. >> We find ourselves in those conversations, all the time trying to bridge two sides of the equation at a customer and trying to get them together on a uniformed strategy and weighing the pros and cons of different locations or different workloads. So, it's not easy, it's not a challenge of course. >> Peter, I'd love you to bring us inside some of those VMware on AWS customers because, you know, some of the first customers I talked to, it was, you know, I'm a VMware shop and there's a part of your group that's like oh my gosh, I can't change and this was a driver saying hey, you don't need to, we can bring you along. But, the value, once again needs to be Oh hey, I need to do some innovative things I want to be able to access some of those cool amazing services that, you know everybody is providing on a daily basis. So, you know, are you seeing that progression are there any interesting use cases that are coming out? >> Progression is the word, we could call it progressive transformation inside Rackspace. Like, you're a VMware customer let's bring you ion the journey towards public cloud. And let's help you leverage those address services. So, we find ourselves in a great position where a very large number of engineers, that support our native AWS workloads, we've brought those two groups together from our VMware expertise and address expertise. So when a customer lands on a VMware address I consider it a failure, if they haven't transformed part of the application in three months. If they're not really consuming those native AWS services. And that's what we really try inject. It's like, get our AWS engineers looking at those workloads let's start consuming those native services and that's what we're finding really exciting about how customers are starting to adopt and starting to plug and play into some of those services. 
>> Oh, I look at it, as you know, you'll see a theme Sanjay called M&M's, migrate and modernize, but part of the migrate is often modernizing your infrastructure first by putting it on a modern cloud platform, and then modernizing your application using cloud services. As he says, it's M, M and M, right, to follow through, because it's not just about lifting and shifting and keeping the old crap as it is. You've got to really start to look at how you drive innovation, drive your cube to a better place, so that you can operate it more effectively and then modernize for application results. And your service blocks are really catered to helping those customers. So you can talk a little bit about how you're building the services that complement our offer. >> Yeah, so our service blocks... In the past, we offered one big block of managed service to a customer. We realized, let's decompose that and offer the customer what they need at a specific point in time. So think about Lego blocks, where at some point you may need just some support, at some point you might need some architectural services and design, and other times you might say cost optimization, that sort of stuff. So over time we're adding on these Lego blocks, if you will, to give a customer what they need at the point they need it, and not more. So it's an exciting concept, and every month we're adding more services. We launched a Rackspace managed security service block today, specifically for VMware Cloud. So we continue to add these and provide incremental value. >> I want to ask you a little bit of a controversial question. There's a saying: pioneers take the arrows, but settlers take the land. >> Right. >> So, if I'm a technology leader, how do I embrace all this newness without getting shot, partnering with your firms? >> So, you know, we always say lock-in's bad, but the reality is we always choose technology platforms. And if you're a VMware customer, I hate to say it, you're running on VMware infrastructure, you have the VMware ecosystem, you have VMware runbooks, you have VMware partners managing your on-prem assets. What if I could give you a path forward on any cloud of your choice, without having to change any of your day-to-day operations, while leveraging the innovation of the future? What is the safest path for you, Mr. Customer? And so, in this world, you can think of us as being a laggard in some sense, because we're not pushing them to a single destination. We're giving them that choice, leveraging those strengths. I think the innovative part of what we've done today has really brought containers and VMs into a single solution. We talked about containers killing VMs two years ago, right? You know, VMware was going to be in trouble with Docker, VMware was going to be in trouble with OpenStack. Where are those two companies today, and where is VMware? It's about simplifying a common solution for the customer. We're taking those choices away and making this easy, giving them partners who can help them on their journey. So I would say we're the safer choice. >> Okay. >> That would be my response. >> Peter, we're not going to ask you about OpenStack. (Giggles) >> I'm really back to VMware; it's a work in progress. (Giggles) >> Interesting point, the settlers, right? At this point VMware on AWS is two years old. I think that first year there were definitely some pioneers out there.
But now I think we're really there, where the settlers are coming on, and we're seeing large-scale adoption of the platform, and now that VMware is offering more and more services natively, we can add more of those managed services and help those customers really transform and not worry about the underlying IaaS, which is rock-solid at this point. >> Peter, I would like you to get into it a little bit, kind of, the containerization and the Kubernetes, you know, Docker, obviously a lot of hype, but containerization, that's hugely important. You know, a lot of the keynote this morning was talking about cloud native. I talk to lots of customers, you know, there's some that, yes, they will want the VMware journey, but many of them say, well, if I'm going to cloud I can just use containers, why would I have the overhead of VMs, when the cloud was originally created it was not for that type of environment. So where does that fit into, you know, your world, containers? >> Yeah, we actually launched some more services on that today as well, some more professional services and managed services, specifically around advanced Kubernetes support, across all our platforms, so this isn't just a VMware announcement, this is on AWS, Microsoft Azure, and Google. So, another exciting progression of our hybrid cloud story, and making investments in those resources to deliver Kubernetes. We also launched a cloud native service block today, as well, that is really giving customers access to deep engineering skills and giving them cloud reliability engineers that can help them transform their workloads and get them ready for the cloud. >> I think, for us, if you... Project (mumbles), sorry, Tanzu as a solution, and Project Pacific, our two marquee announcements we made this week. If you look at the way we're focusing on the build, run, manage aspects of the full life cycle, and our active participation in the Kubernetes community, we're starting the beginnings of what I felt Java was in 2000, when I was at BEA, right? Where WebLogic and Java were the runtime for building and rolling out new apps, Kubernetes and containers are the new runtime for building distributed apps across cloud platforms. And we're in this early journey, and we are uniquely in a position with the combination of Pivotal for build. With Project Pacific we're bringing containers into vSphere, so VMs and containers become first class. To your point, we demonstrated an eight percent performance improvement over bare metal on a vSphere container-based solution, based on key scheduling work that we do in the kernel and in the hypervisor, and we're driving that deep into the Kubernetes platform, into the core platform itself. And then manage is going to be the new interesting bit. What is that control plane that everyone is going to fight over? And the managed services partner can help them choose. So I think the battleground is moving more and more to manage; I think we secured our base with the runtime, and the build will be about choice. (Mumbles) >> And Tanzu is music to our ears; we can now, again, focus on what are the additional managed services and service-- >> How do you help customers build apps? And change the engineering culture is what you provide. We just give you the runtime across any of these clouds. >> We want to help everyone transform applications, also transform the culture and how they do their business, all that rapport-- >> Engineering transformation is a big one.
Sanjay, transformation we talked about internally for us at VMware, same with our customers. You've got to change the mindset of how you build the applications in this container, service-based architecture. >> Agree, agree. >> What else is keeping folks up at night, that you talk to? Love to know that, just a hot take. >> Nothing keeps me up at night; it's an exciting world we live in. So, loaded question, what excites me? What excites me is the progression that VMware is making, and the announcement with NVIDIA on GPU access, coming, I think, early next year. I think that can be another wave of VMC adoption. So, it doesn't keep me up at night, but it keeps me interested and excited. >> I think to that point I can build on what Pat said about tech for good. I mean, we have a joint customer, Feeding America, right? We're now taking technology and making it available so that, you know, the 60,000-plus distribution centers they have are up all the time. They're not even worried about infrastructure. They can focus on feeding the cause, which is, I think, 47 million people being fed. It's scary, right? >> Well, we want to bring it back to the organizational part of the discussion you said you're helping customers with, because we do worry, you know, about racking, stacking, configuring, doing all of those things. You know, how do you help them? I talked to a number of customers at this show and they said, look, my roles in my organization are still hardware-defined, and it's tough to move into a software role, but if I want to get into the tech for good I need to be able to uplift my skills, uplift my organization, yeah. >> It's difficult, right? Organizational change differs for every company, but as part of the digital transformation there is also organizational transformation, so we're having customers think about what is the progression from a VMware administrator to a DevOps-- >> Or cloud, I bet. (Giggles) >> It's not easy; that's the short answer on that. >> I think for us, it's really starting to drive the cultural change, providing the tools and bringing the self-service in, where they can be a coach, right? Be the trailblazer who can come in and help change your organization, teach them how to do it right. Not everyone will get there; hopefully the bulk of the organization can shift, right? >> Peter, I want to give you the final word, you know, for your partners and customers to understand. Takeaways from VMworld 2019. >> Yeah, it's great to be here, as usual; thanks for having us. I think Tanzu is really exciting, the progression that we're making with adding service blocks on top of VMware on AWS, and our other hybrid cloud announcements. So, great to be here, but Tanzu is kind of the story of the show. >> For me, it's: VMware is here to stay. We want to be, and have been, your strategic partner for the last decade. We're here to stay for the next decade. We're going to help you solve these hard, complex problems and give you the choice you need, across a broader ecosystem of partners and solutions. So, very excited to be here and to deliver that value. >> And Peter, thank you so much for joining us. Again, Bobby Allen, thank you for co-hosting. I'm Stu Miniman, and as always, thank you for watching theCUBE.

Published Date : Aug 27 2019


Jason Bloomberg, Intellyx | KubeCon + CloudNativeCon EU 2019


 

>> Live from Barcelona, Spain, it's theCUBE! Covering KubeCon and CloudNativeCon Europe 2019. Brought to you by Red Hat, the Cloud Native Computing Foundation, and ecosystem partners. >> Welcome back. This is theCUBE's live coverage of KubeCon, CloudNativeCon 2019 here in Barcelona, Spain. 7,700 here in attendance, here about all the Cloud Native technologies. I'm Stu Miniman; my cohost to the two days of coverage is Corey Quinn. And to help us break down what's happening in this ecosystem, we've brought in Jason Bloomberg, who's the president at Intellyx. Jason, thanks so much for joining us. >> It's great to be here. >> All right. There's probably some things in the keynote I want to talk about, but I also want to get your general impression of the show and beyond the show, just the ecosystem here. Brian Liles came out this morning. He did not sing or rap for us this morning like he did yesterday. He did remind us that the dinners in Barcelona meant that people were a little late coming in here because, even once you've got through all of your rounds of tapas and everything like that, getting that final check might take a little while. They did eventually filter in, though. Always a fun city here in Barcelona. I found some interesting pieces. Always love some customer studies. Conde Nast talking about what they've done with their digital imprint. CERN, who we're going to have on this program. As a science lover, you want to geek out as to how they're finding the Higgs boson and how things like Kubernetes are helping them there. And digging into things like storage, which I worked at a storage company for 10 years. So, understanding that storage is hard. Well, yeah. When containers came out, I was like, "Oh, god, we just fixed it for virtualization, "and it took us a decade. "How are we going to do it this time?" And they actually quoted a crowd chat that we had in our community. Tim Hawken, of course one of the first Kubernetes guys, was in on that. And we're going to have Tim on this afternoon, too. So, just to set a little context there. Jason, what's your impressions of the show? Anything that has changed in your mind from when you came in here to today? Let's get into it from there. >> Well, this is my second KubeCon. The first one I went to was in Seattle in December. What's interesting from a big picture is really how quickly and broadly KubeCon has been adopted in the enterprise. It's still, in the broader scheme of things, relatively new, but it's really taking its place as the only container orchestrator anybody cares about. It sort of squashed the 20-or-so alternative container orchestrators that had a brief day in the sun. And furthermore, large enterprises are rapidly adopting it. It's remarkable how many of them have adopted it and how broadly, how large the deployment. The Conde Nast example was one. But there are quite a number. So we turned the corner, even though it's relatively immature technology. That's the interesting story as well, that there's still pieces missing. It's sort of like flying an airplane while you're still assembling it, which makes it that much more exciting. >> Yeah, one of the things that has excited me over the last 10 years in tech is how fast it takes me to go from ideation to production, has been shrinking. Big data was: "Let's take the thing that used to take five years "and get it down to 18 months." We all remember ERP deployments and how much money and people you need to throw at that. >> It still takes a lot of money and people. 
>> Right, because it's ERP. I was talking to one of the booths here, and they were doing an informal poll of, "How many of you are going to have Kubernetes "in production in the next six months?" Not testing it, but in production in the next six months, and it was more than half of the people were going to be ramping it up in that kind of environment. Anything architecturally? What's intriguing you? What's the area that you're digging down to? We know that we are not fully mature, and even though we're in production and huge growth, there's still plenty of work to do. >> An interesting thing about the audience here is it's primarily infrastructure engineers. And the show is aimed at the infrastructure engineers, so it's technical. It's focused on people who code for a living at the infrastructure level, not at the application level. So you have that overall context, and what you end up having, then, is a lot of discussions about the various components. "Here's how we do storage." "Here's how we do this, here's how we do that." And it's all these pieces that people now have to assemble, as opposed to thinking of it overall, from the broader context, which is where I like writing about, in terms of the bigger picture. So the bigger picture is really that Cloud Native, broadly speaking, is a new architectural paradigm. It's more than just an architectural trend. It's set of trends that really change the way we think about architecture. >> One interesting piece about Kubernetes, as well. One of the things we're seeing as we see Kubernetes start to expand out is, unlike serverless, it doesn't necessarily require the same level of, oh, just take everything you've done and spend 18 months rewriting it from scratch, and then it works in this new paradigm in a better way. It's much less of a painful conversion process. We saw in the keynote today that they took WebLogic, of all things, and dropped that into Kubernetes. If you can do it with something as challenging, in some respects, and as monolithic as WebLogic, then almost any other stack you're going to see winds up making some sense. >> Right, you mentioned serverless in contrast with Kubernetes, but actually, serverless is part of this Cloud Native paradigm as well. So it's broader than Kubernetes, although Kubernetes has established itself as the container orchestration platform of choice. But it's really an overall story about how we can leverage the best practices we've learned from cloud computing across the entire enterprise IT landscape, both in the cloud and on premises. And Kubernetes is driving this in large part, but it's bigger picture than the technology itself. That's what's so interesting, because it's so transformative, but people here are thinking about trees, not the forest. >> It's an interesting thing you say there, and I'm curious if you can help our community, Because they look at this, and they're like, "Kubernetes, Kubernetes, Kubernetes." Well, a bunch of the things sit on Kubernetes. As they've tried to say, it's a platform of platforms. It's not the piece. Many of the things can be with Kubernetes but don't have to be. So, the whole observability piece. We heard the merging of the OpenCensus, OpenTracing with OpenTelemetry. You don't have to have Kubernetes for that to be a piece of it. It can be serverless underneath it. It can be all these other pieces. Cloud Native architecture sits on top of it. So when you say Cloud Native architecture, what defines that? What are the pieces? How do I have to do it? 
Is it just, I have to have meditated properly and had a certain sense of being? What do we have to do to be Cloud Native? >> Well, an interesting way of looking at it is: what have we subtracted from the equation, so what is intentionally missing? Cloud Native is stateless, it is codeless, and it is trustless. Now, that's not to say that we don't have ways of dealing with state, and of course there's still plenty of code, and we still need trust. But those are architectural principles that really percolate through everything we do. So containers are inherently stateless; they're ephemeral. Kubernetes deals with ephemeral resources that come and go as needed. This is a key part of how we achieve the scale we're looking for. So now we have to deal with state in a stateless environment, and we need to do that in a codeless way. By codeless, I mean declarative. Instead of saying, how are we going to do something? Let's write code for that, we're going to say, how are we going to do that? Let's write a configuration file, a YAML file, or some other declarative representation of what we want to do. And Kubernetes is driven this way. It's driven by configuration, which means that you don't need to fork it. You don't need to go in and monkey with the insides to do something with it. It's essentially configurable and extensible, as opposed to customizable. This is a new way of thinking about how to leverage open-source infrastructure software. In the past, it was open-source: let's go in and monkey with the code, because that's one of the benefits of open-source. Nobody wants to do that now, because it's declaratively driven, and it's configurable. >> Okay, I hear what you're saying, and I like what you're saying. But one of the things that people say here is everyone's a little bit different, and it is not one solution. There are lots of different paths, and that's what's causing a little bit of confusion as to which service mesh, or do I have a couple of pieces that overlap. And every deployment that I see of this is slightly different, so how do I have my cake and eat it, too? >> Well, you mentioned that Kubernetes is a platform of platforms, and there's little discussion of what we're actually doing with Kubernetes here at the show. Occasionally, there's some talk about AI, and there's some talk about a few other things, but it's really up to the users of Kubernetes, who are now the development teams in the enterprises, to figure out what they want to do with it and, as such, figure out what capabilities they require. Depending upon what applications you're running and the business use cases, you may need certain things more than others. Because AI is very different from websites, it's very different from other things you might be running. So that's part of the benefit of a platform of platforms: it's inherently configurable. You can pick and choose the capabilities you want without having to go into Kubernetes and fork it. We don't want 12 different Kubernetes that are incompatible with each other, but we're perfectly okay with different flavors that are all based on the same, fundamental, identical code base. >> We take a look at this entire conference, and it really comes across as, yes, it's KubeCon and CloudNativeCon. We look at the, I think, 36 projects that are now being managed by this. But if we look at the conversations of what's happening here, it's very clear that the focus of this show is Kubernetes and friends, where it tends to be taking the limelight of a lot of this.
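A minimal sketch of the declarative idea Bloomberg is describing: the desired state is expressed as data and handed to Kubernetes to reconcile, rather than coded imperatively. It assumes a reachable cluster, `kubectl` on the PATH, and PyYAML; the Deployment name, image, and replica count are illustrative.

```python
# Build a declarative Deployment spec as plain data, then hand it to
# `kubectl apply`, which reconciles the cluster toward that state.
# Names, image, and replica count are illustrative.
import subprocess

import yaml  # PyYAML

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "hello-web"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "hello-web"}},
        "template": {
            "metadata": {"labels": {"app": "hello-web"}},
            "spec": {
                "containers": [
                    {
                        "name": "web",
                        "image": "nginx:1.25",
                        "ports": [{"containerPort": 80}],
                    }
                ]
            },
        },
    },
}

# Nothing here says *how* to reach three replicas; we only declare that we want them.
subprocess.run(
    ["kubectl", "apply", "-f", "-"],
    input=yaml.safe_dump(deployment),
    text=True,
    check=True,
)
```

Re-running the same apply against a cluster that already matches the declared state changes nothing, which is what makes configuration, not code, the unit of change.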
One of the challenges you start seeing as soon as you start moving up the stack, out through the rest of the stack, rather, and seeing what all of these Cloud Native technologies are is, increasingly, they're starting to be defined by what they aren't. I mean, you have the old saw of, serverless runs on servers, and other incredibly unhelpful sentiments. And we talk about what things aren't more so than we do what they are. And what about the capabilities story? I don't have an answer for this. I think it's one of those areas where language is hard, and defining what these things are is incredibly difficult. But I see what you're saying. We absolutely are seeing a transformative moment. And one of the strangest things about it, to me at least, is the enthusiasm with which large enterprises, that you don't generally think of as being particularly agile or fast-moving, are demonstrating otherwise. They're diving into this in fascinating ways. It's really been enlightening to have conversations for the last couple of days with companies that are embracing this new paradigm. >> Right. Well, from our perspective at Intellyx, we're focusing on digital transformation in the enterprise, which really means putting the customer first and having a customer-driven transformation of IT, as well as the organization itself. And it's hard to think in those terms, in customer-facing terms, when you're only talking about IT infrastructure. Be that as it may, it's still all customer-driven. And this is sometimes the missing piece: how do we connect what we're doing on the infrastructure side with what customers require from these companies that are implementing it? Often, that missing piece centers on the workload. Because, from the infrastructure perspective, we have a notion of a workload, and we want workload portability. And portability is one of the key benefits of Kubernetes. It gives us a lot of flexibility in terms of scalability and deployment options, as well as resilience and other benefits. But the workload also represents the applications we're putting in front of our end users, whether they're employees or end customers. So that's the key piece that is like the keystone that ties the digital story, that is the customer-facing, technology-driven, technology-empowered story, with the IT infrastructure story. How do we support the flexibility, scalability, resilience of the workloads that the business needs to meet its business goals? >> Yeah, I'm really glad you brought up that digital transformation piece, because I have two questions, and I want to make sure I'm allowing you to cover both of them. One is, the outcome we hear from people as well: "I need to be faster, and I need to be agile." But at the same point, which pieces do I, as an enterprise, really need to manage? Many of these pieces, shouldn't I just be able to consume as a managed service? Because I don't need to worry about all of those pieces. The Google presentation this morning about storage was: you have two options. Path one is: we'll take care of all of that for you. Path two is: here's the level of turtles that you're going to go all the way down, and we all know how complicated storage is, and it's got to work. If I lose my state, if I lose my pieces there, I'm probably out of business or at least in really big trouble. The second piece on that, you talked about the application. And digital transformation.
Speed's great and everything, but we've said at Wikibon that the thing that will differentiate the traditional companies and the digitally transformed is that data will drive your business. You will have data, it will add value to the business, and I don't feel that story has come out yet. Do you see that as the end result from this? And apologies for having two big, complex questions here for you. >> Well, data are core to the digital transformation story, and it's also an essential part of the Kubernetes story. Although, from the infrastructure perspective, we're really thinking more about compute than about data. But of course, everything boils down to the data. That is definitely always a key part of the story. And you're talking about the different options. You could run it yourself or run it as a managed service. This is a key part of the story as well: it's not about making a single choice. It's about having options, and this is part of the modern cloud story. It's not just about, "Okay, we'll put everything in one public cloud." It's about having multiple public clouds, private clouds, on-premises virtualization, as well as legacy environments. This is what you call hybrid IT: having an abstracted collection of environments that supports workload portability in order to meet the business needs for the infrastructure. And that workload portability, in the context of multiple clouds, is becoming increasingly dependent on Kubernetes as an essential element of the infrastructure. So Kubernetes is not the be-all and end-all, but it's become an essentially necessary part of the infrastructure to make this whole vision of hybrid IT and digital transformation work. >> For now. I mean, I maintain that, five years from now, no one is going to care about Kubernetes. And there's two ways that goes. Either it dries up, blows away, and something else replaces it, which I don't find likely, or, more likely, it slips beneath the surface of awareness for most people. >> I would agree, yeah. >> The same way that we're not sitting here having an in-depth conversation about which distribution of Linux, or what Linux kernel or virtual memory manager we're working with. That stuff has all slipped under the surface, to the point where there are people who care tremendously about this, but you don't need to employ them at every company. And most companies don't even have to think about it. I think Kubernetes is heading that direction. >> Yeah, it looks like it. Obviously, things continue to evolve. Yeah, Linux is a good example. TCP/IP as well. I remember the network protocol wars of the early 90s, before the web came along, and it was, "Are we going to use Banyan VINES, "are we going to use NetWare?" Remember NetWare? "Or are we going to use TCP/IP or Token Ring?" Yeah! >> Thank you. >> We could use GDP, but I don't get it. >> Come on, COBOL's coming back, we're going to bring back Token Ring, too. >> COBOL never went away. Token Ring, though, it's long gone. >> I am disappointed in Corey, here, for not asking the question about portability. The concern we have, as you say: okay, I put Kubernetes in here because I want portability. Do I end up with a least-common-denominator cloud? I'm making a decision that I'm not going to go deep on some of the pieces, because, as nice as the API lets things through, we understand that if I need to work across multiple environments, I'm usually making a trade-off there. What do you hear from customers? Are they aware that they're doing this?
Is this a challenge for people, not getting the full benefit out of whichever primary or whichever clouds they are using? >> Well, portability is not just one thing. It's actually a set of capabilities, depending upon what you are trying to accomplish. So for instance, you may want to simply support backing up your workload, so you want to be able to move it from here to there, to back it up. Or you may want to leverage different public clouds, because different public clouds have different strengths. There may be some portability there. Or you may be doing cloud migration, where you're trying to move from on-premises to cloud, so it's kind of a one-time portability. So there could be a number of reasons why portability is important, and that could impact what it means to you, to move something from here to there. And why, how often you're going to do it, how important it is, whether it's a one-to-many kind of thing, or it's a one-to-one kind of thing. It really depends on what you're trying to accomplish. >> Jason, last thing real quick. What research do you see coming out of this? What follow-up? What should people be looking for from Intellyx in this space in the near future? >> Well, we continue to focus on hybrid IT, which include Kubernetes, as well as some of the interesting trends. One of the interesting stories is how Kubernetes is increasingly being deployed on the edge. And there's a very interesting story there with edge computing, because the telcos are, in large part, driving that, because of their 5G roll-outs. So we have this interesting confluence of disruptive trends. We have 5G, we have edge computing, we have Kubernetes, and it's also a key use case for OpenStack, as well. So it's like all of these interesting trends are converging to meet a new class of challenges. And AI is part of that story as well, because we want to run AI at the edge, as well. That's the sort of thing we do at Intellyx, is try to take multiple disruptive trends and show the big picture overall. And for my articles for SiliconANGLE, that's what I'm doing as well, so stay tuned for those. >> All right. Jason Bloomberg, thank you for helping us break down what we're doing in this environment. And as you said, actually, some people said OpenStack is dead. Look, it's alive and well in the Telco space and actually merging into a lot of these environments. Nothing ever dies in IT, and theCUBE always keeps rolling throughout all the shows. For Corey Quinn, I'm Stu Miniman. We have a full-packed day of interviews here, so be sure to stay with us. And thank you for watching theCUBE. (upbeat techno music)

Published Date : May 22 2019


Keynote Analysis | Day 1 | Red Hat Summit 2018


 

>> Announcer: Live from San Francisco, it's theCUBE. Covering Red Hat Summit 2018. Brought to you by Red Hat. >> Hello everyone, welcome to theCUBE's special coverage here at Red Hat Summit. This is exclusive three days of wall-to-wall coverage on theCUBE. I've been covering Red Hat for years. Excited to be back here at Moscone West. I'm John Furrier, the co-host of theCUBE, with my co-host analyst this week, John Troyer. He's the CEO of TechReckoning, an advisory firm in the technology industry, as well as an influencer, and he advises on influencers and communities. I would say it's community focused. John, great to see you. Welcome to the Red Hat Summit. We're going to kick it off! >> Great to be here. Thanks for having me. >> So you know I am pretty bullish on open source. I have been from day one. At my age, I have lived through the wars of when it was a second-class citizen. Now it's a first-class citizen. Software is powering the world. Again, on and on, this is not a new story. What is the new story is the cloud impact on the world of open source and business. We're seeing the results of Amazon just continue to be skyrocketing. You see Microsoft having their developer conference, Microsoft Build, this week. Google I/O is also this week. There is a variety of events happening. It's all pointing to cloud economics, cloud scale, and the role of software and data, and Red Hat has been a big-time winner in taking advantage of these trends by making some good bets. >> Absolutely. I think one of the words we're going to hear a lot this week is OpenShift. They are a container and cloud platform. Hybrid cloud is a super big emphasis here. Hybrid cloud, multi cloud, already on stage at the first keynote. They had a big stack of machines and they were going out to a multi cloud deployment right there on stage. Open source, also huge this week, right? The keynote, the tagline, of the whole conference: if you are interested in open source, you should be here. I think you nailed it. It's going to be about multi cloud. >> It's exciting for me, I got to say. The disruption that's happening obviously with IT, with cloud, is pretty much out there. We pretty much recognize IT as transforming into a whole other look in terms of how it's operating, but the interesting thing that's just happened recently is the overwhelming takeover of Kubernetes in the conversation, and in the stack you're seeing a rallying point and a rallying cry, establishing a de facto standard of Kubernetes. The big news of 2018 is, to me, the de facto standard of Kubernetes across a multi cloud, hybrid cloud architecture to allow developers and also infrastructure providers the ability to move workloads around, managing workloads across clouds. This is kind of the holy grail outcome everyone's looking for: how do I get to a true multi cloud world? And I think Kubernetes this year has the stake in the ground to say we're going to make that the interoperable capability. And Red Hat made a bet a couple years ago, three, four years ago. Everyone was scratching their head. What the hell are they doing with Kubernetes? What's Red Hat-- They're looking like geniuses now because of the results. >> Absolutely. In fact, I think by the end my joke is going to be this is the OpenShift Summit. I'll be very interested, John, in your observations. You were at KubeCon last week. So that's the open source project and the ecosystem around Kubernetes. Red Hat owns a lot of Kubernetes.
Red Hat employs many of the Kubernetes leaders. They have really taken over from Google in a lot of ways on the implementation and go-forward path for Kubernetes. So this is the show that takes that open source project and packages it into something that an IT buyer can understand and take. >> I got to say, one of the things that is interesting, and this is not well-reported in the news, it's a nuanced point but it's kind of an interesting thing, I think an inflection point for Red Hat: them buying CoreOS has been a really good outcome for both companies. CoreOS, pure open source DNA in that business. Those guys were doing some amazing technology development, and again, all pure open source. Total pure. There is nothing wrong with being pure open source. My point is, when you have that kind of religious point of view and then the pressure to monetize it, like Docker has had. We know what happened there. So CoreOS was doing amazing things, but it took a lot of pressure from the market. How are you going to make money? You know I always say it's hard to make money when you're trying to do it too early. So CoreOS lands at Red Hat, who has generations of commercialization. Those two together are really going to give Red Hat the capability to go to the next level when you talk about applications. It's going to increase their total addressable market. It's going to give them more range. And with Kubernetes becoming the de facto standard, OpenShift now can become a key platform as a service that really enables new applications, new management capabilities. This should expand the RHEL opportunity from a market standpoint in a significant, meaningful way. I think if you're like a financial analyst or you're out there looking at this going, hmm, where are the dots connecting? It's connecting up the stack, software to service, with DevOps, with cloud native; Red Hat is positioned well. So that's my takeaway from KubeCon. >> Interesting. Yeah, before we move away from CoreOS, a lot of announcements today about how Red Hat will be incorporating CoreOS technologies into their platform. They talked about the operator framework. I think one of the bigger pieces of news is that CoreOS's OS, called Container Linux, changes its name back to CoreOS and will now be the standard container operating system for Red Hat. That's kind of big news, because Red Hat had its own Atomic Host, its own kind of micro, mini Linux distribution, and so now they're switching over to that. They also talked about Tectonic, which actually is a really good automated operations stack; some of those technologies, in the future, will be incorporated into OpenShift. So they were talking a little bit about futures, but at least they've given a roadmap. No one was quite sure what the super-smart rocket scientists at CoreOS were doing here, and so now we know a little more. >> And also at KubeCon they announced the open-sourcing of the Operator Framework. It's an open source toolkit for managing Kubernetes clusters. Again, and first of all, I love the CoreOS name. This is all about what Red Hat is doing. Now let's not forget the ecosystem that Red Hat has. So you're talking about a company that's been successful in open source for multiple generations now. Looking forward to this next generation of modern infrastructure, you're seeing the stack look completely different with the cloud. If you look at all the presentations from Amazon, Google, Microsoft, the stack is not the old stack. It's a new concept.
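The Operator Framework mentioned here is a Go toolkit (operator-sdk and friends), so the following is only a Python sketch of the watch-and-reconcile loop that operators are built around, using the official `kubernetes` client. The custom resource group, version, and plural, and the convention that each object owns a same-named Deployment, are hypothetical.

```python
# Sketch of an operator-style watch-and-reconcile loop. The custom resource
# ("example.com/v1", plural "caches") and the same-named-Deployment convention
# are hypothetical; a production operator would be built in Go with operator-sdk.
from kubernetes import client, config, watch

config.load_kube_config()  # use load_incluster_config() when running in a pod
custom = client.CustomObjectsApi()
apps = client.AppsV1Api()

GROUP, VERSION, PLURAL, NAMESPACE = "example.com", "v1", "caches", "default"


def reconcile(resource: dict) -> None:
    """Drive actual state toward the spec declared on the custom resource."""
    name = resource["metadata"]["name"]
    desired = resource.get("spec", {}).get("replicas", 1)
    scale = client.V1Scale(spec=client.V1ScaleSpec(replicas=desired))
    # Assumes a Deployment with the same name already exists in the namespace.
    apps.patch_namespaced_deployment_scale(name=name, namespace=NAMESPACE, body=scale)


w = watch.Watch()
for event in w.stream(custom.list_namespaced_custom_object,
                      GROUP, VERSION, NAMESPACE, PLURAL):
    if event["type"] in ("ADDED", "MODIFIED"):
        reconcile(event["object"])
```

A real operator layers on leader election, status reporting, and retry with backoff; the core idea is just this loop: observe declared state, compare it to actual state, and act.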
New things are happening, so you've got to swap some pieces out. You get CoreOS, you bring that in, new puzzle piece. But look at the deals they're doing. They did a relationship with IBM, so IBM's back into the fold with Red Hat, joining forces. >> Containerizing some of their biggest components, like WebLogic and Db2 and MQ. >> I think the containerization will create a nice compatibility mode, bring these old legacy apps into a modern cloud native architecture, and gives them an opportunity to kind of get into the game, but also bring cloud native to the table. >> Absolutely. >> You've got IoT Edge, all these new applications. You just can't go anywhere without hearing about Internet of Things, machine learning, AI, cameras, whatnot. All this is happening. >> Absolutely. So we're going to break it down all week for the next three days. Red Hat Summit. It's all about containers, it's all about the Linux moment, kind of going to the next level. Cloud native, big-time data action. All the great stuff happening. All done with open source, with projects, with new products being commercialized from these projects. This is the open source ethos. This is, of course, theCUBE coverage. We'll be back with more live coverage here in San Francisco at Moscone West after this short break.

Published Date : May 8 2018


Sidhartha Argawal and Mark Cavage, Oracle - DockerCon 2017 - #theCUBE - #DockerCon


 

(upbeat electronic music) >> Announcer: Live, from Austin, Texas, it's theCUBE, covering DockerCon 2017. Brought to you by Docker and with support from its ecosystem partners. >> Hi, I'm Stu Miniman, and welcome back to theCUBE's coverage of DockerCon 2017. Happy to welcome to the program one of the Keynote speakers from this morning. It's Mark Cavage, who is the Vice President of Engineering with Oracle, and also joining is Sidhartha Argawal, who's the Vice President of Product Management and Strategy, also with Oracle. You've been on the program a few times, thanks for joining us again. And Mark, thank you for joining us for the first time on theCUBE. >> Absolutely, glad to be here. >> So, you know, one of the topics we've been talking about this week is kind of the maturation of what goes on in containers, and the thing that jumped out at me is, you know, we talk about all the use cases, some of the cool things you're doing, it's like, "What applications do I run in containers?" Pretty much all applications that I'm running. And, I've said, the stickiest application that's out there today is the one that your company does. You know, you talked about the Database, talked about some of your products. You know, Oracle, very well known as to kind of what your applications do. So, you know, in the Keynote this morning, I mean, there was actually like a pretty good round of applause talking about your announcement. So, Mark, let's start with you as to the announcement you made, you know, the partnership with Docker, and what's happening. >> Sure. Yeah, no, absolutely. Honestly, like, we're really thrilled about it. We're really excited leading up to this. You know, as I say, or as I said, there's a few people that know about that Database and know about Java. So, we got a lot of people using our apps. You know, we've been working with Docker for a few months. It's a great partnership. As we, you know, kind of announced in the partnership, or in the Keynote, sorry, you know, we put out basically everything that's important, right. So, we started with the bedrock software that people are using to build all the modern or their traditional, mission-critical applications, now modernized. So, Database, WebLogic, Java, Linux, that's all certified now in Docker. So, it's a big deal for us. We're really happy about it. >> Great, it's interesting to hear. It's like, "Oh, we've been a great partnership "for a few months." I mean, you know, application development, you know, it can take decades for things to change. Talk about how this fits into the kind of overall strategy, the platforms you build, and what's happening at Oracle these days. >> Yeah, I mean, developers are wanting to leverage the Oracle content in the containerized format so that they could easily, for example, not have to worry about patching, upgrading, et cetera. They could easily move those into production. So, what we're doing is we're connecting a lot with developers by having a series of events called Oracle Code Events, where these are free events where we're inviting developers to come. The topics are containers, microservices, dev-ops, chatbots, machine learning, and it's not about Oracle delivering all the sessions in those events. We opened up a call for papers and in three months we got 1800 submissions from external speakers to deliver sessions. So, it's about a 50-50 split between external speakers and internal Oracle speakers talking about all exciting, sort of, areas in dev-ops, in containers, in microservices.
We created a developer portal so developers can go to that portal and, from Oracle, get access to all the assets that are there. We're creating an Oracle Champions program, called Oracle Gurus, so that people who are really good, who really want to be blogging and talking about content, can get recognized by Oracle. So, we're doing a lot to connect with developers. >> That's great. And, you know, in the Keynote, you talked about this being free for test and dev purposes. Got to ask you about what's probably your favorite question, though, which is, you know, the audience... You know, I looked on social media and it's like, "All right, what does this mean "when I containerize from a licensing standpoint?" We've all seen kind of, you know, cloud pricing models, if it's, you know, Oracle versus if I'm using, say, AWS. So, what is the licensing impact when we go to a containerized environment? >> I know, honestly it's not any different than we are today, but, you know, we'll be clarifying it over the next couple months. >> Stu: Okay. >> As I said, we'll be iterating a lot with Docker Store and all their software catalog we put out there. It's, you know, stay tuned for more. >> And I think the one thing to add is that, you know, the key benefit that developers get is, for example, if they go to Docker Hub today, you have 80 different images that different people have put up for WebLogic or for Oracle Databases. You don't know which one you want to use, right. But, when you come to Docker Store, Oracle has certified the images and put those images up. So now, you can get support from Oracle. It's certified by Oracle. And then, if you report problems, Oracle knows which images to fix or what problems to fix, as opposed to some random images that might be there on Docker Hub. >> Yep. >> Yep. >> Yeah, that's been a real problem, so it's a big deal. >> Yeah. >> So, we've seen a lot of diversity as to how users can consume the applications. Maybe, give us a little insight as to how things are going in Oracle. I mean, you know, you've got your staff, you've got your cloud, you know, we talked about containers here. I mean, it's, you know, rapid change in something that, you know, overall, I mean, the application they're using doesn't drastically change overnight. Consumption models. >> Yeah, no, you know, honestly the company's been going through a huge transformation over the last few years, as I'm sure you've been told, as I'm sure Sidhartha has told you. You know, we're actually containerizing ourselves, internally, across the board. Almost all the new PaaS software we're building, almost all of the new IaaS software we're building, we're building towards that. All of our PaaS software, all of our IaaS software, we're going to pay-by-the-hour, fully metered, fully usage-based pricing. >> So, you know, we want to make sure that people can consume in a subscription-based format, and it goes across application development, cloud services, across Integration Cloud Services, analytics, management from the cloud, identity, et cetera; everything is on a subscription basis, and we're also enabling this on-premise. So, there's developers who work at financially-sensitive companies that have compliance issues, or that work in companies within countries that have data residency issues, and they're unable to benefit from the rapid innovation that's happening in the cloud. So, we're actually providing that same subscription model in their data center.
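A small sketch of the developer-side difference described above, using the Docker SDK for Python: pull a certified image from the Docker Store namespace instead of an arbitrary community image, then run it. The repository path, tag, and environment variable are illustrative and should be checked against Oracle's actual Store listing; the official images also require accepting the license terms and a prior `docker login`.

```python
# Pull a certified image from the Docker Store namespace rather than an
# arbitrary community image, then run it. Repository path, tag, and the
# environment variable are illustrative, not an official Oracle reference.
import docker

client = docker.from_env()

IMAGE = "store/oracle/database-enterprise:12.2.0.1"  # hypothetical Store path

client.images.pull(IMAGE)  # assumes you are logged in and have accepted the terms

db = client.containers.run(
    IMAGE,
    name="oradb-dev",
    detach=True,
    ports={"1521/tcp": 1521},
    environment={"DB_SID": "ORCLCDB"},  # placeholder variable name
)
print(db.short_id, db.status)
```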
So, we ship an appliance, they start using the appliance, and we're actually delivering the service on that appliance. So, they could do dev-test in the public cloud, and then, you know, do production on-prem where they're meeting the compliance requirements, data residency requirements, and Oracle is managing that environment. You're not buying the appliance. You're actually buying the service just as you were buying it in the public cloud. >> Mark: And the pricing is identical. >> And the pricing is identical between public cloud and what you get delivered as public cloud in a data center, yes. >> One of the things, you know, those of us that watch Oracle for a long time. You know, people have the perception of what Oracle is. I've seen a number of, you know, really good people that I know, Oracle's hired over the last few years. Mark, I mean you were called one, you know, one of those rock star developers. You've got a really good pedigree from some of the previous clouds. Give us a little insight as to what you see from an engineering culture, you know, architecture standpoint. You know, is this the Oracle... That, when you joined Oracle, is this what you expected? You know, what's it really like inside? >> Yeah, honestly, as I said, really the company is changing across the board a lot faster than people realize. And that's true for both, you know, the rock stars that were already in the company and the rock stars that are coming into the company now. You know, you've interviewed the Seattle team before about some of the cloud up there. We've brought in several hundred people from outside companies, from, you know, really strong pedigrees, right, Googles, Amazons, Microsofts, et cetera. We've done a ton of hiring in the Bay Area. We've brought in a lot of start-up talent. We've done, you know... There's been, of course, a few acquisitions. We bring in really solid teams, and then, honestly, just the culture, itself, is changing. Really, you know, the transformation to a cloud company actually impacts everything, right. It impacts the way you do support. It impacts the way you do development. It impacts the way you do operations. It impacts everything, so. >> Well, I think, you know, if you think about it, we're going from a company that built airplanes and sold those airplanes to others, for example, Boeing selling airplanes to Air France, et cetera, to actually becoming an airline where you're now not just building the airplane, you're actually flying the airplane, operating the airplane. So, in the Development and Engineering organizations, the engineers are understanding that they need to understand what the impact is on Operations of what they're releasing. They can't say, "Oh, send me the log files. "I'll log a ticket," because by that time it's affected many people. So, one, they have to create transparency into what's happening in production in real-time. Two, be able to respond and react to that in real-time. And the other thing that is a change in culture, both in Engineering and actually across the board including in Sales, is customer success. In cloud, people expect to get value in three months, four months, six months, et cetera. So, having a very significant focus on ensuring customer success within three to four months, right, then, they will renew their subscriptions. They will continue working with us. So, there's actually a very significant change in culture that's happening.
And the other thing is, we're not just going after the large enterprises that used to be the bread-and-butter for Oracle, but now we also have small-medium businesses, start-ups, et cetera, saying, "Hey, if I don't have to worry about installing, managing, configuring Oracle Databases, Oracle content, I can just go use the capabilities that are being provided by Oracle and pay for it as a subscription." And so, we're really shooting towards developers realizing that the Oracle cloud platform is an open, modern, easy platform. Open, because they have a choice of programming languages, Java SE, PHP, Ruby. Open, in terms of database choices, not just the Oracle Database, but MySQL, Cassandra, MongoDB, and Hadoop clusters. And open in terms of choice of deployment shapes, right, where you can have VMs, you can have bare metal, you can have containers, or you could have serverless computing. >> Yeah, you brought up speed. You know, the pace of change is just phenomenal. I think about the traditional kind of software life-cycles versus, you know, where Docker is today. I mean, you used to go from 18 months down to six weeks. So, kind of a two-part question. How are you guys, internally, managing that pace of change? And how are you helping your customers, you know, manage that pace of change? You know, Docker has the CE and the EE. So, do you want to be more bleeding edge, everything else, or do you want something that's a little more stable? How do you guys view it internally and externally? >> Yeah, no, that's a great question. Certainly, internally, we're, you know, we're as bleeding edge as... We just talked about this a second ago. You know, we're moving fast. We're shipping software every day. The interesting thing, I find, is actually customers are going through the same transformation. And most people don't realize, when they go to microservices, actually, it's a big organizational change, right. Like, it changes the way that you have to structure your team. It changes the way they communicate with each other. And so, honestly, you know, a huge part... To the previous question, a huge part of this for us is, we need to be doing this because our customers are doing it too, right. So, we need to have empathy. So, we're doing that. >> Well, and I think, in terms of speed, you know, previously Oracle might release on-prem software once every 12, 18, 24 months. Now, I'll give you the example of the Integration Cloud Service. We've had four releases of it, four to five releases of it, within a year. So, you know, the rate at which we're actually getting the releases out, getting the content out, means that customers are getting innovation much faster. And also what we're doing is, we're taking input from customers on the releases that have happened, so that we're actually prioritizing the input that we're getting plus the roadmap that we've set up, to say, "Hey, what should we be working on next?" So, our roadmaps are actually changing in flight. So, it's not like you set the roadmap for the next nine months or 12 months, but you're actually saying, "Hey, but this is the input we got, and we need to deliver faster," you know, or, "We need to deliver a different set of capabilities within that same time frame." And I think customers are now getting used to the fact that they don't have to get the new build, install the build, manage, configure, make changes, et cetera. They're saying, "I just got the new capabilities.
"My application still works "and now if I want to use that capabilities, "I can start leveraging it," right. So, for example, orchestration was added to the Integration Cloud Service. They didn't have to do anything to their existing integrations but now they could use orchestration for more complex integrations if they wanted. >> Yeah, want to give you both a final word on this. Either, you know, conversation you've had with, you know, a customer or partner, or, you know, key takeaway you want to have people beyond what we've covered already. Mark? >> Yeah, no, you know, honestly, I really said it this morning in the Keynote where we really are focused on developers. Developers really are driving decisions these days. We know that. This announcement from us, with Docker, was the first of many things you're going to see. We absolutely committed, so stay tuned for more. >> Mark: One more developer and will, will, will... >> Oh yeah, you told, you warned me about that. >> Yeah, absolutely, Sidhartha. >> I think that, you know, what we've heard is developers are surprised when they find out the capabilities we have to help them build microservices, container-based applications. Being able to have a run time for microservices, being able to have API management for all the API services and microservices, being able to have a monitoring management infrastructure from the cloud so they don't have to install it and having a CI/CD pipeline all provided to them as a service in the cloud, wonderful, that's the feedback that we've gotten for those who've come and tried the Oracle cloud platform. >> All right. Sidhartha, Mark, thank you so much for joining us, giving the update. Congratulations on the announcement today. Know a lot of people will be checking out the Docker Store to understand that is, yeah... Well, we'll have to talk sometime about kind of the enterprise app store, in general, and where these all live, but we'll be back with more coverage, here. You're watching theCUBE. (upbeat electronic music)

Published Date : Apr 19 2017
