
Search Results for ECS:

RuairĂ­ McBride, Arrow ECS & Brian McCloskey, NetApp| NetApp Insight Berlin 2017


 

>> Narrator: Live from Berlin, Germany, it's theCUBE, covering NetApp Insight 2017, brought to you by NetApp. >> Welcome back to theCUBE's live coverage of NetApp Insight 2017. We're here in Berlin, Germany, I'm your host, Rebecca Knight, along with my cohost, Peter Burris. We have two guests on the program now, we have Ruairí McBride, who is the technical account manager at Arrow, and Brian McCloskey, who is the vice president worldwide for hyperconverged infrastructure at NetApp. Brian, Ruairí, thanks so much for coming on the show. >> Thanks. >> Let me start with you, Brian. Talk a little bit, tell our viewers a little bit about the value that HCI delivers to customers, especially in terms of simplifying the data. >> In a nutshell, what NetApp HCI does is it takes what would normally be hours and hours to implement a solution, and hundreds of inputs, generally over 400 inputs, and it simplifies it down to under 30 inputs in an installation that will be done within 45 minutes. Traditionally, HCI solutions have similar implementation characteristics, but you lose some of the enterprise flexibility and scale that customers of NetApp have come to expect over the years. What we've done is we've provided that simplicity while allowing customers to have the enterprise capabilities and flexibility that they've grown accustomed to. >> Is this something that you were talking about with customers? In terms of the simplicity, what were you hearing from customers? >> Most customers these days are challenged with how everybody has to find a way to do more with less, or to do minimally a lot more with the same. If you think of NetApp, we've always been wonderful about giving customers a great production experience. When you buy a typical NetApp product, you're gonna own it for three, four or five years and it will continue. NetApp has always been great for that three, four and five year time frame, and what we've done with HCI is we've really simplified the beginning part of that curve: how do you get it from the time it lands on your dock to implemented and usable by your users in a short time frame. That's what HCI has brought to the NetApp portfolio that's incremental to what was there before. >> One of the advantages to third parties that work closely with NetApp is that by having a simpler approach of doing things, you can do more of them, but on the other hand, you want to ensure that you're also focused on the value add. In the field, when you're sitting down with a customer and working with them to ensure that they get the value that they want from these products, how do you affect that balance, as the product becomes simpler and the customer is now able to focus more on other things, other than configuration and implementation? >> Being able to get to doing something with your data is the key. You need a low bar of entry, which a lot of the software and hardware providers are trying to do today. I think HCI just helps to pull all of that together, which is great. We're hearing from third party vendors that it's great that from day one they've been integrated into the overall portfolio message, and I think customers are just gonna be pretty excited with what they can do from zero with this hardware. >> When you think about ultimately how they're gonna spend their time, what are they going to be doing instead of all this configuration work? What is Arrow gonna be doing now that you're not doing that value-added configuration work?
>> Hopefully, we'll be helping them to realize the full potential of what they bought. Rather than spending a lot of time trying to make the hardware work, they're concentrating more on delivering a service or an application back to the business that's gonna generate some revenue. At Arrow we're talking a lot to people about IoT, and it's gonna be the next wave of information that people are gonna have to deal with, and having a stable product that can support that and provide valuable information back to the business is gonna be key. >> Brian, HCI, as you noted, dramatically reduces the time to get to value, not only now, but it also sustains that level of simplicity over the life of the utilization of the product. How does it fit into the rest of the NetApp product set, the rest of the NetApp portfolio? What does it make better, and what makes it better, in addition to just the HCI product? >> NetApp has a really robust portfolio of offerings that we, at a high level, categorize into our next-generation offerings, which are SolidFire, FlexPod SolidFire, StorageGRID and hyperconverged, and then the traditional NetApp ONTAP-based offerings. The glue between the whole portfolio is the data fabric, and HCI is very tightly integrated into the data fabric. One of the innovations we are delivering is SnapMirror integration of our HCI platform into the traditional ONTAP family of products. You can seamlessly move data from our hyperconverged system to a traditional ONTAP-based system, and it also gives you seamless mobility to either your own private cloud or to public cloud platforms. As a company with a wide portfolio, it gives us the ability to be consultative with our partners and our customers. What we want is, and we feel, customers are best served on NetApp, and we want them to use NetApp, and if an ONTAP-based system is a better solution for them than hyperconverged, then that's absolutely what we will recommend for them. To your earlier question about the partners, one of the interesting things with HCI is it's the first time as NetApp we're delivering an integrated system with compute and with a hypervisor. It comes preconfigured with VMware, and it's a wonderful opportunity for our partners to add incremental value through the sales cycle to what they've brought to NetApp in the past. Because as NetApp, we're really storage experts, whereas our partners have a much wider and deeper understanding of the whole ecosystem than we do. It's been interesting for us to have discussions with partners, 'cause we're learning a lot, because we're now involved in layers and we're deeply involved at higher levels of the stack than we have been. >> I'm really interested in that, because you say that you have this consultative relationship with these customers. How are you able to learn from them, their best practices, and then do you transfer what you've learned to other partners and other customers? >> From the customer, we try and disseminate the learning as much as we can, but we're a huge organization with many account teams. It all starts with what the customer wants to accomplish. Minimally, they need a solution that's gonna plug in and do what they expect it to do today. The more important part is what their vision is for where they wanna be three years down the road, five years down the road, 10 years down the road. It's that vision piece that tends to drive more towards one part of the portfolio than the other. >> Take us through how this works.
You walk into an account, presumably Arrow ECS has a customer. The Arrow ECS customer says, "Well, we have an issue that's going to require some specialized capabilities in how we use our data." You can look at a lot of different options, but you immediately think NetApp. What is it that leads you to NetApp HCI versus ONTAP, versus SolidFire? Is there an immediate characteristic that makes you say, "That's HCI"? >> I would say that the driving factor was the fact that they wanted something that's simple and easy to manage. They want to get a Mongo database up and running, or they've got some other application that really depends on their business. The underlying hardware needs to function. Brian was saying that it's got Element OS sitting underneath it, which is in its 10th iteration, and you've got VMware version six, which is the most adopted virtualization platform out there. These are two best-of-breed partnerships coming together, and people are happy with that, and can move, and manage it from a single pane of glass moving forward, from day one right the way through to when they need to transition to a new platform, which is seamless for them. That's great from an application standpoint, because you don't wanna worry about the health of things, you wanna be able to give an application back to the business. We talked about education, and this event is geared towards bringing customers together with NetApp and understanding the messaging around HCI, which is great. >> What are the things that you keep hearing from customers? This need for data simplicity, this need for huge time-saving products and services? What do you think, if you can think three to five years down the road, what will the next generation of concerns be, and how are you, I'm gonna use the word that we're hearing a lot, future-proofing what you're doing now to serve those customers' needs of the future? >> Three to five years down the road. I can't predict three to five years out very reliably. >> But you can predict that they're gonna have more data, they're going to merge it in new and unseen ways, and they need to do it more cheaply. >> The future proofing really comes in from the data fabric. With the integration into the data fabric, you could have information that started on a NetApp system that was announced eight years ago seamlessly move into a SolidFire all-flash array, which seamlessly moves to a hyperconverged system, which seamlessly moves to your private cloud, which eventually moves off to a public cloud, and you can bring it back in at any tier, and wherever you want that data in six, seven, eight years, the data fabric will extend to it. Within each individual product, there are investment protection technologies within each one, but it's the data fabric that should make customers feel comfortable that no matter where they're gonna end up, taking their first step with NetApp is a step in the right direction. >> The value-added ecosystem that NetApp and others use, and Arrow ECS has a big play around that, has historically been tied back into hardware assets. How does it feel to be moving more into worrying about your customers' data assets? >> I think it's an exciting time to be bringing those things together.
At the end of the day, it's what the customer wants. They want a solution that integrates seamlessly, whether that be from the rack right the way up to the application. They want something that they can get on their phone, they want something they can get on their tablet, they want the same experience regardless of whether they're in an airplane or right next to the data center. The demand on data is huge and will only get bigger over the next five years. I was looking at a cover of Forbes magazine from a number of years ago about Nokia and how can anybody ever catch them, and where are they now? I think you need to be able to spot the changes and adapt quickly, and to steal one of the comments from the keynote yesterday, moving from a survivor to a thriver with your data is gonna be key to those companies. >> In talking about the demands on data growing, it's also true that the demands on data professionals are growing too. How is that changing the way you recruit and retain top talent? >> For us, as NetApp, if you were to look at what we wanted in a CV five years ago, we wanted people that understood storage, we wanted people that knew about volumes, that knew about data layouts, that knew how to maximize performance by physical placement of data. Now what we're looking for is people that really understand the whole stack, that can talk to customers about their application needs and their business problems, and can talk to developers. Because what we've done is we've taken those people that were good at all those other things I mentioned, and when you ask them what they loved about this product, none of them ever came back and said, I loved the first week I spent installing it. We've taken that away and we've let them do more interesting work. A challenge for us, as a collective society, is to make sure we bring people forward from an education and skills enablement perspective, so they're capable of rising to that next level of demand, but we're taking a lot of the busy work out. >> Making sure that they have the skills to be able to take what they're seeing in the data and then take action. >> We want our customers to look at NetApp as a data expert that can work with them on their business problem, not a storage expert that can explain how an array works. >> Brian, Ruairí, thank you so much for coming on the show, it's been a great conversation. >> Thank you. >> Thank you very much. >> You are watching theCUBE, we will have more from NetApp Insight, I'm Rebecca Knight for Peter Burris, in just a little bit.

Published Date : Nov 14 2017


Jay Marshall, Neural Magic | AWS Startup Showcase S3E1


 

(upbeat music) >> Hello, everyone, and welcome to theCUBE's presentation of the "AWS Startup Showcase." This is season three, episode one. The focus of this episode is AI/ML: Top Startups Building Foundational Models, Infrastructure, and AI. It's great topics, super-relevant, and it's part of our ongoing coverage of startups in the AWS ecosystem. I'm your host, John Furrier, with theCUBE. Today, we're excited to be joined by Jay Marshall, VP of Business Development at Neural Magic. Jay, thanks for coming on theCUBE. >> Hey, John, thanks so much. Thanks for having us. >> We had a great CUBE conversation with you guys. This is very much about the company focuses. It's a feature presentation for the "Startup Showcase," and the machine learning at scale is the topic, but in general, it's more, (laughs) and we should call it "Machine Learning and AI: How to Get Started," because everybody is retooling their business. Companies that aren't retooling their business right now with AI first will be out of business, in my opinion. You're seeing massive shift. This is really truly the beginning of the next-gen machine learning AI trend. It's really seeing ChatGPT. Everyone sees that. That went mainstream. But this is just the beginning. This is scratching the surface of this next-generation AI with machine learning powering it, and with all the goodness of cloud, cloud scale, and how horizontally scalable it is. The resources are there. You got the Edge. Everything's perfect for AI 'cause data infrastructure's exploding in value. AI is just the applications. This is a super topic, so what do you guys see in this general area of opportunities right now in the headlines? And I'm sure you guys' phone must be ringing off the hook, metaphorically speaking, or emails and meetings and Zooms. What's going on over there at Neural Magic? >> No, absolutely, and you pretty much nailed most of it. I think that, you know, my background, we've seen for the last 20-plus years. Even just getting enterprise applications kind of built and delivered at scale, obviously, amazing things with AWS and the cloud to help accelerate that. And we just kind of figured out in the last five or so years how to do that productively and efficiently, kind of from an operations perspective. Got development and operations teams. We even came up with DevOps, right? But now, we kind of have this new kind of persona and new workload that developers have to talk to, and then it has to be deployed on those ITOps solutions. And so you pretty much nailed it. Folks are saying, "Well, how do I do this?" These big, generational models or foundational models, as we're calling them, they're great, but enterprises want to do that with their data, on their infrastructure, at scale, at the edge. So for us, yeah, we're helping enterprises accelerate that through optimizing models and then delivering them at scale in a more cost-effective fashion. >> Yeah, and I think one of the things, the benefits of OpenAI we saw, was not only is it open source, then you got also other models that are more proprietary, is that it shows the world that this is really happening, right? It's a whole nother level, and there's also new landscape kind of maps coming out. You got the generative AI, and you got the foundational models, large LLMs. Where do you guys fit into the landscape? Because you guys are in the middle of this. How do you talk to customers when they say, "I'm going down this road. I need help. I'm going to stand this up." 
This new AI infrastructure and applications, where do you guys fit in the landscape? >> Right, and really, the answer is both. I think today, when it comes to a lot of what for some folks would still be considered kind of cutting edge around computer vision and natural language processing, a lot of our optimization tools and our runtime are based around most of the common computer vision and natural language processing models. So your YOLOs, your BERTs, you know, your DistilBERTs and what have you, so we work to help optimize those, again, which have gotten great performance and great value for customers trying to get those into production. But when you get into the LLMs, and you mentioned some of the open source components there, our research teams have kind of been right in the trenches with those. So kind of the GPT open source equivalent being OPT, being able to actually take, you know, a multi-hundred-billion-parameter model and sparsify that or optimize that down, shaving away a ton of parameters, and being able to run it on smaller infrastructure. So I think the evolution here, you know, all this stuff came out in the last six months in terms of being turned loose into the wild, but we're staying in the trenches with folks so that we can help optimize those as well and not require, again, the heavy compute, the heavy cost, the heavy power consumption as those models evolve as well. So we're staying right in with everybody while they're being built, but trying to get folks into production today with things that help with business value today.
If I don't already know how to build this into my infrastructure, does my ITOps teams, do they know how to do this, and what does that runway look like? How do I cost for this? How do I plan for this? When it's just x86 compute, we've been doing that for a while, right? So it obviously still requires more, but at least it's a little bit more predictable. >> It's funny you mentioned AI native. You know, born in the cloud was a phrase that was out there. Now, you have startups that are born in AI companies. So I think you have this kind of cloud kind of vibe going on. You have lift and shift was a big discussion. Then you had cloud native, kind of in the cloud, kind of making it all work. Is there a existing set of things? People will throw on this hat, and then what's the difference between AI native and kind of providing it to existing stuff? 'Cause we're a lot of people take some of these tools and apply it to either existing stuff almost, and it's not really a lift and shift, but it's kind of like bolting on AI to something else, and then starting with AI first or native AI. >> Absolutely. It's a- >> How would you- >> It's a great question. I think that probably, where I'd probably pull back to kind of allow kind of retail-type scenarios where, you know, for five, seven, nine years or more even, a lot of these folks already have data science teams, you know? I mean, they've been doing this for quite some time. The difference is the introduction of these neural networks and deep learning, right? Those kinds of models are just a little bit of a paradigm shift. So, you know, I obviously was trying to be fun with the term AI native, but I think it's more folks that kind of came up in that neural network world, so it's a little bit more second nature, whereas I think for maybe some traditional data scientists starting to get into neural networks, you have the complexity there and the training overhead, and a lot of the aspects of getting a model finely tuned and hyperparameterization and all of these aspects of it. It just adds a layer of complexity that they're just not as used to dealing with. And so our goal is to help make that easy, and then of course, make it easier to run anywhere that you have just kind of standard infrastructure. >> Well, the other point I'd bring out, and I'd love to get your reaction to, is not only is that a neural network team, people who have been focused on that, but also, if you look at some of the DataOps lately, AIOps markets, a lot of data engineering, a lot of scale, folks who have been kind of, like, in that data tsunami cloud world are seeing, they kind of been in this, right? They're, like, been experiencing that. >> No doubt. I think it's funny the data lake concept, right? And you got data oceans now. Like, the metaphors just keep growing on us, but where it is valuable in terms of trying to shift the mindset, I've always kind of been a fan of some of the naming shift. I know with AWS, they always talk about purpose-built databases. And I always liked that because, you know, you don't have one database that can do everything. Even ones that say they can, like, you still have to do implementation detail differences. So sitting back and saying, "What is my use case, and then which database will I use it for?" I think it's kind of similar here. 
And when you're building those data teams, if you don't have folks that are doing data engineering, kind of that data harvesting and pre-processing, you got to do all that before a model's even going to care about it. So yeah, it's definitely a central piece of this as well, and again, whether or not you're going to be AI native as you're making your way on that journey, you know, data's definitely a huge component of it. >> Yeah, you would have loved our Supercloud event we had. Talk about naming and, you know, around data meshes was talked about a lot. You're starting to see the control plane layers of data. I think that was the beginning of what I saw as that data infrastructure shift, to be horizontally scalable. So I have to ask you, with Neural Magic, when your customers and the people that are prospects for you guys, they're probably asking a lot of questions because I think the general thing that we see is, "How do I get started? Which GPU do I use?" I mean, there's a lot of things that are kind of, I won't say technical or targeted towards people who are living in that world, but, like, as the mainstream enterprises come in, they're going to need a playbook. What do you guys see, what do you guys offer your clients when they come in, and what do you recommend? >> Absolutely, and I think where we hook in specifically tends to be on the training side. So again, I've built a model. Now, I want to really optimize that model. And then on the runtime side when you want to deploy it, you know, we run that optimized model. And so that's where we're able to provide value. We even have a labs offering in terms of being able to pair up our engineering teams with a customer's engineering teams, and we can actually help with most of that pipeline. So even if it is something where you have a dataset and you want some help in picking a model, you want some help training it, you want some help deploying that, we can actually help there as well. You know, there's also a great partner ecosystem out there, like a lot of folks even in the "Startup Showcase" here, that extend beyond into kind of your earlier comment around data engineering or downstream ITOps or the all-up MLOps umbrella. So we can absolutely engage with our labs, and then, of course, you know, again, partners, which are always kind of key to this. So you are spot on. I think what's happened with this, they talk about a hockey stick. This is almost like a flat wall now with the rate of innovation right now in this space. And so we do have a lot of folks wanting to go straight from curious to native. And so that's definitely where the partner ecosystem comes in so hard, 'cause there just isn't anybody or any teams out there that literally go from, "Here's my blank database," to, "I want an API that does all the stuff," right? Like, that's a big chunk, but we can definitely help with the model to delivery piece.
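As a concrete illustration of the data-harvesting and pre-processing step Jay mentions, here is a minimal sketch of that kind of pipeline stage, turning raw transcript files into clean, chunked records that a BERT-style model could later be fine-tuned on. This is generic, hypothetical Python for illustration only, not a Neural Magic tool, and the file paths, cleaning rules, and chunk size are assumptions.

```python
import json
import re
from pathlib import Path

MAX_WORDS = 200  # rough chunk size so each record fits a BERT-style context window (assumption)

def clean(text: str) -> str:
    """Strip speaker markers, stage directions, and extra whitespace from raw transcript text."""
    text = text.replace(">>", " ")           # drop speaker-change markers
    text = re.sub(r"\([^)]*\)", " ", text)   # drop "(upbeat music)" style stage directions
    return re.sub(r"\s+", " ", text).strip()

def chunks(text: str, max_words: int = MAX_WORDS):
    """Yield word chunks short enough for a transformer's context window."""
    words = text.split()
    for i in range(0, len(words), max_words):
        yield " ".join(words[i:i + max_words])

def preprocess(raw_dir: str, out_path: str) -> None:
    """Write one JSON record per chunk, ready for a downstream fine-tuning job."""
    with open(out_path, "w") as out:
        for path in sorted(Path(raw_dir).glob("*.txt")):
            for piece in chunks(clean(path.read_text())):
                out.write(json.dumps({"source": path.name, "text": piece}) + "\n")

if __name__ == "__main__":
    preprocess("transcripts/", "training_records.jsonl")  # hypothetical paths
```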
So he's a 20-year professor at MIT. Actually, he was doing a lot of work on kind of multicore processing before there were even physical multicores, and actually even did a stint in computational neurobiology in the 2010s. The impetus for this whole technology, and he has a great talk on YouTube about it, is that through his work there he kind of realized that the way neural networks are encoded and how they're executed, by kind of ramming data layer by layer through these kind of HPC-style platforms, actually was not analogous to how the human brain actually works. So on one side, we're building neural networks, and we're trying to emulate neurons. We're not really executing them that way. So with our team, one of the co-founders also being ex-MIT, that was kind of the birth of: why can't we leverage this super-performance CPU platform, which has those really fat, fast caches attached to each core, and actually start to find a way to break that model down in a way that I can execute things in parallel, not having to do them sequentially? So there are a lot of amazing talks and stuff that show kind of the magic, if you will, pardon the pun of Neural Magic, but that's kind of the foundational layer of all the engineering that we do here. And in terms of how we're able to bring it to reality for customers, I'll give one customer quote where it's a large retailer, and it's a people-counting application. So a very common application. And that customer's actually been able to show literally double the amount of cameras being run with the same amount of compute. So from a one-to-one perspective, two-to-one, business leaders usually like that math, right? So we're able to show pure cost savings, but even performance-wise, you know, we have some of the common models like your ResNets and your YOLOs, where we can actually even perform better than hardware-accelerated solutions. So we're trying to do, I need to just dumb it down to better, faster, cheaper, but from a commodity perspective, that's where we're accelerating. >> That's not a bad business model. Make things easier to use, faster, and reduce the steps it takes to do stuff. So, you know, that's always going to be a good market. Now, you guys have DeepSparse, which we've talked about on our CUBE conversation prior to this interview, and it delivers ML models through the software, so it allows for a decoupling from the hardware, right? >> Yep. >> Which is going to drive probably a cost advantage. Also, it's also probably from a deployment standpoint it must be easier. Can you share the benefits? Is it a cost side? Is it more of a deployment? What are the benefits of DeepSparse when you guys decouple the software from the hardware on the ML models? >> No, you actually hit 'em both, 'cause that really is primarily the value. Because ultimately, again, we're so early. And I came from this world in a prior life where I'm doing Java development, WebSphere, WebLogic, Tomcat open source, right? When we were trying to do innovation, we had innovation buckets, 'cause everybody wanted to be on the web and have their app and a browser, right? We got all the money we needed to build something and show, hey, look at the thing on the web, right? But when you had to get in production, that was the challenge. So to what you're speaking to here, in this situation, we're able to show we're just a Python package.
So whether you just install it on the operating system itself, or we also have a containerized version you can drop on any container orchestration platform, so ECS or EKS on AWS. And so you get all the auto-scaling features. So when you think about that kind of a world where you have everything from real-time inferencing to kind of after hours batch processing inferencing, the fact that you can auto scale that hardware up and down and it's CPU based, so you're paying by the minute instead of maybe paying by the hour at a lower cost shelf, it does everything from pure cost to, again, I can have my standard IT team say, "Hey, here's the Kubernetes and the container," and it just runs on the infrastructure we're already managing. So yeah, operational, cost and again, and many times even performance. (audio warbles) CPUs if I want to. >> Yeah, so that's easier on the deployment too. And you don't have this kind of, you know, blank check kind of situation where you don't know what's on the backend on the cost side. >> Exactly. >> And you control the actual hardware and you can manage that supply chain. >> And keep in mind, exactly. Because the other thing that sometimes gets lost in the conversation, depending on where a customer is, some of these workloads, like, you know, you and I remember a world where even like the roundtrip to the cloud and back was a problem for folks, right? We're used to extremely low latency. And some of these workloads absolutely also adhere to that. But there's some workloads where the latency isn't as important. And we actually even provide the tuning. Now, if we're giving you five milliseconds of latency and you don't need that, you can tune that back. So less CPU, lower cost. Now, throughput and other things come into play. But that's the kind of configurability and flexibility we give for operations. >> All right, so why should I call you if I'm a customer or prospect of Neural Magic, what problem do I have or when do I know I need you guys? When do I call you in and what does my environment look like? When do I know? What are some of the signals that would tell me that I need Neural Magic? >> No, absolutely. So I think in general, any neural network, you know, the process I mentioned before called sparsification, it's, you know, an optimization process that we specialize in. Any neural network, you know, can be sparsified. So I think if it's a deep-learning neural network type model, if you're trying to get AI into production, you have cost concerns, even performance-wise. I certainly hate to be too generic and say, "Hey, we'll talk to everybody." But really in this world right now, if it's a neural network, it's something where you're trying to get into production, you know, we are definitely offering, you know, kind of an at-scale performant deployable solution for deep learning models. >> So neural network you would define as what? Just devices that are connected that need to know about each other? What's the state-of-the-art current definition of neural network for customers that may think they have a neural network or might not know they have a neural network architecture? What is that definition for neural network? >> That's a great question. So basically, machine learning models that fall under this kind of category, you hear about transformers a lot, or I mentioned YOLO, the YOLO family of computer vision models, or natural language processing models like BERT.
If you have a data science team or even developers, some even regular, I used to call myself a nine to five developer 'cause I worked in the enterprise, right? So like, hey, we found a new open source framework, you know, I used to use Spring back in the day and I had to go figure it out. There are developers that are pulling these models down and they're figuring out how to get 'em into production, okay? So I think all of those kinds of situations, you know, if it's a machine learning model of the deep learning variety, that's, you know, really specifically where we shine. >> Okay, so let me pretend I'm a customer for a minute. I have all these videos, like all these transcripts, I have all these people that we've interviewed, CUBE alumni, and I say to my team, "Let's AI-ify, sparsify theCUBE." >> Yep. >> What do I do? I mean, do I just like, my developers got to get involved and they're going to be like, "Well, how do I upload it to the cloud? Do I use a GPU?" So there's a thought process. And I think a lot of companies are going through that example of let's get on this AI, how can it help our business? >> Absolutely. >> What does that progression look like? Take me through that example. I mean, I made theCUBE example up, but we do have a lot of data. We have large data models and we have people and connect to the internet, and so we kind of seem like there's a neural network. I think every company might have a neural network in place. >> Well, and I was going to say, I think in general, you all probably do represent even the standard enterprise more than most. 'Cause even the enterprise is going to have a ton of video content, a ton of text content. So I think it's a great example. So I think that that kind of sea, or I'll even go ahead and use that term data lake again, of data that you have, you're probably going to want to be setting up kind of machine learning pipelines that are going to be doing all of the pre-processing from kind of the raw data to kind of prepare it into the format that say a YOLO would actually use, or let's say BERT for natural language processing. So you have all these transcripts, right? So we would do a pre-processing pass where we would create that into the file format that BERT, the machine learning model, would know how to train off of. So that's kind of all the pre-processing steps. And then for training itself, we actually enable what's called sparse transfer learning. Transfer learning is a very popular method of doing training with existing models. So we would be able to retrain that BERT model with your transcript data that we have now done the pre-processing with to get it into the proper format. And now we have a BERT natural language processing model that's been trained on your data. And now we can deploy that onto the DeepSparse runtime so that now you can ask that model whatever questions, or I should say pass it text; you're not going to ask it those kinds of questions like ChatGPT, although we can do that too. But you're going to pass text through the BERT model and it's going to give you answers back. It could be things like sentiment analysis or text classification. You just call the model, and now when you pass text through it, you get the answers better, faster or cheaper. I'll use that reference again. >> Okay, we can create a CUBE bot to give us questions on the fly from the AI bot, you know, from our previous guests. >> Well, and I will tell you using that as an example.
So I had mentioned OPT before, kind of the open source version of ChatGPT. So, you know, typically that requires multiple GPUs to run. So our research team, I may have mentioned earlier, we've been able to sparsify that over 50% already and run it on only a single GPU. And so in that situation, you could train OPT with that corpus of data and do exactly what you say. Actually we could use Alexa, we could use Alexa to actually respond back with voice. How about that? We'll do an API call and we'll actually have an interactive Alexa-enabled bot. >> Okay, we're going to be a customer, let's put it on the list. But this is a great example of what you guys call software delivered AI, a topic we chatted about on theCUBE conversation. This really means this is a developer opportunity. This really is the convergence of the data growth, the restructuring, how data is going to be horizontally scalable, meets developers. So this is an AI developer model going on right now, which is kind of unique. >> It is, John, I will tell you what's interesting. And again, folks don't always think of it this way, you know, the AI magical goodness is now getting pushed in the middle where the developers and IT are operating. And so again, that paradigm, although for some folks it may seem obvious, again, if you've been around for 20 years, all that plumbing is a thing, right? And so what we basically help with is when you deploy the DeepSparse runtime, we have a very rich API footprint. And so the developers can call the API, ITOps can run it, or to your point, it's developer friendly enough that you could actually deploy our off-the-shelf models. We have something called the SparseZoo where we actually publish pre-optimized or pre-sparsified models. And so developers could literally grab those right off the shelf with the training they've already had and just put 'em right into their applications and deploy them as containers. So yeah, we enable that for sure as well. >> It's interesting, DevOps was infrastructure as code and we had, last season, a series on data as code, which we kind of coined. This is data as code. This is a whole nother level of opportunity where developers just want to have programmable data and apps with AI. This is a whole new- >> Absolutely. >> Well, absolutely great, great stuff. Our news team at SiliconANGLE and theCUBE said you guys had a little bit of a launch announcement you wanted to make here on the "AWS Startup Showcase." So Jay, you have something that you want to launch here? >> Yes, and thank you John for teeing me up. So I'm going to try to put this in, like, you know, the vein of an AWS main stage keynote launch, okay? So we're going to try this out. So, you know, a lot of our product has obviously been built on top of x86. I've been sharing that the past 15 minutes or so. And with that, you know, we're seeing a lot of acceleration for folks wanting to run on commodity infrastructure. But we've had customers and prospects and partners tell us that, you know, ARM and all of its kind of variants are very compelling, both cost performance-wise and also obviously with Edge. And they wanted to know if there was anything we could do from a runtime perspective with ARM. And so we got to work and, you know, it's a hard problem to solve 'cause the instruction set for ARM is very different than the instruction set for x86, and our deep tensor column technology has to be able to work with that lower level instruction spec.
But working really hard, the engineering team's been at it, and we are happy to announce here at the "AWS Startup Showcase" that DeepSparse inference, or the inference runtime, now has support for AWS Graviton instances. So it's no longer just x86, it is also ARM, and that obviously also opens up the door to Edge and further out the stack, so that optimize once, run anywhere story we're now going to open up. So it is an early access. So if you go to neuralmagic.com/graviton, you can sign up for early access, but we're excited to now get into the ARM side of the fence as well on top of Graviton. >> That's awesome. Our news team is going to jump on that news. We'll get it right up. We get a little scoop here on the "Startup Showcase." Jay Marshall, great job. That really highlights the flexibility that you guys have when you decouple the software from the hardware. And again, we're seeing open source driving a lot more in AIOps now with machine learning and AI. So to me, that makes a lot of sense. And congratulations on that announcement. Final minute or so we have left, give a summary of what you guys are all about. Put a plug in for the company, what you guys are looking to do. I'm sure you're probably hiring like crazy. Take the last few minutes to give a plug for the company and give a summary. >> No, I appreciate that so much. So yeah, join us at neuralmagic.com. You know, part of what we didn't spend a lot of time on here, our optimization tools, we are doing all of that in the open source. It's called SparseML, and I mentioned SparseZoo briefly. So we really want the data science community and ML engineering community to join us out there. And again, the DeepSparse runtime, it's actually free to use for trial purposes and for personal use. So you can actually run all this on your own laptop or on an AWS instance of your choice. We are now live in the AWS marketplace. So push button, deploy, come try us out and reach out to us on neuralmagic.com. And again, sign up for the Graviton early access. >> All right, Jay Marshall, Vice President of Business Development at Neural Magic here, talking about performant, cost effective machine learning at scale. This is season three, episode one, focusing on foundational models as far as building data infrastructure and AI, AI native. I'm John Furrier with theCUBE. Thanks for watching. (bright upbeat music)
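For readers who want to see what the inference step Jay describes looks like in code, here is a small sketch using the DeepSparse Pipeline interface roughly as it was documented around the time of this interview. The SparseZoo model stub below is illustrative rather than guaranteed to be current, so treat the exact task name and stub as assumptions and check neuralmagic.com for up-to-date details.

```python
# pip install deepsparse   (CPU-only; no GPU or other accelerator required)
from deepsparse import Pipeline

# Load a sparsified, quantized sentiment-analysis model.
# The zoo stub is illustrative; look up current stubs on SparseZoo before using.
sentiment = Pipeline.create(
    task="sentiment-analysis",
    model_path="zoo:nlp/sentiment_analysis/obert-base/pytorch/huggingface/sst2/pruned90_quant-none",
)

# Pass text through the model, as described in the interview: text in, labels and scores out,
# all running on commodity CPUs (x86 or, with the Graviton announcement, ARM).
snippets = [
    "The installation was done in under 45 minutes and the team loved it.",
    "We spent the whole first week just trying to make the hardware work.",
]
results = sentiment(snippets)
print(results.labels, results.scores)
```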

Published Date : Mar 9 2023


Anish Dhar & Ganesh Datta, Cortex | Kubecon + Cloudnativecon Europe 2022


 

>> Narrator: TheCUBE presents Kubecon and Cloudnativecon Europe, 2022. Brought to you by Red Hat, the cloud native computing foundation and its ecosystem partners. >> Welcome to Valencia, Spain, at Kubecon, Cloudnativecon Europe, 2022. I'm Keith Townsend and we are in a beautiful locale. The city itself is not that big, 100,000, I mean, sorry, about 800,000 people. And we got out, got to see a little bit of the sights. It is an amazing city. I'm from the US, and it's hard to put in context how a city of 800,000 people can be so beautiful. I'm here with Anish Dhar and Ganesh Datta, co-founder and CTO of Cortex. Anish, you're the CEO of Cortex. We were having a conversation. One of the things that I ask my clients is, what is good? And you're claiming to answer the question about what is quality when it comes to measuring microservices. What is quality? >> Yeah, I think it really depends on the company, and I think that's really the philosophy we had when we built Cortex: we understood that different companies have different definitions of quality, but they need to be able to be represented in really objective ways. I think what ends up happening in most engineering organizations is that quality lives in people's heads. The engineers who write the services are often the ones who understand all the intricacies of the service. What are the downstream dependencies, who's on call for this service? Where does the documentation live? All of these things, I think, impact the quality of the service. And as these engineers leave the company or they switch teams, they often take that tribal knowledge with them. And so I think quality really comes down to being able to objectively codify your best practices in some way and have that distributed to all engineers in the company. >> And to add to that, I think very concrete examples: for an organization that's already modern, their idea of quality might be uptime and incidents. For somebody that's going through a modernization strategy, they're trying to get to the 21st century, they're trying to get to Kubernetes. For them, quality means where are we in that journey? Are you on our latest platforms? Are you running CI, are you doing continuous delivery? Quality can mean a lot of things, and so our perspective is how do we give you the tools to say as an organization, here's what quality means to us. >> So at first, my mind was going through, when you said quality, Anish, you started out the conversation about having this kind of non-codified set of measurements, historical knowledge, et cetera. I was thinking observability, measuring how much time it takes to have a transaction. But Ganesh, you're introducing this new thing. I'm working with this project where we're migrating a monolith application to a set of microservices. And you're telling me Cortex helps me measure the quality of what I'm doing in my project? >> Ganesh: Absolutely. >> How is that? >> Yeah, it's a great question. So I think when you think about observability, you think about uptime and latency and transactions and throughput and all this stuff. And I think that's very high level, and I think that's one perspective of what quality is, but as you're going through this journey, you might say, the fact that we're tracking that stuff, the fact that you're using APM, you're using distributed tracing, that is one element of service quality. Maybe service quality means you're doing CICD, you're running vulnerability scans. You're using Docker.
What that means to us can be very different. So observability is just one aspect of: are you doing things the right way? Good to us means you're using SLOs. You are tracking those metrics. You're reporting that somewhere. And so that's like one component for our organization of what quality can mean. >> I'm kind of taken aback by this, because I've not seen someone kind of give the idea. And I think later on, this is the perfect segment to introduce theCUBE clock, in which I'm going to give you a minute to kind of give me the elevator pitch, but we're going to have the deep conversation right now. When you go in and you... What's the first process you do when you engage with a customer? Does a customer go and get this off of a repository, install it, the open source version, and then what? I mean, what's the experience? >> Yeah, absolutely. So we have both a SaaS and an on-prem version of Cortex. It's really straightforward. Basically we have a service discovery onboarding flow where customers can connect to different sources for their services. It could be Kubernetes, ECS, Git repos, APM tools, and then we'll actually automatically map all of that service data with all of the integration data in the company. So we'll take that service and map it to its on-call rotation, to the JIRA tickets that have the service tag associated with it, to the Datadog SLOs. And what that ends up producing is this service catalog that has all the information you need to understand your service. Almost like a single pane of glass to work with the service. And then once you have all of that data inside Cortex, then you can start writing scorecards, which grade the quality of those services across those different verticals Ganesh was talking about. Whether it's a monolith-to-microservice transition, whether it's production readiness or security standards, you can really start tracking that. And then engineers start understanding where the areas of risk are with my service, across reliability or security or operational maturity. I think it gives you insane visibility into what's actually being built and the quality of that compared to your standards. >> So, okay, I have a standard for SLOs that is usually something that, it might not even be measured. So how do you help me understand that I'm lacking a measurable system for tracking SLOs, and what's the next step for helping me get that system? >> Yeah, I think our perspective is very much how do we help you create a culture where developers understand what's expected of them? So if SLOs are part of what we consider observability or reliability, then Cortex's perspective is, hey, we want to help your organization adopt SLOs. And so that service cataloging concept, the service catalog says, hey, here's my APM integration. Then a scorecard, the organization goes in and says, we want every service owner to define their SLOs, we want you to define your thresholds. We want you to be tracking them, are you passing your SLOs? And so we're not being prescriptive about here's what we think your SLOs should be. Ours is more around, hey, if you care about SLOs, we're going to help you: we're going to tell the service owners, hey, you need to have at least two SLOs for your service and you got to be tracking them. And that data flows from the service catalog into those scorecards. And so we're helping them adopt that mindset of, hey, SLOs are important.
It is a component of a holistic service reliability excellence metric that we care about. >> So what happens when I already have systems for, like, SLOs? How do I integrate that system with Cortex? >> That's one of the coolest things. So the service catalog can be pretty smart about it. So let's say you've sucked in your services from your GitHub. And so now your services are in Cortex. What we can do is we can actually discover from your APM tools, we can say, hey, for this service, we have guessed that this is the corresponding APM in Datadog. And so from Datadog, here are your SLOs, here are your monitors. And so we can start mapping all the different parts of your world into Cortex. And that's the power of the service catalog. The service catalog says, given a service, here's everything about that service. Here are the vulnerability scans. Here's the APM, the monitors, the SLOs, the JIRA tickets, like all that stuff comes into a single place. And then our scorecards product can go back out and say, hey, Datadog, tell me about the SLOs for this service. And so we're going to get that information live and then score your services against that. And so we're integrating with all of your third party tools and integrations to create that single pane of glass. >> Yeah, and to add to that, I think one of the most interesting use cases with scorecards is, okay, which teams have actually adopted SLOs in the first place? I think a lot of companies struggle with how do we make sure engineers define SLOs, are passing them, and actually care about them. And scorecards can be used to, one, see which teams are actually meeting these guidelines, and then two, let's get those teams adopted on SLOs. Let's track that. You can do all of that in Cortex, which is, I think, a really interesting use case that we've seen.
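To make the scorecard idea concrete, here is a small conceptual sketch of the rule Ganesh describes (every service should define at least two SLOs and actually track them), rolled up into a per-team score. This is hypothetical Python for illustration only, not Cortex's actual API or query language, and the rule names and data shapes are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Service:
    name: str
    team: str
    slos: list = field(default_factory=list)  # e.g. [{"objective": 99.9, "tracked": True}]
    has_oncall: bool = False
    has_runbook: bool = False

def slo_rule(svc: Service) -> bool:
    """Rule from the interview: at least two SLOs, and every one of them actively tracked."""
    return len(svc.slos) >= 2 and all(s.get("tracked") for s in svc.slos)

RULES = {
    "slo_adoption": slo_rule,
    "oncall_defined": lambda svc: svc.has_oncall,
    "runbook_published": lambda svc: svc.has_runbook,
}

def scorecard(services: list) -> dict:
    """Roll per-service pass/fail results up into a per-team percentage score."""
    teams: dict = {}
    for svc in services:
        passed = sum(bool(rule(svc)) for rule in RULES.values())
        teams.setdefault(svc.team, []).append(passed / len(RULES))
    return {team: round(100 * sum(scores) / len(scores)) for team, scores in teams.items()}

services = [
    Service("payments-api", "payments", has_oncall=True, has_runbook=True,
            slos=[{"objective": 99.9, "tracked": True}, {"objective": 99.5, "tracked": True}]),
    Service("legacy-batch", "platform", has_oncall=False, has_runbook=True, slos=[]),
]
print(scorecard(services))  # e.g. {'payments': 100, 'platform': 33}
```

In a real deployment, the equivalent of these Service fields would be populated automatically from the catalog's integrations (GitHub, on-call rotations, Datadog SLOs and monitors), which is the discovery point Anish and Ganesh make above.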
Like if it's uptime, you can track uptime metrics in scorecards. If it's around velocity, you can track velocity metrics. Is it just around modernization? Are you doing CICD and vulnerability scans, like moving faster as a team? You can track that. And so you can start seeing like trends at a per team level, at a per department level, at a per product level saying, hey, we are seeing consistent progress in the metrics that we care about. And this microservice journey is helping us with that. So I think that's the kind of phased progress that we see with Cortex. >> So I'm going to give you kind of a hand wavy thing. We're told that cloud native helps me to do things faster with fewer defects so that I can go after new opportunities. Let's stretch into kind of this non-tech, this new opportunities perspective. I want to be able to move my microservices. I want to be able to move my architecture to microservices, so I reduce call wait time on my customer service calls. So I can easily see how I can measure: are we iterating faster? Are we putting out more updates quicker? That's pretty easy to measure. The number of defects, easy to measure. I can imagine a scorecard, but what about this wait time? I don't necessarily manage the call center system, but I get the data. How do I measure that the microservice migration was successful from a business process perspective? >> Yeah, that's a good question. I think it comes down to two things. One, the flexibility of scorecards means you can pipe in that data to Cortex. And what we recommend to customers is track the outcome metrics and track the input metrics as well. And so what is the input metric to call wait time? Like maybe it's the fact that if something goes wrong, we have the runbooks to quickly roll back to an older version that we know is running. That way MTTR is faster. Or when something happens, we know the owner for that service and we can go back to them and say like, hey, we're going to ping you as an incident commander. Those are kind of the input metrics to, if we do these things, then we know our call wait time is going to drop because we're able to respond faster to incidents. And so you want to track those input metrics. And then you want to track the output metrics as well. And so if you have those metrics coming in from your Prometheus or your Datadogs or whatever, you can pipe that into Cortex and say, hey, we're going to look at both of these things holistically. So we want to see is there a correlation between those input metrics, like are we doing things the right way, versus are we seeing the value that we want to come out of that? And so I think the value of Cortex is not so much around, hey, we're going to be prescriptive about it. It's here's this framework that will let you track all of that and say, are we doing things the right way and is it giving us the value that we want? And being able to report that up to engineering leadership and say, hey, maybe these services are not doing, like we're not improving call wait time. Okay, why is that? Are these services behind on the actual input metrics that we care about? And so being able to see that I think is super valuable. >> Yeah, absolutely, I think just to touch on the reporting, I think that's one of the most value-add things Cortex can provide. If you think about it, the service is the atomic unit of your software. It represents everything that's being built and that bubbles up into teams, products, business units, and Cortex lets you represent that. 
So now I can, as a CTO, come in and say, hey, these product lines, are they actually meeting our standards? Where are the areas of risk? Where should I be investing more resources? I think Cortex is almost like the best way to get the actual health of your engineering organization. >> All right, Anish and Ganesh. We're going to go into the speed round here. >> Ganesh: It's time for the Q clock? >> Time for the Q clock. Start the Q clock. (upbeat music) Let's go on. >> Ganesh: Let's do it. >> Anish: Let's do it. >> Let's go on. You're 10 seconds in. >> Oh, we can start talking. Okay, well I would say, Anish was just touching on this. For a CTO, their question is how do I know if engineering quality is good? And they don't care about the microservice level. They care about, as a business, is my engineering team actually producing? >> Keith: Follow the green, not the dream. (Ganesh laughs) >> And so the question is, well, how do we codify service quality? We don't want this to be a hand wavy thing that says like, oh, my team is good, my team is bad. We want to come in and define here's what service quality means. And we want that to be a number. You want that to be something that can- >> A goal without a timeline is just a dream. >> And a CTO comes in and they say, here's what we care about. Here's how we're tracking it. Here are the teams that are doing well. We're going to reward the winners. We're going to move towards a world where every single team is doing service quality. And that's what Cortex can provide. We can give you that visibility that you never had before. >> For that five seconds. >> And hey, your SRE can't be the one handling all this. So let Cortex- >> Shoot the bad guy. >> Shot that, we're done. From Valencia, Spain, I'm Keith Townsend. And you're watching theCube, the leader in high tech coverage. (soft music)
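
The scorecard idea Anish and Ganesh describe above can be made concrete with a small sketch. The Python below is a hypothetical illustration only: the catalog shape, check names, and scoring are assumptions for the example, not Cortex's actual schema or API, but it shows the basic move, catalog data in, a per-service grade and a list of failing checks out.

# Hypothetical sketch of "scorecard over a service catalog"; the entry shape
# and checks are illustrative assumptions, not Cortex's real schema or API.

def grade_service(entry):
    """Return (score, failing_checks) for one service catalog entry."""
    checks = {
        "has_oncall_rotation": lambda e: bool(e.get("oncall_rotation")),
        "has_two_slos": lambda e: len(e.get("slos", [])) >= 2,
        "slos_passing": lambda e: all(s.get("passing") for s in e.get("slos", [])),
        "vuln_scans_enabled": lambda e: e.get("vulnerability_scans", False),
    }
    failing = [name for name, check in checks.items() if not check(entry)]
    score = 100 * (len(checks) - len(failing)) // len(checks)
    return score, failing

payments = {
    "name": "payments-api",
    "oncall_rotation": "payments-oncall",
    "slos": [{"name": "availability", "passing": True},
             {"name": "p99-latency", "passing": False}],
    "vulnerability_scans": True,
}

score, failing = grade_service(payments)
print(f"{payments['name']}: {score}/100, failing checks: {failing}")
# -> payments-api: 75/100, failing checks: ['slos_passing']

In Cortex itself these checks are written as scorecard rules against the catalog rather than hand-coded, but the shape of the evaluation is the same.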
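
The Datadog example, discovering a service's SLOs and scoring against them, can be approximated from first principles as well. This is a rough sketch that assumes Datadog's public v1 SLO listing endpoint and a service:<name> tagging convention; Cortex's own discovery and mapping logic is not shown, and pagination and error handling are omitted.

# Rough sketch: list Datadog SLOs and apply a simple "at least two SLOs" check
# for one service. The v1 SLO list endpoint and the service:<name> tag
# convention are assumptions to verify against Datadog's API docs.
import os
import requests

def slos_for_service(service_name):
    resp = requests.get(
        "https://api.datadoghq.com/api/v1/slo",
        headers={
            "DD-API-KEY": os.environ["DD_API_KEY"],
            "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
        },
        timeout=10,
    )
    resp.raise_for_status()
    tag = f"service:{service_name}"
    return [s for s in resp.json().get("data", []) if tag in s.get("tags", [])]

service_slos = slos_for_service("payments-api")
status = "passes" if len(service_slos) >= 2 else "fails"
print(f"payments-api defines {len(service_slos)} SLOs; 'has_two_slos' {status}")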

Published Date : May 20 2022


DockerCon 2022 023 Shubha Rao


 

(upbeat music) >> Hey, welcome back to theCUBE's coverage of DockerCon Mainstage, I'm John Furrier, host of theCUBE. We're here with Shubha Rao, Senior Manager, Product Management at AWS, in the container services. Shubha, thanks for coming on theCUBE. >> Hi, thank you very much for having me, excited to be here. >> So obviously, we're doing a lot of coverage with AWS recently, on containers, cloud native, microservices and we see you guys always at the events. But tell me about what your role is in the organization? >> Yeah, so I lead the product management and developer advocacy team, in the AWS Container Services group, where we focus on elastic containers. And what I mean by elastic containers is all the AWS opinionated, out of the box solutions that we have for you, like, you know, ECS and App Runner and Elastic Beanstalk. So where we bring in our services in a way that integrates with the AWS ecosystem. And, you know, my team manages the product management and speaking to customers and developers like you all, to understand how we can improve our services for you to use them more seamlessly. >> So, I mean, I know AWS has a lot of services that have containers involved with them and it's a lot of integration within the cloud. Amazon's as cloud native as you're going to get at AWS. If I was a new customer, where do I start with containers if you had to give me advice? And then, where do I have a nice roadmap to grow within AWS? >> Yeah, no, that's a great question, a lot of customers ask us this. We recommend that the customers choose whatever is the best fit for their application needs and for their operational flexibilities. So, if you have an application which you can use pretty abstracted and, like, end to end managed by an AWS service, we recommend that you start at the highest level of abstraction that's okay to use for your application. And that means something like App Runner, where you can bring in a web application and run it like end to end. And if there are things that you want to control and tweak, then you know, we have services like ECS, where you get control and you get flexibility to tweak it to your needs. Be it needs of like, integrations or running your own agents and running your own partner solutions or even customizing how it scales and all the, you know, characteristics related to it. And of course, a lot of our customers also run Kubernetes, so if that is a requirement for you, if your apps are already packaged to run, you know, easily with the Kubernetes ecosystem, then we have EKS for you. So, like application needs, the operational, how much of the operations do you want us to handle? Or how much of it do you want to actually have control over? And with all that, like the highest level of abstraction so that we can do the work on your behalf, which is the goal of AWS. >> Yeah, well, we always hear that all that heavy lifting, undifferentiated heavy lifting, you guys handle all that. Since you're in product management, I have to ask the question 'cause you guys have a little bit longer view, as you have to think about what's on the roadmap. What type of customer trends are you seeing in container services? >> We see a lot of trends about customers who want to have the pluggability for their, you know, services of choice. And our EKS offerings actually help in that. And we see customers who want an opinionated, you know, give me an out of the box solution, rather than building blocks. And ECS brings you that experience. 
The new trends that we are seeing are that a lot of our customer workloads are also in their data centers and in their on-prem like environments. Be it branch offices or data centers or, like, you know, other areas. And so we've recently launched the Anywhere offerings for you. So, ECS Anywhere brings you an experience for letting your workloads run on infrastructure that you control, where we manage the scaling and orchestration and the whole, like, you know, monitoring and troubleshooting aspects of it. Which is the new trend, which seems to be something that our customers use as a way to migrate their applications to the cloud in the long term or just to get, you know, the same experience and the same, like, constructs that they're familiar with, come onto their data centers and their environments. >> You know, Shubha, we hear a lot about containers. It's becoming standard in the enterprise now, mainstream. But customers, when we talk to them, they kind of have this evolution, they start with containers and they realize how great it is and they become container full, right. And then you start to see kind of, them trying to evolve to the next level. And then you start to see EKS come into the equation. We see that in cloud native. Is EKS a container? Or is it a service? How does that work with everything? >> So EKS is an Amazon managed container service, where we do the operational setup, you know, upgrades and other things for the customer on their behalf. So basically, you get the same Kubernetes APIs that you get to use for your application, but we handle a little bit of the integrations and the operations related to keeping it up and running with high availability, in a way that actually meets your needs for the applications. >> And more and more people are dipping their toe in the water, as we say, with containers. What are some of the things you've seen customers do when they jump in and start implementing that kind of phase one containers? Also, there's a lot of head room beyond that, as you mentioned. What's the first couple steps that they take? They jump in, is it a learning process? Is it serverless? Where do the connection points all come together? >> Great, so, I want to say that no one solution that we have fits all needs. Like, it's not the best case, best thing for all your use cases, and not for all of your applications. So, how it all comes together is that AWS gives you an ecosystem of tools and capabilities. Some customers want to really build the, you know, castle themselves with each of the Lego blocks and some customers want it to be a ready made thing. And, you know, one of the things that I speak to customers about is to rethink which of the knobs and controls do they really need to have, you know, because none of the services we have is a one way door. Like, there is always flexibility and, you know, ability to move from one service to the other. So, my recommendation is to always, like, start with things where Amazon handles many of the heavy lifting, you know, operations for you. And that means starting with something like serverless offerings, where, like, for example, with Lambda and Fargate, we manage the host, we manage the patching, we manage the monitoring. And that would be a great place for you to use the ECS offering and, you know, basically get an end to end experience in a couple of days. 
And over time, if you have more needs, if you want more control, you know, if you want to bring in your own agents and whatever else, you have the option to use your own EC2 instances or to take it to other, like, you know, parts of the AWS ecosystem, where you want to, you know, tweak it to your needs. >> Well, we're seeing a lot of great traction here at DockerCon. And all the momentum around containers. And then you're starting to get into trust and security supply chain, as open source grows exponentially, it's growing like crazy, which is a great thing. So what can we expect to see from your team in the coming months, as this rolls forward? It's not going away anytime soon. It's going to be integrated and keep on scaling. What do we expect from the team in the next month or so? Couple of months. >> Security, you know, is our number one job. So you will continue to see more and more features, capabilities and integrations, to ensure that your workloads are secure. Availability and scaling are the things that we do, you know, to keep the lights on. So, you should expect to see all of our services growing to make it, like, more user friendly, easier, you know, simpler ways to get the whole availability and scaling to your needs, better. And then, like, you know, very specifically, I want to touch on a few services. So App Runner, today we have support for public facing web services. You can expect that the number of use cases that you can meet with App Runner is going to increase over time. We want to invest in making it an AWS end to end workflow experience for our customers because that's the easiest journey to the cloud. And we don't want you to actually wait for months and years to actually leverage the benefits of what AWS provides. ECS, we've already launched our, like, you know, Fargate and Anywhere offerings, to bring you more flexibility in terms of easier networking capabilities, more granular controls in deployment and more controls to actually help you plug in your preferred, you know, solutions. And in EKS, we are going to continue to keep up with the Kubernetes, you know, versions and, you know, bring simpler experiences for you. >> A lot of nice growth there, containers, EKS, a lot more goodness in the cloud, obviously. We have 30 seconds left. Tell us what you're most excited about personally. And what should the developers pay attention to in this conference around containers and AWS? >> I would say that AWS has a lot of offerings but, you know, speak to us, like, come to us with your questions or, you know, anything that you have, like in terms of feature requests. We are very, very eager and happy to speak to you all. You know, you can engage with us on the containers roadmap, which is on GitHub. Or you can find, you know, many of us in events like this, AWS Summits and, you know, DockerCon and many of the other meetups. Or find us on LinkedIn, we're always happy to chat. >> Yeah, always open, open source. Open source meets cloud scale, meets commercialization. All happening, all great stuff. Shubha, thank you for coming on theCUBE. Thanks for sharing. We'll send it back now to the DockerCon Mainstage. I'm John Furrier with theCUBE. Thanks for watching. (upbeat music)
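
To ground the ECS and Fargate discussion above, here is a minimal boto3 sketch that registers a Fargate task definition and creates a service on an existing cluster. The cluster name, container image, IAM role, subnet, and security group values are placeholders, and a real deployment would add logging, load balancing, and proper IAM setup.

# Minimal sketch: run a container on ECS with the Fargate launch type.
# All names, ARNs, and IDs below are placeholders.
import boto3

ecs = boto3.client("ecs")

task_def = ecs.register_task_definition(
    family="hello-web",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    # An execution role is typically needed for image pulls from ECR and for logs.
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[{
        "name": "web",
        "image": "public.ecr.aws/nginx/nginx:latest",
        "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        "essential": True,
    }],
)

ecs.create_service(
    cluster="demo-cluster",  # assumed to already exist
    serviceName="hello-web",
    taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={"awsvpcConfiguration": {
        "subnets": ["subnet-0123456789abcdef0"],     # placeholder
        "securityGroups": ["sg-0123456789abcdef0"],  # placeholder
        "assignPublicIp": "ENABLED",
    }},
)
print("Service requested; ECS handles placement, scaling, and health checks.")

EKS would instead expose the standard Kubernetes APIs, and App Runner reduces this to roughly a container image or source repository plus a port.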

Published Date : May 11 2022


Martin Glynn, Dell Technologies & Clarke Patterson, Snowflake | Dell Technologies World 2022


 

>> theCube presents Dell Technologies World, brought to you by Dell. >> Hi everyone, welcome back to Dell Technologies World 2022. You're watching theCube's coverage of this, three-day coverage, wall to wall. My name is David Vellante. John Furrier's here, Lisa Martin, David Nicholson. Talk of the town here is data. And one of the big announcements at the show is Snowflake and Dell partnering up, building ecosystems. Snowflake reaching into on-prem, allowing customers to actually access the Snowflake Data Cloud without moving the data, or if they want to move the data, they can. This is really one of the hotter announcements of the show. Martin Glynn is here, he's the Senior Director of Storage Product Management at Dell Technologies. And Clark Patterson, he's the Head of Product Marketing for Snowflake. Guys, welcome. >> Thanks for having us. >> So a lot of buzz around this and, you know, Clark, you and I have talked about the need to really extend your data vision. And this really is the first step ever you've taken on-prem. Explain the motivation for this from your customer's perspective. >> Yeah. I mean, if you step back and think about Snowflake's vision and our mission of mobilizing the world's data, it's all around trying to break down silos for however customers define what a silo is, right? So we've had a lot of success breaking down silos from a workload perspective where we've expanded the platform to be data warehousing, and data engineering, and machine learning, and data science, and all the kind of compute intensive ways that people work with us. We've also had a lot of success in our sharing capabilities and how we're breaking down silos of organizations, right? So I can share data more seamlessly within my team, I can do it across totally disparate organizations, and break down silos that way. So this partnership is really like the next leg of the stool, so to speak, where we're breaking down the silos of the data and where the data lives ultimately, right? So up until this point, Cloud, all focus there, and now we have this opportunity with Dell to expand that into the on-premises world and people can bring all those data sets together. >> And the data target for this, Martin, is Dell ECS, right? Your object store, and it's got S3 compatibility. Explain that. >> Yeah, we've actually got sort of two flavors. We'll start with ECS, which is our turnkey object storage solution. Object storage offers sort of the ultimate in flexibility, you know, potential performance, ease of use, right? Which is why it fits so well with Snowflake's mission for sort of unlocking, you know, the data within the data center. So we'll offer it to begin with ECS, and then we also recently announced our software defined ObjectScale solution. So that adds even more flexibility there. >> Okay. And Clark, the way it works is I can now access non-native Snowflake data using what? Materialized views, external tables, how does that work? >> Some combination of all the above. So we've had in Snowflake a capability called external tables, which goes hand in hand with this notion of external stages. Basically through the combination of those two capabilities, it's a metadata layer on data wherever it resides. So customers have actually used this in Snowflake for data lake data outside of Snowflake in the Cloud up until this point. So it's effectively an extension of that functionality into the Dell on-premises world, so that we can tap into those things. 
So we use the external stages to expose all the metadata about what's in the Dell environment. And then we build external tables in Snowflake so that data looks like it is in Snowflake. And then the experience for the analyst or whomever it is, is exactly as though that data lives in the Snowflake world. >> Okay. So for a while you've allowed non-native Snowflake data, but it had to be in the Cloud. >> Correct. >> This is the first time it's on-prem, >> That's correct. >> that's the innovation here. Okay. And if I want to bring it into the Cloud, can I? >> Yeah, the connection here will help in a migration sense as well, right? So that's the good thing, it's really giving the user the choice. So we are integrating together as partners to make the connection as seamless as possible. And then the end user will say like, look, I've got data that needs to live on-premises, for whatever reasons, data sovereignty, whatever they decide. And they can keep it there and still do the analytics in another place. But if there's a need and a desire to use this as an opportunity to migrate some of that data to Cloud, that connection between our two platforms will make that easier. >> Well, Michael always says, "Hey, it's customer choice, we're flexible." So you're cool with that? >> That's been the mission since we kind of came together, right? It's, if our customers need to stay in their data center, if that makes more sense from a cost perspective or, you know, a data gravity perspective, then they can do that. But we also want to help them unlock the value of that data. So if they need to copy it up to the public Cloud and take advantage of it, we're going to integrate directly with Snowflake to make that really easy to do. >> So there are engineering integrations here, obviously that's required. Can you describe what that looks like? Give us the details on when it's available. >> Sure. So it's going to be sort of second half this year that you'll see it, we're demoing it this week, but the availability will be second half this year. And fundamentally, it's the way Clark described it, that Snowflake will reach into our S3 interface using the standard S3 interface. We're qualifying between the way they expect that S3 interface to present the data and the way our platform works, just to ensure that there's smooth interaction between the two. So that's sort of the first, simplest use case. And then the second example we gave is where the customer can copy some of that data up to the public Cloud. We're basically copying between two S3 buckets and making sure that Snowflake's Snowpipe is aware that data's being made available and can easily ingest it. >> And then that just goes into a virtual warehouse- >> Exactly. >> and the customer doesn't need to know or care. >> Yep, exactly. >> Yeah. >> The compute happens in Snowflake the way it does in any other manner. >> And I know you've got to crawl, walk, run second half of this year, but I would imagine, okay, you're going to start with AWS, correct? And then eventually you go to other Clouds. I mean, that's going to take other technical integrations, I mean, obviously. So should we assume there's a roadmap here or is this a one and done? >> I would assume that, I mean, based on our multi-Cloud approach, that's kind of our approach at least, yeah. >> Kind of makes sense, right? I mean, that would seem to be a natural progression. My other thought was, okay, I've got operational systems. They might be transaction systems running on a PowerMax. >> Yeah. 
>> Is there a way to get the data into an object store and make that available? Now that opens up even more workloads. I know you're not committing to doing that, but it just, conceptually, it seems like something a customer might want to do. >> Yeah. I, a hundred percent, agree. I mean, I think when we brought our team together we started with a blank slate. It was, what's the best solution we can build? We landed on this sort of first step, but we got lots of feedback from a lot of our big joint customers about, you know, this system over there, this potential integration over here, and whether it's, you know, PowerMax type systems or other file workloads with native Snowflake data types. You know, I think this is just the beginning, right? We have lots of potential here. >> And I don't think you've announced pricing, right? It's premature for that. But have you thought about, and how are you thinking about the pricing model? I mean, you have consumption based pricing, is that kind of how this is going to work? Or is it a sort of a new pricing model or haven't you figured that out yet? >> I don't know if you've got any details on that, but from a Snowflake perspective, I would assume it's consistent with how our customers engage with us today. >> Yeah. >> And we'll offer both possibilities, right? So you can either continue with the standard, you know, sort of CapEx motion, maybe that's the most optimal for you from a cost perspective, or you can take advantage through our OpEx option, right? So you can do consumption on-prem also. >> Okay. So it could be a dual model, right? Depending on what the customer wants. If they're a Snowflake customer, obviously it's going to be consumption based, however you guys price. What's happening, Clark, in the market? Explain why Snowflake has so much momentum and, you know, traction in the marketplace. >> So, like, I spend a lot of time doing analysis on why we win and lose, a core part of my role. And, you know, there's a couple of, there's really three things that come up consistently as to why people are really excited about the Snowflake platform. One is the simplest thing of all. It feels like it's just ease of use and it just works, right? And I think the way that this platform was built for the Cloud from the ground up all the way back 10 years ago really allows us to deliver that seamless experience of just like instant compute when you want it, it goes away, you know, only pay for what you use. Very few knobs to turn and things like that. And so people absolutely love that factor. The other is multi-Cloud. So, you know, there's definitely a lot of organizations out there that have a multi-Cloud strategy, and, you know, what that means to them can be highly variable, but regardless, they want to be able to interact across Clouds in some capacity. And of course we are a single platform, like literally one single interface, consistent across all three Cloud providers that we work upon. And it gives them that flexibility to mix and match Cloud infrastructure under Snowflake however they see fit. The last piece of it is sharing. And, you know, I think it's that ability, as I kind of alluded to, around like breaking down organizational silos, and allowing people to be able to actually connect with each other in ways that you couldn't do before. Like, if you think about how you and I would've shared data before, I'd be like, "Hey, Dave, I'm going to unload this table into a spreadsheet and I'm going to send it over in email." 
And there's a whole host of issues that get introduced in that world, now it's like instantly available. I have a lot of control over it, it's governed, it's all these other things. And I can create kind of walled gardens, so to speak, of how far out I want that to go. It could be in a controlled environment of organizations that I want to collaborate with, or I can put it on our marketplace and expose it to the whole world, because I think there's a value in that. And if I choose, I can monetize it, right? So those, you know, the ease of use aspect of it, absolutely, it's just a fantastic platform. The multi-Cloud aspect of it and our unique differentiation around sharing in our marketplace and monetization. >> Yeah, on the sharing front. I mean, it's now discoverable. Like if you send me an email, like what'd you call that? When did you send that email? And then at the same time I can forward that to somebody else, it's not governed. >> Yeah. >> All right. So that just creates a nightmare for compliance. >> Right. Yeah. You think about how you revoke access in that situation. You just don't, right? Now I can just turn it off and you go in to run your query- >> You don't get access to that data anymore. Yeah. Okay. And then the other thing I wanted to ask you, Clark, is Snowflake started really as an analytics platform, simplifying data warehousing, you're moving into that world of data science, you know, the whole data lake movement, bringing those two worlds together. You know, I was talking to Ben Ward about this, maybe there's a semantic layer that helps us kind of talk between those two worlds, but you don't care, right? If it's in an object store, it can play in both of those worlds, right? >> That's right. >> Yeah, it's up to you to figure it out and the customer- >> Yeah. >> from a storage standpoint. Here it is, serve it up. >> And that's the thrust of this announcement, right? Is bringing together two great companies, the Dell platform, the Snowflake platform, and allowing organizations to bring that together. And they decide, like, as we all know, customers decide how they're going to build their architecture. And so this is just another way that we're helping them leverage the capabilities of our two great platforms. >> Was this push or pull, or a little bit of both? I mean, where'd this come from? Were customers saying, "Hey, it would be kind of cool if we could have this." Or is it more, "Hey, what do you guys think?" You know, where are you at with that? >> It was definitely both, right? I mean, so we certainly started with, you know, a high level idea that, you know, the technologies are complementary, right? I mean, as Clark just described, and at the same time we had customers coming to us saying, "Hey, wait a minute, I'm doing this over here, and this over here, how can I make this easier?" So that was, like I said, we started with a blank sheet and lots of long customer conversations and this is what resulted. >> So what is the sequence of events to kind of roll this out? You said it's second half, you know, when do you start getting customers involved? Do you have customers already, you know, to poke at this, and what's that look like? >> Yeah, sure. I can weigh in there. So, absolutely. We've had a few of our big customers that have been involved sort of in the design already who understand how they want to use it. 
So I think our expectation is that now that the sort of demonstrations have been in place, we have some pre-release functionality, we're going to see some initial testing and usage, some beta type situations with our customers. And then second half, we'll ramp from there. >> There's got to be a huge overlap between Dell customers and Snowflake customers. I mean, it's a hundred billion. You can't not bump into Dell somewhere. >> Exactly. Yeah, you know. >> So where do you guys want to see this relationship go, kind of how should we measure success? Maybe you could each give your perspectives on that. >> I mean, for us, I think it's really showing the value of the Snowflake platform in this new world where there's a whole new ecosystem of data that is accessible to us, right? So seeing those organizations that are saying like, "Look, I'm doing new things with on-premises data that I didn't think that I could do before", or, "I'm driving efficiency in how I do analytics, and data engineering, and data science, in ways that I couldn't do before," 'cause they were locked out of using a Snowflake-like technology, right? So I think for me, that's going to be that real excitement. I'm really curious to see how the collaboration and the sharing component comes into this, you know, where you can think of having an on-premises data strategy and a need, right? But you can really connect to Cloud native customers and partners and suppliers that live in the Snowflake ecosystem, and that wasn't possible before. And so that is very conceivable and very possible through this relationship. So seeing how those edges get created in our world and how people start to collaborate across data, both in the Cloud and on-prem, is going to be really exciting. >> I remember I asked Frank, it was kind of early in the pandemic. I asked him, come on, tell me about how you're managing things. And he was awesome. And I asked him at the time, you know, "Are you ever going to, you know, bring this platform on-prem?" He was unequivocal, "No way, that's never going to happen. We're not going to do a halfway house, we're Cloud only." And I kept thinking, but there's got to be a way to expand that TAM. There's so much data out there, and so boom, now we see the answer. Martin, from your standpoint, what does success look like? >> I think it starts with our partnership, right? So I've been doing this a long time. Probably the first time I've worked so closely with a partner like Snowflake. Joint customer conversations, joint solutioning, making sure what we're building is going to be really, truly as useful as possible to them. And I think we're going to let them guide us as we go forward here, right? You mentioned, you know, systems of record or other potential platforms. We're going to let them tell us where exactly the most value will come from the integration between the two companies. >> Yeah. Follow the data. I mean, remember in the old days a hardware company like Dell would go to an ISV like Snowflake and say, "Hey, we ran some benchmarks. Your software runs really fast on our hardware, can we work together?" And you go, "Yeah, of course. Yeah, no problem." But wow! What a different dynamic it is today. >> Yeah. Yeah, absolutely. >> All right guys. Hey, thanks so much for coming to theCube. It's great to see you. We'll see you at the Snowflake Summit in June. >> Snowflake Summit in a month and a half. >> Looking forward to that. All right. Thank you again. >> Thank you, Dave. >> All right. Keep it right there everybody. 
This is Dave Vellante, wall to wall coverage of Dell Tech World 2022. We'll be right back. (gentle music)
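
To make the external stage and external table flow Clark and Martin describe more concrete, here is a rough sketch using the Snowflake Python connector together with Snowflake's S3-compatible external stage syntax. The account, endpoint, bucket, credentials, file format, and column names are placeholder assumptions, and the exact syntax and availability for Dell ECS should be checked against Snowflake's and Dell's documentation for the released integration.

# Rough sketch: expose objects on an S3-compatible on-prem store (e.g., Dell ECS)
# to Snowflake as an external stage plus an external table. Endpoint, bucket,
# credentials, and names are placeholders; verify the s3compat stage syntax
# against current Snowflake documentation.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount", user="analyst", password="...",
    warehouse="ANALYTICS_WH", database="LAB", schema="PUBLIC",
)
cur = conn.cursor()

cur.execute("""
    CREATE OR REPLACE STAGE onprem_ecs_stage
      URL = 's3compat://analytics-bucket/events/'
      ENDPOINT = 'ecs.mycompany.internal'
      CREDENTIALS = (AWS_KEY_ID = '...' AWS_SECRET_KEY = '...')
""")

cur.execute("""
    CREATE OR REPLACE EXTERNAL TABLE events_onprem (
      event_ts TIMESTAMP AS (value:ts::TIMESTAMP),
      user_id  STRING    AS (value:user_id::STRING)
    )
    LOCATION = @onprem_ecs_stage
    FILE_FORMAT = (TYPE = PARQUET)
    AUTO_REFRESH = FALSE
""")

# Queries now run in a Snowflake virtual warehouse against the on-prem objects.
for row in cur.execute(
        "SELECT user_id, COUNT(*) FROM events_onprem GROUP BY user_id LIMIT 10"):
    print(row)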
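
The second pattern Martin mentions, copying selected objects up to a public Cloud bucket and letting Snowpipe ingest them, can be sketched as a continuation of the same session. The cloud stage, target table, and the S3 event notification wiring for auto-ingest are assumed to already exist and are placeholders here.

# Continues the cursor from the sketch above. Once copied objects land in the
# cloud S3 stage, the pipe loads them into a native table; the stage, table,
# and notification setup are assumed to exist already.
cur.execute("""
    CREATE OR REPLACE PIPE events_pipe
      AUTO_INGEST = TRUE
    AS
      COPY INTO events_native
      FROM @cloud_s3_stage/events/
      FILE_FORMAT = (TYPE = PARQUET)
      MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
""")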

Published Date : May 7 2022


Martin Glynn, Dell Technologies & Clarke Patterson, Snowflake | Dell Technologies World 2022


 

>> theCube presents Dell Technologies World, brought to you by Dell. >> Hi everyone, welcome back to Dell Technologies World 2022. You're watching theCube's coverage of this, three-day coverage wall to wall. My name is David Vellante John Furrier's here, Lisa Martin, David Nicholson. Talk of the town here is data. And one of the big announcements at the show is Snowflake and Dell partnering up, building ecosystems. Snowflake reaching into on-prem, allowing customers to actually access the Snowflake Data Cloud without moving the data or if they want to move the data they can. This is really one of the hotter announcements of the show. Martin Glynn is here, he's the Senior Director of Storage Product Management at Dell Technologies. And Clark Patterson, he's the Head of Product Marketing for Snowflake. Guys, welcome. >> Thanks for having us. >> So a lot of buzz around this and, you know, Clark, you and I have talked about the need to really extend your data vision. And this really is the first step ever you've taken on-prem. Explain the motivation for this from your customer's perspective. >> Yeah. I mean, if you step back and think about Snowflake's vision and our mission of mobilizing the world's data, it's all around trying to break down silos for however customers define what a silo is, right? So we've had a lot of success breaking down silos from a workload perspective where we've expanded the platform to be data warehousing, and data engineering, and machine learning, and data science, and all the kind of compute intensive ways that people work with us. We've also had a lot of success in our sharing capabilities and how we're breaking down silos of organizations, right? So I can share data more seamlessly within my team, I can do it across totally disparate organizations, and break down silos that way. So this partnership is really like the next leg of the stool, so to speak, where we're breaking down the silos of the the data and where the data lives ultimately, right? So up until this point, Cloud, all focus there, and now we have this opportunity with Dell to expand that and into on-premises world and people can bring all those data sets together. >> And the data target for this Martin, is Dell ECS, right? Your object store, and it's got S3 compatibility. Explain that. >> Yeah, we've actually got sort of two flavors. We'll start with ECS, which is our turnkey object storage solution. Object storage offers sort of the ultimate in flexibility, you know, potential performance, ease of use, right? Which is why it fits so well with Snowflake's mission for sort of unlocking, you know, the data within the data center. So we'll offer it to begin with ECS, and then we also recently announced our software defined object scale solution. So add even more flexibility there. >> Okay. And the clock, the way it works is I can now access non-native Snowflake data using what? Materialized views, external tables, how does that work? >> Some combination of all the above. So we've had in Snowflake a capability called external tables which we refer to, it goes hand in hand with this notion of external stages. Basically through the combination of those two capabilities, it's a metadata layer on data wherever it resides. So customers have actually used this in Snowflake for data lake data outside of Snowflake in the Cloud up until this point. So it's effectively an extension of that functionality into the Dell on-premises world, so that we can tap into those things. 
So we use the external stages to expose all the metadata about what's in the Dell environment. And then we build external tables in Snowflake so that data looks like it is in Snowflake. And then the experience for the analyst or whomever it is, is exactly as though that data lives in the Snowflake world. >> Okay. So for a while you've allowed non-native Snowflake data, but it had to be in the Cloud. >> Correct. >> This is the first time it's on-prem, >> That's correct. >> that's the innovation here. Okay. And if I want to bring it into the Cloud, can I? >> Yeah, the connection here will help in a migration sense as well, right? So the good thing is, it's really giving the user the choice. So we are integrating together as partners to make the connection as seamless as possible. And then the end user will say, like, look, I've got data that needs to live on-premises, for whatever reason, data sovereignty, whatever they decide. And they can keep it there and still do the analytics in another place. But if there's a need and a desire to use this as an opportunity to migrate some of that data to Cloud, that connection between our two platforms will make that easier. >> Well, Michael always says, "Hey, it's customer choice, we're flexible." So you're cool with that? >> That's been the mission since we kind of came together, right? If our customers need to stay in their data center, if that makes more sense from a cost perspective or, you know, a data gravity perspective, then they can do that. But we also want to help them unlock the value of that data. So if they need to copy it up to the public Cloud and take advantage of it, we're going to integrate directly with Snowflake to make that really easy to do. >> So there are engineering integrations here, obviously, that's required. Can you describe what that looks like? Give us the details on when it's available. >> Sure. So it's going to be sort of second half of this year that you'll see it, we're demoing it this week, but the availability will be second half of this year. And fundamentally, it's the way Clark described it, that Snowflake will reach into our S3 interface using the standard S3 interface. We're qualifying between the way they expect that S3 interface to present the data and the way our platform works, just to ensure that there's smooth interaction between the two. So that's sort of the first, simplest use case. And then the second example we gave, where the customer can copy some of that data up to the public Cloud, we're basically copying between two S3 buckets and making sure that Snowflake's Snowpipe is aware that data's being made available and can easily ingest it. >> And then that just goes into a virtual warehouse- >> Exactly. >> and the customer doesn't need to know or care. >> Yep, exactly. >> Yeah.
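To make the mechanics Clark and Martin describe a little more concrete, here is a minimal, hypothetical sketch using the Snowflake Python connector. The stage, table, bucket, endpoint, and credential values are illustrative assumptions, and the s3compat URL plus ENDPOINT clause reflect how Snowflake generally addresses S3-compatible object stores; the exact options for the Dell integration may differ once it ships.

```python
# Hypothetical sketch, not the shipped integration: expose S3-compatible
# on-prem object storage to Snowflake as an external table, plus a pipe for
# data that gets copied up to a cloud bucket. All names/credentials are made up.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",        # assumption: your Snowflake account identifier
    user="analyst",
    password="***",
    warehouse="ANALYTICS_WH",
    database="ONPREM_DEMO",
    schema="PUBLIC",
)
cur = conn.cursor()

# 1) External stage over the on-prem bucket's S3 interface. The s3compat://
#    scheme and ENDPOINT clause are Snowflake's general mechanism for
#    S3-compatible storage; treat the exact syntax here as an assumption.
cur.execute("""
    CREATE STAGE IF NOT EXISTS dell_object_stage
      URL = 's3compat://analytics-bucket/sales/'
      ENDPOINT = 'objectstore.example.internal'
      CREDENTIALS = (AWS_KEY_ID = '...' AWS_SECRET_KEY = '...')
""")

# 2) External table: the metadata layer, so the files query like a
#    Snowflake table without the data leaving the data center.
cur.execute("""
    CREATE EXTERNAL TABLE IF NOT EXISTS sales_onprem
      LOCATION = @dell_object_stage
      FILE_FORMAT = (TYPE = PARQUET)
      AUTO_REFRESH = FALSE
""")
cur.execute("ALTER EXTERNAL TABLE sales_onprem REFRESH")  # pick up current files

# 3) Optional migration path: once files land in a cloud-side stage, a pipe
#    ingests them into a native table (auto-ingest assumes event notifications
#    are wired up; sales_native and @cloud_s3_stage are placeholders).
cur.execute("""
    CREATE PIPE IF NOT EXISTS sales_ingest
      AUTO_INGEST = TRUE
      AS COPY INTO sales_native FROM @cloud_s3_stage
         FILE_FORMAT = (TYPE = PARQUET)
""")

# The analyst experience is just SQL, wherever the bytes live.
cur.execute("SELECT COUNT(*) FROM sales_onprem")
print(cur.fetchone())
cur.close()
conn.close()
```

The point of the sketch is the split described above: the external stage and external table leave the data on-prem and only expose metadata, while the pipe covers the optional copy-up-to-cloud path.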
>> Is there a way to get the data into an object store and make that available, now that opens up even more workloads. I know you're not committing to doing that, but it just, conceptually, it seems like something a customer might want to do. >> Yeah. I, a hundred percent, agree. I mean, I think when we brought our team together we started with a blank slate. It was what's the best solution we can build. We landed on this sort of first step, but we got lots of feedback from a lot of our big joint customers about you know, this system over there, this potential integration over here, and whether it's, you know, PowerMax type systems or other file workloads with native Snowflake data types. You know, I think this is just the beginning, right? We have lots of potential here. >> And I don't think you've announced pricing, right? It's premature for that. But have you thought about, and how are you thinking about the pricing model? I mean, you're a consumption based pricing, is that kind of how this is going to work? Or is it a sort of a new pricing model or haven't you figured that out yet? >> I don't know if you've got any details on that, but from a Snowflake perspective, I would assume it's consistent with how our customers engage with us today. >> Yeah. >> And we'll offer both possibilities, right? So you can either continue with the standard, you know, sort of CapEx motion, maybe that's the most optimal for you from a cost perspective, or you can take advantage through our OpEx option, right? So you can do consumption on-prem also. >> Okay. So it could be a dual model, right? Depending on what the customer wants. If they're a Snowflake customer, obviously it's going to be consumption based, however, you guys price. What's happening, Clark, in in the market? Explain why Snowflake has so much momentum and, you know, traction in the marketplace. >> So like I spent a lot of time doing analysis on why we win and lose, core part of my role. And, you know, there's a couple of, there's really three things that come up consistently as to why people people are really excited about Snowflake platform. One is the most simplest thing of all. It feels like is just ease of use and it just works, right? And I think the way that this platform was built for the Cloud from the ground up all the way back 10 years ago, really a lot allows us to deliver that seamless experience of just like instant compute when you want it, it goes away, you know, only pay for what you use. Very few knobs to turn and things like that. And so people absolutely love that factor. The other is multi-Cloud. So, you know, there's definitely a lot of organizations out there that have a multi-Cloud strategy, and, you know, what that means to them can be highly variable, but regardless, they want to be able to interact across Clouds in some capacity. And of course we are a single platform, like literally one single interface, consistent across all the three Cloud providers that we work upon. And it gives them that flexibility to mix and match Cloud infrastructure under any Snowflake however they see fit. The last piece of it is sharing. And, you know, I think it's that ability as I kind of alluded to around like breaking down organizational silos, and allow people to be able to actually connect with each other in ways that you couldn't do before. Like, if you think about how you and I would've shared data before, I'd be like, "Hey, Dave, I'm going to unload this table into a spreadsheet and I'm going to send it over in email." 
And there's the whole host of issues that get introduced in that world, and now it's like instantly available. I have a lot of control over it, it's governed, it's all these other things. And I can create kind of walled gardens, so to speak, of how far out I want that to go. It could be in a controlled environment of organizations that I want to collaborate with, or I can put it on our marketplace and expose it to the whole world, because I think there's a value in that. And if I choose, I can monetize it, right? So those, you know, the ease of use aspect of it, absolutely, it's just a fantastic platform. The multi-Cloud aspect of it, and our unique differentiation around sharing in our marketplace and monetization. >> Yeah, on the sharing front. I mean, it's now discoverable. Like if you send me an email, like what'd you call that? When did you send that email? And at the same time I can forward that to somebody else, it's not governed. >> Yeah. >> All right. So that just creates a nightmare for compliance. >> Right. Yeah. You think about how you revoke access in that situation. You just don't, right? Now I can just turn it off, and you go in to run your query... >> And don't get access to that data anymore. Yeah. Okay. And then the other thing I wanted to ask you, Clark, is Snowflake started really as an analytics platform, simplifying data warehousing, and you're moving into that world of data science, you know, the whole data lake movement, bringing those two worlds together. You know, I was talking to Ben Ward about this, maybe there's a semantic layer that helps us kind of talk between those two worlds, but you don't care, right? If it's in an object store, it can play in both of those worlds, right? >> That's right. >> Yeah, it's up to you to figure it out, and the customer- >> Yeah. >> from a storage standpoint. Here it is, serve it up. >> And that's the thrust of this announcement, right? It's bringing together two great companies, the Dell platform, the Snowflake platform, and allowing organizations to bring that together. And they decide; as we all know, customers decide how they're going to build their architecture. And so this is just another way that we're helping them leverage the capabilities of our two great platforms. >> Was this push or pull, or a little bit of both? I mean, where'd this come from? Were customers saying, "Hey, it would be kind of cool if we could have this." Or is it more, "Hey, what do you guys think?" You know, where are you at with that? >> It was definitely both, right? I mean, we certainly started with, you know, a high level idea that, you know, the technologies are complementary, right? I mean, as Clark just described, and at the same time we had customers coming to us saying, "Hey, wait a minute, I'm doing this over here, and this over here, how can I make this easier?" So, like I said, we started with a blank sheet and lots of long customer conversations, and this is what resulted. So... >> So what's the sequence of events to kind of roll this out? You said it's second half, you know, when do you start getting customers involved? Do you have customers already, you know, poking at this, and what's that look like? >> Yeah, sure. I can weigh in there. So, absolutely. We've had a few of our big customers that have been involved sort of in the design already, who understand how they want to use it.
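For the sharing and revocation point just made, here is a hedged sketch of what "turn it off" looks like in practice with Snowflake secure data sharing, again via the Python connector. The share, database, table, and consumer account names are placeholders, and an on-prem-backed external table may need to be wrapped in a secure view before it can be shared.

```python
# Hypothetical sketch of governed sharing versus emailing a spreadsheet:
# publish one table to a partner account through a share, then revoke it.
# Object and account names are illustrative, not real.
import snowflake.connector

conn = snowflake.connector.connect(account="my_account", user="data_admin",
                                    password="***", role="ACCOUNTADMIN")
cur = conn.cursor()

# Create the share and grant read access to a single table through it.
cur.execute("CREATE SHARE IF NOT EXISTS sales_share")
cur.execute("GRANT USAGE ON DATABASE onprem_demo TO SHARE sales_share")
cur.execute("GRANT USAGE ON SCHEMA onprem_demo.public TO SHARE sales_share")
cur.execute("GRANT SELECT ON TABLE onprem_demo.public.sales_native TO SHARE sales_share")

# Expose it to a consumer account; they query it live, no copies to forward.
cur.execute("ALTER SHARE sales_share ADD ACCOUNTS = partner_org.partner_acct")

# Revocation is one statement, which is the compliance point above.
cur.execute("ALTER SHARE sales_share REMOVE ACCOUNTS = partner_org.partner_acct")

cur.close()
conn.close()
```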
So I think our expectation is that now that the sort of demonstrations have been in place, we have some pre-release functionality, we're going to see some initial testing and usage, some beta type situations with our customers. And then second half, we'll ramp from there. >> It's got to be a huge overlap between Dell customers and Snowflake customers. I mean, it's a hundred billion. You can't not bump into Dell somewhere. >> Exactly. Yeah, you know. >> So where do you guys want to see this relationship go, kind of how should we measure success? Maybe you could each give your perspectives on that. >> I mean, for us, I think it's really showing the value of the Snowflake platform in this new world where there's a whole new ecosystem of data that is accessible to us, right? So seeing those organizations that are saying, like, "Look, I'm doing new things with on-premises data that I didn't think that I could do before", or, "I'm driving efficiency in how I do analytics, and data engineering, and data science, in ways that I couldn't do before," 'cause they were locked out of using a Snowflake-like technology, right? So I think for me, that's going to be the real excitement. I'm really curious to see how the collaboration and the sharing component comes into this, you know, where you can think of having an on-premises data strategy and a need, right? But you can really connect to Cloud native customers and partners and suppliers that live in the Snowflake ecosystem, and that wasn't possible before. And so that is very conceivable and very possible through this relationship. So seeing how those edges get created in our world, and how people start to collaborate across data, both in the Cloud and on-prem, is going to be really exciting. >> I remember I asked Frank, it was kind of early in the pandemic. I asked him, come on, tell me about how you're managing things. And he was awesome. And I asked him at the time, you know, "Are you ever going to, you know, bring this platform on-prem?" He was unequivocal, "No way, that's never going to happen. We're not going to do a halfway house, we're Cloud only." And I kept thinking, but there's got to be a way to expand that TAM. There's so much data out there, and so boom, now we see the answer. Martin, from your standpoint, what does success look like? >> I think it starts with our partnership, right? So I've been doing this a long time. Probably the first time I've worked so closely with a partner like Snowflake. Joint customer conversations, joint solutioning, making sure what we're building is going to be really, truly as useful as possible to them. And I think we're going to let them guide us as we go forward here, right? You mentioned, you know, systems of record or other potential platforms. We're going to let them tell us where exactly the most value will come from the integration between the two companies. >> Yeah. Follow the data. I mean, remember in the old days a hardware company like Dell would go to an ISV like Snowflake and say, "Hey, we ran some benchmarks. Your software runs really fast on our hardware, can we work together?" And you'd go, "Yeah, of course. Yeah, no problem." But wow! What a different dynamic it is today. >> Yeah. Yeah, absolutely. >> All right guys. Hey, thanks so much for coming to theCube. It's great to see you. We'll see you at the Snowflake Summit in June. >> Snowflake Summit in a month and a half. >> Looking forward to that. All right. Thank you again. >> Thank you, Dave. >> All right. Keep it right there everybody.
This is Dave Vellante, wall to wall coverage of Dell Tech World 2022. We'll be right back. (gentle music)

Published Date : May 4 2022



The Cube at Dell Technologies World 2022 | Dell Technologies World 2022


 

>> Announcer: TheCUBE presents Dell Technologies World brought to you by Dell. >> Welcome back to theCUBE's coverage, day one, Dell Technologies World live from Las Vegas at the Venetian. Lisa Martin here with Dave Vellante and John Furrier. Guys let's talk, first of all, first time back in person since Dell Tech World 2019. Lots going on, lots of news today. I'm going to start with you, Dave, since you're closest to me. What are some of the things that have impressed you at this first in-person event in three years? >> Well, the first thing I want to say is, so John and I, we started theCUBE in 2010, John, right? In Boston, EMC World. Now of course, Dell owns EMC, so wow. It's good to be back here. Dell's built this beautiful set. I'd say the number one thing that's surprised me was how many people were here. Airport was packed, cab lines, the line at the Palazzo, the hotel, to get in was, you know, probably an hour long. And there's, I thought there'd be maybe 5,000 people here. I would say it's closer to eight. So the hall was packed today and everybody was pumped. Michael Dell was so happy to be up on stage. He talked, I dunno if you guys saw his keynote. He basically talked, obviously how great it is to be back, but he talked about their mission, building technologies that enable that better human condition. There was a big, you know, chewy words, right? And then they got into, you know, all the cool stuff they're doing so we can get into it. But they had CVS up on stage, they had USAA on stage. A big theme was trust. Which of course, if you're Dell, you know, you want people to trust you. I guess the other thing is this is the first live event they've had since the VMware spin. >> Right. >> So in 2019 they owned VMware. VMware's no longer a part of the income statement. Dell had a ton of debt back then. Now Dell's balance sheet looks actually better than VMware's because they restructured everything. And so it's a world without VMware where now with VMware their gross margins were in the 30-plus percent range. Now they're down to 20%. So we're now asking what's next for Dell? And they stood up on stage, we can talk about it some more, but a lot of multi-cloud, a lot of cyber resilience, obviously big themes around APEX, you know, hybrid work, John. So, well let's get into that. >> What are some of the key things that you heard today? >> Well, first of all, the customers on stage are always great. Dell's Technologies, 10 years for theCUBE and their history. I saw something back here, 25 years with celebrating precision, the history of Michael Dell's journey and the current Dell Technologies with EMC folded in and a little bit of VMware DNA still in there even though they're separated out. Just has a loyal set of customers. And you roam the hallways here, you see a lot of people know Dell, love Dell. Michael Dell himself was proud to talk before the event about he's number one, Dave, in PC market share. That's been his goal to beat HP for years. (laughing) And so he's got that done. But they're transforming their business cause they have to, the data center is now cloud. Cloud is now the distributed computing. Dell has all the piece parts today. We've covered this three years ago. Now it's turned into multi-cloud, which is multi-vendor, as a service is how the consumers consume, innovate with data, that's kind of the raw material. Future of work, and obviously the partners that they have. 
So I think Dell is going to continue to maintain the news of being the great in the front lines as a data-center-slash-enterprise, now cloud, Edge player. So, you know, I'm impressed with their constant reinvention of the company and the news hits all the cards: Snowflake partnership, cutting edge company in the cloud, partnership with Snowflake, APEX, their product that's innovating at the Edge, this new kind of product that's going to bring it together. Unifying, all those themes, Dave, are all hitting the marks. >> Chuck Whitten up on stage, obviously he was the multicloud, you know, conversation. And I think the vision that they they're laying out and Jeff Clarke talked about it as well, is a term that John and I coined. We can't remember who coined it, John or me, "supercloud." >> Yeah. (laughing) >> And they're talking about building an abstraction layer, building on top of the clouds, connecting on-prem to the clouds, across clouds, out to the Edge, hiding the underlying complexity, Dell managing all that. That's their vision. It's aspirational today but that really is supercloud. And it's more than multi-cloud. >> You coined the term supercloud. >> Did I? >> We riffed together. I called it sub-cloud. >> Oh, that's right. And then I said, no, it's got to float over. Super! Superman flies. (John laughs) Right, that's right. >> Sub-cloud, not really a good name. Nobody wants to be sub of anything. >> I think my kid gave it to me, John, actually. (laughing) >> Well if we do know that Michael Dell watches theCUBE, he's been on theCUBE many times. He watches theCUBE, clearly he's paying attention! >> Yeah, well I hope so. I mean, we write a lot about this and we talk to a lot of customers and talk to a lot of people. But let's talk about the announcements if we can. So... The APEX cyber recovery service, you know, ransomware recovery. They're now also running that on AWS and Azure. So that's big. We heard Presidio, they was super thrilled about that. So they're... The thing I'd say about that is, you know, Dell used to be really defensive about cloud. Now I think they're leaning in. They're saying, "Hey we're not going to spend, you know, Charles Fitzgerald, the snarky guy, does some good work on CAPEX. I mean, you look at how much the cloud guys are spending on CAPEX a year, $30, $40 billion. >> They can't compete. >> On cloud CAPEX. Dell doesn't want compete. >> John: You can't compete. >> Build on top of that, so that's a gift. So that's cool. You mentioned the Snowflake announcement. I thought that was big. What that is... It's very interesting, so Frank Slootman has always said, "We're not doing a half-way house, we're in the cloud." Okay, so square that circle for me. Now Snowflake's coming on-prem. Well, yeah, what they're doing is allowing customers to keep data in a Dell object store, ECS or other object stores. But use Snowflake. So non-native Snowflake data on-prem. So that expands Snowflake cloud. What it also does is give Dell a little sizzle, a little better partner and there's a path to cloud migration if that's where the customers want to go. >> Well, I mean, I would say that that's a dangerous game because we've seen that movie before, VMware and AWS. >> Yeah but that we've talked about this. Don't you think that was the right move for VMware? >> At the time, but if you don't nurture the relationship AWS will take all those customers, ultimately, from VMware. >> But that product's still doing very well. We'll see with NetApp is another one. NetApp on AWS. 
I forget what they call it, but yeah, file on AWS. So that was, go ahead. >> I was just going to say, what's the impact of Snowflake? Why do you think Snowflake chose Dell? >> Because Dell's a $101 billion company and they have a huge distribution channel and a lot of common customers. >> They own storage on the premises. >> Yep. And so Snowflake's looking for, you know, storage options on which they can, you know, bring data into their cloud. Snowflake wants the data to go from on-prem into the cloud. There's no question about that. >> And I would add another thing, is that Snowflake can't do what Dell Technologies does on-premises with storage, and Dell can't do what Snowflake's doing. So I think it's a mutual short-term and medium-term benefit to say, "Hey, you want to run on Snowflake? You need some services there? Great, but come back and use Dell." So that to me, I think that's a win-win for Snowflake. Just the dangerous game is, whoever can develop the higher-level services in the cloud will ultimately be the winner. >> But I think the thing I would say there is, as I said, Snowflake would love for the migration to occur, but they realize it's not always going to happen. And so why not partner with a company like Dell, you know, start that pipeline. And for Dell, hey, you know, why fight fashion, as Jeremy Burton would say. The other thing was Project Alpine, which is file, block and object across cloud. That's again setting up this supercloud. And then APEX. I mean, APEX is the discussion. We had a one-on-one session, a bunch of analysts with Jeff Woodrow who runs ISG. We were supposed to be talking about ISG, all we talked about is APEX. Then we had another session with APEX and all we talked about, of course, is APEX. So, they're still figuring that out, I would say, at this point. They don't quite have product market fit and I think they'd admit that, but they're working hard on scaling engineering, trying to figure out the channel model, the compensation. You know, taking their time even, but moving fast if you know what I mean. >> I mean, Dave, I think the big trend that's jumping out at me here is that, something that we've been covering, the headless cloud, meaning if you can do it as a service, which is one of Dell's major points today, that to me, everyone is a PaaS layer. I think everyone that's building digital transformation apps has to be their own SaaS. So they either do that with somebody, a managed service, which fits beautifully into that trend, or do it on their own. Now e-commerce has this nailed down. Shopify, or build your own on top of the cloud. So headless retail's a hot trend. You're going to start to see that come into the enterprise, where the enterprise can have their cake and eat it too, and take advantage of managed services where they don't have expertise. So those two things right there I think are going to drive a lot of growth for Dell. >> So essentially Lisa, what Dell is doing is saying, "Okay, the timing's good with the VMware spin." They say, "Now we're going to build our own cloud as a service, APEX." And they're starting with infrastructure as a service, you know, storage as a service. Obviously cyber recovery as a service. So you're going to get compute and storage and data protection. Eventually they'll move into other areas. And it's really important for them to do that, to have their own cloud, but they've got to build up the ecosystem. Snowflake is a small example.
My view, they need hundreds and hundreds of Snowflakes to fill the gaps, you know, move up the stack in middleware and database and DevOps. I mean, they should be partnering with HashiCorp. They should be partnering with all these companies that do DevOps stuff. They should be... I'd like to see them, frankly, partner with competitors to their data protection group. Why, you know, sounds crazy, but if you're going to build a cloud, look at AWS. They partner with everybody, right? And so that's what a true cloud experience looks like. You've got this huge menu. And so I think Dell's going to have to try to differentiate from HP. HPE was first, right, and they're all in. Dell's saying we're going to let the customers tell us where to go. And so they, I think one differentiation is their ecosystem, their ability to build that ecosystem. Yeah, but HP's got a good distribution channel too. Just not as big as Dell's. >> They all got the assets in it, but they're transforming. So I think at the end of the day, as Dell and even HPE transforms, they got to solve the customer problems and reduce the complexity. So again, the managed services piece with APEX is huge. I think having the building blocks for multi hybrid cloud at the Edge, just, you can't go wrong with that. If the customers can deploy it and consume it. >> What were some of the messages that you heard from, you mentioned CVS on stage, USAA on stage. Dell's always been very, very customer-focused. They've got some great brands. What did you hear from that customer's voice that shows you they're going in the right direction? >> Well first of all, the customers are longstanding customers of Dell Technologies, so that's one recognition of the ongoing partnerships. But they're also messaged up with Dell's messaging, right? They're telling the Dell story. And what I heard from the Dell story was moving fast and reducing complexity is their number one goal. They see the cloud option has to be there. Cloud native, Edge came up a little bit and the role of data. So I think all the new application development today that's relevant has a data as code kind of concept. Data engineering is the hottest skillset on the planet right now. And data engineering is not data science. So you start to see top-level CSOs and CIOs saying the new modern applications have to have data embedded in. It's just too hard. It's too hard to find that engineering team. So I heard the customer saying, we love the direction, we love the managed services. And by the way, we want to have that supply chain and cyber risk reduced. So yeah, big endorsement for Dell. >> You know, the biggest transformation in Dell, the two biggest transformations. One was the financials. You know, the income statement is totaled at a $101 billion company, growing at 17% a year. That's actually quite remarkable. But the flip side of that, the other big transformation was the customer. And with the acquisition of EMC but specifically VMware, it changed the whole conversation for Dell with customers. I think pre-2015, you wouldn't have had that type of narrative up on stage with customers. Cause it was, you know, compellant and it was equal logic and it was small businesses. Now you're talking about really deep strategic relationships that were enabled by that transformation. So my point is, to answer your question, it's going to be really interesting to see what happens post-VMware because when VMware came together with Dell, the industry didn't like it. 
The VMware ecosystem was like (growls) Dell. Okay, but customers loved it, right? And that's one of the things I heard on stage today. They didn't say, oh, well we love the VMware. But he mentioned VMware, the CTO from USAA. So Dell configured this commercial agreement with VMware, Michael Dell's the chairman of both companies. So that was part of the incentive. The other incentive is Dell is the number one distribution channel for VMware. So I think they now have that muscle memory in place where they've earned that trust. And I think that will continue on past the spin. It was actually quite brilliant the way they've orchestrated that. >> Yeah, Lisa, one more thing I want to add to that is that what I heard also was, you got the classic "here's how you be a leader in the modern era." It's a big leadership message. But then when you heard some of the notes, software-defined, multi-cloud with an emphasis on operations, Dave. So, okay, if you're a good leader, stay with Dell in operations. So you see strategy and operations kind of coming together around cloud. But big software defined multi-cloud data operational story. And I think those customers are kind of on that. You know, you got to maintain your operations. DevOps is operations, DevSecOps is operations. So big, like, don't get too greedy on the modern, shiny new toy, you know, in the cloud. >> Yeah, it's a safe bet, right? For infrastructure. I mean, HPE is a good bet too, but I mean Dell's got a way broader portfolio, bigger supply chain. It's got the end-to-end with the desktop, laptop, you know, the client side business, you know, a bigger services organization. And now the big challenge in my mind for Dell is okay, what's next? And I think they got to get into data management, obviously build up as a service, build up their cloud. They need software in their portfolio. I mean, you know, 20% gross margin company, it just, Wall Street's not as interested. You know, if they want to build more value, which they do, they've got to get more into software and I think you're going to see that. Again, I think you're going to see more M&A. I'd love to see more organic R&D instead of stock buybacks but I get why they have to do that. >> Well one of the things I'm looking at, Dave, in terms of what I think the future impact's going to be is the generational shift with the gen-Z and millennials running IT in the modern era. Not your old school rack-and-stack data center mentality. And then ultimately the scoreboard will determine, in my mind, the winner in their race is, where are the workloads running? Right? The workloads, and then also what's the application development scene look like? What do the apps look like? What are they building on? What's scaling them, what's running them? And the Edge is going to be a big part of that. So to me, operations, Edge, workloads and the development and then the workforce shift. >> And I do think Edge, I'm glad you brought up Edge. Edge is, you know, so fragmented but I think there's going to be a massive opportunity in Edge. There's going to be so much compute at the Edge. Dell talked about it, so much data. It's unclear to me right now how they go after that other than in pockets, like we heard from Gill. I believe they're going to do really well in retail. No question there. >> Yeah. >> But there's so much other industrial aisle IT- >> The telco space of towers, Edge. >> And Dell's, you know, Dell's server business, eh okay, it's got Intel and AMD inside, okay great. 
Their high margins come from storage, not from compute. Not the case with AWS. AWS had 35% operating margins last quarter. Oracle and Microsoft, that's the level that they're at. And I'd love to see Dell figure out a way to get paid more for their compute expertise. And that's going to take some R&D. >> John: Yeah, yeah. >> Last question guys, as we wrap up our wrap of day one. Given everything that we've all been through the last couple of years, what is your overall summary of what Dell announced today? The vibe of the show? How well have they fared the last two years? >> Well, I mean, they had a remarkable last two years. In a large part thanks to the client business. I think today you're seeing, you know, them lift the veil on what's next. And I think their story is coherent. There's, again, financially, they're a much more sound company, much better balance sheet. Not the most attractive income statement from a margin standpoint and they got work to do there. But wow, as far as driving revenue, they know how to sell. >> Yeah, I mean to me, I think looking back to before the pandemic, when we were here on the stage last, we were talking end-to-end, Dell leadership. And I say the biggest thing is Dell's catching up fast, faster than I thought. And I think they got, they're skating to where the puck is going, Dave, and I'll tell you why. The end-to-end I thought wouldn't be a total flyer if the Edge got too dynamic, but the fact that the Edge is growing so fast, it's more complex, that's actually given Dell more time. So to me, what I see happening is Dell having that extra time to nail the Edge piece, cause if they get there, if they get there, then they'll have their core competency. And why do I say that? Cause hardware is back. Server god boxes are going to be back. You're going to see servers at the Edge. And look at the failure of Amazon's Outpost, okay? Amazon's Outpost was essentially hardware. That's Dell's business. So you talk about like compute as a cloud but they really didn't do well with deploying compute like Dell does with servers. EKS is kicking ass at the Edge. So serverless with hardware, I think, is going to be the killer solution at the Edge. A combination of cloud and Edge hardware. And the Edge looks more like a data center than the cloud looks like the data center, so- >> So you're saying hardware matters? >> HardwareMatters.com. >> I think that's what I heard. >> HardwareMatters.com, check out that site, coming soon. (all laughing) >> I think it matters more than ever, you know- >> Blockchain, silicon advances. >> I think reason hardware matters is cause it's barbelling. It's going from the box to the silicon and it's going, you know, upstream into software defined. >> Horizontally, scalability means good silicon at the Edge, under the cover, scaling all the stuff and machine learning and AI in the application. So we've said this on theCUBE now, what, five years now? >> Dave: Yeah, yep. >> Guys, we've got an action packed night tonight. Two days tomorrow and Wednesday. Michael Dell is on tomorrow. Chuck Whitten is on, Jeff Clarke, et cetera, et cetera. Caitlin Gordon is on Wednesday. >> All the heavy hitters are coming on. >> They're coming on, they're going to be... >> Dave: Allison Dew's coming on. >> Allison Dew's coming on. >> We're going to talk about the Matthew McConaughey interview, which was, I thought, fantastic. J.J. Davis is coming on. So we're going to have a great channel discussion, as well, with Cheryl Cook. >> That's right. 
>> A lot of the product people are coming on. We're going to be talking APEX, it's going to be good. With cyber recovery, the Storage Alchemist is coming on, John! (all laughing) >> Boy, I can't wait to see that one. >> Well stick around guys for our coverage all day tomorrow, Tuesday and Wednesday. Lisa Martin with Dave Vellante and John Furrier coming to you live from the Venetian in Las Vegas. This is Dell Technologies World 2022. We look forward to seeing you tomorrow and the next day. (bouncy, upbeat music)

Published Date : May 3 2022



Murli Thirumale, Portworx | AWS Summit SF 2022


 

(upbeat music) >> Okay, welcome back everyone to theCUBE's coverage of AWS Summit 2022, here at Moscone Center live on the floor, I'm John Furrier, host of theCUBE, all the action, day two. Remember, AWS Summit in New York City is coming in the summer. We'll be there as well. Got a great guest, Murli Thirumale, who's the VP and GM of the Cloud Native Business Unit at Portworx, been in theCUBE multiple times. We were just talking about the customer he had on, Ford from Detroit, where KubeCon will be this year. >> That's right. >> Great to see you. >> Yeah, same here, John. Great to see you. >> So, what's the update? Quickly, before we get into the conversation, give the update on what's going on in the company, what's happening? >> Well, you know, we've been acquired by Pure Storage, it's well over a year. So we've had one full year of being inside of Pure. It's been wonderful, right? So we've had a great ride so far. The products have been renewed. We've got a bunch of integrations with Pure. We more than doubled our business and more than doubled our head count. So things are going great. >> First off, congratulations by the way. And I was going to ask about the integration, but before I get there, yeah, we always like to play some jokes on theCUBE, and because serverless is so hot, I've been using "storageless" and actually saw a startup yesterday that had the word "networkless" in their title. So this idea of making things easier, but to me, I mean, serverless is basically still servers, just made easier. >> Yeah, yeah >> So this is kind of where we see Cloud Native going. Can you share your thoughts on how Pure and Portworx are bringing this together? Because you can almost connect the dots in my mind. So say specifically, what is the Cloud Native angle with Pure? >> Yeah. So look, I'll kind of start by being captain obvious, I guess. Just sort of stating some obvious stuff and then get to what I hope will be a little bit more new and interesting. So the obvious stuff to start with is just the fact that Cloud Native is exploding. Containers are exploding. It's kind of a well known fact that 85% of the enterprise organizations around the world are pretty much going to be deploying containers, if not already, in the next couple of years, right? So one, it's really happening. The buzz is now, it's not just in the future, the hype is now. The second part of that is things are going to production. 56% of these organizations are in production already. And that number is going to climb to 80 fairly quickly. So not only is this stuff being deployed, it's being deployed in sort of fairly mission critical, especially Greenfield, applications. So that's kind of one, right? Now, the second thing that we're seeing is as they go into production, John, the migraines are starting, right? Customer migraines, right? It always happens in stuff that they have not looked around the corner and anticipated. So one of them is, again, a fairly obvious one: as they go into production, they need to be able to kind of recover from some oops that happens, right? And think about this, right? John, this stuff is rapidly changing, right? Look at how many versions of kubernetes come out on a regular basis. On top of that, you got all these app versions, new database versions, new stuff, vendors like us ourselves have new versions. So with all these new versions, when you put it all together, the stack sometimes misbehaves. So you got to kind of, "Hey, let me go recover." Right?
You have outages. So essentially the whole area of data protection becomes a lot more critical. That's the migraine that people are beginning to get now, right? They can feel the migraine coming on. The good news is this is not new stuff. People know on- >> John: The DevOps. >> Yeah. Well, and in fact it is that transition from DevOps to ITOps, right? People know that they're going into production, that they need backup and data protection and disaster recovery. So in a way it's kind of good news, bad news: the good news is they know that they need it. The bad news is, it turns out that it's kind of interesting, as they go Cloud Native, the technology stack has changed. So 82% of customers who are kind of deploying Cloud Native are worried about data protection. And in fact, I'll go one step further: 67% of those people have actually kind of looked at what they can get from existing vendors and are going, "Hey, this is not it. This is not going to do my stuff for me." >> And by the way, just to throw a little bit more gas on that fire, there's ransomware attacks. So any kind of vulnerability or opening, maybe, makes people scared. >> Murli: Absolutely. >> So with- >> Murli: It's a board level topic, right? >> Yeah, and then you bring in the DevOps, which is, we all know the innovation formula: launch and iterate, pivot, iterate, pivot, then you get the innovation formula, all your metrics, but it's a system. >> Correct. >> Storage is now part of a system; when you bring Cloud Native into it, you have a consequence if something changes. >> Murli: Correct. >> So I see that. And the question I have for you is, where are we on the stability side of it? Are we close to getting there, and what's coming out to help that, is it more tooling? Because the trend is people are building tools around their Cloud Native thing. I was just talking to MongoDB and they got a database, now that's all tooling. Vertically integrate into the asset or the product, because it integrates with APIs, right? So that makes total sense. >> So I think there's kind of, again, a good news, bad news there, right? There's a lot of good news, right? In the world of containers and kubernetes, what are some of the good news items, right? A lot of the APIs have settled down, have been defined well, CNCF has done a great job promoting that, right? So the APIs are stable, right? Second, the product feature sets have become more stable, particularly sort of the core kubernetes product, security, kind of stuff, right? Now what's the bad news? The bad news is, while these things are stable, they are not ready for scale in every case yet, right? And when you integrate at scale, typically the tipping point is around 20 to 30 nodes, right? So typically when you go beyond 20 to 30 nodes, then the stuff starts to come apart, right? Like, the wheels come off of the train and all of that. And that's typically because a lot of the products that were designed for DevOps are not well suited for ITOps. So really there is a new-
This is natural as we go to that part of the thing. And that's the kind of stuff that Portworx and Pure Storage have been kind of focused on solving. And that's kind of been how we've made our mark in the industry, right? We've helped people really get to production on some of these different points. >> Expectations of both companies have been strong: high quality, obviously performance on the Pure side from day one, they just did a great job with the products. Now, when you go into Cloud Native you have this connection, okay, to the customer. Again, I think it's a huge point on the changing landscape. How do you see that IT to DevOps emerging? Because the trend that we're seeing is abstracting away the complexities of management. So I won't say managed services are more of a trend, they've always been around, but the notion of making it easier for customers. >> Yep, absolutely right. >> Super important. So can you guys share what you guys are doing to make it easier, because not everyone has a DevOps team. >> Yeah, so look, the number one way things are made easier is to make them more consumable by offering them as a service. So this is one of the things, here we are, at AWS Summit, right? And delighted to be here by the way. And we have a strategic alliance with AWS, and specifically, what we're here to announce really is that we're announcing a backup as a SaaS product. Coming up in a few weeks we're going to be going live, running on AWS as a service, integrated with AWS. So essentially what happens is, if you have a containerized set of applications you're deploying on EKS, ECS, AWS, what have you, we will automatically provide the ability for that to be backed up, scaled, and to be very, very container granular, very app specific, right? Yeah, so it's designed specifically for kubernetes. Now here's the kind of key thing to say, right? Backup's been around for a long time. You've interviewed tons of backup people in the past. But traditional backup is just not going to work for kubernetes. And it's very simple if you think about it, John. >> John: And why is that? >> It's a very simple thing, right? Traditional backup focuses on apps and data, right? Those are the two kind of legs of that. And they create catalogs and then do a great job there. Well, here's what's happened with Cloud Native. You have a thing inserted in the middle called kubernetes. So when you take a snapshot, I'm now kind of going into a specific kind of world of storage, right? When you take a snapshot, what Portworx does is we take a 3D snapshot. What you really need to recover from a backup situation, where you want to go back to an earlier stage, to be kubernetes specific, is an app snapshot, a snapshot of the kubernetes spec, the pod spec, and third, a snapshot of the data. Well, traditional backup folks are not taking that middle snapshot. So we do a 3D snapshot and we recover all three, which is really what you need to be able to kind of get backed up and get recovered in minutes. >> Okay, and so the alternative to not doing that is what? What will happen? >> You'd have to do your old machine level backup. So what happens is, traditional backups are typically VM level or machine level, right? So you're taking a snapshot of the whole kind of machine and server or VM setup, and then you recover all of that, and then you run kubernetes on that and then you try to recover it- >> John: To either stand everything up again. >> Yeah, yeah. >> John: Pretty much. >> Yeah. Whereas, what do most people want to do?
This is a very different use case, by the way, right? How does this work? What people are doing for kubernetes is they're not doing archival kind of backup. What they're doing is real time, right? You're running in ops. Like I said, you got an oops: "Hey, a new release for one of the new databases didn't work, right? Boom! I want to just go back to, like, yesterday, right?" So how do I do that? Well, here you can just go back for that one database, one app, and recover back to that. So it's operational backup and recovery as opposed to archival backup and recovery. So for that, to be able to recover in seconds, right, you need to be, you kind of want to be, integrated with AWS, which is what we are. So it's integrated, it's automated, and it's very, very container granular. And so these three things are the things that make it sort of very specific. >> I love the integration story. 'Cause I think that's the big mega trend we're seeing now, that integrating in. And, but again, it's a systems concept. It's not standalone storage, detached storage. >> Murli: Exactly. >> It's always, even though it might be decoupled a little bit, it's glued together through, say- >> John, you said it right. The easy button is for the system, right? Not for the individual component. Look, all of us vendors in this ecosystem are going around claiming to be easy. But when we say that, what do we mean? We mean, oh, I'm easy to use. Well, that doesn't help the user who's got to put all this stuff together. So it's really kind of making that stack work. >> This is easy to use, but it made these things more complex. This is what we do in the enterprise: solve complexity with more complexity. >> Putting the problem to the other guy. Yeah. So that end to end ease of use is, I would say, the number one benefit, right? One, it's container specific and designed for kubernetes. And second, it really, really is easy. >> Well, I really like the whole thing, and I want to get your thoughts as we close out. What should people know about Pure and Portworx's relationship now and the Amazon integration, what's the new narrative? The north star's still the same? High performance storage, backup, securely recover and deliver the data in whatever mechanism we can. That north star's clear, never changes, which is great. I feel good about Pure and Cloud Native. It's just taking the blockers away- >> I think the single biggest thing I would say is, all of these things, we're turning into as-a-service offerings. So backup is going to be as a service, and our Portworx product, the Portworx Enterprise Pure Storage product, is going to be offered as a service. So with as a service, it's easy to consume. It's easy to deploy. It's fully automated. That's kind of the single biggest aha! Especially for the folks who are deploying on AWS today; AWS is well known for being easy to use. It's kind of fully automated. Well here, now you have this functionality for Cloud Native workloads. >> Final question, real quick, customer reaction so far, I'm assuming marketplace integration, buying terms, joint selling, go to market? >> So yeah, it is integrated billing, and all of that is part of that kind of offering, right? So when we say easy, it's not just about being easy to use, it's about being easy to buy. It's being easy to expand all of that and scaling. Yeah. And being able to kind of automatically, or automagically as I like to say, scale it, right? So all of that is absolutely part of it, right?
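As a rough illustration of the "3D snapshot" idea Murli describes, here is a minimal sketch using the stock Kubernetes Python client and the standard CSI VolumeSnapshot API rather than Portworx's own tooling, which will differ. It captures the application spec, the related Kubernetes configuration, and a point-in-time snapshot of the data volume; all resource names and the snapshot class are assumptions.

```python
# Hypothetical sketch of capturing the three legs of a Kubernetes-aware backup:
# (1) the app spec, (2) the Kubernetes/pod configuration around it, and
# (3) a point-in-time snapshot of the data. Names are illustrative only.
import yaml
from kubernetes import client, config

config.load_kube_config()
ns = "orders"

# 1) Application spec: the Deployment that defines the workload.
apps = client.AppsV1Api()
deploy = apps.read_namespaced_deployment("orders-db", ns)

# 2) Kubernetes configuration the app depends on (ConfigMap, Service, PVC).
core = client.CoreV1Api()
cfg = core.read_namespaced_config_map("orders-db-config", ns)
svc = core.read_namespaced_service("orders-db", ns)
pvc = core.read_namespaced_persistent_volume_claim("orders-db-data", ns)

# Serialize all of the manifests so they can be re-applied on restore.
with open("orders-db-manifests.yaml", "w") as out:
    for obj in (deploy, cfg, svc, pvc):
        yaml.dump(client.ApiClient().sanitize_for_serialization(obj), out)
        out.write("---\n")

# 3) Data: ask the CSI driver for a point-in-time snapshot of the volume.
snap = {
    "apiVersion": "snapshot.storage.k8s.io/v1",
    "kind": "VolumeSnapshot",
    "metadata": {"name": "orders-db-data-snap", "namespace": ns},
    "spec": {
        "volumeSnapshotClassName": "csi-snapclass",  # assumption: cluster-specific
        "source": {"persistentVolumeClaimName": "orders-db-data"},
    },
}
client.CustomObjectsApi().create_namespaced_custom_object(
    group="snapshot.storage.k8s.io", version="v1",
    namespace=ns, plural="volumesnapshots", body=snap,
)
```

Restoring "back to yesterday" is then re-applying the saved manifests and pointing the PVC at the chosen snapshot, which is the operational, per-app recovery being contrasted with restoring a whole VM.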
So it is really kind of... It's not about having the basics anymore. We've been in the market now for six, seven years, right? We have sort of an advanced offering that not only knows what customers want but anticipates what they can expect, and that's a key difference. >> I was talking to Dr. Matt Wood real quick, I know we got to wrap up on the schedule, but earlier today, about the AI and business analytics division he's running, and we were talking about serverless and the impact of serverless. And he really kind of came down the same lines as where you are with the storage and the cloud data, which is, "Hey, some people just want storage and the elastic analytics without all the under-the-covers stuff." Some people want to look under the covers, fine, whatever the choice. So really two things, so. >> Yeah, yeah. All the way from, you can buy the individual components, or you can buy the as-a-service offering, which just packages it all up in an easy to consume kind of solution, right? >> Final, final question. What's it like at Pure, everything going well, things good? >> We love it, man. I'll tell you, these folks have welcomed us with open arms. And look, I've been acquired twice before. And I say this, that one of the key linchpins to a successful integration or acquisition is not just the strategic intent, that always exists, but really a common culture. And we've been blessed. I think the two companies have a strong common culture of being customer first, product excellence, and team wins every time. And these three things kind of have pulled us together. It's been a pleasure. >> One of the benefits of doing theCUBE for 13 years is that you get to see things. Scott came on theCUBE to announce Pure Storage back when he was a nobody. There was, "Oh, you're never going to get escape velocity, EMC's going to kill you, you'll never own it." Nope. >> Well, we're talking about marketplaces, and theCUBE is the marketplace of big announcements, John. So this is, delighted- >> Announcements. >> Yeah. Yeah. Well, that was the AWS announcement. Yeah. So that's, that is big. >> Final words, share with the audience. What to expect in the next year for you guys? What's the big news coming down? What's coming around the corner? >> I think you can expect from Pure and Portworx the as-a-service set of offerings around HA, DR, backup, but also brand new stuff, keep an eye out. We'll be back with John, I hope, talking about data services. So we have a Portworx Data Services product that is going to be announced. And it's magic. It's allowing people to deploy databases in a very, very... it's the easy button for database deployment. >> Congratulations on all your success. The VP and General Manager of the Cloud Native Business Unit. >> You make it sound bigger than it actually is, John. >> Thanks for coming on. Appreciate it. >> Thanks. >> Okay, theCUBE coverage will be back with more. You're watching theCUBE here, live in Moscone, on the ground at AWS Summit 2022. I'm John Furrier. Thanks for watching. (upbeat music)

Published Date : Apr 22 2022


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
John | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
Murli Murli | PERSON | 0.99+
John Furrier | PERSON | 0.99+
six | QUANTITY | 0.99+
Detroit | LOCATION | 0.99+
two companies | QUANTITY | 0.99+
201 problems | QUANTITY | 0.99+
13 years | QUANTITY | 0.99+
John Furry | PERSON | 0.99+
56% | QUANTITY | 0.99+
85% | QUANTITY | 0.99+
Moscone Center | LOCATION | 0.99+
Moscone | LOCATION | 0.99+
Scott | PERSON | 0.99+
Matt Wood | PERSON | 0.99+
101 problems | QUANTITY | 0.99+
New York City | LOCATION | 0.99+
67% | QUANTITY | 0.99+
20 | QUANTITY | 0.99+
one app | QUANTITY | 0.99+
next year | DATE | 0.99+
82% | QUANTITY | 0.99+
Portworx | ORGANIZATION | 0.99+
Murli Thirumale | PERSON | 0.99+
yesterday | DATE | 0.99+
Amazon | ORGANIZATION | 0.99+
twice | QUANTITY | 0.99+
second thing | QUANTITY | 0.99+
north Star | ORGANIZATION | 0.99+
EMC | ORGANIZATION | 0.99+
two things | QUANTITY | 0.99+
both companies | QUANTITY | 0.99+
Murli | PERSON | 0.99+
Second | QUANTITY | 0.99+
Pure Storage | ORGANIZATION | 0.99+
second part | QUANTITY | 0.98+
80 | QUANTITY | 0.98+
One | QUANTITY | 0.98+
one | QUANTITY | 0.98+
one database | QUANTITY | 0.98+
this year | DATE | 0.98+
third | QUANTITY | 0.98+
CNCF | ORGANIZATION | 0.97+
AWS Summit 2022 | EVENT | 0.97+
one full year | QUANTITY | 0.97+
Dr. | PERSON | 0.97+
Ford | ORGANIZATION | 0.97+
Pure | ORGANIZATION | 0.97+
three things | QUANTITY | 0.97+
first | QUANTITY | 0.97+
AWS Summit | EVENT | 0.97+
two kind | QUANTITY | 0.96+
second | QUANTITY | 0.96+
seven years | QUANTITY | 0.96+
Cloud Native | TITLE | 0.95+
three | QUANTITY | 0.95+
today | DATE | 0.94+
theCUBE | ORGANIZATION | 0.91+

Keynote Enabling Business and Developer Success | Open Cloud Innovations


 

(upbeat music) >> Hello, and welcome to this startup showcase. It's great to be here and talk about some of the innovations we are doing at AWS, how we work with our partner community, especially our open source partners. My name is Deepak Singh. I run our compute services organization, which is a very vague way of saying that I run a number of things that are connected together through compute. Very specifically, I run a container services organization. So for those of you who are into containers, ECS, EKS, fargate, ECR, App Runner Those are all teams that are within my org. I also run the Amazon Linux and BottleRocketing. So anything AWS does with Linux, both externally and internally, as well as our high-performance computing team. And perhaps very relevant to this discussion, I run the Amazon open source program office. Serving at AWS for over 13 years, almost 14, involved with compute in various ways, including EC2. What that has done has given me a vantage point of seeing how our customers use the services that we build for them, how they leverage various partner solutions, and along the way, how AWS itself has gotten involved with opensource. And I'll try and talk to you about some of those factors and how they impact, how you consume our services. So why don't we get started? So for many of you, you know, one of the things, there's two ways to look at AWS and open-source and Amazon in general. One is the number of contributors you may have. And the number of repositories that contribute to. Those are just a couple of measures. There are people that I work with on a regular basis, who will remind you that, those are not perfect measures. Sometimes you could just contribute to one thing and have outsized impact because of the nature of that thing. But it address being what it is, increasingly we'll look at different ways in which we can help contribute and enhance open source 'cause we consume a lot of it as well. I'll talk about it very specifically from the space that I work in the container space in particular, where we've worked a lot with people in the Kubernetes community. We've worked a lot with people in the broader CNCF community, as well as, you know, small projects that our customers might have got started off with. For example, I want to like talking about is Argo CD from Intuit. We were very actively involved with helping them figure out what to do with it. And it was great to see how into it. And we worked, etc, came together to think about get-ups at the Kubernetes level. And while those are their projects, we've always been involved with them. So we try and figure out what's important to our customers, how we can help and then take because of that. Well, let's talk about a little bit more, here's some examples of the kinds of open source projects that Amazon and AWS contribute to. They arranged from the open JDK. I think we even now have our own implementation of Java, the Corretto open source project. We contribute to projects like rust, where we are very active in the rest foundation from a leadership role as well, the robot operating system, just to pick some, we collaborate with Facebook and actively involved with the pirates project. And there's many others. You can see all the logos in here where we participate either because they're important to us as AWS in the services that we run or they're important to our customers and the services that they consume or the open source projects they care about and how we get to those. 
How we get and make those decisions is often depends on the importance of that particular project. At that point in time, how much impact they're having to AWS customers, or sometimes very feel that us contributing to that project is super critical because it helps us build more robust services. I'll talk about it in a completely, you know, somewhat different basis. You may have heard of us talk about our new next generation of Amazon Linux 2022, which is based on fedora as its sub stream. One of the reasons we made this decision was it allows us to go and participate in the preneurial project and make sure that the upstream project is robust, stays robust. And that, that what that ends up being is that Amazon Linux 2022 will be a robust operating system with the kinds of capabilities that our customers are asking for. That's just one example of how we think about it. So for example, you know, the Python software foundation is something that we work with very closely because so many of our customers use Python. So we help run something like PyPy which is many, you know, if you're a Python developer, I happened to be a Ruby one, but lots of our customers use Python and helping the Python project be robust by making sure PyPy is available to everybody is something that we help provide credits for help support in other ways. So it's not just code. It can mean many different ways of contributing as well, but in the end code and operations is where we hang our happens. Good examples of this is projects that we will create an open source because it makes sense to make sure that we open source some of the core primitives or foundations that are part of our own services. A great example of that, whether this be things that we open source or things that we contribute to. And I'll talk about both and I'll talk about things near and dear to my heart. There's many examples I've picked the two that I like talking about. The first of these is firecracker. Many of you have heard about it, a firecracker for those of you who don't know is a very lightweight virtual machine manager, which allows you to run these micro VMs. And why was this important many years ago when we started Lambda and quite honestly, Fugate and foggy, it still runs quite a bit in that mode, we used to have to run on VMs like everything else and finding the right VM for the size of tasks that somebody asks for the size of function that somebody asks for is requires us to provision capacity ahead of time. And it also wastes a lot of capacity because Lambda function is small. You won't even if you find the smallest VM possible, those can be a little that can be challenging. And you know, there's a lot of resources that are being wasted. VM start at a particular speed because they have to do a whole bunch of things before the operating system spins up and the virtual machine spins up and we asked ourselves, can we do better? come up with something that allows us to create right size, very lightweight, very fast booting. What's your machines, micro virtual machine that we ended up calling them. That's what led to firecracker. And we open source the project. And today firecrackers use, not just by AWS Lambda or foggy, but by a number of other folks, there's companies like fly IO that are using it. We know people using firecracker to run Kubernetes on prem on bare metal as an example. So we've seen a lot of other folks embrace it and use it as the foundation for building their own serverless services, their own container services. 
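As a rough illustration of what driving these lightweight microVMs looks like, here is a minimal sketch against Firecracker's REST API over its Unix socket. It assumes a firecracker process is already running with --api-sock /tmp/firecracker.socket, that the kernel and rootfs paths exist (they are placeholders), and that the third-party requests-unixsocket package is available; the field names should be checked against the Firecracker API docs for your version.

```python
# Minimal sketch: configure and boot a Firecracker microVM via its REST API.
# Assumes `firecracker --api-sock /tmp/firecracker.socket` is already running
# and that the kernel/rootfs paths below exist (they are placeholders).
import requests_unixsocket

session = requests_unixsocket.Session()
API = "http+unix://%2Ftmp%2Ffirecracker.socket"  # URL-encoded socket path

# Size the microVM for the task at hand (the "right-sizing" point above).
session.put(API + "/machine-config",
            json={"vcpu_count": 1, "mem_size_mib": 128})

# Guest kernel and boot arguments.
session.put(API + "/boot-source",
            json={"kernel_image_path": "/images/vmlinux.bin",
                  "boot_args": "console=ttyS0 reboot=k panic=1 pci=off"})

# Root filesystem for the guest.
session.put(API + "/drives/rootfs",
            json={"drive_id": "rootfs",
                  "path_on_host": "/images/rootfs.ext4",
                  "is_root_device": True,
                  "is_read_only": False})

# Boot it. Startup is a small fraction of a second, which is what makes
# per-function or per-task microVMs practical at Lambda/Fargate scale.
resp = session.put(API + "/actions", json={"action_type": "InstanceStart"})
resp.raise_for_status()
```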
And we think there's a lot of value and learnings that we can bring to the table because we get the experience of operating at scale, but other people can bring to the table cause they may have specific requirements that we may not find it as important from an AWS perspective. So that's firecracker an example of a project where we contribute because we feel it's fundamentally important to us as continually. We were found, you know, we've been involved with continuity from the beginning. Today, we are a whole team that does nothing else, but contribute to container D because container D underlies foggy. It underlies our Kubernetes offerings. And it's increasingly being used by customers directly by their placement. You know, where they're running container D instead of running a full on Docker or similar container engine, what it has allowed us to do is focus on what's important so that we can operate continuously at scale, keep it robust and secure, add capabilities to it that AWS customers need manifested often through foggy Kubernetes, but in the end, it's a win-win for everybody. It makes continuously better. If you want to use containers for yourself on AWS, that's a great way to you. You know, you still, you still benefit from all the work that we're doing. The decision we took was since it's so important to us and our customers, we wanted a team that lived in breathed container D and made sure a super robust and there's many, many examples like that. No, that we ended up participating in, either by taking a project that exists or open sourcing our own. Here's an example of some of the open source projects that we have done from an AWS on Amazon perspective. And there's quite a few when I was looking at this list, I was quite surprised, not quite surprised I've seen the reports before, but every time I do, I have to recount and say, that's a lot more than one would have thought, even though I'd been looking at it for such a long time, examples of this in my world alone are things like, you know, what work had to do with Amazon Linux BottleRocket, which is a container host operating system. That's been open-sourced from day one. Firecracker is something we talked about. We have a project called AWS peril cluster, which allows you to spin up high performance computing clusters on AWS using the kind of schedulers you may use to use like slum. And that's an open source project. We have plenty of source projects in the web development space, in the security space. And more recently things like the open 3d engine, which is something that we are very excited about and that'd be open sourced a few months ago. And so there's a number of these projects that cover everything from tooling to developer, application frameworks, all the way to database and analytics and machine learning. And you'll notice that in a few areas, containers, as an example, machine learning as an example, our default is to go with open source option is where we can open source. And it makes sense for us to do so where we feel the product community might benefit from it. That's our default stance. The CNCF, the cloud native computing foundation is something that we've been involved with quite a bit. You know, we contribute to Kubernetes, be contribute to Envoy. I talked about continuity a bit. We've also contributed projects like CDK 8, which marries the AWS cloud development kit with Kubernetes. It's now a sandbox project in Kubernetes, and those are some of the areas. CNCF is such a wide surface area. 
We don't contribute to everything, but we definitely participate actively in CNCF with projects like HCB that are critical to eat for us. We are very, very active in just how the project evolves, but also try and see which of the projects that are important to our customers who are running Kubernetes maybe by themselves or some other project on AWS. Envoy is a good example. Kubernetes itself is a good example because in the end, we want to make sure that people running Kubernetes on AWS, even if they are not using our services are successful and we can help them, or we can work on the projects that are important to them. That's kind of how we think about the world. And it's worked pretty well for us. We've done a bunch of work on the Kubernetes side to make sure that we can integrate and solve a customer problem. We've, you know, from everything from models to work that we have done with gravity on our arm processor to a virtual GPU plugin that allows you to share and media GPU resources to the elastic fabric adapter, which are the network device for high performance computing that it can use at Kubernetes on AWS, along with things that directly impact Kubernetes customers like the CDKs project. I talked about work that we do with the container networking interface to the Amazon control of a Kubernetes, which is an open source project that allows you to use other AWS services directly from Kubernetes clusters. Again, you notice success, Kubernetes, not EKS, which is a managed Kubernetes service, because if we want you to be successful with Kubernetes and AWS, whether using our managed service or running your own, or some third party service. Similarly, we worked with premetheus. We now have a managed premetheus service. And at reinvent last year, we announced the general availability of this thing called carpenter, which is a provisioning and auto-scaling engine for Kubernetes, which is also an open source project. But here's the beauty of carpenter. You don't have to be using EKS to use it. Anyone running Kubernetes on AWS can leverage it. We focus on the AWS provider, but we've built it in such a way that if you wanted to take carpenter and implemented on prem or another cloud provider, that'd be completely okay. That's how it's designed and what we anticipated people may want to do. I talked a little bit about BottleRocket it's our Linux-based open-source operating system. And the thing that we have done with BottleRocket is make sure that we focus on security and the needs of customers who want to run orchestrated container, very focused on that problem. So for example, BottleRocket only has essential software needed to run containers, se Linux. I just notice it says that's the lineups, but I'm sure that, you know, Lena Torvalds will be pretty happy. And seeing that SE linux is enabled by default, we use things like DM Verity, and it has a read only root file system, no shell, you can assess it. You can install it if you wanted to. We allowed it to create different bill types, variants as we call them, you can create a variant for a non AWS resource as well. If you have your own homegrown container orchestrator, you can create a variant for that. It's designed to be used in many different contexts and all of that is open sourced. And then we use the update framework to publish and secure repository and kind of how this transactional system way of updating the software. And it's something that we didn't invent, but we have embraced wholeheartedly. 
It's a bottle rockets, completely open source, you know, have partners like Aqua, where who develop security tools for containers. And for them, you know, something I bought in rocket is a natural partnership because people are running a container host operating system. You can use Aqua tooling to make sure that they have a secure Indiana environment. And we see many more examples like that. You may think so over us, it's all about AWS proprietary technology because Lambda is a proprietary service. But you know, if you look peek under the covers, that's not necessarily true. Lambda runs on top of firecracker, as we've talked about fact crackers and open-source projects. So the foundation of Lambda in many ways is open source. What it also allows people to do is because Lambda runs at such extreme scale. One of the things that firecracker is really good for is running at scale. So if you want to build your own firecracker base at scale service, you can have most of the confidence that as long as your workload fits the design parameters, a firecracker, the battle hardening the robustness is being proved out day-to-day by services at scale like Lambda and foggy. For those of you who don't know service support services, you know, in the end, our goal with serverless is to make sure that you don't think about all the infrastructure that your applications run on. We focus on business logic as much as you can. That's how we think about it. And serverless has become its own quote-unquote "Sort of environment." The number of partners and open-source frameworks and tools that are spun up around serverless. In which case mostly, I mean, Lambda, API gateway. So it says like that is pretty high. So, you know, number of open source projects like Zappa server serverless framework, there's so many that have come up that make it easier for our customers to consume AWS services like Lambda and API gateway. We've also done some of our own tooling and frameworks, a serverless application model, AWS jealous. If you're a Python developer, we have these open service runtimes for Lambda, rust dot other options. We have amount of number of tools that we opened source. So in general, you'll find that tooling that we do runtime will tend to be always be open-sourced. We will often take some of the guts of the things that we use to build our systems like firecracker and open-source them while the control plane, etc, AWS services may end up staying proprietary, which is the case in Lambda. Increasingly our customers build their applications and leverage the broader AWS partner network. The AWS partner network is a network of partnerships that we've built of trusted partners. when you go to the APN website and find a partner, they know that that partner meets a certain set of criteria that AWS has developed, and you can rely on those partners for your own business. So whether you're a little tiny business that wants some function fulfill that you don't have the resources for or large enterprise that wants all these applications that you've been using on prem for a long time, and want to keep leveraging them in the cloud, you can go to APN and find that partner and then bring their solution on as part of your cloud infrastructure and could even be a systems integrator, for example, to help you solve this specific development problem that you may have a need for. Increasingly, you know, one of the things we like to do is work with an apartment community that is full of open-source providers. 
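To make the "focus on business logic" point concrete, this is roughly the entire unit of code a team owns for a small Lambda function behind API Gateway; provisioning, scaling, and patching the host are the service's job. The event shape assumes the API Gateway proxy integration, and the field names are illustrative.

```python
# Minimal sketch of a Lambda handler behind an API Gateway proxy integration.
# Everything underneath this function (fleet, microVM, scaling) is managed by
# the service; only the business logic below is user-owned.
import json


def handler(event, context):
    # The proxy integration delivers the HTTP body as a string.
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```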
So a great one, there's so many, and you have, we have a panel discussion with many other partners as well, who make it easier for you to build applications on AWS, all open source and built on open source. But I like to call it a couple of them. The first one of them is TIDELIFT. TIDELIFT, For those of you who don't know is a company that provides SAS based tools to curate track, manage open source catalogs. You know, they have a whole network of maintainers and providers. They help, if you're an independent open developer, or a smart team should probably get to know TIDELIFT. They provide you benefits and, you know, capabilities as a developer and maintainer that are pretty unique and really help. And I've seen a number of our open source community embraced TIDELIFT quite honestly, even before they were part of the APN. But as part of the partner network, they get to participate in things like ISP accelerate and they get to they're officially an advanced tier partner because they are, they migrated the SAS offering onto AWS. But in the end, if you're part of the open source supply chain, you're a maintainer, you are a developer. I would recommend working with TIDELIFT because their goal is making all of you who are developing open source solutions, especially on AWS, more successful. And that's why I enjoy this partnership with them. And I'm looking to do a lot more because I think as a company, we want to make sure that open source developers don't feel like they are not supported because all you have to do is read various forums. It's challenging often to be a maintainer, especially of a small project. So I think with helping with licensing license management, security identification remediation, helping these maintainers is a big part of what TIDELIFT to us and it was great to see them as part of a partner network. Another partner that I like to call sysdig. I actually got introduced to them many years ago when they first launched. And one of the things that happened where they were super interested in some of our serverless stuff. And we've been trying to figure out how we can work together because all of our customers are interested in the capabilities that cystic provides. And over the last few years, he found a number of areas where we can collaborate. So sysdig, I know them primarily in a security company. So people use cystic to secure the bills, detect, you know, do threat response, threat detection, completely continuously validate their posture, get this continuous analytics signal on how they're doing and monitor performance. At the end of it, it's a SAS platform. They have a very nice open source security stack. The one I'm most familiar with. And I think most of you are probably familiar with is Falco. You know, sysdig, a CNCF project has been super popular. It's just to go SSS what 3, 37, 40 million downloads by now. So that's pretty, pretty cool. And they have been a great partner because we've had to do make sure that their solution works at target, which is not a natural place for their software to run, but there was enough demand and interest from our customers that, you know, or both companies leaned in to make sure they can be successful. So last year sister got a security competency. We have a number of specific competencies that we for our partners, they have integration and security hub is great. partners are lean in the way cystic has onto making our customer successful. And working with us are the best partners that we have. 
And there's a number of open source companies out there built on open source where their entire portfolio is built on open source software or the active participants like we are that we love working with on a day to day basis. So, you know, I think the thing I would like to, as we wind this out in this presentation is, you know, AWS is constantly looking for partnerships because our partners enable our customers. They could be with companies like Redis with Mongo, confluent with Databricks customers. Your default reaction might be, "Hey, these are companies that maybe compete with AWS." but no, I mean, I think we are partners as well, like from somebody at the lower end of the spectrum where people run on top of the services that I own on Linux and containers are SE 2, For us, these partners are just as important customers as any AWS service or any third party, 20 external customer. And so it's not a zero sum game. We look forward to working with all these companies and open source projects from an AWS perspective, a big part of how, where my open source program spends its time is making it easy for our developers to contribute, to open source, making it easy for AWS teams to decide when to open source software or participate in open source projects. Over the last few years, we've made significant changes in how we reduce the friction. And I think you can see it in the results that I showed you earlier in this stock. And the last one is one of the most important things that I say and I'll keep saying that, that we do as AWS is carry the pager. There's a lot of open source projects out there, operationalizing them, running them at scale is not easy. It's not all for whatever reason. It may not have anything to do with the software itself. But our core competency is taking that and being really good at operating it and becoming experts at operating it. And then ideally taking that expertise and experience and operating that project, that software and contributing back upstream. Cause that makes it better for everybody. And I think you'll see us do a lot more of that going forward. We've been doing that for the last few years, you know, in the container space, we do it every day. And I'm excited about the possibilities. With that. Thank you very much. And I hope you enjoy the rest of the showcase. >> Okay. Welcome back. We have Deepak sing here. We just had the keynote closing keynote vice-president of compute services. Deepak. Great to a great keynote, great wisdom and insight from that session. A very notable highlights and cutting edge trends and product information. Thanks for sharing. >> No, anytime it's always good to be here. It's too bad that we still doing this virtually, but always good to talk to you, John. >> We'll get hopefully through this way pretty quickly, I want to jump right in. Cause we don't have a lot of time. I want to get some quick question. You've brought up a good things. Open source innovation. Okay. Going next level. You've seen the rise of super clouds and super apps developing at open source. You're seeing big companies contributing, you know, you mentioned Argo into it. You're seeing that dynamic where companies are forming around this. This is a rising tide. This is, this is actually real. It's not the old school of, okay, here's a project. And then someone manages support and commercialization of it. It's actually platform in cloud scale. This is next gen. >> Yeah. And actually I think it started a few years ago. 
We can talk about a company that, you know, you're very familiar with as part of this event, which is armory many years ago, Netflix spun off this project called Spinnaker. A Spinnaker is CISED you know, CSED system that was developed at Netflix for their own purposes, but they chose to open solicit. And since then, it's become very popular with customers who want to use it even on prem. And you have a company that spun up on it. I think what's making this world very unique is you have very large companies like Facebook that will build things for themselves like VITAS or Netflix with Spinnaker and open source them. And you can have a lot of discussion about why they chose to do so, etc. But increasingly that's becoming the default when Amazon or Netflix or Facebook or Mehta, I guess you call them these days, build something for themselves for their own needs. The first question we ask ourselves is, should it be opensource? And increasingly we are all saying yes. And here's what happens because of that. It gives an opportunity depending on how you open source it for innovation through commercial deployments, so that you get SaaS companies, you know, that are going to take that product and make it relevant and useful to a very broad number of customers. You build partnerships with cloud providers like AWS, because our customers love this open source project and they need help. And they may choose an AWS managed service, or they may end up working with this partner on a day-to-day basis. And we want to work with that partner because they're making our customers successful, which is one reason all of us are here. So you're having this set of innovation from large companies from, you know, whether they are just consumer companies like Metta infrastructure companies like us, or just random innovation that's happening in an open source project that which ends up in companies being spun up and that foster that innovative innovation and that flywheel that's happening right now. And I think you said that like, this is unique. I mean, you never saw this happen before from so many different directions. >> It really is a nice progression on the business model side as well. You mentioned Argo, which is a great organic thing that was Intuit developed. We just interviewed code fresh. They just presented here in the showcase as well. You seeing the formation around these projects develop now in the community at a different scale. I mean, look at code fresh. I mean, Intuit did it Argo and they're not just supporting it. They're building a platform. So you seeing the dynamics of tools and now emerging the platforms, you mentioned Lambda, okay. Which is proprietary for AWS and your talk powered by open source. So again, open source combined with cloud scale allows for new potential super applications or super clouds that are developing. This is a new phenomenon. This isn't just lift and shift and host on the cloud. This is actually a construction production developer workflow. >> Yeah. And you are seeing consumers, large companies, enterprises, startups, you know, it used to be that startups would be comfortable adopting some of these solutions, but now you see companies of all sizes doing so. And I said, it's not just software it's software, the services increasingly becoming the way these are given, delivered to customers. I actually think the innovation is just getting going, which is why we have this. 
We have so many partners here who are all in inventing and innovating on top of open source, whether it's developed by them or a broader community. >> Yeah. I liked, I liked the represent container. Do you guys have, did that drove that you've seen a lot of changes and again, with cloud scale and open source, you seeing the dynamics change, whether you're enabling that, and then you see kind of like real big change. So let's take snowflake, a big customer of AWS. They started out as a startup too, but they weren't a data warehouse. They were bringing data warehouse like functionality and then changing everything differently and making it consumable for the cloud. And hence they're huge. So that's a disruption into an incumbent leader or sector. Then you've got new capabilities emerging. What's your thoughts, Deepak? Can you share your vision on how you have the disruption to existing leaders, old guard, if you will, as you guys call them and then new capabilities as these new platforms emerge at a net new functionality, how do you see that emerging? >> Yeah. So I speak from my side of the world. I've lived in over the last few years, which has containers and serverless, right? There's a lot of, if you go to any enterprise and ask them, do you want to modernize the infrastructure? Do you want to take advantage of automated software delivery, continuous delivery infrastructure as code modern observability, all of them will say yes, but they also are still a large enterprise, which has these enterprise level requirements. I'm using the word enterprise a lot. And I usually it's a trigger word for me because so many customers have similar requirements, but I'm using it here as large company with a lot of existing software and existing practices. I think the innovation that's coming and I see a lot of companies doing that is saying, "Hey, we understand the problems you want to solve. We understand the world where you live in, which could be regulated." You want to use all these new modalities. How do we allow you to use all of them? Keep the advantages of switching to a Lambda or switching to, and a service running on far gate, but give you the same capabilities. And I think I'll bring up cystic here because we work so closely with them on Falco. As an example, I just talked about them in my keynote. They could have just said, "Oh no, we'll just support the SE2 and be done with it." They said, "No, we're going to make sure that serverless containers in particular are something that you're going to be really good at because our customers want to use them, but requires us to think differently. And then they ended up developing new things like Falco that are born in this new world, but understand the requirements of the old world. If you get what I'm saying. And I think that a real example. >> Yeah. Oh, well, I mean, first of all, they're smart. So that was pretty obvious for most people that know, sees that you can connect the dots on serverless, which is a great point, but not everyone can see that again, this is what's new and and systig was just found in his backyard. As I found out on my interview, a great, great founder, they would do a new thing. So it was a very easy to connect the dots there again, that's the trend. 
Well, I got to ask if they're doing that for serverless, you mentioned graviton in your speech and what came out of you mentioned graviton in your speech and what came out of re-invent this past year was all the innovation going on at the compute level with gravitron at many levels in the Silicon. How should companies and open source developers think about how to innovate with graviton? >> Yeah, I mean, you've seen examples from people blogging and tweeting about how fast their applications run and grab it on the price performance benefits that they get, whether it's on, you know, whether it's an observability or other places. something that AWS is going to embrace across a compute something that AWS is going to embrace across a compute portfolio. Obviously you can go find EC2 instances, the gravitron two instances and run on them and that'll be great. But we know that most of our customers, many of our customers are building new applications on serverless containers and serveless than even as containers increasingly with things like foggy, where they don't want to operate the underlying infrastructure. A big part of what we're doing is to make sure that graviton is available to you on every compute modality. You can run it on a C2 forever. You've been running, being able to use ECS and EKS and run and grab it on almost since launch. What do you want me to take it a step further? You elastic Beanstalk customers, elastic Beanstalk has been around for a decade, but you can now use it with graviton. people running ECS on for gate can now use graviton. Lambda customers can pick graviton as well. So we're taking this price performance benefits that you get So we're taking this price performance benefits that you get from graviton and basically putting it across the entire compute portfolio. What it means is every high level service that gets built on compute infrastructure. And you get the price performance benefits, you get the price performance benefits of the lower power consumption of arm processes. So I'm personally excited like crazy. And you know, this has graviton 2 graviton 3 is coming. >> That's incredible. It's an opportunity like serverless was it's pretty obvious. And I think hopefully everyone will jump on that final question as the time's ticking here. I want to get your thoughts quickly. If you look at what's happened with containers over the past say eight years since the original founding of the first Docker instance, if you will, to how that's evolved and then the introduction of Kubernetes and the cloud native wave we're seeing now, what is, how would you describe the relationship between the success Docker, seeing now with Kubernetes in the cloud native construct what's different and why is this combination so successful? >> Yeah. I often say that containers would have, let me rephrase that. what I say is that people would have adopted sort of the modern way of running applications, whether containers came around or not. But the fact that containers came around made that migration and that journey is so much more efficient for people. So right from, I still remember the first doc that Solomon gave Billy announced DACA and starting to use it on customers, starting to get interested all the way to the more sort of advanced orchestration that we have now for containers across the board. And there's so many examples of the way you can do that. Kubernetes being the most, most well-known one. Here's the thing that I think has changed. 
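The point above about Graviton being a property you select across compute modalities, rather than a fleet you manage, shows up as a one-line choice in the APIs. A hedged boto3 sketch follows; the function name, role ARN, and deployment package are placeholders.

```python
# Minimal sketch: opting into Graviton (arm64) for a Lambda function with boto3.
# The role ARN and zip file are placeholders; the package must be built for arm64.
import boto3

lambda_client = boto3.client("lambda")

with open("function.zip", "rb") as f:
    lambda_client.create_function(
        FunctionName="orders-api",
        Runtime="python3.12",
        Handler="app.handler",
        Role="arn:aws:iam::123456789012:role/orders-api-exec",  # placeholder
        Code={"ZipFile": f.read()},
        Architectures=["arm64"],  # Graviton-backed Lambda
    )

# The analogous switch for ECS/Fargate task definitions is the runtimePlatform
# field, e.g. runtimePlatform={"cpuArchitecture": "ARM64",
# "operatingSystemFamily": "LINUX"} on register_task_definition.
```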
I think what Kubernetes or Docker, or the whole sort of modern way of building applications has done is it's taken people who would have taken years adopting these practices and by bringing it right to the fingertips and rebuilding it into the APIs. And in the case of Kubernetes building an entire sort of software world around it, the number of, I would say number of decisions people have to take has gone smaller in many ways. There's so many options, the number of decisions that become higher, but the com the speed at which they can get to a result and a production version of an application that works for them is way low. I have not seen anything like what I've seen in the last 6, 7, 8 years of how quickly the most you know, the most I would say is, you know, a company that you would think would never adopt modern technology has been able to go from, this is interesting to getting a production really quickly. And I think it's because the tooling makes it So, and the fact that you see the adoption that you see right and the fact that you see the adoption that you see right from the fact that you could do Docker run Docker, build Docker, you know, so easily back in the day, all the way to all the advanced orchestration you can do with container orchestrator is today. sort of taking all of that away as well. there's never been a better time to be a developer independent of whatever you're trying to build. And I think containers are a big central part of why that's happened. >> Like the recipe, the combination of cloud-scale, the timing of Kubernetes and the containerization concepts just explode as a beautiful thing. And it creates more opportunities and will challenges, which are opportunities that are net new, but it solves the automation piece that we're seeing this again, it's only makes things go faster. >> Yes. >> And that's the key trend. Deepak, thank you so much for coming on. We're seeing tons of open cloud innovations, thanks to the success of your team at AWS and being great participants in the community. We're seeing innovations from startups. You guys are helping enabling that. Of course, they want to live on their own and be successful and build their super clouds and super app. So thank you for spending the time with us. Appreciate. >> Yeah. Anytime. And thank you. And you know, this is a great event. So I look forward to people running software and building applications, using AWS services and all these wonderful partners that we have. >> Awesome, great stuff. Great startups, great next generation leaders emerging. When you see startups, when they get successful, they become the modern software applications platforms out there powering business and changing the world. This is the cube you're watching the AWS startup showcase. Season two episode one open cloud innovations on John Furrier your host, see you next time.
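The "docker build, docker run" ease credited above with pulling teams into modern practices is just as scriptable. A small sketch with the Docker SDK for Python, assuming a local Docker daemon and a Dockerfile in the current directory:

```python
# Minimal sketch: build and run a container image with the Docker SDK for Python.
# Assumes a local Docker daemon and a Dockerfile in the current directory.
import docker

client = docker.from_env()

# Equivalent of: docker build -t demo:latest .
image, build_logs = client.images.build(path=".", tag="demo:latest")

# Equivalent of: docker run -d demo:latest
container = client.containers.run("demo:latest", detach=True)
print(container.id)
print(container.logs().decode("utf-8", errors="replace"))
```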

Published Date : Jan 26 2022


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Amazon | ORGANIZATION | 0.99+
AWS | ORGANIZATION | 0.99+
Deepak | PERSON | 0.99+
Lena Torvalds | PERSON | 0.99+
Falco | ORGANIZATION | 0.99+
Netflix | ORGANIZATION | 0.99+
John | PERSON | 0.99+
Deepak Singh | PERSON | 0.99+
Mehta | ORGANIZATION | 0.99+
two | QUANTITY | 0.99+
Facebook | ORGANIZATION | 0.99+
Lambda | TITLE | 0.99+
first | QUANTITY | 0.99+
John Furrier | PERSON | 0.99+
Java | TITLE | 0.99+
Python | TITLE | 0.99+
Solomon | PERSON | 0.99+
two ways | QUANTITY | 0.99+
One | QUANTITY | 0.99+
PyPy | TITLE | 0.99+
last year | DATE | 0.99+
over 13 years | QUANTITY | 0.99+
Linux | TITLE | 0.99+
Today | DATE | 0.99+
Indiana | LOCATION | 0.99+
Databricks | ORGANIZATION | 0.99+
both | QUANTITY | 0.99+

Andrew Backes, Armory & Ian Delahorne, Patreon | AWS Startup Showcase S2 E1 | Open Cloud Innovations


 

(upbeat music) >> Welcome to the AWS start up showcase, theCUBE's premiere platform and show. This is our second season, episode one of this program. I'm Lisa Martin, your host here with two guests here to talk about open source. Please welcome Andrew Backes, the VP of engineering at Armory, and one of our alumni, Ian Delahorne, the staff site, reliability engineer at Patreon. Guys, it's great to have you on the program. >> Thank you. >> Good to be back. >> We're going to dig into a whole bunch of stuff here in the next fast paced, 15 minutes. But Andrew, let's go ahead and start with you. Give the audience an overview of Armory, who you guys are, what you do. >> I'd love to. So Armory was founded in 2016 with the vision to help companies unlock innovation through software. And what we're focusing on right now is, helping those companies and make software delivery, continuous, collaborative, scalable, and safe. >> Got it, those are all very important things. Ian help the audience, if anyone isn't familiar with Patreon, it's a very cool platform. Talk to us a little bit about that Ian. >> Absolutely, Patreon is a membership platform for creators to be able to connect with their fans and for fans to be able to subscribe to their favorite creators and help creators get paid and have them earn a living with, just by being connected straight to their audience. >> Very cool, creators like podcasters, even journalists video content writers. >> Absolutely. There's so many, there's everything from like you said, journalists, YouTubers, photographers, 3D modelers. We have a nightclub that's on there, there's several theater groups on there. There's a lot of different creators. I keep discovering new ones every day. >> I like that, I got to check that out, very cool. So Andrew, let's go to your, we talk about enterprise scale and I'm using air quotes here. 'Cause it's a phrase that we use in every conversation in the tech industry, right? Scalability is key. Talk to us about what enterprise scale actually means from Armory's perspective. Why is it so critical? And how do you help enterprises to actually achieve it? >> Yeah, so the, I think a lot of the times when companies think about enterprise scale, they think about the volume of infrastructure, or volume of software that's running at any given time. There's also a few more things that go into that just beyond how many EC2 instances you're running or containers you're running. Also velocity, count how much time does it take you to get features out to your customers and then stability and reliability. Then of course, in enterprises, it isn't as simple as everyone deploying to the same targets. It isn't always just EC2, a lot of the time it's going to be multiple targets, EC2, it's going to be ECS, Lambda. All of these workloads are out there running. And how does a central platform team or a tooling team at a site enable that for users, enable deployment capabilities to those targets? Then of course, on top of that, there's going to be site specific technologies. And how do, how does your deployment tooling integrate with those site specific technologies? >> Is, Andrew is enterprise scale now even more important given the very transformative events, we've seen the last two years? We've seen such acceleration, cloud adoption, digital transformation, really becoming a necessity for businesses to stay alive. Do you think that, that skill now is even more important? >> Definitely, definitely. 
The, what we see, we've went through a wave of the, the first set of digital transformations, where companies are moving to the cloud and we know that's accelerating quite a bit. So that scale is all moving to the cloud and the amount of multiple targets that are being deployed to at any given moment, they just keep increasing. So that is a concern that companies need to address. >> Let's talk about the value, but we're going to just Spinnaker here in the deployment. But also let's start Andrew with the value that, Armory delivers on top of Spinnaker. What makes this a best of breed solution? >> Yeah, so on top of open-source Spinnaker, there are a lot of other building blocks that you're going to need to deploy at scale. So you're going to need to be able to provide modules or some way of giving your users a reusable building block that is catered to your site. So that is one of the big areas that Armory focuses on, is how can we provide building blocks on top of open source Spinnaker that sites can use to tailor the solution to their needs. >> Got it, tailor it to their needs. Ian let's bring you back into the conversation. Now, talk to us about the business seeds, the compelling event that led Patreon to choose Spinnaker on top of Armory. >> Absolutely. Almost three years ago, we had an outage which resulted in our payment processing slowed down. And that's something we definitely don't want to have happen because this would hinder creator's ability to get paid on time for them to be able to pay their employees, pay their rent, hold that hole, like everything that, everyone that depends on them. And there were many factors that went into this outage and one of them we identified is that it was very hard for us to, with our custom belt deploy tooling, to be able to easily deploy fast and to roll back if things went wrong. So I had used Spinnaker before to previous employer early on, and I knew that, that would be a tool that we could use to solve our problem. The problem was that the SRE team at Patreon at that time was only two people. So Spinnaker is a very complex product. I didn't have the engineering bandwidth to be able to, set up, deploy, manage it on my own. And I had happened to heard of Armory just that week before and was like, "This is the company that could probably help me solve my problems." So I engaged early on with Andrew and the team. And we migrated our customers deployed to, into Spinnaker and help stabilize our deploys and speed them up. >> So you were saying that the deployments were taking way too long before. And of course, as you mentioned from a payment processing perspective, that's people's livelihoods. So that's a pretty serious issue there. You found Armory a week into searching this seems like stuff went pretty quickly. >> And the week before the incident, they had randomly, the, one of the co-founders randomly reached out to me and was like, "We're doing this thing with Armory. You might be interested in this, we're doing this thing with Spinnaker, it's called Armory." And I kind of filed it away. And then they came fortuitous that we were able to use them, like just reach out to them like a week later. >> That is fortuitous, my goodness, what a good outreach and good timing there on Armory's part. And sticking with you a little bit, talk to us about what it is that the business challenges that Armory helps you to resolve? What is it about it that, that just makes you know this is the exact right solution for us? 
Obviously you talked about not going direct with Spinnaker as a very lean IT team. But what are some of the key business needs that it's solving? >> Yeah, there's several business things that we've been able to leverage Armory for. One of them as I mentioned, they, having a deployment platform that we know will give us, able deploys has been very important. There's been, they have a policy engine module that we use for making sure that certain environments can only be deployed to by certain individuals for compliance issues. We definitely, we use their pipelines as code module for being able to use, build, to build reusable deploy pipelines so that software engineers can easily integrate Spinnaker into their builds. Without having to know a lot about Spinnaker. There's like here, take these, take this pipeline module and add your variables into it, and you'll be off to the races deploying. So those are some of the value adds that Armory has been able to add on top of Spinnaker. On top of that, we use their managed products. So they have a team that's managing our Spinnaker installation, helping us with upgrades, helping up the issues, all that stuff that unlocks us to be able to focus on building our creators. Instead of focusing on operating Spinnaker. >> Andrew, back to you. Talk to me a little bit about as the VP of engineering, the partnership, the relationship that Armory has with Patreon and how symbiotic is it? How much are they helping you to develop the product that Armory is delivering to its customers? >> Yeah, one of the main things we want to make sure we do is help Patreon be successful. So that's, there are going to be some site specific needs there that we want to make sure that we are in tune with and that we're helping with, but really we view it as a partnership. So, Patreon has worked with us. Well, I can't believe it's been three years or kind of a little bit more now. But it's, it, we have had a lot of inner, a lot of feedback sessions, a lot of going back and forth on how we can improve our product to meet the needs of Patreon better. And then of course the wider market. So one thing that is neat about seeing a smaller team, SRE team that Ian is on, is they can depend on us more. They have less bandwidth with themselves to invest into their tooling. So that's the opportunity for us to provide those more mature building blocks to them. So that they can combine those in a way that makes them, that meets their needs and their business needs. >> And Ian, back to you, talk to me about how has the partnership with Armory? You said it's been almost three years now. How has that helped you do your job better as an SRE? What are some of the advantages of that, to that role? >> Yeah, absolutely. Armory has been a great partner to work with. We've used their expertise in helping to bring new features into the open-source Spinnaker. Especially when we decided that we wanted to not only deploy to EC2 instances, but we wanted to play to elastic container service and Lambdas to shift from our normal instance based deploys into the containerization. There were several warrants around the existing elastic container service deploy, and Lambda deploys that we were able to work with Armory and have them champion some changes inside open-source as well as their custom modules to help us be able to shift our displays to those targets. >> Got it. 
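The "pipelines as code" module Ian describes, where an application team takes a shared pipeline and only adds its own variables, can be pictured with a small sketch. The dictionary structure below is simplified and hypothetical; it is not the actual Spinnaker pipeline schema or Armory's module format.

```python
# Minimal sketch of the "pipelines as code" idea: a shared template that
# application teams parameterize instead of hand-building Spinnaker pipelines.
# The dictionary shape is simplified and hypothetical, not the real schema.


def render_deploy_pipeline(service: str, cluster: str, requires_approval: bool) -> dict:
    stages = [
        {"type": "bake", "name": f"Bake {service} image"},
        {"type": "deployCanary", "name": f"Canary {service} to {cluster}"},
    ]
    if requires_approval:
        # Mirrors the policy-engine idea above: some environments need a human gate.
        stages.append({"type": "manualJudgment", "name": "Approve production"})
    stages.append({"type": "deploy", "name": f"Deploy {service} to {cluster}"})
    return {
        "application": service,
        "name": f"{service}-deploy",
        "stages": stages,
        "rollback": {"onFailure": True},
    }


# An application team supplies only its variables:
pipeline = render_deploy_pipeline("payments-api", cluster="prod-ecs", requires_approval=True)
```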
Andrew back over to you, talk to me, I want to walk through, you talked about from an enterprise scale perspective, some of the absolute critical components there. But I want to talk about what Armory has done to help customers like Patreon to address things like speed to market, customer satisfaction as Ian was talking about, the compelling event was payment processing. A lot of content creators could have been in trouble there. Talk to, walk me through how you're actually solving those key challenges that not just Patreon is facing, but enterprises across industries. >> Yeah, of course, so the, talking to specifically to what brought Ian in was, a problem that they needed to fix inside of their system. So when you are rolling out a change like that, you want it to be fast. You want to get that chain, change out very quickly, but you also want to make sure that the deployment system itself is stable and reliable. So the last thing you're going to want is any sort of hiccup with the tool that you're using to fix your product, to roll out changes to your customers. So that is a key focus area for us in everything that we do is we make sure that whenever we're building features that are going to expand capabilities, deployment capabilities. That we're, we are focusing firstly on stability and reliability of the deployment system itself. So those are a few features, a few focus areas that we continually build into the product. And you can, I mean, I'm sure a lot of enterprises know that as soon as you start doing things at massive scale, sometimes the stability and reliability, can, you'll be jeopardized a little bit. Or you start hitting against those limits or what are the, what walls do you encounter? So one of the key things we're doing is building ahead of that, making sure that our features are enabling users to hit deployment scales they've never seen or imagined before. So that's a big part of what Armory is. >> Ian, can you add a number to that in terms of the before Armory and the after in terms of that velocity? >> Absolutely, before Armory our deploys would take some times, somewhere around 45 minutes. And we cut that in half, if not more to down to like the like 16 to 20 minute ranges where we are currently deploying to a few hundred hosts. So, and that is the previous deployment strategy would take longer. If we scaled up the number of instances for big events, like our payment processing we do the first of the month currently. So being able to have that and know that our deploys will take about the same amount of time each time, it will be faster. That helps us bring features to create some fans a lot faster. And the stability aspect has also been very important, knowing that we have a secure way to roll back if needed, which you didn't have previously in case something goes wrong, that's been extremely useful. >> And I can imagine, Ian that velocity is critical because I mean more and more and more these days, there are content creators everywhere in so many different categories that we've talked about. Even nightclubs, that to be able to deliver that velocity through a part, a technology like Armory is table-stakes for against business. >> Absolutely, yeah. >> Andrew, back over to you. I want to kind of finish out here with, in the last couple of years where things have been dynamic. Have you seen any leading indices? I know you guys work with enterprises across organizations and Fortune 500s. 
But have you seen any industries in particular that are really leaning on Armory to help them achieve that velocity that we've been talking about? >> We have a pretty good spread across the market, but since we are focused on cloud, to deploy to cloud technologies, that's one of the main value props for Armory. So that's going to be enabling deployments to AWS in similar clouds. So the companies that we work with are really ones that have either already gone through that transformation or are on their journey. Then of course, now Kubernetes is a force, it's kind of taken over. So we're getting pulled into even more companies that are embracing Kubernetes. So I wouldn't say that there's an overall trend, but we have customers all across the Fortune 500, all across mid-market to Fortune 500. So there's depending on the complexity of the corporation itself or the enterprise itself we're able to do. I think Ian mentioned our policy engine and a few other features that are really tailored to companies that have restricted environments and moving into the cloud. >> Got it, and that's absolutely critical these days to help organizations pivot multiple times and to get that speed to market. 'Cause that's, of course as consumers, whether we're on the business side or the commercial side, we have an expectation that we're going to be able to get whatever we want A-S-A-P. And especially if that's payments processing, that's pretty critical. Guys, thank you for joining me today, talking about Armory, built on Spinnaker, what it's doing for customers like Patreon. We appreciate your time and your insights. >> Thank you so much. >> Thank you. Thank you so much. >> For my guests, I'm Lisa Martin. You're watching theCUBE's, AWS startup showcase, season two, episode one. (upbeat music)

Published Date : Jan 26 2022


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Ian Delahorne | PERSON | 0.99+
Lisa Martin | PERSON | 0.99+
Andrew | PERSON | 0.99+
Armory | ORGANIZATION | 0.99+
2016 | DATE | 0.99+
Ian | PERSON | 0.99+
Andrew Backes | PERSON | 0.99+
16 | QUANTITY | 0.99+
Patreon | ORGANIZATION | 0.99+
Spinnaker | ORGANIZATION | 0.99+
two guests | QUANTITY | 0.99+
second season | QUANTITY | 0.99+
AWS | ORGANIZATION | 0.99+
One | QUANTITY | 0.99+
one | QUANTITY | 0.99+
three years | QUANTITY | 0.99+
two people | QUANTITY | 0.99+
20 minute | QUANTITY | 0.99+
EC2 | TITLE | 0.99+
a week later | DATE | 0.99+
15 minutes | QUANTITY | 0.99+
SRE | ORGANIZATION | 0.98+
Lambdas | TITLE | 0.98+
today | DATE | 0.98+
each time | QUANTITY | 0.97+
around 45 minutes | QUANTITY | 0.96+
Lambda | TITLE | 0.96+
a week | QUANTITY | 0.95+
ECS | TITLE | 0.94+
first set | QUANTITY | 0.93+
one thing | QUANTITY | 0.93+

Ben Mappen, Armory & Ian Delahorne, Patreon | CUBE Conversation


 

>>Welcome to theCUBE Conversation here. I'm John Furrier with theCUBE in Palo Alto, California. We've got two great guests featuring Armory, who have brought Patreon with them, to talk open source and the enterprise. I've got Ben Mappen, co-founder and SVP of strategic partnerships at Armory, and Ian Delahorne, Staff SRE at Patreon. Gentlemen, open source in the enterprise is here, and that's what we're going to talk about. Thanks for coming on, I appreciate it. >>Yeah, thank you, John. Really happy to be here, and thank you to theCUBE and your whole crew. I'll start with a quick intro. My name is Ben Mappen, one of Armory's founders, and I lead strategic partnerships. As John mentioned, it really starts with the premise that traditional businesses such as hotels, banks, and car manufacturers are now acting and behaving much more like software companies than they did in the past. And if you believe that's true, what does it mean? It means these businesses need to get great at delivering their software, and specifically to the cloud, like AWS. That's exactly what Armory aims to do for our customers. We're based on open-source Spinnaker, which is a continuous delivery platform, and I'm very happy that Ian from Patreon is here to talk about our journey together. >>And introduce yourself: what you do at Patreon, what Patreon does, and why you two are here together. What's the story? >>Absolutely. Hi, John and Ben, thanks for having me. I'm Ian, a site reliability engineer at Patreon. Patreon is a membership platform for creators. Our mission is to get creators paid, changing the way art is valued, so that creators can make money by having a membership relationship with their fans. We're built on top of AWS, and we're using Spinnaker with Armory to deploy the applications that help creators get paid. >>Talk about the origin story, Ben. How did you two come together? Obviously Patreon is well known in creator circles. Congratulations, by the way, on all your success; you've done a great service for the industry and changed the game. You were doing creators before it was fashionable, and you've also got some cutting-edge decentralization business models, which we'll come back to in a minute. But Ben, talk about how this all came together. >>Yeah, Ian's got a great origin story on our relationship, so I'll give him a lead-in. What we've learned over the years from our large customers is that in order to get great at deploying software, it really comes down to at least three things. The first is velocity: you have to ship your software with velocity. If you're deploying once a quarter, or even once a year, that does no good to your customers or to your business; code sitting in a feature branch on a shelf is more or less not creating any business value. So you have to ship with speed. Second, you have to ship with reliability. Invariably there will be bugs and there will be some outages, but one of the things Armory provides with open-source Spinnaker is the ability to create hardened deployment pipelines, so that you're testing the right things at the right times, with the right folks involved to do reviews. And if there is, hopefully not, but if there is a problem in production, you're isolating that problem to a small group of users. We call this progressive deployment, or canary deployment: you deploy to a small number of users, measure the results, make sure it's good, and then expand it and expand it. So I think preventing outages is incredibly important. And the last thing is being able to deploy multi-target and multi-cloud; in the AWS ecosystem, we're talking about ECS, EKS, and Lambda. I think those pieces of value, or the pain points enterprises face, resonate with a lot of companies out there, including Ian and Patreon. So I'll let you tell the story. >>Yeah, absolutely. Thanks for the intro. The background of our partnership with Armory goes back to February of 2019. We had a slowdown in payments processing, and we were risking not getting creators paid on time, which is not great for creators, because they rely on us for income to pay themselves, pay their rent or mortgage, and also pay staff, because they have video editors, website admins, people of that nature working with them. There were many root causes to that incident, all culminating at once. One of the things we saw was that deploying point fixes to remediate it took too long: at least 45 minutes to deploy a new version of the application. We had continuous delivery before, using a custom, home-built rolling deploy, but we needed to get that time down. We also needed to be confident the deploy itself was stable; previously we'd had to place a break in the middle of the deploy because of various things that could go wrong along the way. I had used Spinnaker at previous employers, I had set it up and introduced it myself, and I knew this would be great for us. But the Patreon SRE team at that time was two people, so I didn't have the ability to manage Spinnaker on my own. It's a complex open-source product, it can do a lot of things, and there are a lot of knobs to tweak and settings you need to know about. Tangentially, one of the co-founders of Armory had hit me up earlier: "Hey, have you heard of Armory? We're doing this thing with open-source Spinnaker, packaging it and managing it, check us out if you want." I kind of filed it away, like, okay, that might be something we can use later. And then two weeks later I thought, oh wait, this company that does Spinnaker, I know of them. We should probably have a conversation and engage with them. >>And so you hit him up and said, hey, too many knobs and buttons to push, what's the deal? >>Yeah, exactly. I was like, hey, by the way, about that thing, how soon can you get someone over here?
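To make the progressive/canary deployment and rollback ideas Ben and Ian describe above a little more concrete, here is a minimal sketch of what such a pipeline definition can look like, written as plain Python data. The application name, stage names, fields, and thresholds are illustrative assumptions for this page only; they are not Armory's or Patreon's actual pipeline and not the exact Spinnaker JSON schema, which is configured through Spinnaker's own UI and APIs.

```python
# Illustrative progressive-delivery pipeline expressed as plain Python data.
# Field names and thresholds are assumptions, not the literal Spinnaker schema.
pipeline = {
    "application": "payments-service",           # hypothetical app name
    "stages": [
        {"name": "deploy-canary", "type": "deploy",
         "target": "eks",                         # could also be ecs or lambda
         "percent_of_traffic": 5},
        {"name": "analyze-canary", "type": "analysis",
         "metrics": ["error_rate", "p99_latency_ms"],
         "max_error_rate": 0.01,                  # fail the canary above 1% errors
         "duration_minutes": 15},
        {"name": "deploy-rest", "type": "deploy",
         "target": "eks",
         "percent_of_traffic": 100,
         "requires": ["analyze-canary"]},
    ],
    # If any stage fails, roll back to the previously deployed version.
    "on_failure": {"action": "rollback", "to": "last_known_good"},
}

def describe(pipeline: dict) -> None:
    """Walk the stages in order and print what each one would do."""
    for stage in pipeline["stages"]:
        print(f"{stage['name']} ({stage['type']}) -> {stage.get('target', '-')}")
    print(f"on failure: {pipeline['on_failure']['action']}")

if __name__ == "__main__":
    describe(pipeline)
```

The point of the structure is the one made in the conversation: the canary stage limits the blast radius, the analysis stage gates the full rollout, and the rollback path is declared up front rather than improvised during an incident.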
>>So Ben, take us through the progression, because that really is how things work in open source. Open source is one of those things where there's a lot of community outreach, and a lot of people are literally one or two degrees of separation from someone who either wrote the project or is involved in it. Here's a great example: he saw the need for Spinnaker, the business model was there to solve — fixes, rolling deployments, homegrown everything, pick your use case — but he wanted to make it easier. This is kind of a pattern. What did you guys do? What was the next step? How did this go from here? >>Yeah. Spinnaker being open source is critical to Armory's success. For many companies, not just Patreon, open source software isn't really debatable anymore in terms of being applicable to the enterprise. But the thing with selling open source software to large companies is that they need a backstop. They need not just enterprise support, but features and functionality that enable them to use that software at scale and safely. Those are really the things we focus on, and we use open source as a great community to collaborate in and contribute fixes that other companies can use; other companies contribute fixes and functionality that we then use. It's also a great place to get feedback and to find new customers that perhaps need that enhanced level of functionality and support, and I'm very happy that Patreon was one of those companies. >>Okay, so let's talk about Patreon. Obviously scaling is a big part of it. You're an SRE, a site reliability engineer. For folks who don't know what that is, your job is essentially managing scale. Some say it's the DevOps manager, but that's not really the right answer. What is the SRE role at Patreon? Share that with folks out there who either have an SRE and don't know it yet, or need SREs, because this is a huge transition and a new, emerging, must-have role in companies. >>Right, yeah. SRE at Patreon covers a lot; we cover a wide swath of things we consider to be our purview. Not only are we working with our AWS environment, we're also involved in how we can make the site more reliable and more performant, so that creators and fans have a good experience. So we work on our content delivery network and caching strategies for caching assets, and we work inside the application itself on performance. There's also improving observability, with distributed tracing and metrics and a lot of that, and then the build and deploy side: if we can get the deploy time faster, we give engineers faster feedback on features they're working on or bug fixes, and we stay secure, knowing that the code they're working on gets delivered reliably. >>Yeah, continuous delivery is always the killer workflow. So a Spinnaker question for both of you: how does Spinnaker being an open source project help you? Obviously open source code is great, but how has that been significant and beneficial for both Armory and Patreon? >>Yeah, I'll take the first stab at this one. It starts at the beginning: Spinnaker was created by Netflix, and since Netflix open sourced it four or five years ago, there have been countless significant contributions from many other companies, including Armory and AWS. Those contributions collectively push the industry forward and let the companies that use open-source Spinnaker or Armory benefit from all of that collective effort. So just that community aspect, working together, is huge. >>Absolutely huge. And open source, I guess on the go-to-market side, is a big driver for us. There are many, many companies using open-source Spinnaker in production that are not our customers yet. We survey them; we want to know how they're using open-source Spinnaker, so that we can improve open-source Spinnaker but also build features that are critical for large companies to run at scale and deploy at scale, with velocity and with reliability. >>What's your take on the benefits of Spinnaker being open source? >>A lot of what Ben said. It's been really beneficial to be able to go in and look at the source code for components when I've been wondering something, like, why is this thing working like this, or how did they solve that? It's also been useful to be able to ask the community for advice: if Armory doesn't have the time or bandwidth to work on something, I've been able to ask the special interest groups in the open source community, hey, can we help improve this? And I've also been able to commit simple bug fixes for features I've needed. I don't need to engage Armory on those; I can just write up a simple patch and put it out for review. >>You know, that's the beautiful thing about open source: you get the source code. And some people just think it's so easy, Ben — hey, just give me the open source, I'll code it, I've got an unlimited resource team. Not always the case. So I've got to ask you both: Patreon, why use a company like Armory if you have the open source code? And Armory, why did you build a business on an open source project like Spinnaker? >>Yeah, absolutely. Like I said earlier, the Patreon SRE team was, and is, fairly small. It was two people; now we're six. So sure, we could run Spinnaker on our own if we wanted to, but then we'd have no time to do anything else, basically, and that's not the best use of our creators' money — we take a percentage of what fans pay creators, and we need to spend that money well. Having Armory, who are dedicated to Spinnaker, involved in the open source project, and genuine experts on it, makes the difference. Something that would take me a week of stumbling around trying to find documentation on how to set up, they've done 15 or 20 times, and they can just say, oh yeah, this is what we do for this, let me go fix it for you. >>Of course. You've got a teammate — I think that's what you're getting at. And I've got to ask, what else does that free you up for? Because this is the classic business model question: you have a partner, you're moving fast, and doing everything yourself slows you down. Sure, you can do it yourself, but it's faster to go together with a partner, a wingman as it were. What does that free you up to work on as an SRE? >>Oh, it's freed me up to work on bigger parts of our build and deploy pipeline. It's freed me up to work on moving from our older deploy model onto a containerization strategy. And it's freed me up to work on broader observability issues, instead of being laser-focused on running and operating Spinnaker. >>Yeah.
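Since Ian mentions distributed tracing and metrics as the observability work this frees him up for, here is a minimal sketch of what instrumenting a single request path can look like in Python with OpenTelemetry. It is a generic illustration, not Patreon's code: the span name, attributes, and counter are invented for the example, the exporter/provider setup is omitted (the API runs as a no-op without it), and exact API details vary by SDK version, so treat it as a sketch under those assumptions.

```python
# Requires: pip install opentelemetry-api (exporters/SDK omitted in this sketch).
from opentelemetry import metrics, trace

tracer = trace.get_tracer("payments")
meter = metrics.get_meter("payments")
payouts_processed = meter.create_counter(
    "payouts_processed", description="Payouts handled by this service"
)

def process_payout(creator_id: str, amount_cents: int) -> None:
    # Span and attribute names are made up for illustration.
    with tracer.start_as_current_span("process_payout") as span:
        span.set_attribute("creator.id", creator_id)
        span.set_attribute("payout.amount_cents", amount_cents)
        # ... charge, ledger write, and notification would happen here ...
        payouts_processed.add(1, {"status": "ok"})

if __name__ == "__main__":
    # Without a configured TracerProvider/MeterProvider this is a no-op,
    # which is fine for a sketch; a real setup would wire up exporters.
    process_payout("creator-123", 5000)
```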
And that really highlights what's going on. You've got a lot of speed and velocity, you've got scale, you've got security, and you've got new challenges you have to fix while moving fast. It's a whole new world, and this is why I love cloud native: you've got open source, you've got scale, and you're applying it directly to the infrastructure of the business. So Ben, I've got to ask you as an Armory co-founder: why did you build your business on an open source project like Spinnaker? What was the mindset? How did you attack it? Take us through that, because this is truly a great entrepreneurial story about open source. >>Yeah, I'll give you the abridged version, which is that my co-founders and I solved the same problem, continuous delivery, at a previous company, but we did it the old-fashioned way: we home-rolled it, handled it ourselves, and built it on top of Jenkins. It was great for that company, and that was the inspiration for us to ask, hey, is this bigger? At the time we found that Spinnaker had just been dogfooded inside Netflix and they were open sourcing it, and we thought it was a great opportunity to partner. But the bigger reason is that Spinnaker is a platform that deploys to other platforms, like AWS and Kubernetes, and the sheer amount of surface area required to build a great product is enormous. I actually believe the only way to be successful in this space is to be open source, to have a community of large companies and passionate developers that contribute the roads, if you will, into the various deployment targets. So that's reason number one for being open source and building our business on top of open source. The second reason is that we focus almost exclusively on solving enterprise-scale problems. We have a platform that needs to be extensible, and open source is by definition extensible. Ian just gave a great example: he needed to fix something and was able to solve it in open source, and shortly thereafter that fix in mainline landed in the Armory official build, and he could consume his own fix. We see a lot of that from our other customers. And take a very large company: they may have custom auth they need to integrate with that isn't in open-source Spinnaker, but they can go and build that themselves. >>Yeah, it really is the new modern way to develop. At the AWS Startup Showcase last season, Emily Freeman gave a talk on retiring — I call it killing — the software SDLC, the lifecycle of how software was developed in the past. And I've got to ask you both, because the big wave we're on now is cloud scale, open source, cloud native, data and security all being built into the pipelines, and, to your point, SREs enabling a new infrastructure and a new environment for people to build essentially SaaS. You mentioned it, Ben: the old way, you hand-rolled something. Netflix open sourced something; look at Lyft with Envoy. Large-scale companies are donating their stuff into open source, and people are getting on top of it and building. The world's changed. So we've got to ask: what's the difference between standing up a SaaS application today versus, say, five to eight years ago? We all see salesforce.com — they're out there, they built their own data centers. Cloud scale has changed the dynamics of how software is built, and with open source accelerating every quarter, you're seeing more growth in software. How has building a platform for applications changed, and how has that changed how people build SaaS applications? Ben, what's your take? It's kind of a thought exercise. >>Yeah, I wouldn't even call it a thought exercise; we're seeing it firsthand from our customers. I'll give my answer and Ian can weigh in on the practical side, what you're actually doing at Patreon. The cost, the entry fee if you will, for building a SaaS application has dropped tremendously. You don't need to buy servers and put them inside data centers anymore; you just spin up a VM or a Kubernetes cluster with AWS. AWS has led the way in public cloud in making this incredibly easy, and the tool sets being built around cloud native, like Armory and many other companies in the space, are making it even easier. So we're seeing a proliferation of software being developed, and hopefully Armory is playing a role in making it easier and better. >>Before we go to Ian for a second, I want to double down on that, because it implies there's going to be a new migration of apps everywhere, a tsunami of them. Is that good or bad, or is it all good because it's open source? >>Absolutely good, for sure. There will be good stuff developed and not-so-good stuff developed, but survival of the fittest will hopefully promote the best apps, the ones with the highest value to the end user and to society at large, and push us all forward. >>And what's your take? Obviously with Kubernetes you're seeing things like observability, managing stateful services that are deployed and torn down in real time, automated — all new things are developing. How does building a truly scalable SaaS application change today versus, say, five or eight years ago? >>Like you said, there are a lot of new products, both open source and SaaS, that you can use to build and scale things. If you need secure authentication, instead of rolling it yourself you can go with something like Okta or Auth0 and pull that off the shelf. Managing push notifications used to be really hard to do; then Firebase came on the scene. The same goes for managing state in an application, and also for being able to deliver: before, you had Jenkins, and maybe even before that you didn't really have anything, then Jenkins came along, and now you have open-source products like Spinnaker that you can use to deliver, plus companies built around them that you can just go to and say, hey, can you help us deliver this? Help us, enable us to build our products, so we can focus on delivering value to our creators and fans instead of having to focus on other things. >>So you build faster, you can compose things faster, you don't have to roll your own code — you can basically roll your own modules and then add what's proprietary on top of it. >>Absolutely, yeah. >>And that's why commercial open source is booming. Guys, thank you so much. Ben, congratulations on Armory, and great to have you on. Patreon is a well-known success, so congratulations as well — if you don't know Patreon, check it out; they've changed the game for creators and are leading the industry. Ben, great work with Armory and Spinnaker. Thanks for coming on. >>Thank you so much. >>Thank you so much. >>Okay, I'm John Furrier here with a CUBE Conversation from Palo Alto. Thanks for watching.

Published Date : Jan 13 2022


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Sean Ferrer | PERSON | 0.99+
John | PERSON | 0.99+
Ben Mappin | PERSON | 0.99+
Emily Freeman | PERSON | 0.99+
Ian | PERSON | 0.99+
February of 2019 | DATE | 0.99+
Ian Delahorne | PERSON | 0.99+
Ben | PERSON | 0.99+
six | QUANTITY | 0.99+
Netflix | ORGANIZATION | 0.99+
AWS | ORGANIZATION | 0.99+
Patrion | ORGANIZATION | 0.99+
15 | QUANTITY | 0.99+
Ian Della horn | PERSON | 0.99+
Ben Mappen | PERSON | 0.99+
two people | QUANTITY | 0.99+
Palo Alto, California | LOCATION | 0.99+
Spinnaker | ORGANIZATION | 0.99+
SRE | ORGANIZATION | 0.99+
Second | QUANTITY | 0.99+
one degree | QUANTITY | 0.99+
second reason | QUANTITY | 0.99+
two weeks later | DATE | 0.99+
Patriot | ORGANIZATION | 0.99+
One | QUANTITY | 0.99+
two | QUANTITY | 0.99+
six people | QUANTITY | 0.99+
20 times | QUANTITY | 0.99+
both | QUANTITY | 0.99+
two great guests | QUANTITY | 0.98+
Patreon | ORGANIZATION | 0.98+
four | DATE | 0.98+
once a year | QUANTITY | 0.98+
one | QUANTITY | 0.98+
armory | ORGANIZATION | 0.98+
once a quarter | QUANTITY | 0.97+
SAS | ORGANIZATION | 0.97+
first | QUANTITY | 0.97+
three things | QUANTITY | 0.96+
five years ago | DATE | 0.96+
Envoy | ORGANIZATION | 0.96+
eight years ago | DATE | 0.95+
Lyft | ORGANIZATION | 0.95+
five | DATE | 0.94+
last 80 | DATE | 0.94+
Jenkins | TITLE | 0.94+
Sean ferry | PERSON | 0.93+
ENN Patriot | ORGANIZATION | 0.91+
today | DATE | 0.91+
Spinnaker | TITLE | 0.91+
salesforce.com | OTHER | 0.91+
first stab | QUANTITY | 0.9+

Jonsi Stefansson & Anthony Lye, NetApp | AWS re:Invent 2021


 

(upbeat music) >> Welcome back to re:Invent 2021. You're watching theCUBE. My name is Dave Vellante. We're really excited to have Anthony Lye here. He's the Executive Vice President and General Manager of Public Cloud at NetApp. And Jonsi Stefansson as the CTO and VP of cloud at NetApp. Guys, good to see you. >> Same to you. >> Likewise. >> It's great to be back. >> You know, Anthony. Well so, we saw each other virtually at the AWS Storage Day, the big announcement, we're going to talk about that. But I go back and I said this to you several years ago, we were sitting, you know, some after party and you said "We are going to transform NetApp. We are going all in on cloud." We've seen NetApp transform many, many times. This is probably the biggest in history. >> No, I think you're absolutely right. I think, you know, I can't believe it, but you know, it will be five years for me in February. And in those five years, I think we really have done things that nobody expected. And I think we've proven to our existing customers, to our competitors, and now with Amazon, to a whole new set of customers that our intellectual property that we build and the acquisitions that we've done have made a lot of sense. I think we've demonstrated this wonderful concept of symmetry. Customers now understand and believe that a dollar invested in an App, wherever it is on premise or in the cloud is a dollar that moves wherever they want it to move and progresses as their own businesses progress. >> So Jonsi, for the latest announcement that you guys made to integrate ONTAP into the AWS cloud, you had to do some deeper integration, right? It wasn't just wrap your stack and Kubernetes and shove it into the cloud. But can you just talk about what you had to do? What the collaboration was like? >> The collaboration with AWS has been fantastic. It literally took two and a half years, you know, from the point where we decided to agree on the design principles, how we were actually going to deliver this as a service, the integration into every single aspect of AWS, you know, whether it's the console, the FSx, API, the integrations, to all the additional services that AWS has, like RDS, like Aurora, like the SageMaker, like EKS and ECS. And I mean, we are just getting started with the integration points and the collaboration and the teamwork. I would call it teamwork more than a collaboration. The teamwork with all these teams and maybe especially at name who was the leader of the storage sort of a unit in AWS has been fantastic. >> Dave: Yeah. Well so, this is the 10th re:Invent. This is the 9th year we've been here. We've seen a dramatically different cloud than 10 years ago, 15 years ago, and a different storage business. I'm not even sure. I mean, I don't know. I didn't even think about it as the old storage business anymore. Essentially, you're building a cloud on top of clouds. A super cloud if you will. >> Anthony: Yeah, I mean. I think, look, the strategy was, as I said, very, very simple to us, which was, you know, fundamentally companies, you know, run their applications on the basic primitives of compute storage and networking. And the gold standard for file was always ONTAP. And I think what we did, which I think was unique was we didn't just, as you said, throw it onto a cloud, stick it in a virtual machine and tell you, the customer "There. It's ONTAP just as you remember it." We reimagined it. And we architected it to be a cloud service. So it's elastic, it goes up and down. 
You can change the performance at runtime. And what we really did with Amazon was we wanted to make it a fully managed service. We didn't want people to think about versioning and patching. We wanted to remove all of that and we wanted people to take as much or as little as they needed. And we, and Amazon, we chose that we should own the responsibility for the availability of the service. And we should maintain the service ourselves so that customers of ONTAP can benefit from the solution. But in many ways, customers who've never been ONTAP customers can now take advantage of an enterprise grade file system and all the great things that it does without having to understand how it works. >> And explain why that's important for customers because people, they go, "Wow, you got S3." but it's very simple. Get, put, right? You don't have the full stack of a mature ONTAP. Please explain what that means to customers a little bit. >> You know, file systems are very important things. You know, we basically use them in our work environments every single day, you know. Within your sort of, you know, your Mac book, you have a home directory and sub-directories and files, very elegantly layout applications and layout infrastructures in ways that object repositories cannot. You know, aside from block and file. Sorry, from file and object, you of course, have block storage. And so, file plays a very important role. IDC has file growing at almost twice the pace of object now on the public cloud systems and, you know, file has about 13% of the overall storage market and it's growing. And I don't see any reason why file won't be as big on the Amazon cloud as the S3 has been. >> Dave: So you guys, go ahead, please. >> Yeah. I mean, you also have to take into account that the S3 object storage offerings of AWS is an integrated PaaS in our solution. So that's how we are actually doing automatic tiering. So you actually reap the best of both worlds, where you get the cost management of putting it in object storage, but you get the performance and the data management capabilities that is pretty unprecedented. You know, we are the first store that's offering that can actually do cross-region replication seamlessly by retaining deduplication and compression. But we also play a lot with, you know, block and object storage. So when Anthony was talking about how we've actually delivered this as a service, and this is sort of from our design principles, we are basically delivering this as a software, as a service, because more than an infrastructure as a service, because the stock that we are actually deploying, or the secret sauce of ONTAP, it's a very vast software stack that we are delivering, on top of AWS infrastructure. So I would always call it or categorize it a little bit more than software as a service, rather than infrastructure as a service. >> But it's even more than that, if I'm right, because it's cloud pricing, right? >> Jonsi: Yes. >> So it's not, you're not preying. I mean, when I buy Salesforce, I got to sign up for three-year deal. That's not a consumption-based model. >> Yeah. >> Oh, I think Amazon, you know what Amazon did uniquely and brilliantly was, it retailed technology and it's what makes Amazon so good, is that they choose to sort of simplify things. And when they find benefits as a retailer, they pass them on to the customer and, you know, there's this sort of pay-as-you-go business model, it's really good for the customer. 
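As a concrete illustration of taking "as much or as little as you need" of a fully managed ONTAP file system, here is a minimal sketch of provisioning FSx for NetApp ONTAP with boto3. The region, subnet IDs, capacity, and throughput values are placeholder assumptions, and required parameters or valid values may differ by SDK version, so check the current FSx API documentation before relying on this.

```python
# Minimal sketch: provision an FSx for NetApp ONTAP file system with boto3.
# IDs, sizes, and throughput below are placeholders, not recommendations.
import boto3

fsx = boto3.client("fsx", region_name="eu-central-1")  # example region

response = fsx.create_file_system(
    FileSystemType="ONTAP",
    StorageCapacity=1024,                               # GiB
    SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],   # placeholder subnets
    OntapConfiguration={
        "DeploymentType": "MULTI_AZ_1",                 # highly available across two AZs
        "ThroughputCapacity": 128,                      # MB/s
        "PreferredSubnetId": "subnet-aaaa1111",
    },
    Tags=[{"Key": "team", "Value": "storage"}],
)

print(response["FileSystem"]["FileSystemId"])
```

The elasticity Anthony describes shows up in the same API surface: capacity and throughput are parameters of the managed service rather than properties of hardware you own.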
It makes us work harder because, you know, you have to retain your customer sort of every 10 minutes. And that's something that, you know, as you said, with enterprise software and even some of the early SaaS vendors, that's not how it works. And so Amazon has forced us all to be very, very attentive to our customers. >> Dave: And I'd love to talk about what that means for the on-prem business, but if we have time. But you guys won Design Partner of the Year, what's that all about? First of all, congratulations. >> Anthony: Thank you. There's a lot of ISV design partners. You guys came out number one so congratulations on that. What's that all about? Explain what that entailed and how you got that. >> Yeah, I'll say a few words. Maybe Jonsi can add. I mean, the first thing of course is, you know, I S V stands for Independent Software Vendor. So, you know, it's always great because most people would say, "Well, NetApp is on-premise storage hardware." >> Dave: Of course, yeah. >> Which really, we've not really ever been an increasingly with demonstrating that we are a software company and we operate at cloud speed. You know, I can't really take the credit. I would give it to Jonsi and the engineering team. Maybe Jonsi, you can explain, you know, what moral about the award and why I think we were selected. >> So, I mean, I think it says a lot that this is the first time AWS has ever allowed a third party company to be this integrated into their console, into a support ability systems. You know, we make fun of this, me and Anthony all the time, because when we started this, down this path, everybody at NetApp said, "Guys, you're wasting your time." This is why AWS has the marketplace, but we didn't want to go. We already had the marketplace and we wanted to be able to connect to all these associated services and do it in the manner that, you know, this was a true collaboration of engineering teams for a long time to actually deliver the service on both sides so the credit, of course, will always go to the engineers on both sides, even though I designed it, I didn't code it. So, I think that, that alone, being the first to do it in AWS ever. I think we deserve that award. >> So just for our audience, to be clear, we're talking about FSx, ONTAP in the cloud, in the AWS cloud and kind of dance around that. But so that was announced, I guess, in September? >> Anthony: Yes. >> Right? >> Anthony: September, 2nd. >> What's the uptake been like? What's the reaction? >> Unbelievable. >> I'll bet. (laughing) >> No, no, I mean. >> No, I believe it. >> Better than we ever dreamed of. >> Yeah. >> The number of customers, I'm sure I'm not allowed to say the number of customers, but we asked and the fact that, you know, 60% of those customers have never been NetApp customers before, but they see the value in the data management capabilities that we are bringing to the market. >> Dave: So it exceeded expectations and your expectations were probably pretty enthusiastic. >> They were high. >> Yeah. >> I mean, Amazon is on the record. I was with Ed earlier on today, recording a piece and Ed, you know, was very clear that it's one of the fastest growing services now on AWS. You know, it turns out that, you know, the customer base, I think recognizes the, not just the need for a file system, but the uniqueness and capabilities that ONTAP provides, you know, to those customers in how they manage their business and transformations. 
And so, you know, to be sort of behind the console, to be sort of behind the Amazon CLI and the Amazon API, you see the world very, very differently, you know. I think the Amazon marketplace is a fantastic capability, but I'll tell you, you know, being a core part of the AWS service itself that they sell, that they support, that they bill for. It's a nice place to be. >> So, SaaS company. You're talking to the language of application development, Kubernetes, right? What do you think this means for the future of NetApp specifically, but also generally the on-prem storage business and the storage business in general? >> Well, we just announced our second quarter earnings today and what's happening is our cloud business is growing like crazy. We generated $388 million of ARR and the growth rates are, you know, astronomically high. That is increasingly helping our on-premise business to grow. You know, the nice thing about being in primarily, in the storage and data business is people aren't deleting many things. And the rate at which they're generating information is just accelerating. So, actually the confidence that we give the customer by demonstrating a sort of a cloud first, a sort of principles of all the cloud is actually giving customers to buy more on premise. So, we really don't mind. We are, our job much like Amazon's, is to have this customer obsession and you can't really go wrong, if you just keep asking them what they want. >> Yeah, if you can do so profitably, you're going to be reinvest in your business. Guys, we've got to go. >> Yeah. >> Love to have you back. >> Thank you. >> And you been quite a transformation. You said you're going to do it. You're doing it. So, well done to you. Five years in the making. Okay. This is Dave Vellante for theCUBE, the leader in high-tech coverage. Keep it right there. We'll be right back from AWS re:Invent 21. (upbeat music)

Published Date : Dec 2 2021


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Anthony | PERSON | 0.99+
Dave Vellante | PERSON | 0.99+
Anthony Lye | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
Jonsi Stefansson | PERSON | 0.99+
Dave | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
February | DATE | 0.99+
$388 million | QUANTITY | 0.99+
September | DATE | 0.99+
60% | QUANTITY | 0.99+
Jonsi Stefanson | PERSON | 0.99+
Jonsi | PERSON | 0.99+
Ed | PERSON | 0.99+
five years | QUANTITY | 0.99+
three-year | QUANTITY | 0.99+
Five years | QUANTITY | 0.99+
two and a half years | QUANTITY | 0.99+
first store | QUANTITY | 0.99+
Mac | COMMERCIAL_ITEM | 0.99+
NetApp | TITLE | 0.99+
both sides | QUANTITY | 0.99+
first | QUANTITY | 0.99+
both worlds | QUANTITY | 0.98+
second quarter | DATE | 0.98+
today | DATE | 0.98+
10 years ago | DATE | 0.98+
15 years ago | DATE | 0.98+
SageMaker | TITLE | 0.98+
ONTAP | TITLE | 0.97+
Aurora | TITLE | 0.97+
NetApp | ORGANIZATION | 0.96+
S3 | TITLE | 0.96+
about 13% | QUANTITY | 0.95+
first thing | QUANTITY | 0.94+
first time | QUANTITY | 0.94+
ECS | TITLE | 0.94+
EKS | TITLE | 0.94+
several years ago | DATE | 0.93+
9th year | QUANTITY | 0.93+
RDS | TITLE | 0.93+
First | QUANTITY | 0.92+
September, 2nd | DATE | 0.9+
FSx | TITLE | 0.88+
10 minutes | QUANTITY | 0.84+
one | QUANTITY | 0.82+

Walid Negm, Capgemini Engineering | AWS re:Invent 2021


 

>>Okay, welcome back everyone to theCUBE's coverage of AWS re:Invent 2021. I'm John Furrier with my cohost Dave Nicholson, and we're here exploring all the future innovations. We've got a great guest: Walid Negm, who is the EVP and Chief Research and Innovation Officer at Capgemini Engineering. Walid, thanks for coming on theCUBE. >>Thank you. >>I love the title, Chief Research and Innovation Officer. >>I didn't make it up. They did. >>You've got to love the cloud evolution right now, because more and more infrastructure as code is happening. You've got this whole data abstraction layer developing, where people are starting to see, okay, I can have horizontally scalable, governed data in a data lake, make it smart and intelligent, and use machine learning. That seems to be the big trend here from AWS: more serverless, more goodness. So engineering is kind of on the front lines here, making it happen. >>Yeah. The question our clients are asking us is how these data center technologies move over into cars, planes, trains, construction equipment, and industrial settings. Maybe two decades ago it was called IoT, but we're not talking about just sensors: vertical-lift aircraft, software-defined cars, manufacturing facilities as a whole. How are these data center technologies going to impact those companies? And it's not just an architectural shift. For, say, the EV, the electric vehicle OEM, it's a financial transformation, because if they can containerize their vehicle and monitor the car's behavior, they can offer new types of experiences to their clients. So the question we were asking ourselves is: how do you get the cloud into the car? >>Yeah, and software is driving all of that. You've got software-defined everything, and now you've got data-driven — pun intended — cars, cloud everywhere. How does that look? What are the concerns? Obviously latency, moving data around; they've got Outposts; am I moving the cloud to the edge? How are you, and how are customers, thinking through the architectural, I guess foundational, playbook? Is there one? >>Coming into this, I asked my son the question: is hardware or software more important? And he said, we're coding our way out of hardware. It was a very interesting insight — software rules, that is for sure. But we're talking about physical products, and about trillions of dollars of investment going into green energy, autonomous driving, and green aviation. So it's not just a matter of one versus the other; we're dealing with real physical products. I think the point for us as engineers, or as an engineering business, is how you co-design hardware and software together. What are the questions you need to ask about that machine learning model being moved over from AWS, for example, into the car? Is the silicon going to be able to support the inferencing rates that are required in real time? Things like that. >>Well, it's been an age-old battle between the idea that the flower nurtured in a walled garden is always going to be more beautiful than the one that grows out in the meadow. In other words, there were announcements in Adam's keynote about advances in AWS silicon. What's your view on how important that is? You just alluded to it as being important, the co-development of hardware and software together. >>Yeah. We're seeing product makers — anybody from a life sciences company building a digital therapeutics product, maybe a blood glucose monitor, to an automotive or even an aerospace company — going direct to silicon, asking questions about the performance of the silicon and designing their experience around that. If they need low-latency, power-efficient, green networks, they're taking those questions in-house. So AWS having a portfolio of custom or bespoke silicon is now part of the architectural discussion. And I look around here and see a lot of developers who are going to have to get a little more versed in some of these questions: should I use an Arm-based chip? Do I use this silicon partner? What happens when I move it into the vehicle and then I have over-the-air updates — how do I protect that code in an enclave in the car? There are a lot of architectural questions that software engineers don't typically ask when they're dealing only in the cloud, although over time a lot of this will be abstracted from the developer to some degree; that's just the nature of the game. >>It reminds me of operating system theory, of system software meeting hardware. Software developers just want to code, and now you're saying, well, now I'm responsible for hardware too. These are big questions, and important ones; I think we're at a major inflection point. And it comes back to — you mentioned aerospace — space has the same problem. You can't send a break-fix engineer into space. You've got software now, so you've got to trust the security of the supply chain and whoever is doing the hardware, and now you've got the software supply chain too. A lot of interesting threads there. >>Yeah, you backed right into it: the supply chain problems with silicon. There are now alternatives to get around the bottlenecks — using high-performance computers instead of hundreds of ECUs in a vehicle lets you get away from the supply chain shortage; folks are moving from one architecture to another to avoid getting locked in; and of course there's creating your own silicon, or at least having more ownership over the silicon. I think software-defined systems are the way to go regardless of the industry. You're going to make some decisions about the performance characteristics of the hardware, but ultimately you want a software-defined system so you can update it regularly.
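Walid's earlier point about checking whether the silicon can sustain the required inferencing rate before a model moves from the data center into the vehicle can be made concrete with a small benchmark loop. This is a generic sketch, not Capgemini tooling: the "model" is a stand-in function and the 50 ms per-frame budget is an assumed requirement, so only the measurement pattern carries over to real workloads.

```python
# Rough check: can this model meet a per-frame latency budget on the target hardware?
# The "model" is a stand-in function; the 50 ms budget is an assumed requirement.
import statistics
import time

FRAME_BUDGET_MS = 50.0   # assumed real-time budget per camera frame

def model_infer(frame: bytes) -> int:
    # Stand-in for a real vision model; returns a fake class id.
    return sum(frame[:64]) % 10

def measure(frames: int = 200) -> None:
    frame = bytes(1024)                     # dummy input
    samples = []
    for _ in range(frames):
        start = time.perf_counter()
        model_infer(frame)
        samples.append((time.perf_counter() - start) * 1000.0)
    p99 = sorted(samples)[int(0.99 * len(samples)) - 1]
    print(f"mean={statistics.mean(samples):.3f} ms  p99={p99:.3f} ms")
    print("within budget" if p99 <= FRAME_BUDGET_MS
          else "needs faster silicon or a smaller model")

if __name__ == "__main__":
    measure()
```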
>>I was talking with some of the top executives here — I talked to the marketplace guys, and Deepak over at Amazon — and containers keep coming up. You start to see a trend toward certified containers, because containers are everywhere and you can put malware in a container. Think about hacking software: it's a surface area now, so you bring the software security model in. And if there are certified containers, I can imagine certified infrastructure next, because what's a processor anymore? It's hardware underneath with the cloud on top. If I have hardware, how do I know it's workable? How do I trust it? How could it not be hacked? I don't want my car to be hacked and driven off the road. >>When you're dealing with a payment system, or with TikTok, it's different than when you're dealing with a car that has life-and-death consequences. We are very active in the software-defined transformation of automotive, and it's easy to say, I'm just going to load it up with all this data center technology, but there are safety-criticality issues you have to take into consideration. Containers are well suited for that; it just requires some thought. My enthusiasm for this kind of product engineering is that if you take any of these products and apply them in a product engineering context, there's so much invention and creativity that can happen. On the safety side, we're working through security enclaves using containers and hardware-based roots of trust, so there are ways to deal with malware and bad actors at the edge. >>What's your take on explainable AI? I might as well ask, because it comes up a lot. Explainable AI is hot in academia right now — AI that can be explained — and it has a policy dimension to it. What are your thoughts on this trend, because obviously it's everywhere? What is explainable AI? Is it even real, and how do you explain AI? Is it democratized? >>Yeah, computer vision is a great example to bring it to life. Most of the audience probably knows this, but you can tell your kid once that this is a cat, and they'll know every cat out there is a cat. A computer vision model needs thousands of images to learn that this is a cat, and even then, if you give it an example from, say, a remote region of the world, it's going to get confused. So to me, explainability is about adding some certainty to the decision-making process, and, when there's confusion, being able to understand why it happened. In automotive, or even in quality assurance, being able to know that a product was definitively defective, or that a pedestrian definitively did or did not cross the crosswalk, is very important, because there are consequences. So being able to understand why the algorithm or the model said what it said, why it made that judgment, is super important. >>Now that we're here at re:Invent, from your engineering perspective, you look at the landscape of AWS and the announcements. What would you say to other engineers out there trying to grok all the technology, who really want to put innovation in place, whether that's creating new markets and categories or innovating their existing business? How do you grab the cloud and make it work for you? From an engineering standpoint, how do you look at AWS and ask, how do I make this work better for me? >>Over the years, AWS has really come to look like a utility — remember when it was called utility as a service. I attended a workshop on, I think it was Lightsail, and they're simplifying the way you can consume this infrastructure to a degree that is somewhat phenomenal, and they keep expanding the ecosystem. So for me, it's a utility. It's consumable. If you've got an idea, pick your pieces and roll your own. >>Okay, back to the concept of AI and explainability. One of my cars won't let me unlock certain functions because of the way I drive. No one needs to explain to me why — I know what I'm doing wrong — but I'm still frustrated by it. That leads to a larger philosophical question about what you're seeing: where are we in this constant leapfrog, where the technology exists but people aren't culturally ready to accept it? Because it feels to me that there isn't anything we can't do with cloud technology from a technical perspective; it can all be done. Swami's keynote today talked about integrating all sorts of data sources and actually leveraging them in the cloud. Technically possible, yet 85% of IT spend is still on prem. So what are the real inhibitors from a technology perspective versus the cultural ones — setting aside my lack of adherence to lawful driving? >>It's industry by industry. If you're trying to do an automated diagnostic on an MRI, there are going to be false positives and false negatives, and yes, we know a physician will participate in the final judgment call. I think getting a really good comfort level on the trustworthiness of those decision points is important, so I don't blame folks for being reticent, for asking whether it really works and whether these autonomous systems, as they become more and more precise, are doing the right thing. There's research still to be done on agency — am I in control? What happened? Did I lose control? — and there are questions around handoffs and participation in decision-making. So in the broad area of trust, and the relationship between the participants, the humans and the machines, I think there's work to do, to be honest. In a manufacturing facility where everything's automated, maybe it's a solved problem, but on an open road, with the vehicle driving in the middle of the afternoon, you probably should ask some more questions. >>We've got a couple of minutes left, so I want to ask about real-time and near-real-time data — always a big, hot topic, and we saw more databases in Swami's keynote today. Real time: are we there yet? How are we dealing with real-time data and the software consuming it? When it comes to cars and things that move, real time versus near real time can be life or death. Where are we? >>I was trying to conduct a web conference — I won't name the vendor, because it has nothing to do with the vendor — and I couldn't get a connection here at re:Invent. I just couldn't get it. So when we talk about real time, we're talking about real-time operating systems and real-time data collection at the edge. Yes, we're there: we can collect the data, and we can deploy a model in the aircraft or on the train to do predictive analytics. But if we have to stream that data back home to the cloud, we'd better make sure we have a reliable and stable connection. 5G will be deployed, and it has ultra-low latency and can achieve those kinds of requirements, but it has to be in the right setting — a well-controlled facility where you understand the density of the cell sites and small cells, and where you really can deploy a mobile robot wirelessly. Yes, we can do that. In other scenarios, we have to ask a lot of questions about the connection. >>And about making sure it doesn't fail. Walid, thanks for coming on — great insight, great conversation, very deep, awesome work. Thanks for sharing your insights from Capgemini Engineering. We're here in theCUBE, the worldwide leader in tech coverage, live on the floor at re:Invent. I'm John Furrier with Dave Nicholson. We'll be right back.

Published Date : Dec 1 2021


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave Nicholson | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
AWS | ORGANIZATION | 0.99+
85% | QUANTITY | 0.99+
Walid Negm | PERSON | 0.99+
Adam | PERSON | 0.99+
Swami | PERSON | 0.99+
today | DATE | 0.99+
Silicon | ORGANIZATION | 0.99+
two decades ago | DATE | 0.99+
Deepak | PERSON | 0.99+
2021 | DATE | 0.99+
hundreds | QUANTITY | 0.98+
Gemini | ORGANIZATION | 0.96+
Capgemini Engineering | ORGANIZATION | 0.95+
ADB | ORGANIZATION | 0.92+
one | QUANTITY | 0.92+
John fare | PERSON | 0.91+
LightSail | TITLE | 0.88+
thousands of images | QUANTITY | 0.88+
AWS Silicon | ORGANIZATION | 0.85+
executive vice president | PERSON | 0.85+
John fare | PERSON | 0.84+
EVP | PERSON | 0.82+
trillions of dollars | QUANTITY | 0.78+
once | QUANTITY | 0.76+
ECS | QUANTITY | 0.74+
one architecture | QUANTITY | 0.72+
Evie | ORGANIZATION | 0.72+
single cat | QUANTITY | 0.71+
afternoon | DATE | 0.7+
Silicon | LOCATION | 0.68+
Gemini | PERSON | 0.66+
time | ORGANIZATION | 0.65+
Invent | EVENT | 0.64+
couple | QUANTITY | 0.55+
minutes | QUANTITY | 0.53+
chief research | PERSON | 0.51+
two | QUANTITY | 0.5+
5g | ORGANIZATION | 0.3+

Jonsi Stefanson & Anthony Lye, NetApp | AWS re:Invent 2021


 

(upbeat music) >> Welcome back to re:Invent 2021. You're watching theCUBE. My name is Dave Vellante. We're really excited to have Anthony Lye here. He's the Executive Vice President and General Manager of Public Cloud at NetApp. And Jonsi Stefansson as the CTO and VP of cloud at NetApp. Guys, good to see you. >> Same to you. >> Likewise. >> It's great to be back. >> You know, Anthony. Well so, we saw each other virtually at the AWS Storage Day, the big announcement, we're going to talk about that. But I go back and I said this to you several years ago, we were sitting, you know, some after party and you said "We are going to transform NetApp. We are going all in on cloud." We've seen NetApp transform many, many times. This is probably the biggest in history. >> No, I think you're absolutely right. I think, you know, I can't believe it, but you know, it will be five years for me in February. And in those five years, I think we really have done things that nobody expected. And I think we've proven to our existing customers, to our competitors, and now with Amazon, to a whole new set of customers that our intellectual property that we build and the acquisitions that we've done have made a lot of sense. I think we've demonstrated this wonderful concept of symmetry. Customers now understand and believe that a dollar invested in an App, wherever it is on premise or in the cloud is a dollar that moves wherever they want it to move and progresses as their own businesses progress. >> So Jonsi, for the latest announcement that you guys made to integrate ONTAP into the AWS cloud, you had to do some deeper integration, right? It wasn't just wrap your stack and Kubernetes and shove it into the cloud. But can you just talk about what you had to do? What the collaboration was like? >> The collaboration with AWS has been fantastic. It literally took two and a half years, you know, from the point where we decided to agree on the design principles, how we were actually going to deliver this as a service, the integration into every single aspect of AWS, you know, whether it's the console, the FSx, API, the integrations, to all the additional services that AWS has, like RDS, like Aurora, like the SageMaker, like EKS and ECS. And I mean, we are just getting started with the integration points and the collaboration and the teamwork. I would call it teamwork more than a collaboration. The teamwork with all these teams and maybe especially at name who was the leader of the storage sort of a unit in AWS has been fantastic. >> Dave: Yeah. Well so, this is the 10th re:Invent. This is the 9th year we've been here. We've seen a dramatically different cloud than 10 years ago, 15 years ago, and a different storage business. I'm not even sure. I mean, I don't know. I didn't even think about it as the old storage business anymore. Essentially, you're building a cloud on top of clouds. A super cloud if you will. >> Anthony: Yeah, I mean. I think, look, the strategy was, as I said, very, very simple to us, which was, you know, fundamentally companies, you know, run their applications on the basic primitives of compute storage and networking. And the gold standard for file was always ONTAP. And I think what we did, which I think was unique was we didn't just, as you said, throw it onto a cloud, stick it in a virtual machine and tell you, the customer "There. It's ONTAP just as you remember it." We reimagined it. And we architected it to be a cloud service. So it's elastic, it goes up and down. 
You can change the performance at runtime. And what we really did with Amazon was we wanted to make it a fully managed service. We didn't want people to think about versioning and patching. We wanted to remove all of that, and we wanted people to take as much or as little as they needed. And we and Amazon chose that we should own the responsibility for the availability of the service, and we should maintain the service ourselves, so that customers of ONTAP can benefit from the solution. But in many ways, customers who've never been ONTAP customers can now take advantage of an enterprise-grade file system and all the great things that it does without having to understand how it works. >>And explain why that's important for customers, because people go, "Wow, you got S3," but it's very simple: get, put, right? You don't have the full stack of a mature ONTAP. Please explain what that means to customers a little bit. >>You know, file systems are very important things. We basically use them in our work environments every single day. Within your MacBook, you have a home directory and sub-directories and files that very elegantly lay out applications and lay out infrastructure in ways that object repositories cannot. Aside from file and object, you of course have block storage. And so file plays a very important role. IDC has file growing at almost twice the pace of object now on the public cloud systems, and file has about 13% of the overall storage market and it's growing. And I don't see any reason why file won't be as big on the Amazon cloud as S3 has been. >>Dave: So you guys, go ahead, please. >>Yeah. I mean, you also have to take into account that the S3 object storage offering of AWS is an integrated part of our solution. So that's how we are actually doing automatic tiering. So you actually reap the best of both worlds, where you get the cost management of putting it in object storage, but you get the performance and the data management capabilities that are pretty unprecedented. You know, we are the first storage offering that can actually do cross-region replication seamlessly while retaining deduplication and compression. But we also play a lot with block and object storage. So when Anthony was talking about how we've actually delivered this as a service, and this comes from our design principles, we are basically delivering this as software as a service more than as infrastructure as a service, because the stack that we are actually deploying, the secret sauce of ONTAP, is a very vast software stack that we are delivering on top of AWS infrastructure. So I would always categorize it a little bit more as software as a service, rather than infrastructure as a service. >>But it's even more than that, if I'm right, because it's cloud pricing, right? >>Jonsi: Yes. >>So it's not, you're not pre-paying. I mean, when I buy Salesforce, I've got to sign up for a three-year deal. That's not a consumption-based model. >>Yeah. >>Oh, I think what Amazon did uniquely and brilliantly was it retailed technology, and that's what makes Amazon so good: they choose to simplify things. And when they find benefits as a retailer, they pass them on to the customer, and there's this sort of pay-as-you-go business model that's really good for the customer.
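As an aside for readers who want to see what "fully managed" and "change the performance at runtime" look like in practice, here is a minimal sketch using boto3's FSx client. It is an illustration only: the subnet, security group, and capacity values are placeholders, and a real deployment would also create storage virtual machines and volumes.

```python
# Hedged sketch: create an Amazon FSx for NetApp ONTAP file system with boto3,
# then raise its throughput later. IDs and sizes below are placeholders.
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

# Provision a Multi-AZ ONTAP file system (fully managed: no OS patching or
# ONTAP version upkeep for the caller).
created = fsx.create_file_system(
    FileSystemType="ONTAP",
    StorageCapacity=1024,                                # GiB, placeholder
    SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],    # placeholders
    SecurityGroupIds=["sg-cccc3333"],                    # placeholder
    OntapConfiguration={
        "DeploymentType": "MULTI_AZ_1",
        "ThroughputCapacity": 512,                       # MB/s, placeholder
        "PreferredSubnetId": "subnet-aaaa1111",
    },
)
fs_id = created["FileSystem"]["FileSystemId"]

# "Change the performance at runtime": request more throughput on the same
# file system instead of migrating data to a bigger box.
fsx.update_file_system(
    FileSystemId=fs_id,
    OntapConfiguration={"ThroughputCapacity": 1024},
)
```

The tiering to S3-backed capacity pools that Jonsi describes happens inside the service itself, which is why the conversation keeps returning to the idea that customers consume a file system rather than operate one.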
It makes us work harder because, you know, you have to retain your customer sort of every 10 minutes. And that's something that, you know, as you said, with enterprise software and even some of the early SaaS vendors, that's not how it works. And so Amazon has forced us all to be very, very attentive to our customers. >> Dave: And I'd love to talk about what that means for the on-prem business, but if we have time. But you guys won Design Partner of the Year, what's that all about? First of all, congratulations. >> Anthony: Thank you. There's a lot of ISV design partners. You guys came out number one so congratulations on that. What's that all about? Explain what that entailed and how you got that. >> Yeah, I'll say a few words. Maybe Jonsi can add. I mean, the first thing of course is, you know, I S V stands for Independent Software Vendor. So, you know, it's always great because most people would say, "Well, NetApp is on-premise storage hardware." >> Dave: Of course, yeah. >> Which really, we've not really ever been an increasingly with demonstrating that we are a software company and we operate at cloud speed. You know, I can't really take the credit. I would give it to Jonsi and the engineering team. Maybe Jonsi, you can explain, you know, what moral about the award and why I think we were selected. >> So, I mean, I think it says a lot that this is the first time AWS has ever allowed a third party company to be this integrated into their console, into a support ability systems. You know, we make fun of this, me and Anthony all the time, because when we started this, down this path, everybody at NetApp said, "Guys, you're wasting your time." This is why AWS has the marketplace, but we didn't want to go. We already had the marketplace and we wanted to be able to connect to all these associated services and do within the manner that, you know, this was a true collaboration of engineering teams for a long time to actually deliver the service on both sides so the credit, of course, will always go to the engineers on both sides, even though I designed it, I didn't coat it. So, I think that, that alone, being the first to do it in AWS ever. I think we deserve that award. >> So just for our audience, to be clear, we're talking about FSx, ONTAP in the cloud, in the AWS cloud and kind of dance around that. But so that was announced, I guess, in September? >> Anthony: Yes. >> Right? >> Anthony: September, 2nd. >> What's the uptake been like? What's the reaction? >> Unbelievable. >> I'll bet. (laughing) >> No, no, I mean. >> No, I believe it. >> Better than we ever dreamed of. >> Yeah. >> The number of customers, I'm sure I'm not allowed to say the number of customers, but we asked and the fact that, you know, 60% of those customers have never been NetApp customers before, but they see the value in the data management capabilities that we are bringing to the market. >> Dave: So it exceeded expectations and your expectations were probably pretty enthusiastic. >> They were high. >> Yeah. >> I mean, Amazon is on the record. I was with Ed earlier on today, recording a piece and Ed, you know, was very clear that it's one of the fastest growing services now on AWS. You know, it turns out that, you know, the customer base, I think recognizes the, not just the need for a file system, but the uniqueness and capabilities that ONTAP provides, you know, to those customers in how they manage their business and transformations. 
And so, you know, to be sort of behind the console, to be sort of behind the Amazon CLI and the Amazon API, you see the world very, very differently, you know. I think the Amazon marketplace is a fantastic capability, but I'll tell you, you know, being a core part of the AWS service itself that they sell, that they support, that they bill for. It's a nice place to be. >> So, SaaS company. You're talking to the language of application development, Kubernetes, right? What do you think this means for the future of NetApp specifically, but also generally the on-prem storage business and the storage business in general? >> Well, we just announced our second quarter earnings today and what's happening is our cloud business is growing like crazy. We generated $388 million of ARR and the growth rates are, you know, astronomically high. That is increasingly helping our on-premise business to grow. You know, the nice thing about being in primarily, in the storage and data business is people aren't deleting many things. And the rate at which they're generating information is just accelerating. So, actually the confidence that we give the customer by demonstrating a sort of a cloud first, a sort of principles of all the cloud is actually giving customers to buy more on premise. So, we really don't mind. We are, our job much like Amazon's, is to have this customer obsession and you can't really go wrong, if you just keep asking them what they want. >> Yeah, if you can do so profitably, you're going to be reinvest in your business. Guys, we've got to go. >> Yeah. >> Love to have you back. >> Thank you. >> And you been quite a transformation. You said you're going to do it. You're doing it. So, well done to you. Five years in the making. Okay. This is Dave Vellante for theCUBE, the leader in high-tech coverage. Keep it right there. We'll be right back from AWS re:Invent 21. (upbeat music)

Published Date : Dec 1 2021

SUMMARY :

Anthony Lye and Jonsi Stefansson of NetApp join Dave Vellante to discuss Amazon FSx for NetApp ONTAP, the fully managed file service announced in September 2021 after roughly two and a half years of joint engineering with AWS. They cover the deep integration into the AWS console and APIs, automatic tiering to S3 with deduplication and compression, cloud-style consumption pricing, winning AWS Design Partner of the Year, adoption that exceeded expectations with 60% of buyers new to NetApp, and how NetApp's cloud business, now at $388 million in ARR, is also helping the on-premises business grow.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Anthony | PERSON | 0.99+
Dave Vellante | PERSON | 0.99+
Anthony Lye | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
Jonsi Stefansson | PERSON | 0.99+
Dave | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
February | DATE | 0.99+
September | DATE | 0.99+
$388 million | QUANTITY | 0.99+
60% | QUANTITY | 0.99+
Jonsi Stefanson | PERSON | 0.99+
Jonsi | PERSON | 0.99+
Ed | PERSON | 0.99+
five years | QUANTITY | 0.99+
three-year | QUANTITY | 0.99+
Five years | QUANTITY | 0.99+
two and a half years | QUANTITY | 0.99+
first store | QUANTITY | 0.99+
Mac | COMMERCIAL_ITEM | 0.99+
NetApp | TITLE | 0.99+
both sides | QUANTITY | 0.99+
first | QUANTITY | 0.99+
both worlds | QUANTITY | 0.98+
today | DATE | 0.98+
10 years ago | DATE | 0.98+
15 years ago | DATE | 0.98+
SageMaker | TITLE | 0.98+
second quarter | DATE | 0.98+
ONTAP | TITLE | 0.97+
Aurora | TITLE | 0.97+
NetApp | ORGANIZATION | 0.96+
S3 | TITLE | 0.96+
about 13% | QUANTITY | 0.95+
first thing | QUANTITY | 0.94+
first time | QUANTITY | 0.94+
ECS | TITLE | 0.94+
EKS | TITLE | 0.94+
several years ago | DATE | 0.93+
9th year | QUANTITY | 0.93+
RDS | TITLE | 0.93+
September, 2nd | DATE | 0.93+
FSx | TITLE | 0.92+
First | QUANTITY | 0.92+
10 minutes | QUANTITY | 0.84+
Executive Vice President | PERSON | 0.82+

G16 Stephen Orban and Chris Casey


 

>>Okay, welcome back everyone to theCUBE's coverage here at AWS re:Invent 2021, our annual conference where theCUBE is on the ground. We're live in person, and it's also a hybrid event online as well. A lot of great content flowing, day one in the books, keynotes out there, big news, wall-to-wall coverage. I'm John Furrier, your host. We've got a great segment here on AWS Marketplace and the revolution in how customers are buying and deploying their technologies, with Stephen Orban, GM of AWS Marketplace and Control Services, and Chris Casey, worldwide head of business development for AWS Data Exchange. Gentlemen, welcome to theCUBE. >>Thanks for having us. >>Pleasure to be here. >>So I'm a huge fan of the marketplace. People know that I believe procurement is ultimately going to be automated anyway, as enterprises buy and as people work together, and the big theme this year is kind of this whole purpose-built stack, where SaaS is going to involve a lot of integrations where people are working together. You see multiple partners plugging in and snapping into AWS. That was a big part of Adam's keynote today. So this really lays a perfect foundation for the path that you guys have been on, which is partnering, go to market, buying and consuming technology. So what's the update? Give us an overview, high level, Stephen, of the marketplace. >>Yeah, John. And again, thanks for having us. It's awesome to be here, meeting with customers and partners again for the first time in a couple of years; great to be meeting in person and interacting. So we're super excited about where we're going with the marketplace. As you all probably know, customers in every industry are really thinking about how they transform their business using modern technology. And it's not just about the technology that they're building themselves; it's also the tools that they want to get from their partners, which we're super excited to be able to offer them on marketplace. We're about to have our ten-year anniversary. We launched the first version of marketplace in April of 2012, and back then it was a very simple e-commerce website where builders could come and buy Amazon Machine Images and pay by the hour to run popular open source packages or operating system software. But we've come an awful long way since then and changed the surface area of the business quite a bit. From a product type perspective, we now offer our partners the opportunity to list and meter their SaaS solutions. Adding to the AMI base, we allow partners to vend their container images, and we have some new updates I'll share with you in just a second on that. In 2019, customers asked us for the same experience that they have buying software to apply to the way they license data, so we launched AWS Data Exchange in 2019. And then in 2020, last year, we recognized that customers wanted to be able to bundle professional services offerings in with the software that they buy, so we launched a professional services offering type, too.
And then when you start to combine that with all of the different procurement motions that we now support, it's no longer just the self-service e-commerce capabilities. When customers want to privately negotiate deals with their vendors, they can do so with our private offer capability, which we were the first to launch, and which we then complemented in 2018 with the ability for customers to negotiate with the channel partner, reseller, or managed service provider of their choice. So when you start to combine all of these different product type offerings and the ways our partners can go to market through marketplace in an automated way with all of these procurement options, we now have 2,000 sellers listing more than 12,000 offerings on the marketplace, which more than 325,000 customers around the world buy either directly from the seller or from the channel partner of their choice. And when you add all that up, we've seen, this year alone, billions of dollars of products and services sold through the marketplace. >>Wow. What a rocket ship, from a catalog to a full-blown, comprehensive consumption environment, which, by the way, is what the market wants: speed, time to market. Okay, so give me the update, a year on, here at re:Invent. What announcements did you guys just make at the partner summit this week? What's the news? >>Yeah, so there's a couple. We'll talk about one, and then I'll hand it over to Chris to talk about the Data Exchange announcements. The first announcement we made at the partner keynote yesterday was around our container offering. In 2018, we launched the ability for partners to list container-based offerings, so their software in containers, whether it be NetApp, Druva, Palo Alto, or others who have their security or other software in containers, could then be deployed by customers into the AWS managed container environments. So that could be deployed into Amazon EKS, ECS, or AWS Fargate, which is great for customers who run their container workloads in our managed services. But we have a lot of customers who run their own Kubernetes environments, either on AWS, on premises, or using one of the other Kubernetes platforms that are out there, like Red Hat OpenShift. So a lot of customers just said, I also want that third-party software to be easily deployable into my own Kubernetes environment. So we were super happy to announce on Monday what we now call AWS Marketplace for Containers Anywhere, which allows our partners like Palo Alto, or CrowdStrike, or Cisco to list containers on the marketplace that can be deployed into any Kubernetes environment the customer is running, whether that be on AWS, on premises, in VMware Tanzu, Red Hat OpenShift, Rancher, or wherever they're running their Kubernetes workloads. So that's super exciting. And then we have a couple of announcements on Data Exchange that Chris can talk about also. >>The Data Exchange, yes. I'm going to come back to the containers with some really important things I want to drill into. Go ahead. >>There are two pretty significant, and we believe game-changing, capabilities that we've recently announced with Data Exchange.
The first one is AWS Data Exchange for APIs, and the reason this is quite significant is that customers had told us that not all of their data use cases were really geared towards consuming full flat files, which is the delivery capability we launched Data Exchange with two years ago. With AWS Data Exchange for APIs, customers can come and procure an API from a third-party data provider and retrieve only the data that they need via an API request and response. Why this is so significant is that data providers can bring their APIs to AWS Data Exchange and make them really easily available for data subscribers to find and subscribe to. And data subscribers interact with that API in the same way they interact with other AWS APIs, and they can enjoy the same governance and control characteristics using services like IAM and CloudTrail. So that flexibility in a new delivery type is really meaningful for data subscribers. The second announcement, which we really went into yesterday, was the preview of AWS Data Exchange for Amazon Redshift. This capability gives customers, data subscribers, the ability to access data in a data warehouse backed by Amazon Redshift. And the unique aspect of this is that the data subscriber doesn't actually have to copy the data out of Amazon Redshift if they don't want to; they can query the data directly. What's really meaningful for them there is that they know they're querying the latest data the data provider has, because they're querying the same data warehouse table that the data provider is publishing into. Data providers really love this, especially those that were already using Amazon Redshift to store their data, because now they don't have to manage the entitlement and subscription aspects of making their data available to as many of their data consumers as possible. >>So basically what you're saying is it makes it easier for them to keep it updated. They don't have to worry about merchandising that service. They just have APIs rolled in, and on the other side, developers can actually integrate new APIs into whatever services they're building. Is that right? >>Yeah. And it's really the ultimate flexibility for a developer coming to AWS Data Exchange. If their use case warrants consuming a full dataset, maybe they want to look at 10 years of stock history, then file-based data delivery and immutable copies of those files through our S3 object data sharing capabilities fit their use case. But if they want to dynamically interact with data, AWS Data Exchange for APIs is a brand-new delivery capability that is really unlocking innovation, and we're excited to see it. >>It's like you're bringing the API economy even further to the customer base and the third parties. The question I have for both of you guys, on the containers and the APIs, is security, because we've seen with containers the need for approved containers being vetted, making sure they're not going to have any malware in there, and with APIs, making sure everything's clean and tight. What are the security concerns? Can you share how you guys are talking about that? >>For sure.
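For readers who want to see what "interacting with a provider's API the same way you interact with other AWS APIs" looks like, below is a minimal, hypothetical sketch using boto3's dataexchange client; the data set, revision, and asset IDs and the query parameter are placeholders for whatever API product a subscriber has actually licensed.

```python
# Hedged sketch: call a subscribed AWS Data Exchange for APIs product.
# The IDs below are placeholders; a real subscriber would look them up
# from their entitled data sets.
import boto3

dx = boto3.client("dataexchange", region_name="us-east-1")

response = dx.send_api_asset(
    DataSetId="example-data-set-id",           # placeholder
    RevisionId="example-revision-id",          # placeholder
    AssetId="example-api-asset-id",            # placeholder
    Method="GET",
    Path="/",                                  # provider-defined path
    QueryStringParameters={"symbol": "AMZN"},  # hypothetical parameter
)

# The provider's payload comes back in Body; IAM policies and CloudTrail
# logging apply to this call like any other AWS API request.
print(response["Body"])
```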
We build it into everything we do. This offering is no different. We scan all of the container images that are published to our catalog before they're exposed to customers for any kind of known vulnerabilities. We're monitoring our catalog every single day now against new ones that might come out and customers actually tell us, it's one of the things that they like about buying software on marketplace, better than let's say other third party repositories that don't have the same level of vetting because they can kind of build that constant trust, um, into, >>And trust is a key cause you can get containers anywhere. You don't know where it's from. So you guys are actually vetting the containers, making sure they're certified. So to speak with Amazon's security check. >>We, we, we are indeed. And, uh, we have a number of security ISV who are participating in both our containers in our containers anywhere. It's one of the most high-performing categories for us. As I said before, we have vendors like CrowdStrike and Cisco and Palo Alto who are, you know, um, um, vending, various different endpoint and network security, um, uh, offerings >>It's my catalogs are for, I mean, this is what trust is all about. Making sure that you guys can put your name behind it in the marketplace. Okay. Let's take it through the consumption. What's the current state of the art with the marketplace with enterprises, you guys have a lot of programs. We're constantly hearing great things about the go to market with joint selling on the top tier. Uh, I think there's like the top tier category. And then you've got all kinds of other incentives for companies to deploy the marketplace and sell their stuff, >>Right? So we're, we're really starting to hit our stride with, uh, co-selling with our partners and some of our, um, you know, our top, most performing partners, they into every feature and capability and incentive program that we develop. Um, give us a lot of feedback on it. Just like we work backwards from customer needs to help them transform their procurement. We work backwards from our partner needs to help them optimize their go to market channel. And, uh, you know, we take feedback from our partners, uh, very seriously. And then we build things like private offers when they want to custom negotiate deals with their customers or channel partner, private offers when they want to do that with the channel partner of their choice. And we're just continuing to listen to that feedback and, and helping them grow their business. And, and, and frankly, you know, while a lot of partners love that we're able to help get them new customers. One of their favorite things about co-selling with us is that they're able to close larger contracts faster because they're doing that in concert with the AWS field teams and taking advantage of the fact that the customer's already building on AWS. >>So I know we've got a couple minutes left. I want to get this out there because I heard it I'd have to add him prior to re-invent. And he said, quote, we don't want, cus customers don't want to reinvent the wheel. And they see, that's why this whole purpose built kind of thing is getting traction. What do you guys got in the marketplace? That's what you'd call leveraging stuff has been built. So customers don't have to rebuild things. >>Yeah. 
I mean, if you just look back to the very beginning of marketplace, when we launched the marketplace of Amazon Machine Images, it was basically pre-built AMIs that customers could deploy into their own accounts, already running the third-party software that they wanted. And when I think about where we're going with things like procurement governance, we developed a thing called private marketplace, where customers can curate the various different solutions from our catalog that they want, because they want to be able to control who in their enterprise can buy what. That's a whole bunch of manual work they would otherwise have had to do, reinventing the wheel from customer to customer, and instead we just delivered them the capability. Same with our managed entitlements capability, where they can share entitlements across AWS accounts within their own organization without having to manually track who's used how much of what and report it back to the seller to make sure they're compliant with the terms and conditions. We handle all that, so our customers don't have to keep reinventing it. >>Why? Well, because it's like the open source concept: you're building on things that are already built, and you can build on top of them. As you guys see these recipes or workflows get rolled out, you put them back in the marketplace. >>That's right. Always learning from customers and partners. And while we've grown quite a bit, 2,000 sellers, 325,000 customers, and billions of dollars of products and services sold, we still have so much more to go. >>Between Data Exchange and what you guys have going on, it gets more and more complex, and I know you're abstracting away the complexity and the heavy lifting for customers. What's on the horizon for you guys? What are you tackling next? What's the next mountain you're going to climb? >>There's still more automation we can drive into the co-selling motion, so that's one. There are more procurement and governance capabilities that we think we're going to be able to add for customers. Basically, what the chief procurement officers we face off with are telling us is that they want to be able to get the best deal at the lowest price, with the best and most favorable terms and conditions. So we're trying to work backwards from that need to make sure we have the right category selection, wherever they might want it, whether it be an infrastructure provider or a line-of-business solution, and make sure they're able to get exactly that. >>Chris, back to you for your vision. Honestly, analytics is a big part of SaaS, and so are platform billing and metering and where the data is. Data Exchange: I can imagine that's going to have a nice headroom to it in terms of what you can do. Yeah. >>If you look at the announcements we've recently made, our vision for Data Exchange is to help any AWS customer find, subscribe to, and use third-party data in the cloud. And these two recent announcements really help on that "use" portion, where someone can shorten the time to value by using some of our analytics services like Amazon Redshift. So we'll continue to innovate there and listen to customers in terms of their feedback and how we can help them really integrate their data pipelines with the rest of the AWS ecosystem.
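To ground the find-subscribe-use flow Chris describes, here is a hedged sketch using boto3's dataexchange client: list the data sets the account is entitled to, then export one revision's files to an S3 bucket you own. The bucket name, and the assumption that at least one entitled data set with a revision exists, are placeholders.

```python
# Hedged sketch: discover entitled AWS Data Exchange data sets and export
# one revision's assets to S3. Bucket and ID values are placeholders.
import boto3

dx = boto3.client("dataexchange", region_name="us-east-1")

# "Find": data sets this account is already entitled to.
entitled = dx.list_data_sets(Origin="ENTITLED")["DataSets"]
for ds in entitled:
    print(ds["Id"], ds["Name"])

# "Use": export the assets of one revision to a bucket we own.
if entitled:
    data_set_id = entitled[0]["Id"]
    revisions = dx.list_data_set_revisions(DataSetId=data_set_id)["Revisions"]
    if revisions:
        job = dx.create_job(
            Type="EXPORT_REVISIONS_TO_S3",
            Details={
                "ExportRevisionsToS3": {
                    "DataSetId": data_set_id,
                    "RevisionDestinations": [
                        {
                            "RevisionId": revisions[0]["Id"],
                            "Bucket": "my-example-bucket",  # placeholder
                        }
                    ],
                }
            },
        )
        dx.start_job(JobId=job["Id"])  # runs asynchronously
```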
But we're also continuing to invest in the find and subscribe portions. Stephen talked about some of the automation, and we've built Data Exchange on top of a lot of the plumbing and building blocks that AWS Marketplace already had, which was a pretty significant leg up for us. But certainly the way in which people discover and find new datasets that might help them with an analytics problem is an area we're going to continue to lean into. >>And exchanges have been around for a long, long time; now it's the cloud generation, and I think you guys have done such a great job with the marketplace. And this next gen has more and more platform-specific products coming out. Partners are snapping together, a lot more integration. So a lot more action coming on integration, I can imagine. Right? >>That's right. Definitely. >>Right. Thanks for coming on theCUBE. Really appreciate it, Steve. Great to see you. >>Appreciate it. Thanks for having us, always a pleasure. >>Great to have all the action from Amazon here. Marketplace continues to be the preferred way to consume and deploy technology, and soon to be an integration hub for this next generation of cloud. I'm John Furrier. You're watching theCUBE, the leader in worldwide tech coverage. We'll be right back.
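One practical footnote on the managed entitlements capability mentioned earlier: Marketplace purchases that use it are delivered as licenses in AWS License Manager, so an organization can enumerate what it is entitled to with an API call rather than a spreadsheet. A hedged sketch, assuming the buyer account has accepted such licenses:

```python
# Hedged sketch: list Marketplace-managed licenses granted to this account
# via AWS License Manager, instead of manually tracking entitlements.
import boto3

lm = boto3.client("license-manager", region_name="us-east-1")

resp = lm.list_received_licenses()
for lic in resp.get("Licenses", []):
    # ProductName, Status, and LicenseArn identify what the account may use.
    print(lic.get("ProductName"), lic.get("Status"), lic.get("LicenseArn"))
```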

Published Date : Dec 1 2021

SUMMARY :

Stephen Orban and Chris Casey of AWS trace how AWS Marketplace has grown from a simple e-commerce catalog launched in April 2012 into a channel with 2,000 sellers, more than 12,000 listings, and over 325,000 customers, spanning AMIs, SaaS, containers, data, and professional services, with private offers and channel partner private offers for negotiated deals. They discuss the newly announced AWS Marketplace for Containers Anywhere, AWS Data Exchange for APIs, the preview of AWS Data Exchange for Amazon Redshift, security vetting of container images, co-selling with partners, private marketplaces, and managed entitlements.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Cisco | ORGANIZATION | 0.99+
Chris | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
Steve | PERSON | 0.99+
Jeff | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
2018 | DATE | 0.99+
2019 | DATE | 0.99+
Adam | PERSON | 0.99+
Chris Casey | PERSON | 0.99+
April of 2012 | DATE | 0.99+
Steven | PERSON | 0.99+
2020 | DATE | 0.99+
10 years | QUANTITY | 0.99+
John | PERSON | 0.99+
2000 | DATE | 0.99+
Monday | DATE | 0.99+
CrowdStrike | ORGANIZATION | 0.99+
more than 325,000 customers | QUANTITY | 0.99+
Stephen Orban | PERSON | 0.99+
two | QUANTITY | 0.99+
2000 sellers | QUANTITY | 0.99+
more than 12,000 offerings | QUANTITY | 0.99+
325,000 customers | QUANTITY | 0.99+
first one | QUANTITY | 0.99+
yesterday | DATE | 0.99+
Palo Alto | ORGANIZATION | 0.99+
both | QUANTITY | 0.99+
second | QUANTITY | 0.99+
Druva | TITLE | 0.99+
first time | QUANTITY | 0.98+
first | QUANTITY | 0.98+
one | QUANTITY | 0.98+
two years ago | DATE | 0.97+
first announcement | QUANTITY | 0.97+
One | QUANTITY | 0.97+
a year | QUANTITY | 0.97+
billions of dollars | QUANTITY | 0.96+
first version | QUANTITY | 0.96+
this week | DATE | 0.96+
SAS | ORGANIZATION | 0.95+
this year | DATE | 0.95+
today | DATE | 0.94+
ten-year anniversary | QUANTITY | 0.93+
S3 | TITLE | 0.92+
GM | ORGANIZATION | 0.92+
CloudTrail | TITLE | 0.91+
two recent announcements | QUANTITY | 0.89+
Redshift | TITLE | 0.89+
OpenShift | TITLE | 0.88+
day one | QUANTITY | 0.86+
Palo Alto | TITLE | 0.85+
couple minutes | QUANTITY | 0.79+

Manu Parbhakar, AWS & Mike Evans, Red Hat | AWS re:Invent 2021


 

(upbeat music) >> Hey, welcome back everyone to theCube's coverage of AWS re:Invent 2021. I'm John Furrier, host of theCube, wall-to-wall coverage in-person and hybrid. The two great guests here, Manu Parbhakar, worldwide Leader, Linux and IBM Software Partnership at AWS, and Mike Evans, Vice President of Technical Business Development at Red Hat. Gentlemen, thanks for coming on theCube. Love this conversation, bringing Red Hat and AWS together. Two great companies, great technologies. It really is about software in the cloud, Cloud-Scale. Thanks for coming on. >> Thanks John. >> So get us into the partnership. Okay. This is super important. Red Hat, well known open source as cloud needs to become clear, doing an amazing work. Amazon, Cloud-Scale, Data is a big part of it. Modern software. Tell us about the partnership. >> Thanks John. Super excited to share about our partnership. As we have been partnering for almost 14 years together. We started in the very early days of AWS. And now we have tens of thousands of customers that are running RHEL on EC2. If you look at over the last three years, the pace of innovation for our joint partnership has only increased. It has manifested in three key formats. The first one is the pace at which RHEL supports new EC2 instances like Arm, Graviton. You know, think a lot of features like Nitro. The second is just the portfolio of new RHEL offerings that we have launched over the last three years. We started with RHEL for sequel, RHEL high availability, RHEL for SAP, and then only last month, we've launched the support for knowledge base for RHEL customers. Mike, you want to talk about what you're doing with OpenShift and Ansible as well? >> Yeah, it's good to be here. It's fascinating to me cause I've been at Red Hat for 21 years now. And vividly remember the start of working with AWS back in 2008, when the cloud was kind of a wild idea with a whole bunch of doubters. And it's been an interesting time, but I feel the next 14 years are going to be exciting in a different way. We now have a very large customer base from almost every industry in the world built on RHEL, and running on AWS. And our goal now is to continue to add additional elements to our offerings, to build upon that and extend it. The largest addition which we're going to be talking a lot about here at the re:Invent show was the partnership in April this year when we launched the Red Hat OpenShift service on AWS as a managed version of OpenShift for containers based workloads. And we're seeing a lot of the customers that have standardized on RHEL on EC2, or ones that are using OpenShift on-premise deployments, as the early adopters of ROSA, but we're also seeing a huge number of new customers who never purchased anything from Red Hat. So, in addition to the customers, we're getting great feedback from systems integrators and ISV partners who are looking to have a software application run both on-premise and in AWS, and with OpenShift being one of the pioneers in enabling both container and harnessing Kubernetes where ROSA is just a really exciting area for us to track and continue to advance together with AWS. >> It's very interesting. Before I get to ROSA, I want to just get the update on Red Hat and IBM, obviously the acquisition part of IBM, how is that impacting the partnership? You can just quickly touch on that. >> Sure. 
I'll start off. I mean, Red Hat went from a company of about 15,000 employees competing with a lot of really large technology companies, and we added more than 100,000 field-oriented people when IBM acquired Red Hat to help magnify the Red Hat solutions, and the global scale and coverage of IBM is incredible. I like to give two simple examples. One is, I remember our salesforce in EMEA telling me they got a $4 million order from a country in Africa they didn't even know existed, and IBM had 100 people in it. Or AT&T: AT&T is one of Red Hat's largest accounts, and I think at one point we had seven full-time people on it, while AT&T is one of IBM's largest accounts and they had two seven-storey buildings full of people working with AT&T. Relative to AWS, we now also see IBM embracing AWS more with both software and services, and the magnification of Red Hat-based solutions combined with that embrace should create some great growth. And I think IBM is pretty excited about being able to sell Red Hat software as well. >>Yeah, go ahead. >>And Manu, I think you have, yeah. >>Yeah, it is definitely very positive, John. >>Yeah. >>You know, just the joint work that Red Hat and AWS have done for the last 14 years, working in the trenches supporting our end customers, is now also providing a lot of tailwinds for the IBM software partnership. We have done some incredible work over the last 12 months around three broad categories: the first one is around product, then what we're doing around customer success, and then what we're doing around sales and marketing. On the product side, we have listed about 15 products on Marketplace over the course of the last 12 to 15 months, and our goal is to launch all of the IBM Cloud Paks, the containerized versions of IBM software, on Marketplace by the first half of next year. The other feedback we are getting from our customers is: hey, we love IBM software running at Amazon, but we would like to have a cloud-native SaaS version of the software. So there's a lot of work going on right now to make sure that many of these offerings are available in a cloud-native manner, and we're now talking about Db2, Cognos, Maximo, (indistinct), on EC2. The second thing we're doing is making sure that the large enterprise customers running IBM software are successful, so our technical teams are attached at the hip, working on the ground floor to make customers like Delta successful in running IBM software on AWS. I think the third piece, around sales and marketing, is building up a vibrant ecosystem around how we modernize and migrate this IBM software onto Cloud Paks on AWS. So there's a huge push going on here, and (indistinct), you know, the Red Hat partnership is providing a lot of tailwinds to accelerate our partnership with IBM software. >>You know, I've been saying all this year, at Red Hat Summit as well as AnsibleFest, that distributed computing is coming to large scale, and that's really what's happening. I mean, look at what you guys are doing, because it's amazing. ROSA, Red Hat OpenShift Service on AWS, very notable to use the term "on AWS," which actually means something in the partnership, as we've learned over the years. How is that going, Mike? Because you launched ROSA on theCUBE in April and it had great traction going in. It's in the Marketplace. You've got some integration. It's really a hand-in-glove situation with Cloud-Scale. Take us through the update.
>> Yeah, let me, let me let Manu speak first to his AWS view and then I'll add the Red Hat picture. >> Thanks Mike. John for ROSA is part of an entire container portfolio. So if you look at it, so we have ECS, EKS, the managed Kubernetes service. We have the serverless containers with Fargate. We launched ECS case anywhere. And then ROSA is part of an entire portfolio of container services. As you know, two thirds of all container workloads run on AWS. And a big function of that is because we (indistinct) from our customer and then sold them what the requirements are. There are two sets of key customers that are driving the demand and the early adoption of ROSA. The first set of customers that have standardized on OpenShift on-premises. They love the fact that everything that comes out of the box and they would love to use it on Arm. So that's the first (indistinct). The second set of customers are, you know, the large RHEL users on EC2. The tens of thousands of customers that we've talked about that want to move from VM to containers, and want to do DevOps. So it's this set of two customers that are informing our roadmap, as well as our investments around ROSA. We are seeing solid adoption, both in terms of adoption by a customer, as well as the partners and helping, and how our partners are helping our customers in modernizing from VMs to containers. So it's a, it's a huge, it's a huge priority for our container service. And over the next few years, we continue to see, to increase our investment on the product road map here. >> Yeah, from my perspective, first off at the high level in mind, my one of the most interesting parts of ROSA is being integrated in the AWS console and not just for the, you know, where it shows up on the screen, but also all the work behind what that took to get there and why we did it. And we did it because customers were asking both of us, we're saying, look, OpenShift is a platform. We're going to be building and deploying serious applications at incredible scale on it. And it's really got to have joint high-quality support, joint high-quality engineering. It's got to be rock solid. And so we came to agreement with AWS. That was the best way to do that, was to build it in the console, you know, integrated in, into the core of an AWS engineering team with Red Hat engineers, Arm and Arms. So that's, that's a very unique service and it's not like a high level SaaS application that runs above everything, it's down in the bowels and, and really is, needs to be rock solid. So we're seeing, we're seeing great interest, both from end users, as I mentioned, existing customers, new customers, the partner base, you know, how the systems integrators are coming on board. There's lots of business and money to be made in modernizing applications as well as building new cloud native applications. People can, you know, between Red Hat and AWS, we've got some, some models around supporting POCs and customer migrations. We've got some joint investments. it's a really ripe area. >> Yeah. That's good stuff. Real quick. what do you think of ROSA versus EKS and ECS? What's, how should people think about that Mike? (indistinct) >> You got to go for it Manu. Your job is to position all these (indistinct). (indistinct) >> John, ROSA is part of our container portfolio services along with EKS, ECS, Fargate, and any (indistinct) services that we just launched earlier this year. There are, you know, set of customers both that are running OpenShift on-premises that are standardized on ROSA. 
And then there is a large set of RHEL customers that are running RHEL on EC2 who want to use the ROSA service. So both AWS and Red Hat are now continuing to invest in accelerating the roadmap of the service on our platform. We are working on improving the console experience. Also, one of the things we just launched recently is the AWS Controllers for Kubernetes, you know, service operators for S3. So over the next few years you will see significant investment from both Red Hat and AWS in this joint service, and this is an integral part of our overall container portfolio. >>And great stuff to get in the console. That's great, great integration. That's the future. I've got to ask about the Graviton instances. It's been one of the biggest success stories, I think, in Amazon history; the acquisition of Annapurna has really created great differentiation. And anyone who's in software knows if you have good chips powering apps, they go faster. And if the chips are good, they're less expensive. And that's the innovation. We saw that RHEL now supports Graviton instances. Tell us more about the Red Hat strategy with Graviton and Arm specifically. How has that impacted your (indistinct) development, and what does it mean for customers? >>Sure. Yeah, it's a pretty fascinating area for me. As I said, I've been at Red Hat for 21 years, and my job now is actually looking at new markets and new technologies for Red Hat and working with our largest partners. So I've been tracking the Arm dynamics for a while, and we've been working with AWS for over two years supporting Graviton. And I'm seeing more enthusiasm now in terms of developers, especially for very horizontal, large-scale applications. We're excited to be working with AWS directly on it, and I think it's going to be a fascinating next two years on Arm, personally. >>Many of the specialized processors for training and inference, all that stuff, can be applied to web services and automation, like cloud-native services, right? It sounds like a good direction. Take us through that. >>John, on our partnership with Red Hat, we are continuing to iterate. As Mike mentioned, the stuff that we've done around Graviton over the last two years is pretty incredible, and the pace at which we are innovating is improving. Around the (indistinct) and the Inferentia instances, we are continuing to work with Red Hat, and the support for RHEL should come shortly, very soon. >>Well, my prediction is that the Graviton success is going to be applied to every single category. You can get that kind of innovation on the software side; that's the magical, proven form of software, right? We've been there: good software powered by some great performance. Manu, Mike, thank you for coming on and sharing the news and the partnership update. Congratulations on the partnership. Really good. Thank you. >>Excellent, John. Incredible (indistinct). >>Yeah, this is the future of software as we see it, all coming together. Here on theCUBE, we're bringing you all the action, software being powered by chips. This is theCUBE's coverage of AWS re:Invent 2021. I'm John Furrier, your host. Thanks for watching. (upbeat music)
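As a small footnote to the Graviton discussion, launching RHEL on an Arm-based instance is an ordinary EC2 call once a suitable arm64 image is chosen. In the hedged sketch below, the AMI ID and key pair are placeholders you would replace with Red Hat's current arm64 RHEL image for your region.

```python
# Hedged sketch: launch RHEL on an Arm-based Graviton instance (m6g family).
# The AMI ID and key pair are placeholders; look up the current RHEL arm64
# image for your region before running this.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder RHEL arm64 AMI
    InstanceType="m6g.large",          # Graviton2-based instance type
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",             # placeholder
)
```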

Published Date : Nov 30 2021

SUMMARY :

Manu Parbhakar of AWS and Mike Evans of Red Hat discuss a partnership dating back to 2008: tens of thousands of customers running RHEL on EC2, new RHEL offerings and fast support for services like Graviton and Nitro, the Red Hat OpenShift Service on AWS (ROSA) launched in April 2021 and integrated directly into the AWS console, the tailwinds from IBM's acquisition of Red Hat and IBM software coming to AWS Marketplace, and growing enthusiasm for Arm-based Graviton instances.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
John | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
IBM | ORGANIZATION | 0.99+
Manu Parbhakar | PERSON | 0.99+
Mike | PERSON | 0.99+
Mike Evans | PERSON | 0.99+
2008 | DATE | 0.99+
AT&T | ORGANIZATION | 0.99+
John Furrier | PERSON | 0.99+
two customers | QUANTITY | 0.99+
21 years | QUANTITY | 0.99+
AT&T. | ORGANIZATION | 0.99+
Red Hat | ORGANIZATION | 0.99+
Red Hat | TITLE | 0.99+
Amazon | ORGANIZATION | 0.99+
Africa | LOCATION | 0.99+
Manu | PERSON | 0.99+
April | DATE | 0.99+
RHEL | TITLE | 0.99+
$4 million | QUANTITY | 0.99+
April this year | DATE | 0.99+
two sets | QUANTITY | 0.99+
one | QUANTITY | 0.99+
both | QUANTITY | 0.99+
100 people | QUANTITY | 0.99+
Red Hat | TITLE | 0.99+
second set | QUANTITY | 0.99+
Delta | ORGANIZATION | 0.99+
third piece | QUANTITY | 0.99+
first set | QUANTITY | 0.99+
two | QUANTITY | 0.99+
first | QUANTITY | 0.99+
over two years | QUANTITY | 0.99+
One | QUANTITY | 0.99+
first one | QUANTITY | 0.99+
more than 100,000 field | QUANTITY | 0.99+
EC2 | TITLE | 0.99+

Danny Allan, Veeam & James Kirschner, Amazon | AWS re:Invent 2021


 

(innovative music) >> Welcome back to theCUBE's continuous coverage of AWS re:Invent 2021. My name is Dave Vellante, and we are running one of the industry's most important and largest hybrid tech events of the year. Hybrid as in physical, not a lot of that going on this year. But we're here with the AWS ecosystem, AWS, and special thanks to AMD for supporting this year's editorial coverage of the event. We've got two live sets, two remote studios, more than a hundred guests on the program. We're going really deep, as we enter the next decade of Cloud innovation. We're super excited to be joined by Danny Allan, who's the Chief Technology Officer at Veeam, and James Kirschner who's the Engineering Director for Amazon S3. Guys, great to see you. >> Great to see you as well, Dave. >> Thanks for having me. >> So let's kick things off. Veeam and AWS, you guys have been partnering for a long time. Danny, where's the focus at this point in time? What are customers telling you they want you to solve for? And then maybe James, you can weigh in on the problems that customers are facing, and the opportunities that they see ahead. But Danny, why don't you start us off? >> Sure. So we hear from our customers a lot that they certainly want the solutions that Veeam is bringing to market, in terms of data protection. But one of the things that we're hearing is they want to move to Cloud. And so there's a number of capabilities that they're asking us for help with. Things like S3, things like EC2, and RDS. And so over the last, I'll say four or five years, we've been doing more and more together with AWS in, I'll say, two big categories. One is, how do we help them send their data to the Cloud? And we've done that in a very significant way. We support obviously tiering data into S3, but not just S3. We support S3, and S3 Glacier, and S3 Glacier Deep Archive. And more importantly than ever, we do it with immutability because customers are asking for security. So a big category of what we're working on is making sure that we can store data and we can do it securely. Second big category that we get asked about is "Help us to protect the Cloud-Native Workloads." So they have workloads running in EC2 and RDS, and EFS, and EKS, and all these different services knowing Cloud-Native Data Protection. So we're very focused on solving those problems for our customers. >> You know, James, it's interesting. I was out at the 15th anniversary of S3 in Seattle, in September. I was talking to Mai-Lan. Remember we used to talk about gigabytes and terabytes, but things have changed quite dramatically, haven't they? What's your take on this topic? >> Well, they sure have. We've seen the exponential growth data worldwide and that's made managing backups more difficult than ever before. We're seeing traditional methods like tape libraries and secondary sites fall behind, and many organizations are moving more and more of their workloads to the Cloud. They're extending backup targets to the Cloud as well. AWS offers the most storage services, data transfer methods and networking options with unmatched durability, security and affordability. And customers who are moving their Veeam Backups to AWS, they get all those benefits with a cost-effective offsite storage platform. Providing physical separation from on-premises primary data with pay-as-you-go economics, no upfront fees or capital investments, and near zero overhead to manage. 
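The immutability and tiering Danny mentions map onto two standard S3 features: Object Lock for write-once retention and lifecycle rules for moving aging backups into the Glacier classes. A minimal hedged sketch of the bucket-side setup follows; the bucket name, region, prefixes, and retention periods are placeholders, and Veeam itself performs the object writes.

```python
# Hedged sketch: an S3 bucket configured for immutable backups that age out
# to Glacier storage classes. Names, region, and periods are placeholders.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")
bucket = "example-backup-repository"   # placeholder

# Object Lock must be enabled at bucket creation time.
s3.create_bucket(Bucket=bucket, ObjectLockEnabledForBucket=True)

# Default retention: objects cannot be deleted or overwritten for 30 days.
s3.put_object_lock_configuration(
    Bucket=bucket,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)

# Tier aging backups to colder, cheaper storage classes over time.
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-old-backups",
                "Status": "Enabled",
                "Filter": {"Prefix": "backups/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "GLACIER"},
                    {"Days": 180, "StorageClass": "DEEP_ARCHIVE"},
                ],
            }
        ]
    },
)
```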
AWS and APM partners like Veeam are helping to build secure, efficient, cost-effective backup, and restore solutions using the products you know and trust with the scale and reliability of the AWS Cloud. >> So thank you for that. Danny, I remember I was way back in the old days, it was a VeeamON physical event. And I remember kicking around and seeing this company called Kasten. And I was really interested in like, "You protect the containers, aren't they ephemeral?" And we started to sort of chit-chat about how that's going to change and what their vision was. Well, back in 2020, you purchased Kasten, you formed the Veeam KBU- the Kubernetes Business Unit. What was the rationale behind that acquisition? And then James, I'm going to get you to talk a little bit about modern apps. But Danny, start with the rationale behind the Kasten acquisition. >> Well, one of the things that we certainly believe is that the next generation of infrastructure is going to be based on containers, and there's a whole number of reasons for that. Things like scalability and portability. And there's a number of significant value-adds. So back in October of last year in 2020, as you mentioned, we acquired Kasten. And since that time we've been working through Kasten and from Veeam to add more capabilities and services around AWS. For example, we supported the Bottlerocket launch they just did and actually EKS anywhere. And so we're very focused on making sure that our customers can protect their data no matter whether it's a Kubernetes cluster, or whether it's on-premises in a data center, or if it's running up in the Cloud in EC2. We give this consistent data management experience and including, of course, the next generation of infrastructure that we believe will be based on containers. >> Yeah. You know, James, I've always noted to our audience that, "Hey AWS, they provide rich set of primitives and API's that ISV's like Veeam can take advantage of it." But I wonder if you could talk about your perspective, maybe what you're seeing in the ecosystem, maybe comment on what Veeam's doing. Specifically containers, app modernization in the Cloud, the evolution of S3 to support all these trends. >> Yeah. Well, it's been great to see Veeam expands for more and more AWS services to help joint customers protect their data. Especially since Veeam stores their data in Amazon S3 storage classes. And over the last 15 years, S3 has helped companies around the world optimize their work, so I'd be happy to share some insights into that with you today. When you think about S3 well, you can find virtually every use case across all industries running on S3. That ranges from backup, to (indistinct) data, to machine learning models, the list goes on and on. And one of the reasons is because S3 provides industry leading scalability, availability, durability, security, and performance. Those are characteristics customers want. To give you some examples, S3 stores exabytes the data across millions of hard drives, trillions of objects around the world and regularly peaks at millions of requests per second. S3 can process in a single region over 60 terabytes a second. So in summary, it's a very powerful storage offering. >> Yeah, indeed. So you guys always talking about, you know, working backwards, the customer centricity. I think frankly that AWS sort of change the culture of the entire industry. So, let's talk about customers. Danny do you have an example of a joint customer? 
Maybe how you're partnering with AWS to try to address some of the challenges in data protection. What are customers seeing today? >> Well, we're certainly seeing that migration towards the Cloud, as James alluded to today. And actually, if we're talking about Kubernetes, actually there's a customer that I know of right now, Leidos. They're a Fortune 500 Information Technology Company. They deal in the engineering and technology services space, and focus on highly regulated industries. Things like defense and intelligence, the civil space, and healthcare, these very regulated industries. Anyway, they decided to make a big investment in continuous integration, continuous development. There's a segment of the industry called portable DevSecOps, and they wanted to build infrastructure as code so that they could deploy services, not in days or weeks or months, but they literally wanted to deploy their services in hours. And so they came to us, and with Kasten K10 actually around Kubernetes, they created a service that could enable them to do that. So they could be fully compliant, and they could deliver the services in, like I say, hours, not days or months. And they did that all while delivering the same security that they need in a cost-effective way. So it's been a great partnership, and that's just one example. We see these all the time, customers who want to combine the power of Kubernetes with the scale of the Cloud from AWS, with the data protection that comes from Veeam. >> Yes, so James, you know at AWS you don't get dinner if you don't have a customer example. So maybe you could share one with us. >> Yeah. We do love working backwards from customers and Danny, I loved hearing that story. One customer leveraging Veeam and AWS is Maritz. Maritz provides business performance solutions that connect people to results, ensuring brands deliver on their customer promises and drive growth. Recently Maritz moved over a thousand VMs and petabytes of data into AWS, using Veeam. Veeam Backup for AWS enables Maritz to protect their Amazon EC2 instances, with backup of the data in Amazon S3 for highly available, cost-effective, long-term storage. >> You know, one of the hallmarks of Cloud is a strong ecosystem. I see a lot of companies doing sort of their own version of Cloud. I always ask "What's the partner ecosystem look like?" Because that is a fundamental requirement, in my view anyway, and an attribute. And so, a big part of that, Danny, is channel partners. And you have a 100 percent channel model. And I wonder if we could talk about your strategy in that regard. Why is it important to be all channel? How do consulting partners fit into the strategy? And then James, I'm going to ask you what's the fit with the AWS ecosystem. But Danny, let's start with you. >> Sure, so one of the things that we've learned, we're 15 years old as well, actually. I think we're about two months older, or younger I should say, than AWS. I think their birthday was in August, ours was in October. But over that 15 years, we've learned that our customers enjoy the services, and support, and expertise that comes from the channel. And so we've always been a 100 percent channel company. And so one of the things that we've done with AWS is to make sure that our customers can purchase both how and when they want through the AWS marketplace. They have a program called Consulting Partners Private Agreements, or CPPO, I think is what it's known as.
And that allows our customers to consume through the channel, but with the terms and bill that they associate with AWS. And so it's a new route-to-market for us, but we continue to partner with AWS in the channel programs as well. >> Yeah. The marketplace is really impressive. James, I wonder if you could maybe add in a little bit. >> Yeah. I think Danny said it well, AWS marketplace is a sales channel for ISV's and consulting partners. It lets them sell their solutions to AWS customers. And we focus on making it really easy for customers to find, buy, deploy, and manage software solutions, including software as a service in just a matter of minutes. >> Danny, you mentioned you're 15 years old. The first time I mean, the name Veeam. The brilliance of tying it to virtualization and VMware. I was at a VMUG when I first met you guys and saw your ascendancy tied to virtualization. And now you're obviously leaning heavily into the Cloud. You and I have talked a lot about the difference between just wrapping your stack in a container and hosting it in the Cloud versus actually taking advantage of Cloud-Native Services to drive further innovation. So my question to you is, where does Veeam fit on that spectrum, and specifically what Cloud-Native Services are you leveraging on AWS? And maybe what have been some outcomes of those efforts, if in fact that's what you're doing? And then James, I have a follow-up for you. >> Sure. So the, the outcomes clearly are just more success, more scale, more security. All the things that James is alluding to, that's true for Veeam it's true for our customers. And so if you look at the Cloud-Native capabilities that we protect today, certainly it began with EC2. So we run things in the Cloud in EC2, and we wanted to protect that. But we've gone well beyond that today, we protect RDS, we protect EFS- Elastic File Services. We talked about EKS- Elastic Kubernetes Services, ECS. So there's a number of these different services that we protect, and we're going to continue to expand on that. But the interesting thing is in all of these, Dave, when we do data protection, we're sending it to S3, and we're doing all of that management, and tiering, and security that our customers know and love and expect from Veeam. And so you'll continue to see these types of capabilities coming from Veeam as we go forward. >> Thank you for that. So James, as we know S3- very first service offered in 2006 on the AWS' Cloud. As I said, theCUBE was out in Seattle, September. It was a great, you know, a little semi-hybrid event. But so over the decade and a half, you really expanded the offerings quite dramatically. Including a number of, you got on-premise services things, like Outposts. You got other services with "Wintery" names. How have you seen partners take advantage of those services? Is there anything you can highlight maybe that Veeam is doing that's notable? What can you share? >> Yeah, I think you're right to call out that growth. We have a very broad and rich set of features and services, and we keep growing that. Almost every day there's a new release coming out, so it can be hard to keep up with. And Veeam has really been listening and innovating to support our joint customers. Like Danny called out a number of the ways in which they've expanded their support. Within Amazon S3, I want to call out their support for our infrequent access, infrequent access One-Zone, Glacier, and Glacier Deep Archive Storage Classes. 
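For readers who want to see what those tiers look like in practice, here is a minimal boto3 sketch of a lifecycle rule that ages objects down through a few of the storage classes James names; the bucket name, prefix, and day counts are hypothetical and would be tuned to a real retention policy.

```python
import boto3

s3 = boto3.client("s3")

# Age backups down to cheaper storage classes as they get older.
s3.put_bucket_lifecycle_configuration(
    Bucket="backup-repo-example",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "age-out-backups",
                "Status": "Enabled",
                "Filter": {"Prefix": "backups/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                    {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
                ],
            }
        ]
    },
)
```

One Zone-Infrequent Access (storage class "ONEZONE_IA") is another option in the same family for data that can be re-created if a single Availability Zone is lost.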
And they also support other AWS storage services like AWS Outposts, AWS Storage Gateway, AWS Snowball Edge, and the Cold-themed storage offerings. So absolutely a broad set of support there. >> Yeah. There's those, winter is coming. Okay, great guys, we're going to leave it there. Danny, James, thanks so much for coming to theCUBE. Really good to see you guys. >> Good to see you as well, thank you. >> All right >> Thanks for having us. >> You're very welcome. You're watching theCUBE's coverage of 2021 AWS re:Invent, keep it right there for more action on theCUBE, your leader in hybrid tech event coverage, right back. (uplifting music)

Published Date : Nov 30 2021


Ian Buck, NVIDIA | AWS re:Invent 2021


 

(bright music) >> Well, welcome back to theCUBE's coverage of AWS re:Invent 2021. We're here joined by Ian Buck, general manager and vice president of Accelerated Computing at NVIDIA. I'm John Furrier, host of theCUBE. Ian, thanks for coming on. >> Oh, thanks for having me. >> So NVIDIA, obviously, great brand. Congratulations on all your continued success. Everyone who does anything in graphics knows that GPUs are hot, and you guys have a great brand, great success in the company. But AI and machine learning, we're seeing the trend significantly being powered by the GPUs and other systems. So it's a key part of everything. So what's the trends that you're seeing in ML and AI that's accelerating computing to the cloud? >> Yeah. I mean, AI is kind of driving breakthroughs and innovations across so many segments, so many different use cases. We see it showing up with things like credit card fraud prevention, and product and content recommendations. Really, it's the new engine behind search engines, is AI. People are applying AI to things like meeting transcriptions, virtual calls like this, using AI to actually capture what was said. And that gets applied in person-to-person interactions. We also see it in intelligence assistance for contact center automation, or chat bots, medical imaging, and intelligence stores, and warehouses, and everywhere. It's really amazing what AI has been demonstrating, what it can do, and its new use cases are showing up all the time. >> You know, Ian, I'd love to get your thoughts on how the world's evolved, just in the past few years alone, with cloud. And certainly, the pandemic's proven it. You had this whole kind of fullstack mindset, initially, and now you're seeing more of a horizontal scale, but yet, enabling this vertical specialization in applications. I mean, you mentioned some of those apps. The new enablers, this kind of, the horizontal play with enablement for, you know, specialization with data, this is a huge shift that's going on. It's been happening. What's your reaction to that? >> Yeah. The innovation's on two fronts. There's a horizontal front, which is basically the different kinds of neural networks or AIs, as well as machine learning techniques, that are just being invented by researchers and the community at large, including Amazon. You know, it started with these convolutional neural networks, which are great for image processing, but has expanded more recently into recurrent neural networks, transformer models, which are great for language and language understanding, and then the new hot topic, graph neural networks, where the actual graph now is trained as a neural network. You have this underpinning of great AI technologies that are being invented around the world. NVIDIA's role is to try to productize that and provide a platform for people to do that innovation. And then, take the next step and innovate vertically. Take it and apply it to a particular field, like medical, like healthcare and medical imaging, applying AI so that radiologists can have an AI assistant with them and highlight different parts of the scan that may be troublesome or worrying, or require some more investigation. Using it for robotics, building virtual worlds where robots can be trained in a virtual environment, their AI being constantly trained and reinforced, and learning how to do certain activities and techniques. So that the first time it's ever downloaded into a real robot, it works right out of the box.
To activate that, we are creating different vertical solutions, vertical stacks, vertical products, that talk the languages of those businesses, of those users. In medical imaging, it's processing medical data, which is obviously a very complicated, large format data, often three-dimensional voxels. In robotics, it's building, combining both our graphics and simulation technologies, along with the AI training capabilities and different capabilities, in order to run in real time. Those are just two simple- >> Yeah, no. I mean, it's just so cutting-edge, it's so relevant. I mean, I think one of the things you mentioned about the neural networks, specifically, the graph neural networks, I mean, we saw, I mean, just go back to the late 2000s, how unstructured data, or object storage created, a lot of people realized a lot of value out of that. Now you got graph value, you got network effect, you got all kinds of new patterns. You guys have this notion of graph neural networks that's out there. What is a graph neural network, and what does it actually mean from a deep learning and an AI perspective? >> Yeah. I mean, a graph is exactly what it sounds like. You have points that are connected to each other, that establish relationships. In the example of Amazon.com, you might have buyers, distributors, sellers, and all of them are buying, or recommending, or selling different products. And they're represented in a graph. If I buy something from you and from you, I'm connected to those endpoints, and likewise, more deeply across a supply chain, or warehouse, or other buyers and sellers across the network. What's new right now is, that those connections now can be treated and trained like a neural network, understanding the relationship, how strong is that connection between that buyer and seller, or the distributor and supplier, and then build up a network to figure out and understand patterns across them. For example, what products I may like, 'cause I have this connection in my graph, what other products may meet those requirements? Or, also, identifying things like fraud, when patterns and buying patterns don't match what a graph neural network would say is the typical kind of graph connectivity, the different kind of weights and connections between the two, captured by the frequency of how often I buy things, or how I rate them or give them stars, or other such use cases. This application, graph neural networks, which is basically capturing the connections of all things with all people, especially in the world of e-commerce, is a very exciting new application of applying AI to optimizing business, to reducing fraud, and letting us, you know, get access to the products that we want. They have our recommendations be things that excite us and want us to buy things, and buy more. >> That's a great setup for the real conversation that's going on here at re:Invent, which is new kinds of workloads are changing the game, people are refactoring their business with, not just re-platforming, but actually using this to identify value. And also, your cloud scale allows you to have the compute power to, you know, look at a node in an arc and actually code that. It's all science, it's all computer science, all at scale. So with that, that brings up the whole AWS relationship. Can you tell us how you're working with AWS, specifically? >> Yeah, AWS has been a great partner, and one of the first cloud providers to ever provide GPUs to the cloud.
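As a concrete picture of the neighbor aggregation Ian describes above, here is a toy, framework-free sketch of a single graph-neural-network layer over a tiny buyer/seller graph. The graph, features, and weights are invented for illustration; a production system would use a framework such as PyTorch and graphs with millions of nodes.

```python
import numpy as np

# Toy graph of 4 nodes (say, two buyers, a seller, a distributor).
# adj[i, j] = 1 means node i and node j are connected.
adj = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=np.float32)

# Each node starts with a small feature vector (e.g. purchase-history stats).
features = np.random.rand(4, 3).astype(np.float32)

# One weight matrix for this layer (randomly initialised; training would learn it).
W = np.random.rand(3, 3).astype(np.float32)

def gnn_layer(adj, h, W):
    """One round of message passing: average neighbour features, then transform."""
    a_hat = adj + np.eye(adj.shape[0], dtype=adj.dtype)  # keep each node's own signal
    deg = a_hat.sum(axis=1, keepdims=True)               # normalise by degree
    messages = (a_hat / deg) @ h                         # aggregate neighbours
    return np.maximum(messages @ W, 0.0)                 # linear transform + ReLU

embeddings = gnn_layer(adj, features, W)
print(embeddings.shape)  # (4, 3): each node's new embedding reflects its neighbours
```

Stacking a few such layers and training the weights against labels (fraud or not, bought or not) is, at a sketch level, how the connection structure itself becomes the model.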
More recently, we've announced two new instances, the G5 instance, which is based on our A10G GPU, which supports the NVIDIA RTX technology, our rendering technology, for real-time ray tracing in graphics and game streaming. This is our highest-performance graphics-enhanced instance, and it allows for those high-performance graphics applications to be directly hosted in the cloud. And, of course, it runs everything else as well. It has access to our AI technology and runs all of our AI stacks. We also announced, with AWS, the G5g instance. This is exciting because it's the first Graviton or Arm-based processor connected to a GPU and successful in the cloud. The focus here is Android gaming and machine learning inference. And we're excited to see the advancements that Amazon is making and AWS is making, with Arm in the cloud. And we're glad to be part of that journey. >> Well, congratulations. I remember, I was just watching my interview with James Hamilton from AWS in 2013 and 2014. He was teasing this out, that they're going to build their own, get in there, and build their own connections to take that latency down and do other things. This is kind of the harvest of all that. As you start looking at these new interfaces, and the new servers, new technology that you guys are doing, you're enabling applications. What do you see this enabling? As this new capability comes out, new speed, more performance, but also, now it's enabling more capabilities so that new workloads can be realized. What would you say to folks who want to ask that question? >> Well, so first off, I think Arm is here to stay. We can see the growth and explosion of Arm, led of course, by Graviton and AWS, but many others. And by bringing all of NVIDIA's rendering graphics, machine learning and AI technologies to Arm, we can help bring that innovation that Arm allows, that open innovation, because there's an open architecture, to the entire ecosystem. We can help bring it forward to the state of the art in AI machine learning and graphics. All of our software that we release is supported both on x86 and on Arm equally, including all of our AI stacks. So most notably, for inference, the deployment of AI models, we have the NVIDIA Triton inference server. This is our inference serving software, where after you've trained a model, you want to deploy it at scale on any CPU, or GPU instance, for that matter. So we support both CPUs and GPUs with Triton. It's natively integrated with SageMaker and provides the benefit of all those performance optimizations. Features like dynamic batching, it supports all the different AI frameworks, from PyTorch to TensorFlow, even generalized Python code. We're activating, and helping to activate, the Arm ecosystem, as well as bringing all those new AI use cases, and all those different performance levels with our partnership with AWS and all the different cloud instances. >> And you guys are making it really easy for people to use the technology. That brings up the next, kind of, question I wanted to ask you. I mean, a lot of people are really going in, jumping in big-time into this. They're adopting AI, either they're moving it from prototype to production. There's always some gaps, whether it's, you know, knowledge, skills gaps, or whatever. But people are accelerating into the AI and leaning into it hard. What advancements has NVIDIA made to make it more accessible for people to move faster through the system, through the process? >> Yeah. It's one of the biggest challenges.
You know, the promise of AI, all the publications that are coming out, all the great research, you know, how can you make it more accessible or easier to use by more people? Rather than just being an AI researcher, which is obviously a very challenging and interesting field, but not one that's directly connected to the business. NVIDIA is trying to provide a fullstack approach to AI. So as we discover or see these AI technologies become available, we produce SDKs to help activate them or connect them with developers around the world. We have over 150 different SDKs at this point, serving industries from gaming, to design, to life sciences, to earth sciences. We even have stuff to help simulate quantum computing. And of course, all the work we're doing with AI, 5G, and robotics. So we actually just introduced about 65 new updates, just this past month, on all those SDKs. Some of the newer stuff that's really exciting is the large language models. People are building some amazing AI that's capable of understanding the corpus of, like, human understanding. These language models that are trained on literally the content of the internet to provide general purpose or open-domain chatbots, so the customer is going to have a new kind of experience with the computer or the cloud. We're offering those large language models, as well as AI frameworks, to help companies take advantage of this new kind of technology. >> You know, Ian, every time I do an interview with NVIDIA or talk about NVIDIA, my kids and friends, first thing they say is, "Can you get me a good graphics card?" They all want the best thing in their rig. Obviously the gaming market's hot and known for that. But there's a huge software team behind NVIDIA. This is well-known. Your CEO is always talking about it on his keynotes. You're in the software business. And you do have hardware, you are integrating with Graviton and other things. But it's a software practice. This is software. This is all about software. >> Right. >> Can you share, kind of, more about how NVIDIA culture and their cloud culture, and specifically around the scale, I mean, you hit every use case. So what's the software culture there at NVIDIA? >> Yeah, NVIDIA's actually a bigger, we have more software people than hardware people. But people don't often realize this. And in fact, that it's because of, it just starts with the chip, and obviously, building great silicon is necessary to provide that level of innovation. But it's expanded dramatically from there. Not just the silicon and the GPU, but the server designs themselves. We actually do entire server designs ourselves, to help build out this infrastructure. We consume it and use it ourselves, and build our own supercomputers to use AI to improve our products. And then, all that software that we build on top, we make it available, as I mentioned before, as containers on our NGC container store, container registry, which is accessible from AWS, to connect to those vertical markets. Instead of just opening up the hardware and letting the ecosystem develop on it, they can, with the low-level and programmatic stacks that we provide with CUDA. We believe that those vertical stacks are the ways we can help accelerate and advance AI. And that's why we make them so available. >> And programmable software is so much easier. I want to get that plug in for, I think it's worth noting that you guys are heavy hardcore, especially on the AI side, and it's worth calling out. 
Getting back to the customers who are bridging that gap and getting out there, what are the metrics they should consider as they're deploying AI? What are success metrics? What does success look like? Can you share any insight into what they should be thinking about, and looking at how they're doing? >> Yeah. For training, it's all about time-to-solution. It's not the hardware that's the cost, it's the opportunity that AI can provide to your business, and the productivity of those data scientists which are developing them, which are not easy to come by. So what we hear from customers is they need a fast time-to-solution to allow people to prototype very quickly, to train a model to convergence, to get into production quickly, and of course, move on to the next or continue to refine it. >> John Furrier: Often. >> So in training, it's time-to-solution. For inference, it's about your ability to deploy at scale. Often people need to have real-time requirements. They want to run in a certain amount of latency, in a certain amount of time. And typically, most companies don't have a single AI model. They have a collection of them they want to run for a single service or across multiple services. That's where you can aggregate some of your infrastructure. Leveraging the Triton inference server, I mentioned before, can actually run multiple models on a single GPU saving costs, optimizing for efficiency, yet still meeting the requirements for latency and the real-time experience, so that our customers have a good interaction with the AI. >> Awesome. Great. Let's get into the customer examples. You guys have, obviously, great customers. Can you share some of the use cases examples with customers, notable customers? >> Yeah. One great part about working at NVIDIA is, as technology company, you get to engage with such amazing customers across many verticals. Some of the ones that are pretty exciting right now, Netflix is using the G4 instances to do a video effects and animation content from anywhere in the world, in the cloud, as a cloud creation content platform. We work in the energy field. Siemens energy is actually using AI combined with simulation to do predictive maintenance on their energy plants, preventing, or optimizing, onsite inspection activities and eliminating downtime, which is saving a lot of money for the energy industry. We have worked with Oxford University. Oxford University actually has over 20 million artifacts and specimens and collections, across its gardens and museums and libraries. They're actually using NVIDIA GPU's and Amazon to do enhanced image recognition to classify all these things, which would take literally years going through manually, each of these artifacts. Using AI, we can quickly catalog all of them and connect them with their users. Great stories across graphics, across industries, across research, that it's just so exciting to see what people are doing with our technology, together with Amazon. >> Ian, thank you so much for coming on theCUBE. I really appreciate it. A lot of great content there. We probably could go another hour. All the great stuff going on at NVIDIA. Any closing remarks you want to share, as we wrap this last minute up? >> You know, really what NVIDIA's about, is accelerating cloud computing. Whether it be AI, machine learning, graphics, or high-performance computing and simulation. 
And AWS was one of the first with this, in the beginning, and they continue to bring out great instances to help connect the cloud and accelerated computing with all the different opportunities. The integrations with EC2, with SageMaker, with EKS, and ECS. The new instances with G5 and G5 G. Very excited to see all the work that we're doing together. >> Ian Buck, general manager and vice president of Accelerated Computing. I mean, how can you not love that title? We want more power, more faster, come on. More computing. No one's going to complain with more computing. Ian, thanks for coming on. >> Thank you. >> Appreciate it. I'm John Furrier, host of theCUBE. You're watching Amazon coverage re:Invent 2021. Thanks for watching. (bright music)

Published Date : Nov 18 2021


Anahad Dhillon, Dell EMC | CUBE Conversation, October 2021


 

(upbeat music) >> Welcome everybody to this CUBE Conversation. My name is Dave Vellante, and we're here to talk about Object storage and the momentum in the space, and what Dell Technologies is doing to compete in this market. I'm joined today by Anahad Dhillon, who's the Product Manager for Dell EMC's ECS and new ObjectScale products. Anahad, welcome to theCUBE, good to see you. >> Thank you so much Dave. We appreciate you having me and Dell (indistinct), thanks. >> It's always a pleasure to have you guys on, we dig into the products, talk about the trends, talk about what customers are doing. Anahad, before the Cloud, Object was this kind of niche, as we've seen. And you had simple get, put, it was a low cost bit bucket essentially, but that's changing. Tell us some of the trends in the Object storage market that you're observing, and how Dell Technologies sees this space evolving in the future please. >> Absolutely, and you hit it right on, right? Historically, Object storage was considered this cheap and deep place, right? Customers would use this for their backup data, archive data, so cheap and deep, no longer the case, right? As you pointed out, the Object space is now maturing. It's a mature market and we're seeing out there customers using Object for their primary data, for their business critical data. So we're seeing big data analytics use cases. So it's no longer just cheap and deep, now you've got primary workloads and business critical workloads being put on object storage. >> Yeah, I mean. >> And. >> Go ahead please. >> Yeah, I was going to say, it's not only the extent of the workloads being put on, we're also seeing changes in how Object storage is being deployed. So now we're seeing a tighter integration with new deployment models where Object storage, or any storage in general, is being deployed. Our applications are being (indistinct), right? So customers now want Object storage, or storage in general, being orchestrated like they would orchestrate their customer applications. Those are a few key trends that we're seeing out there today.
We're not talking 10, 12 nodes here, we're talking hundreds of nodes. And to provide you that kind of performance, we went to ahead. Now you've got an NVME based offering EXF900 that you can deploy with confidence, run your primary workloads that require high throughput and low latency. We also come November 5th, are releasing our next gen SDS offering, right? This takes the Troven ECS code that our customers are familiar with that provides the resiliency and the security that you guys expect from Dell. We're re platforming it to run on Kubernetes and be orchestrated by Kubernetes. This is what we announced that VMware 2021. If you guys haven't seen that, is going to go on-demand for VMware 2021, search for ObjectScale and you get a quick demo on that. With ObjectScale now, customers can quickly deploy enterprise grade Object storage on their existing environment, their existing infrastructure, things like VMware, infrastructure like VMware and infrastructure like OpenShift. I'll give you an example. So if you were in a VMware shop that you've got vSphere clusters in your data center, with ObjectScale, you'll be able to quickly deploy your Object enterprise grid Object offering from within vSphere. Or if you are an OpenShift customer, right? If you've got OpenShift deployed in your data center and your Red Hat shop, you could easily go in, use that same infrastructure that your applications are running on, deploy ObjectScale on top of your OpenShift infrastructure and make available Object storage to your customers. So you've got the enterprise grade ECS appliance or your high throughput, low latency use cases at scale, and you've got this software defined ObjectScale, which can deploy on your existing infrastructure, whether that's VMware or Red Hat OpenShift. >> Okay, I got a lot of follow up questions, but let me just go back to one of the earlier things you said. So Object was kind of cheap, deep and slow, but scaled. And so, your step one was metadata caching. Now of course, my understanding is with Object, the metadata and the data within the object. So, maybe you separated that and made it high performance, but now you've taken the next step to bring in NVME infrastructure to really blow away all the old sort of scuzzy latency and all that stuff. Maybe you can just educate us a little bit on that if you don't mind. >> Yeah, absolutely. Yeah, that was exactly the stepped approach that we took. Even though metadata is tightly integrated in Object world, in order to read the actual data, you still got to get to the metadata first, right? So we would cache the metadata into SSDs reducing that lookup that happens for that metadata, right? And that's why it gave you the performance benefit. But because it was just tied to metadata look-ups, the performance for larger objects stayed the same because the actual data read was still happening from the hard drives, right? With the new EXF900 which is all NVME based, we've optimized the our ECS Object code leveraging VME, data sitting on NVME drives, the internet connectivity, the communication is NVME over fabric, so it's through and through NVME. Now we're talking milliseconds and latency and thousands and thousands of transactions per second. >> Got it, okay. So this is really an inflection point for Objects. So these are pretty interesting times at Dell, you got the cloud expanding on prem, your company is building cloud-like capabilities to connect on-prem to the cloud across cloud, you're going out to the edge. 
As it pertains to Object storage though, it sounds like you're taking a sort of two-product approach to your strategy. Why is that, and can you talk about the go-to-market strategy in that regard? >> Absolutely, and yeah, good observation there. So yes and no. We continue to invest in ECS. ECS continues to be the product of choice when a customer wants that traditional appliance deployment model. This is a single-hand-to-shake model where everything, from your hardware to your object solution software, is all provided by Dell. ECS continues to be the product where customers are looking for that high-performance, fine-tuned appliance use case. ObjectScale comes into play when the needs are software defined, when you need to deploy the storage solution on top of the same infrastructure that your applications run on, right? So yes, in the short term, in the interim, it's a two-product approach, with both products taking on a very distinct use case. However, in the long term, we're merging the two code streams. So in the long term, if you're an ECS customer and you're running ECS, you will have an in-place data upgrade to ObjectScale. So we're not talking forklift upgrades, we're not talking about adding additional servers and doing a data migration; it's a code upgrade. And I'll give you an example: today on ECS, we're at code version 3.6, right? And we've got a roadmap where 3.7 is coming out later this year. So from 3.X, customers will upgrade the code in place. Let's call it 4.0, right? And that brings them up to ObjectScale. So there are no nodes left behind; there's an in-place code upgrade from ECS to ObjectScale, merging the two code streams in the long term. Single code long term, two products short term, each solving very distinct use cases. >> Okay, let me follow up, put on my customer hat. And I'm hearing that you can tell us with confidence that irrespective of whether a customer invested in ECS or ObjectScale, you're not going to put me into a dead end. Every customer is going to have a path forward as long as their ECS code is up to date, is that correct? >> Absolutely, exactly, and very well put, yes. No nodes left behind, investment protection, whether you've got ECS today, or you want to invest in ECS or ObjectScale in the future, correct. >> Talk a little bit more about ObjectScale. I'm interested in kind of what's new there, what's special about this product. Is there unique functionality that you're adding to the product? What differentiates it from other Object stores? >> Absolutely, my pleasure. Yeah, so I'll start by reiterating that ObjectScale is built on that proven ECS code, right? It's the enterprise-grade reliability and security that our customers expect from Dell EMC, right? Now we're re-platforming ECS to allow ObjectScale to be Kubernetes native, right? So we're leveraging that microservices-based architecture, leveraging the native orchestration capabilities of Kubernetes, things like resource isolation or seamless (indistinct), I'm sorry, load balancing and things like that, right? So the built-in native capabilities of Kubernetes. ObjectScale is also built with scale in mind, right? So it delivers limitless scale. So you could start with terabytes and then go up to petabytes and beyond. 
So unlike other file system-based Object offerings, ObjectScale doesn't put a limit on your number of object stores, number of buckets, or number of objects you store; it's limitless. As long as you can provide the hardware resources under the covers, the software itself is limitless. It allows our customers to start small, so you could start as small as three nodes and grow the environment as your business grows, right? Hundreds of nodes. With ObjectScale, you can deploy workloads at public cloud-like scale, but with the reliability and control of a private cloud, right? So it's in your own data center. And ObjectScale is S3 compliant, right? So it delivers enterprise features like global replication and native multi-tenancy, fueling everything from dev/test sandboxes to globally distributed data, right? So you've got built-in ObjectScale replication that allows you to place your data anywhere you've got ObjectScale (indistinct), from edge to core to data center. >> Okay, so it fits into the Kubernetes world. I call it Kubernetes compatible. The key there is automation, because that's the whole point of containers, right? It allows you to deploy as many apps as you need to, wherever you need to, in as many instances, and then do rolling updates, have the same security, same API, all that level of consistency. So that's really important. That's how modern apps are being developed. We're in a new age here. It's no longer about the machines, it's about infrastructure as code. So once ObjectScale is generally available, which I think is soon, I think it's this year, what should customers do, what's their next step? >> Absolutely, yeah, it's coming out November 2nd. Reach out to your Dell representatives, right? Get an in-depth demo on ObjectScale. Better yet, get a POC, right? Get a proof of concept, have it set up in your data center and play with it. You can also download the free, full-featured community edition. We're going to have a community edition that's free up to 30 terabytes of usage, and it's full featured. Download that, play with it. If you like it, you can upgrade that free community edition to the licensed paid version. >> And you said that's full featured. You're not neutering the community edition? >> Exactly, absolutely, it's full featured. >> Nice, that's a great strategy. >> We're confident in what we're delivering, and we want you guys to play with it without having your money tied up. >> Nice, I mean, that's the model today. Gone are the days where you've got to get new customers in a headlock; they want to try before they buy. So that's a great little feature. Anahad, thanks so much for joining us on theCUBE. Sounds like it's been a very busy year and it's going to continue to be so. Looking forward to seeing what's coming out with ECS and ObjectScale and seeing those two worlds come together, thank you. >> Yeah, absolutely, it was a pleasure. Thank you so much. >> All right, and thank you for watching this CUBE Conversation. This is Dave Vellante, we'll see you next time. (upbeat music)
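Since ObjectScale and ECS expose an S3-compatible API, standard S3 tooling can generally be pointed at them by overriding the endpoint. Below is a minimal sketch in Python with boto3; the endpoint URL, credentials, and bucket name are illustrative assumptions, not values from the interview.

```python
# Hedged sketch: exercising an S3-compatible object store (ECS/ObjectScale)
# with boto3. Endpoint, credentials, and bucket name are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objectscale.example.internal:9021",  # hypothetical endpoint
    aws_access_key_id="EXAMPLE_ACCESS_KEY",                    # placeholder credential
    aws_secret_access_key="EXAMPLE_SECRET_KEY",                # placeholder credential
)

bucket = "analytics-data"        # hypothetical bucket name
s3.create_bucket(Bucket=bucket)  # buckets behave like standard S3 buckets

# Write and read an object with the same calls an application would use
# against AWS S3, which is the point of S3 compatibility.
s3.put_object(Bucket=bucket, Key="events/sample.json", Body=b'{"ok": true}')
obj = s3.get_object(Bucket=bucket, Key="events/sample.json")
print(obj["Body"].read())
```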

Published Date : Oct 5 2021

SUMMARY :

Dave Vellante talks with Anahad Dhillon of Dell EMC about the object storage market moving beyond cheap-and-deep backup and archive to primary, business-critical workloads. Dhillon recaps how ECS added SSD metadata caching and then the all-NVMe EXF900 appliance for high-throughput, low-latency use cases, and previews ObjectScale, a Kubernetes-native, software-defined re-platforming of the proven ECS code that deploys on existing VMware or Red Hat OpenShift infrastructure. He explains that ECS and ObjectScale will serve distinct needs in the short term while the two code streams merge over time, with an in-place upgrade path and no nodes left behind, and points customers to the upcoming release and a free, full-featured community edition.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave Vellante | PERSON | 0.99+
Dave | PERSON | 0.99+
November 5th | DATE | 0.99+
Dell | ORGANIZATION | 0.99+
Anahad Dhillon | PERSON | 0.99+
October 2021 | DATE | 0.99+
November 2nd | DATE | 0.99+
2020 | DATE | 0.99+
two products | QUANTITY | 0.99+
EMC | ORGANIZATION | 0.99+
Anahad | PERSON | 0.99+
ObjectScale | TITLE | 0.99+
VMware 2021 | TITLE | 0.99+
today | DATE | 0.99+
thousands | QUANTITY | 0.99+
vSphere | TITLE | 0.99+
both products | QUANTITY | 0.99+
two product | QUANTITY | 0.99+
Dell Technologies | ORGANIZATION | 0.99+
both | QUANTITY | 0.99+
Dell EMC | ORGANIZATION | 0.99+
early 2020 | DATE | 0.98+
OpenShift | TITLE | 0.98+
step one | QUANTITY | 0.98+
this year | DATE | 0.98+
hundreds of nodes | QUANTITY | 0.98+
two code streams | QUANTITY | 0.98+
ECS | TITLE | 0.97+
12 nodes | QUANTITY | 0.97+
single code | QUANTITY | 0.97+
one | QUANTITY | 0.97+
Kubernetes | TITLE | 0.97+
10 | QUANTITY | 0.96+
4.0 | OTHER | 0.96+
Red Hat OpenShift | TITLE | 0.95+
3.6 | OTHER | 0.95+
Dell Technology | ORGANIZATION | 0.94+
S3 | TITLE | 0.92+
Hundreds of notes | QUANTITY | 0.92+
two worlds | QUANTITY | 0.92+
EXF900 | COMMERCIAL_ITEM | 0.92+
up to 30 terabytes | QUANTITY | 0.91+
ObjectScale | ORGANIZATION | 0.91+
ECS 3.X | TITLE | 0.91+
petabytes | QUANTITY | 0.89+
VMware | TITLE | 0.89+
first | QUANTITY | 0.87+
3.X | TITLE | 0.87+
Dev Test Sandbox | TITLE | 0.87+
ECS | ORGANIZATION | 0.86+
Red Hat | TITLE | 0.84+


Duncan Lennox | AWS Storage Day 2021


 

>> Welcome back to theCUBE's continuous coverage of AWS Storage Day. We're in beautiful downtown Seattle in the great Northwest. My name is Dave Vellante, and we're going to talk about file systems. File systems are really tricky, and making those file systems elastic is even harder. They've got a long history of serving a variety of use cases. With me is Duncan Lennox, who's the general manager of Amazon Elastic File System. Duncan, good to see you again. >> Good to see you, Dave. >> So tell me more about, specifically, Amazon's Elastic File System, EFS. You've got a broad file portfolio, but let's narrow in on that. What do we need to know? >> Yeah, well, Amazon Elastic File System, or EFS as we call it, is our simple, serverless, set-and-forget elastic file system service. What we mean by that is we deliver something that's extremely simple for customers to use. There aren't a lot of knobs and levers they need to turn or pull to make it work or manage it on an ongoing basis. The serverless part of it is that there's absolutely no infrastructure for customers to manage; we handle that entirely for them. The elastic part, then, is that the file system automatically grows and shrinks as they add and delete data. So they never have to provision storage or risk running out of storage, and they pay only for the storage they're actually using. >> What are the sort of use cases and workloads that you see EFS supporting? >> Yeah, it has to support a broad set of customer workloads. So it's everything from, you know, serial, highly latency-sensitive applications that customers might be running on-prem today and want to move to the AWS cloud, up to massively parallel scale-out workloads that they have as well. >> So, okay. Are there any industry patterns that you see around that? Are there industries that sort of lean in more, or is it more across the board? >> We see it across the board, although I'd have to say that we see a lot of adoption within compliance and regulated industries. And a lot of that is because of not only our simplicity, but the high levels of availability and durability that we bring to the file system as well. The data is designed for 11 nines of durability, so essentially you don't need to be worrying about anything happening to your data. And it's a regional service, meaning that your file system is available from all availability zones in a particular region, for high availability. >> So as part of Storage Day, we saw some new tiering announcements. What can you tell us about those? >> Super excited to be announcing EFS Intelligent-Tiering. This is a capability that we're bringing to EFS that allows customers to automatically get the best of both worlds and get cost optimization for their workloads. How it works is the customer can select, using our lifecycle management capability, a policy for how long they want their data to remain active in one of our active storage classes: seven days, for example, or 30 days. And what we do is automatically monitor every access to every file they have. And if we see no access to a file for their policy period, like seven days or 30 days, we automatically and transparently move that file to one of our cost-optimized storage classes. So they can save up to 92% on their storage costs. 
One of the really cool things about intelligent tiering, then, is that if that data ever becomes active again and their workload or their application or their users need to access it, it's automatically moved back to a performance-optimized storage class, and this is all completely transparent to their applications and users. >> So how does that work? Are you using some kind of machine intelligence to sort of monitor things and just learn over time? And what if my policy, what if I don't get it quite right? Or maybe I have some quarter-end, or maybe twice a year, you know, I need access to that. Can the system help me figure that out? >> Yeah. The beauty of it is you don't need to know how your application or workload is accessing the file system, or worry about those access patterns changing. We'll take care of monitoring every access to every file and move the file either to the cost-optimized storage class or back to the performance-optimized class, as needed by your application. >> And the optimized storage class is, again, selected by the system. I don't have to. >> That's right. It's completely transparent, so we will take care of that for you. You'll set the policy by which you want active data to be moved to the infrequent access, cost-optimized storage class, like 30 or seven days. And then you can set a policy that says if that data is ever touched again, move it back to the performance-optimized storage class. That all happens automatically by the service on our side. You don't need to do anything. >> It's serverless, which means I don't have to provision any compute infrastructure. >> That's right. What you get is an endpoint, the ability to mount your file system using NFS. You can also manage your file system from any of our compute services in AWS. So not only directly on an instance, but also from our serverless compute models like AWS Lambda and Fargate, and from our container services like ECS and EKS. And all of the infrastructure is completely managed by us. You don't see it, you don't need to worry about it, we scale it automatically for you. >> What was the catalyst for all this? I mean, you know, you've got to tell me it's customers, but maybe you could give me some insight and add some color. Like, how did you decode what the customers were saying? Did you get inputs from a lot of different places, you know, and you had to put that together and shape it? Tell us, take us inside how you came to where you are today. >> Well, you know, I guess at the end of the day, when you think about storage, and particularly file system storage, customers always want more performance and they want lower costs. So we're constantly optimizing on both of those dimensions: how can we find a way to deliver more value and lower cost to customers, but also meet the performance needs that their workloads have? And what we found in talking to customers, particularly the customers that EFS targets, is that they're application administrators, they're DevOps practitioners, they're data scientists; they have a job they want to do. They're not typically storage specialists. They don't want to have to know or learn a lot about the bowels of storage architecture and how to optimize for what their applications need. They want to focus on solving the business problems they're focused on, whatever those are. >> Meaning, for instance... So you take tiering, that's obvious. 
You're tiering to lower-cost storage. Serverless: I'm not provisioning, you know, servers myself; with the system I'm just paying for what I use. The elasticity is a factor, so I'm not having to over-provision. And I think I'm hearing I don't have to spend my time turning knobs. You've talked about that before, because I don't know how much time is spent, you know, tuning systems, but it's got to be at least 15 to 20% of the storage admin's time. You're eliminating that as well. Is that what you mean by sort of cost optimization? >> Absolutely. So we're providing the scale, capacity, and performance that customer applications need, as they need it, without the customer needing to know exactly how to configure the service to get what they need. We're dealing with changing workloads and changing access patterns, and we're optimizing their storage costs at the same time. >> When you guys step back and get the whiteboard out and say, okay, what's the north star that we're working toward? Because, you know, you set the north star, you don't want to keep revisiting that, right? It's, we're moving in this direction; how we get there might change. But what's your north star? Where do you see the future? >> Yeah, it's really all about delivering simple file system storage that just works. And that sounds really easy, but there's a lot of nuance and complexity behind it. Customers don't want to have to worry about how it works; they just need it to work. And our goal is to deliver that for a super broad cross-section of applications, so that customers don't need to worry about how they performance-tune or how they cost-optimize. We deliver that value for them. >> Yeah. So I'm going to actually follow up on that, because I feel like, you know, when you listen to Werner Vogels talk, he takes you inside the plumbing sometimes. So what is that? Because you're right, it sounds simple, but it's not. And as I said up front, file systems, getting that right is really, really challenging. So technically, what are the challenges? Is it doing this at scale and having a consistent experience for customers? >> There's always a challenge to doing what we do at scale. I mean, the elasticity is something that we provide to our customers, but ultimately we have to take their data as bits and put them onto atoms at some point. So we're managing infrastructure on the back end to support that. And we also have to do that in a way that delivers something that's cost-effective for customers. So there's a balance and a natural tension there between things like elasticity and simplicity, performance, cost, availability, and durability, and getting that balance right, and being able to cover the maximum cross-section of all those things for the widest set of workloads. We see that as our job, and we're delivering value, and we're doing that for our customers. >> And of course, that was a big part of it. And of course, when we talk about, you know, taking away the need for tuning, you've got to get it right. I mean, you can't optimize for every single use case, right? But you can give enough granularity to allow those use cases to be supported. And that seems to be sort of the balancing act that you guys strike. >> Well, absolutely. It's focused on being a general-purpose file system that's going to work for a broad cross-section of applications and workloads. >> Right, right. And that's what customers want. 
You know, generally speaking, you go after that. Duncan, I'll give you the last word. >> I'd just encourage people to come and try out EFS. It's as simple as a single click in our console to create a file system and get started. So come give it a try, hit the button. >> Duncan, thanks so much for coming back to theCUBE. It's great to see you again. >> Thanks, Dave. >> All right. And keep it right there for more great content from AWS Storage Day from Seattle.
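The tiering behavior Duncan describes above, moving a file to a cost-optimized class after a period without access and bringing it back to the performance-optimized class once it is touched again, is expressed through EFS lifecycle policies. A minimal sketch with boto3 follows; the region, tags, and policy values are illustrative assumptions, and the available options should be checked against the current EFS documentation.

```python
# Hedged sketch: create an EFS file system and attach lifecycle policies that
# approximate the intelligent tiering behavior described above. Region and tag
# values are placeholders, not recommendations.
import boto3

efs = boto3.client("efs", region_name="us-east-1")

fs = efs.create_file_system(
    PerformanceMode="generalPurpose",
    ThroughputMode="bursting",
    Tags=[{"Key": "Name", "Value": "demo-efs"}],
)
fs_id = fs["FileSystemId"]

# Move files to the Infrequent Access class after 30 days without access,
# and move them back to the primary class the first time they are read again.
efs.put_lifecycle_configuration(
    FileSystemId=fs_id,
    LifecyclePolicies=[
        {"TransitionToIA": "AFTER_30_DAYS"},
        {"TransitionToPrimaryStorageClass": "AFTER_1_ACCESS"},
    ],
)

print(efs.describe_lifecycle_configuration(FileSystemId=fs_id))
```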

Published Date : Sep 2 2021

SUMMARY :

Dave Vellante talks with Duncan Lennox, general manager of Amazon Elastic File System, at AWS Storage Day in Seattle. Lennox describes EFS as simple, serverless, set-and-forget file storage that grows and shrinks automatically, charges only for what is used, and is designed for 11 nines of durability across availability zones. He explains the new EFS Intelligent-Tiering capability, which uses lifecycle policies to move files that have not been accessed for a set period to a cost-optimized storage class, saving up to 92%, and transparently moves them back when they become active again. The conversation closes on EFS's goal of being a general-purpose file system that just works, without customers having to tune or manage infrastructure.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave Vellante | PERSON | 0.99+
Dave | PERSON | 0.99+
Seattle | LOCATION | 0.99+
Duncan Lennox | PERSON | 0.99+
seven days | QUANTITY | 0.99+
30 days | QUANTITY | 0.99+
Amazon | ORGANIZATION | 0.99+
30 | QUANTITY | 0.99+
Werner Vogels | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
Today | DATE | 0.99+
11 nines | QUANTITY | 0.98+
both | QUANTITY | 0.98+
up to 92% | QUANTITY | 0.97+
both worlds | QUANTITY | 0.97+
Dunkin | PERSON | 0.97+
one | QUANTITY | 0.97+
today | DATE | 0.97+
twice a year | QUANTITY | 0.94+
20% | QUANTITY | 0.93+
AWS | EVENT | 0.92+
single click | QUANTITY | 0.88+
single use case | QUANTITY | 0.85+
Lambda | TITLE | 0.77+
ECS | TITLE | 0.71+
day | EVENT | 0.7+
EFS | TITLE | 0.65+
Storage Day 2021 | EVENT | 0.61+
north | ORGANIZATION | 0.6+
Duncan | PERSON | 0.59+
north star | ORGANIZATION | 0.57+
at least 15 | QUANTITY | 0.56+
EKS | TITLE | 0.53+
Adams | PERSON | 0.43+

Ed Naim & Anthony Lye | AWS Storage Day 2021


 

(upbeat music) >> Welcome back to AWS Storage Day. This is theCUBE's continuous coverage. My name is Dave Vellante, and we're going to talk about file storage. 80% of the world's data is unstructured, and most of that is in file format. Devs want infrastructure as code. They want to be able to provision and manage storage through an API, and they want that cloud agility. They want to be able to scale up, scale down, pay by the drink. And the big news of Storage Day was really the partnership, the deep partnership, between AWS and NetApp. With me to talk about that are Ed Naim, who's the general manager of Amazon FSx, and Anthony Lye, executive vice president and GM of public cloud at NetApp. Two CUBE alums. Great to see you guys again. Thanks for coming on. >> Thanks for having us. >> So Ed, let me start with you. You launched FSx in 2018 at re:Invent. How is it being used today? >> Well, we've talked about FSx on theCUBE before, Dave, but let me start by recapping that FSx makes it easy to launch and run fully managed, feature-rich, high-performance file storage in the cloud. And we built FSx from the ground up to have the reliability, the scalability you were talking about, and the simplicity to support a really wide range of workloads and applications. With FSx, customers choose the file system that powers their file storage, with full access to that file system's feature set, performance profile, and data management capabilities. And since re:Invent 2018, when we launched the service, we've offered two file system choices for customers. So the first was Windows File Server, and that's really storage built on top of Windows Server, designed as a really simple solution for Windows applications that require shared storage. And then Lustre, which is an open source file system that's the world's most popular high-performance file system. And the Amazon FSx model has really resonated strongly with customers for a few reasons. So first, for customers who currently manage network-attached storage, or NAS, on premises, it's such an easy path to move their applications and their application data to the cloud. FSx works and feels like the NAS appliances that they're used to, but added to all of that are the benefits of a fully managed cloud service. And second, for builders developing modern new apps, it helps them deliver fast, consistent experiences for Windows and Linux in a simple and agile way. And then third, for research scientists, its storage performance and its capabilities for dealing with data at scale really make it a no-brainer storage solution. And so as a result, the service is being used for a pretty wide spectrum of applications and workloads across industries. So I'll give you a couple of examples. There's this class of what we call common enterprise IT use cases, so think of things like end-user file shares, corporate IT applications, content management systems, highly available database deployments. And then there's a variety of common line-of-business and vertical workloads that are running on FSx as well. So financial services, there's a lot of modeling and analytics workloads; life sciences, a lot of genomics analysis; media and entertainment, rendering and transcoding and visual effects; automotive, we have a lot of electronic control unit simulations and object detection; semiconductor, a lot of EDA, electronic design automation; and then oil and gas, seismic data processing, a pretty common workload on FSx. 
And then there's a class of really ultra-high-performance workloads that are running on FSx as well. Think of things like big data analytics, so SAS Grid is a common application; a lot of machine learning model training; and then a lot of what people would consider traditional or classic high-performance computing, or HPC. >> Great. Thank you for that. Just a quick follow-up if I may, and I want to bring Anthony into the conversation. So why NetApp? And this is not a Barney deal; this wasn't just, you know, I love you, you love me, we do a press release. There was real elbow grease going into this. But why NetApp? Why ONTAP? Why now? (momentary silence) Ed, that was to you. >> Was that a question for Anthony? >> No, for you, Ed. And then I want to bring Anthony in. >> Oh, sure. Sorry. Okay. Sure. Yeah, I mean, Dave, it really stemmed from both companies realizing a combined offering would be highly valuable to, and impactful for, customers. In reality, we started collaborating, Amazon and NetApp, on the service probably about two years ago. And we really had a joint vision that we wanted to provide AWS customers with the full power of ONTAP: the complete ONTAP, with every capability and with ONTAP's full performance, but fully managed and offered as a full-blown AWS-native service. What that would mean is that customers get all of ONTAP's benefits along with the simplicity, the agility, the scalability, the security, and the reliability of an AWS service. >> Great. Thank you. So Anthony, I have watched NetApp reinvent itself: it started in workstations, I saw you go into the enterprise, I saw you lean into virtualization, and you told me, at least two years, it might've been three years ago, "Dave, we are going all in on the cloud. We're going to lead this next chapter." And so I want you to bring in your perspective. You're reinventing NetApp yet again, you know, what are your thoughts? >> Well, you know, NetApp and AWS have had a very long relationship. I think it probably dates back about nine years now. And what we really wanted to do at NetApp was give the most important constituent of all an experience that helped them progress their business. So ONTAP, you know, the industry's leading shared storage platform, we wanted to make sure that in AWS it was as good as it was on premises. We love the idea of giving customers this wonderful concept of symmetry. You know, ONTAP runs the biggest applications in the largest enterprises on the planet, and we wanted to give not just those customers an opportunity to embrace the Amazon cloud, but we wanted to also extend the capabilities of ONTAP, through FSx, to a new customer audience: maybe those smaller companies that didn't really purchase on-premises infrastructure, people that were born in the cloud. And of course, this gives us a great opportunity to present a fully managed ONTAP within the FSx platform to a lot of non-NetApp customers, to our competitors' customers, Dave, that frankly haven't done the same as we've done. And I think we are the beneficiaries of it, and we're in turn passing that innovation, that transformation, on to the customers and the partners. >> You know, one key aspect here is that it's a managed service. I don't think that can be, you know, overstated. And the other is the cloud nativeness of this. Anthony, you mentioned it: being in the marketplace is great, but there's some serious engineering going on here. So Ed, maybe start with the perspective of a managed service. 
I mean, what does that mean, the whole ball of wax? >> Yeah. I mean, what it means to a customer is they go into the AWS console, or they go to the AWS SDK or the AWS CLI, and they're easily able to provision a resource, provision a file system, and it automatically gets built for them. And there's nothing that they need to do at that point; they get an endpoint that they have access to the file system from, and that's it. We handle patching, we handle all of the provisioning, we handle any hardware replacements that might need to happen along the way. Everything is fully managed. So the customer really can focus not on managing their file system, but on doing all of the other things that they want to do and that they need to do. >> So Anthony, in a way you're disrupting yourself, which is kind of what you told me a couple of years ago: you're not afraid to do that, because if we don't do it, somebody else is going to do it. Because in the old days, you're used to selling a box and you say, we'll see you next time, you know, in three or four years. So from your customer's standpoint, what's their reaction to this notion of a managed service, and what does it mean to NetApp? >> Well, I think the most important thing it does is it gives them investment protection. The wonderful thing about what we've built with Amazon in the FSx profile is it's a complete ONTAP. And so an ONTAP cluster on premises can immediately see and connect to an ONTAP environment under FSx. We can then establish various different connectivities. We can use SnapMirror technologies for disaster recovery. We can use efficient data transfer for things like dev/test and backup. And of course, the wonderful thing that we've done, where we've gone above and beyond what anybody else has done, is we want to make sure that the actual primary application itself, one that was sort of built using NAS in an on-premises environment, SAP and Oracle, et cetera, as Ed said, that we can move those over and have the confidence to run the application with no changes in an Amazon environment. So what we've really done, I think, for customers, the NetApp customers and the non-NetApp customers, is we've given them an enterprise-grade shared storage platform that's as good in the Amazon cloud as it was in an on-premises data center. And that's something that's very unique to us. >> Can we talk a little bit more about those use cases? You know, both of you. What are you seeing as some of the more interesting ones that you can share? Ed, maybe you can start. >> Yeah, happy to. The customer discussions that we've been in have really highlighted four use cases that customers are telling us they'll use the service for. So maybe I'll cover two, and maybe Anthony can cover the other two. So the first is application migrations. Customers are increasingly looking to move their applications to AWS, and a lot of those applications work with file storage today. And so we're talking about applications like SAP. We're talking about relational databases like SQL Server and Oracle. We're talking about vertical applications like Epic in the healthcare space, as another example. Lots of media and entertainment rendering, transcoding, and visual effects workflows require Windows, Linux, and macOS access to the same set of data. And what application administrators really want is the easy button. 
They want fully featured file storage that has the same capabilities and the same performance that their applications are used to, has extremely high availability and durability, and can easily enable them to meet compliance and security needs with a robust set of data protection and security capabilities. And I'll give you an example: Accenture has told us that a key obstacle their clients face when migrating to the cloud is potentially re-architecting their applications to adopt new technologies, and they expect that Amazon FSx for NetApp ONTAP will significantly accelerate their customers' migrations to the cloud. Then a second one is storage migrations. So storage admins are increasingly looking to extend their on-premises storage to the cloud. And why they want to do that is they want to be more agile, and they want to be responsive to growing data sets and growing workload needs. They want elastic capacity. They want the ability to spin up and spin down. They want easy disaster recovery across geographically isolated regions. They want the ability to change performance levels at any time. So all of this goodness that they get from the cloud is what they want. And more and more of them are also looking to make their company's data accessible to cloud services for analytics and processing: services like ECS and EKS and WorkSpaces and AppStream and VMware Cloud and SageMaker, and orchestration services like ParallelCluster and AWS Batch. But while they want all these cloud benefits, at the same time they have established data management workflows, and they've built processes and automation leveraging the APIs and capabilities of on-prem NAS appliances. It's really tough for them to just start from scratch with that stuff. So this offering provides them the best of both worlds: they get the benefits of the cloud with the NAS data management capabilities that they're used to. >> Right. >> Ed: So Anthony, maybe, do you want to talk about the other two? >> Well, you know, first and foremost, you heard from Ed earlier on the FSx sort of construct and how successful it's been. And one of the real reasons it's been so successful is that it takes advantage of all of the latest storage technologies, compute technologies, networking technologies. What's great is all of that's hidden from the user. What FSx does is deliver a service. And what that means for an ONTAP customer is you're going to have ONTAP with an SLA and an SLM. You're going to have hundreds of thousands of IOPS available to you and sub-millisecond latencies. What's also really important is that the design for FSx for NetApp ONTAP was really to provide consistency on the NetApp API and to provide full access to ONTAP from the Amazon console, the Amazon SDK, or the Amazon CLI. So in this case, you've got this wonderful benefit of sort of the 29 years of innovation of NetApp combined with all the innovation of AWS, all presented consistently to a customer. What Ed said, which I'm particularly excited about, is customers will see this just as they see any other AWS service. So if they want to use ONTAP in combination with some incremental compute resources, maybe with their own encryption keys, maybe with directory services, or they may want to use it with other services like SageMaker, all of those things are immediately exposed to Amazon FSx for NetApp ONTAP. We do some really intelligent things just in the storage layer. So, for example, we do intelligent tiering. 
So the customer is constantly getting sort of the best TCO. What that means is we're using Amazon's S3 storage as a tier, so that we can move cold data off of the primary file system to give the customer the optimal capacity and the optimal throughput, while maintaining the integrity of the file system. It's the same with backup, it's the same with disaster recovery, whether we're operating in a hybrid AWS cloud, or we're operating in an AWS region, or across regions. >> Well, thank you. I think this announcement is a big deal for a number of reasons. First of all, it's the largest market. Like you said, you're the gold standard; I'll give you that, Anthony, because you guys earned it. And so it's a large market, but previously you always had to make trade-offs. Either I could do file in the cloud, but I didn't get the rich functionality that, you know, NetApp's mature stack brings, or, you know, you could have wrapped your stack in a Kubernetes container and thrown it into the cloud and hosted it there. But now that it's a managed service, presumably you're taking advantage underneath; as I say, my inference is there's some serious engineering going on here. You're taking advantage of some of the cloud-native capabilities, yeah, maybe it's the different, you know, EC2 instance types, but also being able to bring in, we're entering a new data era, with machine intelligence and other capabilities that we really didn't have access to last decade. So I want to close with, you know, giving you guys the last word. Maybe each of you could give me your thoughts on how you see this partnership in the future, particularly from a customer standpoint. Ed, maybe you could start, and then Anthony, you can bring us home. >> Yeah, well, Anthony and I and our teams have gotten to know each other really well in ideating around what this experience would be and then building the product. And we have this common vision that it is something that's going to really move the needle for customers: providing the full ONTAP experience with the power of a native AWS service. So we're really excited. We're in this for the long haul together. We've partnered on everything from engineering to product management to support, like the full thing. This is a co-owned effort, a joint effort backed by both companies. And we have, I think, a pretty remarkable product on day one, one that I think is going to delight customers. And we have a really rich roadmap that we're going to be building together over the years. So I'm excited about getting this in customers' hands. >> Great, thank you. Anthony, bring us home. >> Well, you know, it's one of those sorts of rare chances where you get to do something with Amazon that no one's ever done. You know, we're sort of sitting on the inside, we are a peer of theirs, and we're able to develop at very high speeds in combination with them to release continuously to the customer base. So what you're going to see here is rapid innovation. You're going to see a whole host of new services, services that NetApp develops, services that Amazon develops. And then the whole ecosystem is going to have access to this, whether they're historically built on the NetApp APIs or increasingly built on the AWS APIs. I think you're going to see orchestrations. I think you're going to see the capabilities expand the overall opportunity for AWS to bring enterprise applications over. 
For me personally, Dave, you know, I've demonstrated yet again to the NetApp customer base how much we care about them and their future. Selfishly, you know, I'm looking forward to telling the story to my competitors' customer base, because they haven't done it. So, you know, I think we've been bold. I think we've been committed. As you said, three and a half years ago I promised you that we were going to do everything we possibly could. You know, people always ask, what's the real benefit of this? And at the end of the day, customers and partners will be the real winners. This innovation, this sort of as-a-service model, I think, is going to expand our market and allow our customers to do more with Amazon than they could before. It's one of those rare cases, Dave, where I think one plus one equals about seven, really. >> I love the vision, and I'm excited to see the execution. Ed and Anthony, thanks so much for coming back on theCUBE. Congratulations on getting to this point, and good luck. >> Anthony and Ed: Thank you. >> All right. And thank you for watching, everybody. This is Dave Vellante for theCUBE's continuous coverage of AWS Storage Day. Keep it right there. (upbeat music)
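Because FSx for NetApp ONTAP is surfaced as a native AWS service, provisioning goes through the standard FSx APIs that Ed and Anthony reference (console, SDK, CLI) rather than appliance tooling. The sketch below uses boto3; the subnet IDs, capacity, and throughput values are placeholders chosen for illustration, and the full parameter set should be verified against the FSx documentation.

```python
# Hedged sketch: provisioning an Amazon FSx for NetApp ONTAP file system
# through the regular FSx API. Subnet IDs, sizing, and throughput are
# illustrative placeholders, not recommendations.
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

resp = fsx.create_file_system(
    FileSystemType="ONTAP",
    StorageCapacity=1024,                               # GiB, illustrative
    SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],   # placeholder subnets
    OntapConfiguration={
        "DeploymentType": "MULTI_AZ_1",
        "ThroughputCapacity": 512,                      # MB/s, illustrative
        "PreferredSubnetId": "subnet-aaaa1111",
    },
    Tags=[{"Key": "Name", "Value": "demo-fsx-ontap"}],
)

print(resp["FileSystem"]["FileSystemId"])
```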

Published Date : Sep 2 2021

SUMMARY :

And the big news of storage So Ed, let me start with you. And the Amazon FSX model has into the conversation. I want to bring Anthony in. and NetApp on the service And so, I want you to in the largest enterprises on the planet. And the other is that the cloud all of the provisioning, You're not afraid to do that that the actual primary of the more interesting ones and maybe Anthony can cover the other two. of IOPS available to you and First of all, it's the largest market. really move the needle for Great, thank you. the story to my competitors, for coming back in the Cube. This is Dave Vellante for the

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave | PERSON | 0.99+
Anthony | PERSON | 0.99+
Dave Vellante | PERSON | 0.99+
Anthony Lye | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
Ed | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
Ed Naim | PERSON | 0.99+
two | QUANTITY | 0.99+
NetApp | ORGANIZATION | 0.99+
29 years | QUANTITY | 0.99+
FSX | TITLE | 0.99+
Barney | ORGANIZATION | 0.99+
ONTAP | TITLE | 0.99+
one | QUANTITY | 0.99+
both | QUANTITY | 0.99+
three | QUANTITY | 0.99+
80% | QUANTITY | 0.99+
both companies | QUANTITY | 0.99+
NetApp | TITLE | 0.99+
four years | QUANTITY | 0.99+
Linux | TITLE | 0.99+
Windows | TITLE | 0.99+
MSX | ORGANIZATION | 0.99+
Oracle | ORGANIZATION | 0.99+
first | QUANTITY | 0.99+

Sandy Carter | AWS Global Public Sector Partner Awards 2021


 

(upbeat music) >> Welcome to the special CUBE presentation of the AWS Global Public Sector Partner Awards Program. I'm here with the leader of the partner program, Sandy Carter, Vice President, AWS, Amazon Web Services, @Sandy_Carter on Twitter, prolific on social and a great leader. Sandy, great to see you again. And congratulations on this great program we're having here. In fact, thanks for coming out for this keynote. >> Well, thank you, John, for having me. You guys always talk about the coolest things. So we had to be part of it. >> Well, one of the things that I've been really loving about the success of public sector, as we talked about before, is that as we start coming out of the pandemic, it's becoming very clear that the cloud has helped a lot of people, and your team has done amazing work. I just want to give you props for that and say congratulations, and what a great time to talk about the winners. Because everyone's been working really hard in public sector because of the pandemic. The internet didn't break. And everyone stepped up with cloud scale and solved some problems. So take us through the award winners and talk about them. Give us an overview of what it is, the criteria, and all the specifics. >> Yeah, you got it. So we've been doing this annually, and it's for our public sector partners overall, to really recognize the very best of the best. Now, we love all of our partners, John, as you know, but every year we like to really hone in on a couple who really leverage their skills and their ability to deliver a great customer solution. They demonstrate those Amazon leadership principles like working backwards from the customer and having a bias for action, they've engaged with AWS in very unique ways, and as well, they've contributed to our customer success, which is so very important to us and to our customers as well. >> That's awesome. Hey, can we put up a slide? I know we have a slide on the winners, I want to look at them, with the tiles here. So here's a list of some of the winners. I see nice little stars on there. Look at the gold star. I know IronNet, CrowdStrike. That's General Keith Alexander's company, I mean, super relevant. Presidio, we've interviewed them before many times, got Palantir in there. And is there another one? I want to take a look at some of the other names here. >> In all, we had 21 categories. You know, we have over 1900 public sector partners today. So you'll notice that in the awards we did, there was a big focus on mission. So things like government, education, health care. We spotlighted some of the brand new technologies like containers, artificial intelligence, Amazon Connect. And we also this year added in awards for innovative use of our programs, like Think Big for Small Business and PTP as well. >> Yeah, well, great roundup there, looking forward to hearing more about those companies. I have to ask you, because this always comes up, we're seeing more and more ecosystem discussions when we talk about the future of cloud. And obviously, we're going to, you know, be at Mobile World Congress, theCUBE, back in physical form again, (indistinct) will continue to go on. The notion of ecosystem is becoming a key competitive advantage for companies and missions. So I have to ask you, why are partners so important to your public sector team? Talk about the importance of partners in the context of your mission. >> Yeah, you know, our partners are critical. We drive most of our business in public sector through partners.
They have great relationships, they've got great skills, and they have, you know, that really unique ability to meet the customer's needs. If I just highlight a couple of things, even using some of our partners who won awards, the first is, you know, migrations are so critical. Andy talked at re:Invent about 96% of applications still sitting on premises. So anybody who can help us with the velocity of migrations is really critical. And I don't know if you knew, John, but 80% of our migrations are led by partners. So for example, we gave awards to Collibra and Databricks for best lead migration for data, as well as Datacom for best data-led migration as well. And that's because they increase the velocity of migrations, which increases customer satisfaction. They also bring great subject matter expertise, in particular around that mission that you're talking about. So for instance, GDIT won Best Mission Solution for Federal, and they had just an amazing solution that was a secure virtual desktop that reduced a federal agency's deployment process from months to days. And then finally, you know, our partners drive new opportunities and innovate on behalf of our customers. So we did an award this year for P to P, Partnering to Partner, which is a really big element of ecosystems, but it was won by four points and in quizon, and they were able to work together to implement a data lake and an AI/ML solution. And then, you just did the startup showcase, we have a best startup delivering innovation too, and that was EduTech (indistinct) Central America. And they won for implementing an amazing student registration and early warning system to alert on risks that may impact a student's educational achievement. So those are just some of the reasons why partners are important. I could go on and on. As you know, I'm so passionate about my partners. >> I know you're going to talk for an hour, we have to cut you off a little there. (indistinct) love your partners so much. You have to focus on this mission thing. There was a strong mission focus in the awards this year. Why are customers requiring much more of a mission focus? Is it because, is it a part of the criteria? I mean, we're seeing mission being big. Why is that the case? >> Well, you know, IDC said that IT spend for a mission, or something with a purpose or a line of business, was five times greater than IT. We also recently did our CTO study where we surveyed thousands of CTOs. And the biggest and most changing elements today are really not around the technology, but around the industry: healthcare, space that we talked about earlier, or government. So those are really important. So for instance, New Reburial, they won Best Mission for Healthcare. And they did that because of their new smart diagnostic system. And then we had a partner win, PA Consulting, for Best Amazon Connect Solution, around a mission of providing support for those most at risk, the elderly population, those who already had pre-existing conditions, and really making sure they were doing what they called risk shielding during COVID. Really exciting, and a big, strong focus on mission. >> Yeah, and it's also, you know, we've been covering a lot on this, people want to work for a company that has purpose and that has missions. I think that's going to be part of the table stakes going forward.
I've got to ask you about the secrets of success, because this came up and I love asking this question, because, you know, we're starting to see the playbooks of what I call post-COVID and cloud scale 2.0, whatever you want to call it, as you're starting to see this new modern era of success formulas, obviously, large scale value creation, mission. These are points we're hearing in conversations across the board. What do you see as the secret of success for these partners? I mean, obviously, it's indirect for Amazon, I get that, but they also have their customers, they're your customers' customers. That's been around for a while. But there's a new model emerging. What are the secrets of success from your standpoint? >> You know, it's so interesting, John, that you asked me this, because this is the number one question that I get from partners too. I would say the first secret is being able to work backwards from your customer, not just the technology. So take one of our award winners, Cognizant. They won for their digital tolling solution. And they worked backwards from the customer and how to modernize that. Or Pariveda, who is one of our best energy solution winners. And again, they looked at some of these major capital projects that oil companies were doing, working backwards from what the customer needed. I think that's number one, working backwards from the customer. Two is having that mission expertise. So, a given is that you have to have technology, but you've also got to have that expertise in the area. We see that as a big secret of our public sector partners. So for education cloud, (indistinct) won for education, Effectual won for government and not for profit, Accenture won, really leveraging and showcasing their global expansion around public safety and disaster response. Very important as well. And then I would say the last secret of success is building repeatable solutions using those strong skills. So Deloitte, they have a great solution for migration, including mainframes. And then you mentioned early on, CrowdStrike and IronNet, just think about the skill sets that they have there for repeatable solutions around security. So I think it's really around working backwards from the customer, having that mission expertise, and then building a repeatable solution, leveraging your skill sets. >> That's a great formula for success. I'm glad you mentioned IronNet, and cybersecurity. One of the things that's coming up is, in addition to having those best practices, there's also, like, real problems to solve. Like, ransomware is now becoming a government and commercial problem, right? So (indistinct) seeing that happen a lot in DC, that's a front burner. That's a societal impact issue. That's like a cybersecurity kind of national security defense issue, but also, it's a technical one. And also in public sector, through my interviews, I can tell you, over the past year and a half there's been a lot of creativity around new solutions, new problems, or new opportunities that are not yet identified as problems, and I'd love to get your thoughts on that. In my conversation with Jeff Barr yesterday from AWS, who's been blogging all the news and is a leader in the community, he was saying that he sees 5G and the edge as new opportunities where it's creative. It's like he compared it to going to the home improvement store where you just go to buy one thing and you end up doing other things. And so there's a builder culture.
And I think this is something that's coming out of your group more, because the pandemic forced these problems, and they forced new opportunities to be creative and to build. What are your thoughts? >> Yeah, so I see that too. So if you think about builders, you know, we had a partner executive council yesterday; we had 900 executives sign up from all of our partners. And we asked some survey questions like, what are you building with today? And the number one thing was artificial intelligence and machine learning. And I think that's such a new builder's tool today, John, and, you know, one of our partners who won an award for the most innovative AI and ML was Kablamo. And what they did was use AI and ML to do a risk assessment on bushfires, or wildfires, in Australia. But I think it goes beyond that. I think it's building for that need. And this goes back to, we always talk about #techforgood. Presidio, I love this award that they won for best nonprofit, the Cherokee Nation, which is one of our, you know, Native American heritage nations. They were worried about their language going away, like completely, like no one being able to speak it. And so they came to Presidio, and they asked, how could we have a virtual classroom platform for the Cherokee Nation? And they created this game that's available on your phone, so innovative, so much of a builder's culture to capture that young generation, so they don't lose their language. So I do agree. I mean, we're seeing builders everywhere, we're seeing them use artificial intelligence, containers, security. And we're even starting with quantum, so it is pretty powerful what you can do as a public sector partner. >> I think the partner equation is just so wide open, because it's always been based on value, adding value, right? So adding value is just what they do. And by the way, you make money doing it if you do a good job of adding value. And, again, I just love riffing on this, because Dave and I talk about this on theCUBE all the time, and it comes up all the time in cloud conversations. The lock-in isn't proprietary technology anymore, it's value and scale. So you're starting to see builders thrive in that environment. So really good points. Great best practices. And I think I'm very bullish on the partner ecosystems in general, and if people do it right, there's a lot of upside. I've got to ask you, though, going forward, because this is the big post-COVID kind of conversation. And last time we talked on theCUBE about this, you know, people want to have a growth strategy coming out of COVID. They want to have a tailwind, they want to be on the right side of history. No one wants to be on the losing end of all this. So last year, and in 2021, your goals were very clear: mission, migrations, modernization. What's the focus for the partners beyond 2021? What are you guys thinking to enable them, since '21 is going to be a nice on-ramp to this post-COVID growth strategy? What's the focus beyond 2021 for you and your partners? >> Yeah, it's really interesting, we're going to actually continue to focus on those three M's: mission, migration and modernization. But we'll bring in different elements of it. So for example, on mission, we see a couple of new areas that are really rising to the top: smart cities now that everybody's going back to work and (indistinct) down, operations and maintenance, and global defense, and using gaming and simulation. I mean, think about that digital twin strategy and how you're doing that.
For migration, one of the big ones we see emerging today is data-led migration. You know, we have been focused on applications and mainframes, but data has gravity. And so we are seeing so many partners and our customers demanding to get their data from on premises to the cloud so that now they can make real-time business decisions. And then on modernization, you know, we talked a lot about artificial intelligence and machine learning. Containers are wicked hot right now, providing portability and performance. I was with a startup last night that just moved everything they're doing to ECS, our container strategy. And then we're also seeing, you know, crypto, quantum, blockchain, no code, low code. So the same big focus, mission, migration, modernization, but the underpinnings are going to shift a little bit beyond 2021. >> That's great stuff. And you know, first of all, people might not know that your group, partners in Amazon Web Services public sector, has a big surface area. You're talking about government, health care, space. So I have to ask you, you guys announced the space accelerator in March, and you recently announced that you selected 10 companies to participate in the accelerator program. So, I mean, this is space-centric, you know, targeting everything from, you know, low earth orbit satellites to exploring the surface of the Moon and Mars, which people love. And because space is cool, let's say the tech and space, they kind of go together, right? So take us through, what's this all about? How's that going? What was the selection? Give us a quick update, while you're here, on this space accelerator selection, because (indistinct) will have had a big blog post that went out (indistinct). >> Yeah, I would be thrilled to do that. So I don't know if you know this, but when I was young, I wanted to be an astronaut. We just helped, through (indistinct), one of our partners reach Mars. So Clint, who is a retired general, and myself got together, and we decided we needed to do something to help startups accelerate in their space mission. And so we decided to announce a competition for 10 startups to get extra help, both from us as well as a partner, Sarafem, on space. And so we announced it. Everybody expected the companies to come from the US, John; they came from 44 different countries. We had hundreds of startups enter, and we took them through this six-week classroom education. So we had our general, Clint, you know, helping and teaching them on space, which he's done his whole life. We provided them with AWS credits, and they had mentoring by our partner, Sarafem. And we just down-selected to 10 startups, and that was what Vernors' blog post was. If you haven't read it, you should look at some of the amazing things that they're going to do, from, you know, farming asteroids to, you know, helping with some of the, you know, using small vehicles to connect to larger vehicles when we all get to space. It's very exciting. Very exciting, indeed. >> You have so many good content areas and partners, exploring; it's a very wide vertical, or sector, that you're managing. Is there any pattern? Well, I want to get your thoughts on post-COVID success again. Are there any patterns that you're seeing in terms of the partner ecosystem? You know, whether it's business model, or team makeup, or more mindset, or just how they're organizing, that's been successful? Is there, like, do you see a trend? Is there a certain thing? And I've got the working backwards thing, I get that.
But, like, are there any other observations? Because I think people really want to know, am I doing it right? Am I being a good manager when, you know, people are going to be working remotely more? We're seeing more of that. And there are now going to be virtual events, hybrid events, physical events; the world's coming back to normal, but it's never going to be the same. Do you see any patterns? >> Yeah, you know, we're seeing a lot of small partners that are making an entrance and solving some really difficult problems. And because they're so focused on a niche, it's really having an impact. So I really believe that that's going to be one of the things that we see, a focus on individual creators and companies who are really tightly aligned and not trying to do everything, if you will. I think that's one of the big trends. I think the second, we talked about it a little bit, John, I think you're going to see a lot of focus on mission, because of that purpose. You know, we've talked about #techforgood, with everything going on in the world. As people have been working from home, they've been reevaluating who they are and what they stand for, and people want to work for a company that cares about people. I just posted my human footer on LinkedIn. And I got over a million hits on LinkedIn for the first time, just by posting this human footer, saying, you know what, reply to me at a time that's convenient for you, not necessarily for me. So I think we're going to see a lot of this purpose-driven mission that's going to come out as well. >> Yeah, and I also noticed that. I was on LinkedIn, and I got a similar reaction when I started trying to create more of a community model, not so much have people attend our events because we need butts in the seats. It was much more personal, like we wanted you to join us, not attend and be like a number. You know, people want to be part of something. This seems to be the new mission. >> Yeah, I completely agree with that. I think that, you know, people do want to be part of something, and they want to be part of the meaning of something too, right. Not just be part of something overall, but to have an impact themselves, personally and individually, not just as a company. And I think, you know, one of the other trends that we saw coming up too was the focus on technology. And I think low code, no code is giving a lot of people entry into doing things they never thought they could do. So I do think that technology, artificial intelligence, containers, low code, no code, blockchain, those are going to enable us to do even greater mission-based solutions. >> Low code, no code reduces the friction to create more value, again, back to the value proposition. Adding value is the key to success, and your partners are doing it. And of course, being part of something great, like the Global Public Sector Partner Awards list, is a good one. And that's what we're talking about here. Sandy, great to see you. Thank you for coming on and sharing your insights and an update, and talking more about the 2021 Global Public Sector Partner Awards. Thanks for coming on. >> Thank you, John, always a pleasure. >> Okay, the Global Leaders here, presented on theCUBE, again, award winners doing great work in mission, modernization, again, adding value. That's what it's all about. That's the new competitive advantage. This is theCUBE. I'm John Furrier, your host, thanks for watching. (upbeat music)

Published Date : Jun 17 2021


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Andy | PERSON | 0.99+
John | PERSON | 0.99+
Dave | PERSON | 0.99+
Deloitte | ORGANIZATION | 0.99+
Sandy Carter | PERSON | 0.99+
Clint | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
Sandy | PERSON | 0.99+
Amazon Web Services | ORGANIZATION | 0.99+
John Furrier | PERSON | 0.99+
Collibra | ORGANIZATION | 0.99+
March | DATE | 0.99+
Australia | LOCATION | 0.99+
AWS | ORGANIZATION | 0.99+
US | LOCATION | 0.99+
10 companies | QUANTITY | 0.99+
21 categories | QUANTITY | 0.99+
Jeff Bar | PERSON | 0.99+
Databricks | ORGANIZATION | 0.99+
900 | QUANTITY | 0.99+
80% | QUANTITY | 0.99+
yesterday | DATE | 0.99+
Mars | LOCATION | 0.99+
2021 | DATE | 0.99+
GDIT | ORGANIZATION | 0.99+
five times | QUANTITY | 0.99+
first | QUANTITY | 0.99+
Accenture | ORGANIZATION | 0.99+
10 startups | QUANTITY | 0.99+
EduTech | ORGANIZATION | 0.99+
Datacom | ORGANIZATION | 0.99+
last year | DATE | 0.99+
IronNet | ORGANIZATION | 0.99+
Keith Alexander | PERSON | 0.99+
44 different countries | QUANTITY | 0.99+
Global Public Sector Partner Awards | EVENT | 0.99+
Two | QUANTITY | 0.99+
this year | DATE | 0.99+
four points | QUANTITY | 0.99+
LinkedIn | ORGANIZATION | 0.99+
IDC | ORGANIZATION | 0.98+
six week | QUANTITY | 0.98+
Presidio | ORGANIZATION | 0.98+
@Sandy_Carter | PERSON | 0.98+
one | QUANTITY | 0.98+
CrowdStrike | ORGANIZATION | 0.98+
Moon | LOCATION | 0.98+
both | QUANTITY | 0.97+
pandemic | EVENT | 0.97+
Global Public Sector partner Awards | EVENT | 0.97+
Central America | LOCATION | 0.97+
last night | DATE | 0.97+
today | DATE | 0.97+
Reinvent | ORGANIZATION | 0.97+
over 1900 public sector partners | QUANTITY | 0.96+
first secret | QUANTITY | 0.96+
Best Amazon Connect | ORGANIZATION | 0.96+
DC | LOCATION | 0.96+
Cognizant | PERSON | 0.96+
One | QUANTITY | 0.95+
Vernors | PERSON | 0.95+
an hour | QUANTITY | 0.95+
Sarafem | ORGANIZATION | 0.95+
Cherokee Nation | ORGANIZATION | 0.94+
General | PERSON | 0.94+
thousands of CTOs | QUANTITY | 0.94+
Pariveda | ORGANIZATION | 0.93+
second | QUANTITY | 0.93+

Vikas Ratna and James Leach | Cisco Future Cloud 2021


 

>> From around the globe, it's theCube. Presenting Future Cloud. One event, a world of opportunities. Brought to you by Cisco. >> We're here with Vikas Ratna, who's the director of product management for UCS at Cisco, and James Leach, who is the director of business development for UCS at Cisco as well. We're going to talk about computing in the age of hybrid cloud. Welcome, gentlemen, great to see you. >> Thank you. >> Thank you. >> Vikas, let's start with you and talk a little bit about computing architectures. We know that they're evolving, they're supporting new data-intensive and other workloads, especially high-performance workload requirements. What's Cisco's point of view on all this? And specifically, I'm interested in your thoughts on fabrics, I mean, it's kind of your wheelhouse, you've got accelerators. What are the workloads that are driving these evolving technologies, and how is it impacting customers? What are you seeing? >> Sure, Dave. First of all, very excited to be here today. You're absolutely right. The pace of innovation in foundational platform ingredients has just been phenomenal in recent years. The fabrics, the accelerators, the drives, the processing power, the core density, all have been evolving at just an amazing pace, and the pace will only pick up further. But ultimately it is all about applications and the way applications leverage those innovations. And we do see applications evolving quite rapidly. New classes of applications are evolving to absorb those innovations and deliver much better business value. Very, very exciting times, Dave. But talking about the impact on the customers, well, these innovations have helped them pretty positively. We do see significant challenges in the data center with a point-product-based approach to delivering these platform innovations to the applications. What has happened is these innovations today are being packaged as point products to meet the needs of a specific application. And as you know, different applications have different needs. Some applications need more terabytes, others need more memory, yet others need, you know, more cores. Some need different kinds of fabrics. As a result, if you walk into a data center today, it is pretty common to see many different point products in the data center. This creates a manageability challenge. Imagine the aspect of managing, you know, several different form factors, 1U, 2U, purpose-built servers, or the variety of, you know, blade form factors. You know, this reminds me of the situation we had before smartphones arrived. You remember the days when we used to have a GPS device for navigation, a cool music device for listening to music, a phone device for making a call, a camera for taking photos. Right? And we were all excited about it. It's when the smartphones arrived that we realized all those cool innovations could be delivered in a much simpler, much more convenient, and easy-to-consume way through one device, and, you know, that could completely transform our experience. So we see the customers who are benefiting from these innovations need a way to consume those things in a much more simplistic way than they are able to do it today.

>> And I like that. Look, it's always been about the applications, but to your point, the applications are now moving at a much faster pace. The customer experience expectation is way escalated. And when you combine all these, I love your analogy there, Vikas, because when you combine all these capabilities, it allows us to develop new applications, new capabilities, new customer experiences. So that's why I always say the next 10 years, they ain't going to be like the last. And James, public cloud obviously is heavily influencing compute design and customer operating models. You know, it's funny, when the public cloud first hit the market, everyone, we were swooning about, oh, low cost, standard off-the-shelf servers, you know, and storage devices, but it quickly became obvious that customers needed more. So I wonder if you could comment on this. How are the trends that we've seen from the hyperscalers, how are they filtering into on-prem infrastructure, and maybe, you know, maybe there are some differences there as well that you could address? >> Absolutely. So, you know, I'd say first of all, quite frankly, you know, public cloud has completely changed the expectations of how our customers want to consume compute, right? So customers, especially in a public cloud environment, they've gotten used to, or, you know, come to accept, that they should consume from the application out, right? They want a very application-focused view, a services-focused view of the world. They don't want to think about infrastructure, right? They want to think about their application. They want to move outward, right? So this means that the infrastructure basically has to meet the application where it lives. So what that means for us is that, you know, we're taking a different approach. We've decided that, you know, we're not going to chase this, you know, single-pane-of-glass view of the world, which, you know, frankly our customers don't want. They don't want a single pane of glass. What they want is a single operating model. They want an operating model that's similar to what they can get with the public cloud, but they want it across all of their cloud options. They want it across private cloud, across hybrid cloud options as well. So what that means is they don't want to just consume infrastructure services. They want all of their cloud services from this operating model. So that means that they may want to consume infrastructure services for automation and orchestration, but they also need Kubernetes services. They also need virtualization services. They may need Terraform, workload optimization. All of these services have to be available from within the operating model, a consistent operating model, right? So it doesn't matter whether you're talking about private cloud, hybrid cloud, anywhere; where the application lives doesn't matter. What matters is that we have a consistent model and that we think about it from the application out. And frankly, I'd say, you know, this has been the stumbling block for private cloud. Private cloud is hard, right? This is why it hasn't been really solved yet. This is why we had to take a brand new approach. And frankly, it's why we're super excited about X Series and Intersight as that, you know, operating model that fits the hybrid cloud better than anything else we've seen.

>> This is a Cube first; it's the first time a technology vendor has ever said that it's not about a single pane of glass, because I've been hearing for decades, we're going to deliver a single pane of glass, it's going to be seamless, and it never happens. It's like a single version of the truth. It's aspirational. And it's just not reality. So can we stay on the X Series for a minute, James, maybe in this context. In the launch that we saw today, it was like a fire hose of announcements. So, how does the X Series fit into the strategy with Intersight, and hybrid cloud, and this operating model that you're talking about? >> Right. So, I think it goes hand in hand, right? The two pieces go together very well. So we have, you know, this idea of a single operating model that is definitely, you know, something that our customers demand, right? It's what we have to have, but at the same time we need to solve the problems Vikas was talking about before; we need a single infrastructure to go along with that single operating model. So no longer do we need to have silos within the infrastructure that give us different operating models or different sets of benefits. We want infrastructure that can kind of do all of those configurations, all those applications. And then, you know, the operating model is very important because that's where we abstract the complexity that could come with just throwing all that technology at the infrastructure. So, you know, the way that we think about it is the data center is no longer the center, right? It's no longer centered. Applications live everywhere. Infrastructure lives everywhere. And, you know, we need to have that consistent operating model, but we need to do things within the infrastructure as well to take full advantage, right? So we want all the SaaS benefits of the CI/CD model that, you know, Intersight can bring, we want all of that, you know, proactive recommendation engine with the power of AI behind it, we want the connected support experience. We want all of that, but we want to do it across a single infrastructure. And we think that that's how they tie together. That's why one or the other doesn't really solve the problem, but both together. That's why we're here. That's why we're super excited. >> So Vikas, I'll make you laugh a little bit. When I was an analyst at IDC, I was a bit deep into infrastructure. And then when I left, I was working with application development heads. And like you said, infrastructure was just a roadblock. But, so, tongue in cheek, when Cisco announced UCS a decade ago, I totally missed it. I didn't understand it. I thought it was Cisco getting into the traditional server business. And it wasn't until I dug in that I realized that your vision was really to transform infrastructure deployment and management, and change the model. It was like, okay, I got that wrong. But, so let's talk about the ecosystem and the joint development efforts that are going on there. X Series, how does it fit into this converged infrastructure business that you've built and grown with partners? You've got storage partners like NetApp and Pure. You've got ISV partners in the ecosystem. We see Cohesity; it's been a while since we hung out with all these companies at Cisco Live, hopefully next year, but tell us what's happening in that regard. >> No, absolutely. I'm looking forward to seeing you at Cisco Live next year, Dave. Absolutely. You brought up a very good point. UCS is about the ecosystem that it brings together. It's about helping our customers bring up the entire infrastructure, from the core foundational hardware all the way to the application level, so that they can be off and running pretty quickly.
That converged infrastructure has been one of the cornerstones of our strategy, as you pointed out, in the last decade. And I'm very glad to share that converged infrastructure continues to be a very popular architecture for several enterprise applications even today. In fact, it is the preferred architecture for mission-critical applications, where performance, resiliency, and latency are the critical requirements. They are almost de facto standards for large-scale deployments of virtualized and business-critical databases and so forth. With X Series, with our partnerships, with our storage partners, those architectures will absolutely continue and will get better. But in addition, it's a hybrid cloud world. So we are now bringing the benefits of converged infrastructure to the world of hybrid cloud. We'll be supporting the hybrid cloud applications now with the CI infrastructure that we have built together, through our strong partnership with the storage partners, to deliver the same benefits to the new-age applications as well. >> Yeah, and that's what customers want, they want that cloud operating model. Right? Go ahead, please. >> I was just going to say, you know, that the CI model will continue to thrive. It will transition, it will expand the use cases now for the newer use cases that we're beginning to see, Dave, absolutely. >> Great. Thank you for that. And James, like I said earlier today, we heard this huge announcement, a lot of parts to it. And we heard, you know, KD talk about this initiative as really computing built for the next decade. I mean, I like that because it shows some vision and that you've got, you know, a roadmap, that you've thought through the coming changes in workloads and infrastructure management, and some of the technology that you can take advantage of beyond just, you know, one or two product cycles. So, I want to understand what you've done here specifically that you feel differentiates you from other competitive architectures in the industry. >> Sure. You know, that's a great question, number one. Number two, I'm frankly a little bit concerned at times for customers in general, for our customers and customers in general, because if you look at what's in the market, right? These rinse-and-repeat systems that were effectively just rehashes of the same old design, right? That we've seen since before 2009 when we brought UCS to market, these are what we're seeing over and over and over again, and that's not really going to work anymore, frankly. And I think that people are getting lulled into a false sense of security by seeing those things continually put in the market. We've rethought this from the ground up because frankly, you know, future-proofing starts now, right? If you're not doing it right today, future-proofing isn't even on your radar because you're not even today-proofed. So we've rethought the entire chassis, the entire architecture, from the ground up. Okay. If you look at other vendors, if you look at other solutions in the market, what you'll see is things like, you know, management inside the chassis. That's a great example. Daisy-chaining them together. Like, who needs that? Who wants that? Like, that kind of complexity is, first of all, ridiculous. Second of all, if you want to manage across clouds, you have to do it from the cloud, right? It's just common sense.

You have to move management where it can have the scale and the scope that it needs to impact, you know, your entire domain, your world, which is much larger now than it was before. We're talking about true hybrid cloud here, right? So, we had to, you know, solve certain problems that existed in the traditional architecture. You know, I can't tell you how many times I heard, you know, talk about, you know, the midplane is a great example. Well, you know, the midplane in a chassis is a limiting factor. It limits us on how much we can connect or how much bandwidth we have available to the chassis. It limits us on airflow and other things. So how do you solve that problem? Simple. Just get rid of it. Like, we just took it out, right? It's now no longer a problem. We designed an architecture that doesn't need it. It doesn't rely on it, no forklift upgrades. So as we start moving down the path of needing liquid cooling, or maybe we need to take advantage of some new high-performance, low-latency fabrics, we can do that with almost no problem at all, right? So we don't have any forklift upgrades. Park your forklift on the side. You won't need it anymore because you can upgrade granularly. You can move along as technologies come into existence that maybe don't even exist today. They may not even be on our radar today to take advantage of, but I like to think of these technologies. You know, they're really important to our customers. These are, you know, we can call them disruptive technologies. The reality is that we don't want to disrupt our customers with these technologies. We want to give them these technologies so they can go out and be disruptive themselves, right? And this is the way that we've designed this, from the ground up, to be easy to consume and to take advantage of what we know about today and what's coming in the future that we may not even know about. So we think this is a way to give our customers that ultimate capability, flexibility, and future-proofing. >> I like that phrase, true hybrid cloud. It's one that we've used for years. But to me, this is all about that horizontal infrastructure that can support that vision of what true hybrid cloud is. You can support the mission-critical applications. You can develop on the system and you can support a variety of workloads. You're not locked into, you know, one narrow stovepipe. And that does have legs. Vikas and James, thanks so much for coming on the program. Great to see you. >> Thank you, we appreciate the time. >> Thank you. >> And thank you for watching. This is Dave Vellante for theCube, the leader in digital event coverage. (uplifting music)

Published Date : Jun 2 2021


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave Vellante | PERSON | 0.99+
Gartner | ORGANIZATION | 0.99+
Dave | PERSON | 0.99+
John | PERSON | 0.99+
Lisa Martin | PERSON | 0.99+
Vikas | PERSON | 0.99+
Lisa | PERSON | 0.99+
Michael | PERSON | 0.99+
David | PERSON | 0.99+
Katherine Kostereva | PERSON | 0.99+
Steve | PERSON | 0.99+
Steve Wood | PERSON | 0.99+
James | PERSON | 0.99+
Paul | PERSON | 0.99+
Europe | LOCATION | 0.99+
Andy Anglin | PERSON | 0.99+
Eric Kurzog | PERSON | 0.99+
Kerry McFadden | PERSON | 0.99+
Eric | PERSON | 0.99+
Ed Walsh | PERSON | 0.99+
IBM | ORGANIZATION | 0.99+
Jeff Clarke | PERSON | 0.99+
Landmark | ORGANIZATION | 0.99+
Australia | LOCATION | 0.99+
Katherine | PERSON | 0.99+
Andy | PERSON | 0.99+
Gary | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
two hours | QUANTITY | 0.99+
Paul Gillin | PERSON | 0.99+
Forrester | ORGANIZATION | 0.99+
San Francisco | LOCATION | 0.99+
Michael Dell | PERSON | 0.99+
Cisco | ORGANIZATION | 0.99+
Jeff | PERSON | 0.99+
Peter Burris | PERSON | 0.99+
Jeff Frick | PERSON | 0.99+
2002 | DATE | 0.99+
Mandy Dhaliwal | PERSON | 0.99+
John Furrier | PERSON | 0.99+
2019 | DATE | 0.99+
five | QUANTITY | 0.99+
Starbucks | ORGANIZATION | 0.99+
PolyCom | ORGANIZATION | 0.99+
US | LOCATION | 0.99+
San Jose | LOCATION | 0.99+
Boston | LOCATION | 0.99+