SiliconANGLE News | Swami Sivasubramanian Extended Version
(bright upbeat music) >> Hello, everyone. Welcome to SiliconANGLE News, breaking story here. Amazon Web Services is expanding their relationship with Hugging Face, breaking news here on SiliconANGLE. I'm John Furrier, SiliconANGLE reporter, founder, and also co-host of theCUBE. And I have with me Swami from Amazon Web Services, vice president of database, analytics, and machine learning with AWS. Swami, great to have you on for this breaking news segment on AWS's big news. Thanks for coming on and taking the time. >> Hey, John, pleasure to be here. >> You know- >> Looking forward to it. >> We've had many conversations on theCUBE over the years, we've watched Amazon really move fast into large data modeling, SageMaker became a smashing success, obviously you've been on this for a while. Now with ChatGPT and OpenAI, a lot of buzz going mainstream, it takes it from behind the curtain, inside the ropes, if you will, in the industry, to the mainstream. And so this is a big moment, I think, in the industry, and I want to get your perspective, because your news with Hugging Face, I think, is another telltale sign that we're about to tip over into a new accelerated growth around making AI application aware, application centric, more programmable, with more API access. What's the big news about with AWS and Hugging Face, you know, what's going on with this announcement? >> Yeah. First of all, we're very excited to announce our expanded collaboration with Hugging Face, because with this partnership, our goal, as you all know, I mean, Hugging Face, I consider them like the GitHub for machine learning. And with this partnership, Hugging Face and AWS will be able to democratize AI for a broad range of developers, not just specific deep AI startups. And now with this, we can accelerate the training, fine tuning and deployment of these large language models and vision models from Hugging Face in the cloud. 
And the broader context, when you step back and see what customer problem we are trying to solve with this announcement: essentially, these foundation models are now used to create a huge number of applications, such as text summarization, question answering, search, image generation, creative work, and other things. And these are all things we are seeing in the likes of these ChatGPT-style applications. But there is a broad range of enterprise use cases that we don't even talk about. And it's because these kinds of transformative, generative AI capabilities and models are not available to, I mean, millions of developers. And because either training these models from scratch can be very expensive or time consuming and need deep expertise, or more importantly, they don't need these generic models, they need them to be fine tuned for their specific use cases. And one of the biggest complaints we hear is that these models, when they try to use them for real production use cases, they are incredibly expensive to train and incredibly expensive to run inference on at a production scale. So, unlike web search style applications, where the margins can be really huge, here in production use cases in enterprises, you want efficiency at scale. That's where Hugging Face and AWS share our mission. And by integrating with Trainium and Inferentia, we're able to handle cost efficient training and inference at scale, and I'll deep dive on it. And by teaming up on the SageMaker front, the time it takes to build these models and fine tune them is also coming down. So that's what makes this partnership very unique as well. So I'm very excited. >> I want to get into the time savings and the cost savings as well on the training and inference, it's a huge issue, but before we get into that, just how long have you guys been working with Hugging Face? 
I know there's a previous relationship, this is an expansion of that relationship, can you comment on what's different about what's happened before and then now? >> Yeah. So, Hugging Face, we have had a great relationship in the past few years as well, where they have actually made their models available to run on AWS in an easy fashion. In fact, their BLOOM project was something many of our customers even used. The BLOOM project, for context, is their open source project which builds a GPT-3-style model. And now with this expanded collaboration, Hugging Face selected AWS to build the next generation of its generative AI models, building on their highly successful BLOOM project as well. And the nice thing is, now, by direct integration with Trainium and Inferentia, you get cost savings in a really significant way. For instance, Trn1 can provide up to 50% cost-to-train savings, and Inferentia can deliver up to 60% better cost, and 4x higher throughput than (indistinct). Now, these models, especially as they train that next generation of generative AI models, are going to be not only more accessible to all the developers who use them in the open, they'll be a lot cheaper as well. And that's what makes this moment really exciting, because we can't democratize AI unless we make it broadly accessible and cost efficient and easy to program and use as well. >> Yeah. >> So very exciting. >> I'll get into the SageMaker and CodeWhisperer angle in a second, but you hit on some good points there. One, accessibility, which I call the democratization, which is getting this into the hands of developers, and/or AI to develop, we'll get into that in a second. So, access to coding and automated reasoning is a whole nother wave. 
But the three things I know you've been working on, I want to put in buckets here and comment. One, I know you've, over the years, been working on saving time to train, that's a big point, you mentioned some of those stats; also cost, 'cause now cost is an equation on, you know, bundling, whether you're uncoupling hardware and software, that's a big issue. Where do I find the GPUs? Where's the horsepower cost? And then also sustainability. You've mentioned that in the past, is there a sustainability angle here? Can you talk about those three things, time, cost, and sustainability? >> Certainly. So if you look at it from the AWS perspective, we have been supporting customers doing machine learning for the past several years. Just for broader context, Amazon has been doing ML for the past two decades, right from the early days of ML-powered recommendations to actually also supporting all kinds of generative AI applications. If you look at even generative AI applications within Amazon, take Amazon search: when you go search for a product and so forth, we have a team called M5 within Amazon search that helps bring these large language models into creating highly accurate search results. And these are created with really large models with tens of billions of parameters, scaling to thousands of training jobs every month and trained on large amounts of hardware. And this is an example of a really good large language foundation model application running at production scale, and also, of course, Alexa, which uses a large generative model as well. And they actually even had a research paper that showed that they do better in accuracy than other systems like GPT-3 and whatnot. And we also touched on things like CodeWhisperer, which uses generative AI to improve developer productivity, but in a responsible manner, because some of the studies show 40% of generated code had serious security flaws in it. 
This is where we didn't just do generative AI, we combined it with automated reasoning capabilities, which is a very, very useful technique to identify these issues, and coupled them so that it produces highly secure code as well. Now, all these learnings taught us a few things, which is what you put in these three buckets. We have more than 100,000 customers using our ML and AI services, including leading startups in the generative AI space, like Stability AI, AI21 Labs, or Hugging Face, or even Alexa, for that matter. What they care about, I put in three dimensions. One is around cost, which we touched on with Trainium and Inferentia, where Trainium can provide up to 50% better cost savings, but the other aspect is, Trainium is a lot more power efficient as well compared to traditional alternatives. And Inferentia is also better in terms of throughput, when it comes to what it is capable of. It is able to deliver up to 3x higher compute performance and 4x higher throughput compared to its previous generation, and it is extremely cost efficient and power efficient as well. >> Well. >> Now, the second element that is really important is, at the end of the day, developers deeply value the time it takes to build these models, and they don't want to build models from scratch. And this is where SageMaker comes in, which, even going by Kaggle usage, is the number one enterprise ML platform. What it did for traditional machine learning, where tens of thousands of customers use SageMaker today, including the ones I mentioned, is that what used to take months to build these models has dropped down to a matter of days, if not less. Now, in generative AI, if you look at the cost of building these models across the landscape, the model parameter size has jumped by more than 1,000x in the past three years. A thousand x. And that means the training is a really big distributed systems problem. How do you actually scale this model training? 
How do you actually ensure that you utilize these machines efficiently? Because these machines are very expensive, let alone that they consume a lot of power. So, this is where SageMaker's capability to build, automatically train, tune, and deploy models really addresses this, especially with its distributed training infrastructure, and those are some of the reasons why some of the leading generative AI startups are actually leveraging it, because they do not want a giant infrastructure team which is constantly tuning and fine tuning and keeping these clusters alive. >> It sounds a lot like what startups were doing with the cloud in the early days: no data center, you move to the cloud. So, this is the trend we're seeing, right? You guys are making it easier for developers with Hugging Face, I get that. I love that, GitHub for machine learning. Large language models are complex and expensive to build, but not anymore, you've got Trainium and Inferentia, developers can get faster time to value, but then you've got the transformers, data sets, tokenizer libraries, all that optimized for generative AI. This is a perfect storm for startups. Jon Turow, a former AWS person who used to work, I think, for you, is now a VC at Madrona Venture Group. He and I were talking about the generative AI landscape, it's exploding with startups. Every alpha entrepreneur out there is seeing this as the next frontier, that's the 20-mile stare, the next 10 years are going to be huge. What is the big thing that's happened? 'Cause some people were saying, the founder of Yquem said, "Oh, the startups won't be real, because they don't all have AI experience." John Markoff, former New York Times writer, told me that with AI, there's so much work done, this is going to explode, accelerate really fast, because it's almost like it's been waiting for this moment. What's your reaction? 
>> I actually think there is going to be an explosion of startups, not because they need to be AI startups, but because now finally AI is really accessible, or going to be accessible, so that they can create remarkable applications, either for enterprises, or for disrupting how customer service is being done, or how creative tools are being built. And I mean, this is going to change things in many ways. When we think about generative AI, we always like to think of how it generates school homework or art or music or whatnot, but when you look at it on the practical side, generative AI is actually being used across various industries. I'll give an example, like Autodesk. Autodesk is a customer who runs on AWS and SageMaker. They already have an offering that enables generative design, where designers can generate many structural designs for products, whereby you give a specific set of constraints and it can actually generate a structure accordingly. And we see a similar kind of trend across various industries, whether it's around creative media editing or various others. I have the strong sense that literally, in the next few years, just like now, where conventional machine learning is embedded in every application, every mobile app that we see, it is pervasive, and we don't even think twice about it, the same way, like almost all apps are built on cloud, generative AI is going to be part of every startup, and they are going to create remarkable experiences without actually needing these deep generative AI scientists. But you won't get that until you actually make these models accessible. And I also don't think one model is going to rule the world; you want these developers to have access to a broad range of models. Just like, go back to the early days of deep learning. Everybody thought it was going to be one framework that would rule the world, and it has been changing, from Caffe to TensorFlow to PyTorch to various other things. 
And I have a suspicion the same will happen here, so we had to enable developers where they are. >> You know, Dave Vellante and I have been riffing on this concept called supercloud, and a lot of people have co-opted it to mean multicloud, but we really were getting at this whole next layer on top of, say, AWS. You guys are the most comprehensive cloud, you guys are a supercloud, and even Adam and I are talking about ISVs evolving to ecosystem partners. I mean, your top customers have ecosystems building on top of it. This feels like a whole nother AWS. How are you guys leveraging the history of AWS, which, by the way, had the same trajectory: startups came in, they didn't want to provision a data center, the heavy lifting, all the things that have made Amazon successful culturally. And day one thinking is, provide the undifferentiated heavy lifting, and make it faster for developers to program code. AI's got the same thing. How are you guys taking this to the next level? Because now, this is an opportunity for the competition to change the game and take it over. This is, I'm sure, a conversation; you guys have a lot of things going on in AWS that make you unique. What's the internal and external positioning around how you take it to the next level? >> I mean, so I agree with you that generative AI has a very, very strong potential in terms of what it can enable in terms of next generation applications. But this is where Amazon's experience and expertise in putting these foundation models to work internally really has helped us quite a bit. If you look at it, amazon.com search is a very, very important application in terms of the customer impact, the number of customers who use that application openly, and the amount of dollar impact it has for the organization. And we have been doing it silently for a while now. 
And the same thing is true for Alexa too, which actually not only uses it for natural language understanding, but even actually leverages it for creating stories and various other examples. And now, our approach to it from AWS is, we actually look at it in terms of the same three tiers as we did in machine learning, because when you look at generative AI, we genuinely see three sets of customers. One is really deep technical expert practitioner startups. These are the startups that are creating the next generation models, the likes of Stability AI, or Hugging Face with BLOOM, or AI21. And they generally want to build their own models, and they want the best price performance for their infrastructure for training and inference. That's where our investments in silicon and hardware and networking innovations, where Trainium and Inferentia really play a big role. And we can clearly do that, and that is one. The second, middle tier is where I do think developers don't want to spend time building their own models; rather, they actually want the model to be useful with their data. They don't need their models to create high school homework or various other things. What they generally want is, hey, I have this data from my enterprise that I want to fine tune and make it really work only for this, and make it work remarkably; it can be for text summarization, to generate a report, or it can be for better Q&A, and so forth. This is where our investments in the middle tier with SageMaker, and our partnership with Hugging Face and AI21 and Cohere, are all going to be very meaningful. And you'll see us investing more. I mean, you already talked about CodeWhisperer, which is in open preview, but we are also partnering with a whole lot of top ISVs, and you'll see more on this front to enable the next wave of generative AI apps too, because this is an area where we do think a lot of innovation is yet to be done. 
It's like day one for us in this space, and we want to enable that huge ecosystem to flourish. >> You know, one of the things Dave Vellante and I were talking about in our first podcast we just did on Friday, we're going to do it weekly, is we highlighted the AI ChatGPT example as a horizontal use case, because everyone loves it, people are using it in all their different verticals, and horizontal scalable cloud plays perfectly into it. So I have to ask you, as you look at what AWS is going to bring to the table, a lot's changed over the past 13 years with AWS, a lot more services are available, how should someone rebuild or re-platform and refactor their application or business with AI, with AWS? What are some of the tools that you see and recommend? Is it serverless, is it SageMaker, CodeWhisperer? What do you think's going to shine brightly within the AWS stack, if you will, or service list, that's going to be part of this? As you mentioned, CodeWhisperer and SageMaker, what else should people be looking at as they start tinkering and getting all these benefits, and scale up their apps? >> You know, if I were a startup, first, I would really work backwards from the customer problem I'm trying to solve, and pick and choose where I don't need to deal with the undifferentiated heavy lifting. And that's where the answer is going to change. If you look at it that way, the answer is not going to be a one-size-fits-all. On the compute front, if you can actually go completely serverless, I will always recommend it for running your apps, because it takes care of all the undifferentiated heavy lifting. But then on the data front, we provide a whole variety of databases, right from relational data, or non-relational, or Dynamo, and so forth. And of course, we also have a deep analytical stack, where data directly flows from our relational databases into data lakes and data warehouses. 
And you can get value along with partnerships with various analytical providers. The area where I do think fundamentally things are changing in what people can do is, with CodeWhisperer, I was literally trying to actually program some code for sending a message through Twilio, and I was about to pull up and read the documentation, and in my IDE, I actually just said, let's try sending a message through Twilio, or let's actually update a Route 53 record. All I had to do was type in just a comment, and it actually started generating the subroutine. And it is going to be a huge time saver, if I were a developer. And the goal is for us not to do it just for AWS developers, and not to just generate the code, but make sure the code is actually highly secure and follows the best practices. So, it's not always about machine learning, it's augmenting it with automated reasoning as well. And generative AI is going to be changing not just how people write code, but also how it actually gets built and used as well. You'll see a lot more stuff coming on this front. >> Swami, thank you for your time. I know you're super busy. Thank you for sharing on the news and giving commentary. Again, I think this is an AWS moment and an industry moment: heavy lifting, accelerated value, agility. AIOps is going to be probably redefined here. Thanks for sharing your commentary. And we'll see you next time, I'm looking forward to doing more follow up on this. It's going to be a big wave. Thanks. >> Okay. Thanks again, John, always a pleasure. >> Okay. This is SiliconANGLE's breaking news commentary. I'm John Furrier with SiliconANGLE News, as well as host of theCUBE. Swami, who's a leader in AWS, has been on theCUBE multiple times. We've been tracking the growth of how Amazon's journey has just been exploding the past five years, in particular the past three. You heard the numbers, great performance, great reviews. 
This is a watershed moment, I think, for the industry, and it's going to be a lot of fun for the next 10 years. Thanks for watching. (bright music)
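[Editor's note] The CodeWhisperer workflow Swami describes above — type a comment, get a subroutine — can be sketched as follows. This is a hedged illustration, not Amazon's actual generated output: the account SID and phone numbers are placeholders, and the function builds the Twilio Messages API request rather than sending it, so it can be inspected without network access.

```python
# Sketch of the kind of subroutine a comment-driven assistant might produce
# from the prompt comment below. All values are placeholders.

# let's try sending a message through Twilio
def build_twilio_message_request(account_sid, from_number, to_number, body):
    """Return the (url, form_payload) pair for Twilio's Messages endpoint."""
    url = (
        "https://api.twilio.com/2010-04-01/Accounts/"
        f"{account_sid}/Messages.json"
    )
    payload = {"From": from_number, "To": to_number, "Body": body}
    return url, payload

url, payload = build_twilio_message_request(
    "ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",  # hypothetical account SID
    "+15550001111",
    "+15550002222",
    "Hello from the generated subroutine",
)
print(url)
print(payload["Body"])
```

In practice the generated code would also need credentials and an HTTP POST; the point of the interview is that the assistant drafts this boilerplate from a one-line comment, and automated reasoning then checks the result for security issues.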
Asvin Ramesh, HashiCorp | Palo Alto Networks Ignite22
(upbeat music) >> Announcer: TheCUBE presents Ignite '22, brought to you by Palo Alto Networks. >> Welcome back to Las Vegas, guys and girls. Lisa Martin here with Dave Vellante. This is day one of theCUBE's two day coverage of Palo Alto Networks Ignite at the MGM Grand. Dave, we've been having some great conversations today, and we have a great two day lineup: execs from Palo Alto, its partner network, customers, et cetera. We're going to be talking about infrastructure as code. We talk about that a lot, and how Palo is partnering with its partner ecosystem to really help customers deliver security across the organization. >> We do a predictions post every year. Hopefully you can hear me. So we do this predictions post every year. I've done it for a number of years, and I want to say it was either 2018 or 2019, we predicted that HashiCorp was one of these companies to watch. And then last August, on August 9th, we had our supercloud event in Palo Alto. We had David McJannet in, who is the CEO of HashiCorp. And we really see Hashi as a key player in terms of affecting multicloud consistency. Sometimes we call it supercloud, you building on top of the hyperscale clouds. So super excited to have HashiCorp on. >> Really an important conversation. We've got an alumni back with us. Asvin Ramesh is here, the senior director of Alliances at HashiCorp. Welcome back. >> Yeah, thank you. Good to be back. >> Great to have you. Talk to us a little bit about what's going on at HashiCorp, your relationship with Palo Alto Networks, and what's in it for customers. >> Yeah, no, no, great question. So, Palo Alto has been a fantastic partner of ours for many years now. We started way back in 2018, 2019, focusing on the basics, putting integrations in place that customers can use together. And so it's been a great journey. Both are very synergistic. 
Palo Alto is focused on multicloud, and so are we. We focus on cloud infrastructure automation, ensuring that customers are able to bring in agility, reliability, and security, and be able to deliver to their business. And then Palo Alto brings in great security components to that multicloud story. So it's a great story altogether. >> Some of the challenges that organizations have been facing: Palo Alto just released a survey, I think this morning, "What's Next in Cyber," with organizations facing massive headwinds, ransomware becoming a household word, business email compromise being a challenge. But also in the last couple of years, the massive shift to multicloud, where organizations are living and operating and need to do so securely. It's no longer a nice-to-have anymore. It's absolutely table stakes for survival, and for being able to thrive and grow for any business. >> Yeah, no, I think it's almost a sort of rethinking of how you would build your infrastructure up. So the more times you do it right, the better you are built to scale. That's been one of the bedrocks of how we've been working with Palo Alto, which is rethinking how IT should be building their infrastructure in a multicloud world. And I think the market timing is right for both of us in terms of the progress that we've been able to make. >> So, I mean, Terraform has really become sort of a key ingredient to the cloud operating model, especially across clouds. Kind of describe how partners and customers are implementing that cross-cloud capability. What's that journey look like? What's the level of maturity today? >> Yeah, great question, Dave. So we sort of see customers in three buckets. The first bucket is when customers are in the initial phases of their cloud journey. So they have disparate teams in their business units trying out clouds themselves. 
Typically there is some event that occurs, either some sort of a security scare or a cloud cost event, that triggers a rethinking of how they should be thinking about this in a scalable way. So that leads to the cloud operating model, which is a framework that HashiCorp has. And we use that successfully with customers to talk them through how they should be thinking about their process, about how they should be standardizing how people operate, and then the products they should be including. And then you come to that stage where you start to think about a centralized platform team that is putting in golden workflows, that is putting in an as-a-service mindset for their business units, thinking through policies at a corporate level. And that is the second stage. And in some customers this is also more around public clouds. But then the third stage that we see is when they start embracing their private cloud or the on-prem data center, and have the same principles addressed across both public clouds and the on-prem data center, and then Terraform scales for any infrastructure. So, once you start to put these practices in place, not just from a technology standpoint, but from a process and product standpoint, you're easily able to scale with that central platform organization. >> So, it's all about that consistency across your estate, irrespective of whether it's on-prem, in AWS, Azure, Google, the Edge, maybe. I mean, that's starting, right? >> Asvin: Yes. >> And so when you talk about the... Break it down a little bit, process and product: where do you and Palo Alto sort of partner and add value? What's that experience like? >> Yeah, so, I think as I mentioned earlier, the bedrock is having ways in which customers are able to use our products together, right? And then being able to evangelize the usage of those products. So one example I'll give you is with Prisma Cloud and Terraform Cloud, to your point about Terraform earlier. 
So customers can be using Prisma Cloud with Terraform Cloud in a way that you can get security context and telemetry during an infrastructure run, and then use policies that you have in Prisma Cloud to make sure, essentially, that the run is adhering to your security policy, or any other audits that you want to create, or any other costs that you want to be able to control. >> Where are your customer conversations these days? We know that security is a board level conversation. Interestingly, in that same survey that Palo Alto released this morning that I mentioned, they found that there's a big lack of alignment between the board and the C-suite staff, the executive suite, in terms of security. Where are your conversations, and how are you maybe facilitating that alignment that needs to be there? Because security, it's not a nice-to-have. >> Yeah, I think in our experience, the alignment is there. I think especially with the macro environment, it's more about where do you allocate those resources. I think those are conversations that we're just starting to see happen, but I think it's the natural progression of how the environment is moving, and maybe in another quarter or two, I think we'll see greater alignment there. >> So, and I saw some data that said, I guess it was a study you guys did, 90% of customers say multicloud is working for them. That surprised me, 'cause you hear all this negativity around multicloud; I've been kind of negative about multicloud, to be honest. Like, that's a symptom of M&A, or multi-vendor. But how do you interpret that? When they say multicloud is working? How so? >> Yeah, I think the maturity of customers is varied, as I mentioned, through the stages, right? So, there are customers who are even in the initial phases of their journey, where they have different business units using different clouds, and from a C-suite standpoint that might still look like multicloud, right? 
Though the way we think about it is, you should really be in stage two and stage three to leverage the real power of multicloud. But I think it's that initial hump that you need to go through, being able to get oriented towards it, and having the right set of skillsets, the thought process, the product, the process in place. And once you have that, then you'll start reaping the benefits over a period of time, especially when some other environment events happen and you're able to easily adjust, because you're leveraging this multicloud environment and you have a clear policy of where you'll use which cloud. >> So I interpreted that data as, okay, multicloud is working from the standpoint of, we are multicloud, okay? So, and our business is working, but when I talk to customers, they want more. To your point, they want that consistent experience. And so it's been, to use somebody else's term, by default. Chuck Whitten, I think, came up with that term, versus by design. And now I think they have an objective of, okay, let's make multicloud work even better. Maybe I can say that. And so what does that experience look like? That means a common experience all the way through my stack, my infrastructure stack, and that's going to be interesting to see how that goes down, 'cause you've got three separate clouds that are doing their own APIs. But certainly from a security standpoint, the PaaS layer, even as I go up the stack, how do you see that outcome in, say, the next two to five years? >> Yeah, so, we go back to our customers, the very successful ones who've used the cloud operating model. And for us the cloud operating model includes four layers. So on the infrastructure layer, we have Terraform and Packer; on the security layer, we have Vault and Boundary; on the networking layer, we have Consul; and then on applications, we have Nomad and Waypoint. 
But then you really look at it from a people, process, and product standpoint. For people, it's how do you standardize the workflows that they're able to use, right? So if you have a central platform team in place that is looking at common use cases that multiple business units are using, and then creates a golden workflow, for example, right, for these various business units to be able to use, or creates what we call a system of record for cloud adoption, it helps multiple business units then latch onto this work that this central platform team is doing. And they need to have a product mindset, right? So not like a project that you just start and end with. You have this continuous improvement mindset within that platform team. And they build these processes, they build these golden workflows, they build these policies, and then they offer that as a service to the business units to be able to use. So that increases the adoption of multicloud. And also, more importantly, you can then allow that multicloud usage to be governed in a way that aligns with your overall corporate objectives. And obviously in self-interest, you'd use Terraform or Vault, because you can then use it across multiple clouds. >> Well, let's say I buy into that. Okay, great. So I want that common experience, because when you talk about infrastructure, take us through an example. So when I hear infrastructure, I say, okay, if I'm using an S3 bucket over here, an Azure blob over there, they got different APIs, they got different primitives. I want you to abstract that away. Is that what you do? >> Yeah, so I think we've seen different use cases being used across different clouds too. So I don't think it's sort of as simple as, hey, should I use this or that? It is ensuring that the common tool that you use for provisioning, right, is Terraform.
So the central team is then trained in not only just usage of Terraform open source, but also Terraform Cloud, which is our managed service, and Terraform Enterprise, which is the self-managed, on-prem product. It's them being qualified to be able to build these consistent workflows using whatever tool that they have, or whatever SKU that they have from Terraform, and then applying business logic on top of that, to your point about, hey, we'd like to use AWS for these kinds of workloads, we'd like to use GCP, for example, on data, or use Microsoft Azure for some other type of- >> Collaboration >> Right? But the common tooling, right? Remains around the usage of Terraform, and they've trained their teams, there's a standard workflow, there's a standard process around it. >> Asvin, I was looking at that survey, the HashiCorp State of Cloud Strategy survey, and it talked about skill shortages as being the number one barrier to multicloud. We talk about the cyber skills gap all the time. It's obviously a huge issue. I saw some numbers just the other day that there's 26 million developers but there's less than 3 million cybersecurity professionals. How do HashiCorp and Palo Alto Networks help customers address that skills gap so that they can leverage multicloud as a driver of the business? >> Yeah, another great question. So I think I'd say in two or three different ways. One is to be able to provide greater documentation for our customers to be able to self-serve with the product, so that with the existing people, for example, you build out a known example, right? You're trying to achieve this goal, here is how you use our products together. And so they'll be able to self-service, right? So that's one. Second is obviously both of us have great services partners, so we are always working with these services partners to get their teams trained and scaled up around these skill gaps.
And I think I'd say the third, which is where we see a lot of adoption, is around usage of the managed services that we have. If you take Palo Alto's example, and Palo Alto will speak better to it, but they have SOC services, right? That you can consume. So, they're performing that service for you. Similarly, on our side we have the HashiCorp Cloud Platform, HCP, where you can consume Vault as a service, you can consume Consul as a service. Terraform Cloud is a managed service, so you don't need as many people to be able to run that service. And we abstract all the complexity associated with that ourselves, right? So I'd say these are the three ways that we address it. >> So, Zero Trust, big buzzword. We heard this in this morning's keynotes, AWS is always saying it, we'll talk about it too, but, okay, customers are starting to talk about Zero Trust. You talk to CISOs, they're like, yes, we're adopting this mentality of, unless you're trusted, we don't trust you. So, okay, cool. So you think about the cloud, you've got the shared responsibility model, and then you've got the application developers being asked to do more, secure the code. You've got the CISO who now has to deal with not only the shared responsibility model, but shared responsibility models across clouds, and has got to bring his or her security ethos to the app dev team, and then you've got audit kind of making sure they're like the last line of defense. So my question is, when you think about code security and Zero Trust in that new environment, the problem with a lot of the clouds is they don't make the CISO's life any easier. So I've got to believe that your objective with Palo Alto is to actually make the organization's lives easier. So, how do you deal with all that complexity, specifically in a Zero Trust multicloud environment? >> Yeah, so I'll give you a specific example.
So, on code-to-cloud security, which is one of Palo Alto's sort of key focus areas, there's that Prisma Cloud and Terraform Cloud example that I gave, right? Where you'd be able to use what we call run tasks, essentially webhook integrations, to be able to gate a run or provide some telemetry back to Prisma Cloud for customers to be able to make a decision. On the Zero Trust side, we partner both on the Prisma Cloud side and the Cortex XSOAR side around our products of Vault and Consul. So what Vault does is it allows you to control secrets, it allows you to store secrets. So a Prisma Cloud or a Cortex customer can be using secrets from Vault ephemerally for that particular transaction or workflow itself, right? So it's based on identity, and not on the basis of the secret just sort of lying around. Same thing with Consul, which helps you with discovery and management of services. So with Cortex, a lot of this work can get automated using the products that I talked about, from a Zero Trust standpoint. I think the key thing for Zero Trust, in our view, is it is an end destination, right? So it'll take a certain time, depends on the enterprise, depends on where things are. It's a question of specifically focusing on the value that Palo Alto's and HashiCorp's products bring to solve specific use cases within that Zero Trust bucket, and solving one problem at a time, rather than trying to say that, hey, only Palo Alto, and only HashiCorp or whatever, will solve everything in Zero Trust, right? Because that is not going to be- >> And to your point, it's never going to end, right? I mean, you talk about Cortex bringing a lot of automation. You guys bring a lot of automation, now Palo Alto just bought Cider Security. Now we're getting into supply chain. I mean, it's going to hit at the edge and IoT, and people don't want another IoT stovepipe. >> Lisa: No. >> Right? They want that to be part of the whole picture. So, you're never done.
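The run-task flow described above, where Terraform Cloud calls out to a security tool mid-run over a webhook and the tool reports a pass or fail verdict back, can be sketched roughly in Python. This is an illustrative sketch only, not Prisma Cloud's actual integration: the JSON:API-style result body is an assumption modeled on Terraform Cloud's run-task pattern, and the severity policy is made up.

```python
# Hypothetical verdict builder for a Terraform Cloud run task.
# A real integration would receive the run payload over a webhook and
# PATCH a body like this back to the callback URL included in that payload.

def build_task_result(findings):
    """Turn scanner findings into a pass/fail run-task result.

    `findings` is a list of dicts like {"rule": ..., "severity": ...};
    any "high" severity finding blocks the run.
    """
    high = [f for f in findings if f.get("severity") == "high"]
    status = "failed" if high else "passed"
    message = (f"{len(high)} high-severity finding(s); run blocked"
               if high else "no high-severity findings; run may proceed")
    # JSON:API-style body (field names are an assumption, not a
    # verified Terraform Cloud schema).
    return {"data": {"type": "task-results",
                     "attributes": {"status": status, "message": message}}}
```

In a live integration, the webhook handler would also verify the request's HMAC signature and send this body back before the run-task timeout expires, which is the "gate a run" behavior described above.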
>>Yeah, no, but it is this continuous journey, right? And again, different companies are at different parts of that journey, and then you go and rinse and repeat, you maybe acquire another company, and then they have a different maturity, so you get them on board on this. And so we see this as a multi-generational shift, as Dave likes to call it. And we're happy to be in the middle of it with Palo Alto Networks. >> It's definitely a multi-generational shift. Asvin, it's been great having you back on theCUBE. Thank you for giving us the update on what Hashi and Palo Alto are doing, the value in it for customers, the cloud operating model. And we should mention that HashiCorp yesterday just won a Technology Partner of the Year award. Congratulations. >> Yes. We're very, very thrilled with the recognition from Palo Alto Networks for the Technology Partner of the Year. >> Congrats. >> Thank you. >> Keep up the great partnership. Thank you so much. We appreciate your insights. >> Thank you so much. >> For our guest, and for Dave Vellante, I'm Lisa Martin, live in Las Vegas. You're watching theCUBE, the leader in live enterprise and emerging tech coverage. (upbeat music)
Vishal Lall, HPE | HPE Discover 2022
>>theCUBE presents HPE Discover 2022. Brought to you by HPE.
I think we've accomplished a lot over the last three quarters or so lot more to be done. Though >>the marketplace is really interesting to us because it's a hallmark of cloud. You've got to have a market price. Talk about how that's evolving and what your vision is for market. Yes, >>you're exactly right. I mean, having a broad marketplace provides a full for the platform, right? It's a chicken and egg. You need both. You need a good platform on which a good marketplace can set, but the vice versa as well. And what we're doing two things there, Right? One Is we expanding coverage of the marketplace. So we're adding more SVS into the marketplace. But at the same time, we're adding more capabilities into the marketplace. So, for example, we just demoed earlier today quickly deploy capabilities, right? So we have an I S p in the marketplace, they're tested. They are, uh, the work with the solution. But now you can you can collect to deploy directly on our infrastructure over time, the lad, commerce capabilities, licencing capabilities, etcetera. But again, we are super excited about that capability because I think it's important from a customer perspective. >>I want to ask you about that, because that's again the marketplace will be the ultimate arbiter of value creation, ecosystem and marketplace. Go hand in hand. What's your vision for what a successful ecosystem looks like? What's your expectation now that Green Lake is up and running. I stay up and running, but like we've been following the announcement, it just gets better. It's up to the right. So we're anticipating an ecosystem surge. Yeah. What are you expecting? And what's your vision for? How the ecosystem is going to develop out? Yeah. I >>mean, I've been meeting with a lot of our partners over the last couple of days, and you're right, right? I mean, I think of them in three or four buckets right there. I s V s and the I S P is coming to two forms right there. Bigger solutions, right? 
I think of being Nutanix, right, Home wall, big, bigger solutions. And then they are smaller software packages. I think Mom would think about open source, right? So again, one of them is targeted to developers, the other to the I t. Tops. But that's kind of one bucket, right? I s P s, uh, the second is around the channel partners who take this to market and they're asking us, Hey, this is fantastic. Help us understand how we can help you take this to market. And I think the other bucket system indicators right. I met with a few today and they're all excited about. They're like, Hey, we have some tooling. We have the manage services capabilities. How can we take your cloud? Because they build great practise around extent around. Sorry. Aws around? Uh, sure. So they're like, how can we build a similar practise around Green Lake? So again, those are the big buckets. I would say. Yeah, >>that's a great answer. Great commentary. I want to just follow up on that real quick. You don't mind? So a couple things we're seeing observing I want to get your reaction to is with a i machine learning. And the promise of that vertical specialisation is creating unique opportunities on with these platforms. And the other one is the rise of the managed service provider because expertise are hard to come by. You want kubernetes? Good luck finding talent. So managed services seem to be exploding. How does that fit into the buckets? Or is it all three buckets or you guys enable that? How do you see that coming? And then the vertical piece? >>A really good question. What we're doing is through our software, we're trying to abstract a lot of the complexity of take communities, right? So we are actually off. We have actually automated a whole bunch of communities functionality in our software, and then we provide managed services around it with very little. I would say human labour associated with it is is software manage? But at the same time we are. 
What we are trying to do is make sure that we enable that same functionality to our partners. So a lot of it is software automation, but then they can wrap their services around it, and that way we can scale the business right. So again, our first principle is automated as much as we can to software right abstract complexity and then as needed, uh, at the Manus Services. >>So you get some functionality for HP to have it and then encourage the ecosystem to fill it in or replicated >>or replicated, right? I mean, I don't think it's either or it should be both right. We can provide many services or we should have our our partners provide manage services. That's how we scale the business. We are the end of the day. We are product and product company, right, and it can manifest itself and services. That discussion was consumed, but it's still I p based. So >>let's quantify, you know, some of that momentum. I think the last time you call your over $800 million now in a are are you gotta You're growing at triple digits. Uh, you got a big backlog. Forget the exact number. Uh, give us a I >>mean, the momentum is fantastic Day. Right. So we have about $7 billion in total contract value, Right? Significant. We have 1600 customers now. Unique customers are running Green Lake. We have, um, your triple dip growth year over year. So the last quarter, we had 100% growth year over year. So again, fantastic momentum. I mean, the other couple, like one other metric I would like to talk about is the, um the stickiness factor associated tension in our retention, right? As renewal's is running in, like, high nineties, right? So if you think about it, that's a reflection of the value proposition of, like, >>that's that's kind of on a unit basis, if you will. That's the number >>on the revenue basis on >>revenue basis. Okay? >>And the 1600 customers. He's talking about the size and actually big numbers. Must be large companies that are. They're >>both right. 
So I'll give you some examples, right? So I mean, there are large companies. They come from different industries. Different geography is we're seeing, like, the momentum across every single geo, every single industry. I mean, just to take some examples. BMW, for example. Uh, I mean, they're running the entire electrical electric car fleet data collection on data fabric on Green Lake, right? Texas Children's Health on the on the healthcare side. Right On the public sector side, I was with with Carl Hunt yesterday. He's the CEO of County of Essex, New Jersey. So they are running the entire operations on Green Lake. So just if you look at it, Barclays the financial sector, right? I mean, they're running 100,000 workloads of three legs. So if you just look at the scale large companies, small companies, public sector in India, we have Steel Authority of India, which is the largest steel producer there. So, you know, we're seeing it across multiple industries. Multiple geography is great. Great uptake. >>Yeah. We were talking yesterday on our wrap up kind of dissecting through the news. I want to ask you the question that we were riffing on and see if we can get some clarity on it. If I'm a customer, CI or C so or buyer HP have been working with you or your team for for years. What's the value proposition? Finish this sentence. I work with HPV because blank because green like, brings new value proposition. What is that? Fill in that blank for >>me. So I mean, as we, uh, talked with us speaking with customers, customers are looking at alternatives at all times, right? Sometimes there's other providers on premises, sometimes as public cloud. And, uh, as we look at it, uh, I mean, we have value propositions across both. Right. So from a public cloud perspective, some of the challenges that our customers cr around latency around, uh, post predictability, right? That variability cost is really kind of like a challenge. It's around compliance, right? 
Uh, things of that nature is not open systems, right? I mean, sometimes, you know, they feel locked into a cloud provider, especially when they're using proprietary services. So those are some of the things that we have solved for them as compared to kind of like, you know, the other on premises vendors. I would say the marketplace that we spoke about earlier is huge differentiator. We have this huge marketplace. Now that's developing. Uh, we have high levels of automation that we have built, right, which is, uh, you know, which tells you about the TCO that we can drive for the customers. What? The other thing that is really cool that be introduced in the public in the private cloud is fungible itty across infrastructure. Right? So basically on the same infrastructure you can run. Um, virtual machines, containers, bare metals, any application he wants, you can decommission and commission the infrastructure on the fly. So what it does, is it no matter where it is? Uh, on premises, right? Yeah, earlier. I mean, if you think about it, the infrastructure was dedicated for a certain application. Now we're basically we have basically made it compose herbal, right? And that way, what? Really? Uh, that doesnt increases utilisation so you can get increased utilisation. High automation. What drives lower tco. So you've got a >>horizontal basically platform now that handle a variety of work and >>and these were close. Can sit anywhere to your point, right? I mean, we could have a four node workload out in a manufacturing setting multiple racks in a data centre, and it's all run by the same cloud prints, same software train. So it's really extensive. >>And you can call on the resources that you need for that particular workload. >>Exactly what you need them exactly. Right. >>Excellent. Give you the last word kind of takeaways from Discover. And where when we talk, when we sit down and talk next year, it's about where do you want to be? 
>>I mean, you know, I think, as you probably saw from discovered, this is, like, very different. Antonio did a live demo of our product, right? Uh, visual school, right? I mean, we haven't done that in a while, so I mean, you started. It >>didn't die like Bill Gates and demos. No, >>no, no, no. I think, uh, so I think you'll see more of that from us. I mean, I'm focused on three things, right? I'm focused on the cloud experience we spoke about. So what we are doing now is making sure that we increase the time for that, uh, make it very, you know, um, attractive to different industries to certifications like HIPAA, etcetera. So that's kind of one focus. So I just drive harder at that adoption of that of the private out, right across different industries and different customer segments. The second is more on the data and analytics I spoke about. You will have more and more analytic capabilities that you'll see, um, building upon data fabric as a service. And this is a marketplace. So that's like it's very specific is the three focus areas were driving hard. All right, we'll be watching >>number two. Instrumentation is really keen >>in the marketplace to I mean, you mentioned Mongo. Some other data platforms that we're going to see here. That's going to be, I think. Critical for Monetisation on the on on Green Lake. Absolutely. Uh, Michelle, thanks so much for coming back in the Cube. >>Thank you. Thanks for coming. All >>right, keep it right. There will be John, and I'll be back up to wrap up the day with a couple of heavies from I d. C. You're watching the cube. Mhm. Mm mm. Mhm.
Pure Storage At Your Storage Service Full Show V1
>>When AWS introduced the modern cloud in 2006, many people didn't realize the impact that it would have on the industry, but some did see the future of an as-a-service economy coming. I mean, SaaS offerings came out several years before, and the idea of applying some of these concepts to infrastructure and simplifying deployment and management, you know, kinda looked enticing to a lot of customers, and a subscription model, or better yet a consumption model, was seen as a valuable proposition by many customers. Why not apply it to infrastructure? And why should the hyperscalers have all the fun? Welcome to At Your Storage Service. My name is Dave Vellante, and as an analyst at the time, I was excited about the as-a-service trend early on. And one of the companies that caught my attention back in the beginning of last decade was Pure Storage. >>
And of course the shift from CapEx to OPEX and as a service consumption models. The last item is what we're here to talk about today. Pay for consumption is attractive because you're not over provisioning. At least not the way you used to you'd have to buy for peak capacity events, but there are always two sides to every story and well pay for use more closely ties. It consumption to business value procurement teams. Don't always love the uncertainty of the cloud bill each month, but consumption pricing. And as a service models are here to stay in software and hardware. Hello, I'm Dave ante and welcome to at your storage service made possible by pure storage. And with me is Pash DJI. Who's the general manager of the digital experience business unit at pure Pash. Welcome to the program. >>Thanks Dave. Thanks for having me. >>You bet. Okay. We've seen this shift to, as a service, the, as a service economy, subscription models, and this as a service movement have gained real momentum. It's it's clear over the past several years, what's driving this shift. Is it pressure from investors and technology companies that are chasing the all important ARR, their annual recurring revenue stream? Is it customer driven? Give us your insights. >>Well, look, um, I think we'll do some definitional stuff first. I think we often mix the definition of a subscription and a service, but, you know, subscription is, Hey, I can go for pay up front or pay as I go. Service is more about how do I not buy something just by the outcome. So, you know, the concept of delivering storage as a service means, what do you want in storage performance, capacity availability? Like that's what you want. Well, how do you get that without having to worry about the labor of planning capacity management, those labor elements are what's driving it. 
>>So I think in a world where you have to do more with less, and in a world where security becomes increasingly important, where standardization lets you secure your landscape against ransomware and those types of things, those trends are driving the servitization of storage, and the only way to deliver that is storage as a service. >>That's good. You may be thinking about it differently than some of the other companies I talk to. But you've made inroads here, pretty big inroads actually, and changed the thinking in enterprise data storage with a huge emphasis on simplicity. That's really Pure's raison d'être. How does storage as a service fit into your innovation agenda overall? >>Well, our innovation agenda started, as you mentioned, with simplicity, a decade ago with the Evergreen architecture. That architecture was beyond the box: how do you go ahead and improve performance or capacity as you need it? That's a foundational element of delivering a service, because once you have that technology, you can say: you've subscribed to this performance level, you want to raise it, and yes, that'll be a higher dollar per gig or dollar per terabyte, but how do you do that without a data migration? How do you do that as a non-disruptive service change, delivered via a software update? Those elements of non-disruptive updates matter. When you think SaaS, think Salesforce: you don't know when Salesforce updates, you don't know when they're increasing something; a new capability just shows up. It's not a disruptive event. So to drive that standardization and service delivery, you need to keep that simplicity of delivery first and foremost. If the goal was to change from this service tier to that service tier, and a person needed to show up and do a day of data migration, that's kind of useless.
You've broken the experience of flexibility for the customer. >>Okay, so I like the Salesforce analogy, but I want to jump out and do a little sidebar for a second. I've got to make some commitment to Pure, right? Some baseline commitment. And if I do, then I can dial up and pay for what I use, and I can dial it down. Correct? >>Correct. >>Okay, I can't do that with Salesforce. <laughs> I could dial up, but then I'm stuck with those licenses. So you have a better model than Salesforce, I would argue. >>I would agree with that. >>Okay. And with Salesforce I've got to pay for everything up front anyway. Let's go back. I was pushing at you a little bit in my upfront about the ARR model, the all-important financial metric, but let's talk from the customer's standpoint. What are the benefits of consuming storage as a service from your customer's perspective? >>Well, one is: when you start your storage journey, do you really know what you need? I would argue most of the time people are guessing. It's like, well, I think I need this; this is the performance I think I need, or this is the capacity I think I need. And, as with the scientific method, you actually deploy something and then ask: do I need more? Do I need less? You find out as you're deploying. So in a storage-as-a-service world, when you have the ability to move up performance levels or move up capacity levels, and you have that flexibility, you have the ability to meet demand as you deploy. And that's the most important element of meeting business needs today. The applications you deploy are not in your control when you're providing storage to your end consumers. >>Yeah, they're going to want different levels of storage, different performance thresholds. >>It's kind of a pay-for-performance culture, right? You can use HR analogies for it: you pay for performance.
You want top talent, you pay for it. You want top storage performance, you pay for it. And you can pay less and actually get lower performance tiers; not everything is a tier-one application, and you need the ability to deploy accordingly. But when you start, how do you know how your end customers are going to be consuming? Or does it need to be dictated up front? Because that's infrastructure dictating business inflexibility, and you never want to be in that position. >>I've got another analogy for you. We do a lot of hosting at our home, you know, like Thanksgiving, right? And you go to the liquor store and say, okay, what should I get? Should we get red wine? White wine? We've got to get some beer. Should I get bubbles? Yeah, get some bubbles, because you don't know what people are going to have. And so you over-provision everything <laughs>, and then there's a run on bubbles and you're like, ah, we ran out of bubbles. So you just over-buy. But if there's a liquor store that will actually take it back, I'll do business with those guys every time, because it's way more flexible: I can dial up capacity or dial up performance and dial it back down if I don't use it. >>Or you're going to be drinking a lot more over the next few weeks. >>Yeah, exactly. Which is the last thing you want. Okay, so let's talk about how Pure meets this as-a-service demand. You've touched on your differentiators from others in the market. I'd love to hear about the momentum. What are you seeing out there? >>Yeah, look, our business is growing well, largely built on what customers need. Specifically, where the market is today, there's a set of folks interested in the financial transformation of CapEx to OpEx; that definitely exists in the industry around how do I get a pay-per-use model?
The next kind of more advanced customer is interested in: how do I remove the labor of delivering storage? A service gets you there, on top of a subscription. The most sophisticated customer says: how do I separate storage production from storage consumption? Being a storage producer should be about standardization, so I can do policy-based management. Why is that important? Coming back to what I said earlier: in a world where ransomware attacks are common, you need standardized security policies. >>Linux has new vulnerabilities every other day, maybe two or three critical vulnerabilities a week. How do you stay on top of it? The answer to that complexity should be: let's standardize and make it a vendor problem, and assume the vendor's going to deliver this to me. That standardization allows you to have business policies that let you stay current and modern. I would argue that in the traditional storage appliance world, you buy something and the day after you buy it, it's worth less. It's like driving a car off the lot: the very next day, the car's not worth what it was when you bought it. Storage is the same way. So how do you ensure that your storage stays current? How do you ensure it's like a fine wine that gets better with age? Well, if you're not buying storage but buying a performance SLA, it's up to the vendor to meet that SLA, so it actually never gets worse over time. This is the way you modernize technology and avoid technology debt as a customer. >>Yeah. Even the words you're using and the way you're thinking about this, Prakash, are different. And I love the concept of essentially taking my labor costs and transferring them to Pure's R&D; that's essentially what you're talking about here. So let's stick with the tech for a minute.
What do you see as new or emerging technologies that are helping accelerate this shift toward the as-a-service economy? >>Well, the first thing I always tell people is that you can't deliver a service without monitoring, because if you can't monitor something, how are you going to know whether you're meeting your service-level obligation? So everything starts with data monitoring. The next step, layering on the technology differentiation: if you need to deliver a service-level obligation on top of that data monitoring, you need the ability to flexibly meet whatever performance obligations you have, in a tight time window. So supply chain, being able to deliver anywhere, becomes important. If you use the analogy of how a Tesla or an IoT system works today, you have SaaS management that provides instructions and pushes those instructions and policies to the edge. In Tesla's case, the edge happens to be the car: it'll push software updates to the car, it'll push new map updates to the car, but the car is running independently. >>It's not as if, when the car becomes disconnected from the internet, it's going to crash and drive you off the road. In the same way, think about storage as something that needs to be wherever your application is. People think about cloud as a destination; I think that's a fallacy. You have to think about the world from the view of an application: an application needs data, and that data needs to sit in storage wherever that application sits. So for us, the storage system is just an edge device. It can be sitting in your data center, it can be sitting in an Equinix facility, it can be hosted, an MSP can run it, it can even be sitting in the public cloud. But how do you have central monitoring and central management, where you can push policies to update all those devices? >>Very similar to an IIoT system.
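The pattern described here, a central SaaS control plane pushing policy to edge devices that keep running on their own when disconnected, can be sketched in a few lines. Everything below is a hypothetical illustration of the idea, not Pure's actual management API:

```python
class ControlPlane:
    """Central SaaS management: holds the desired policy, pushes it to a fleet."""
    def __init__(self):
        self.policy = {"version": 1, "snapshots_per_day": 4}
        self.fleet = []

    def register(self, device):
        self.fleet.append(device)

    def push_update(self, **changes):
        # Bump the version and push best-effort; offline devices are unaffected.
        self.policy = {**self.policy, **changes,
                       "version": self.policy["version"] + 1}
        for device in self.fleet:
            device.receive(self.policy)

class EdgeArray:
    """A storage endpoint: data center, colo, MSP-hosted, or public cloud."""
    def __init__(self, connected=True):
        self.connected = connected
        self.applied = None  # last policy this array actually applied

    def receive(self, policy):
        if self.connected:               # a disconnected array keeps serving I/O
            self.applied = dict(policy)  # on its last-applied policy

plane = ControlPlane()
onprem, colo = EdgeArray(), EdgeArray(connected=False)
plane.register(onprem)
plane.register(colo)
plane.push_update(snapshots_per_day=24)  # e.g. tighten snapshot policy fleet-wide

print(onprem.applied["version"])  # 2: the connected array got the update
print(colo.applied)               # None: offline, but still running independently
```

The key property is the one in the Tesla analogy: the control plane owns desired state, but an edge array never depends on connectivity to keep operating.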
So the technology advantage of doing that means you can operate anywhere and ensure you have a consistent set of policies, a consistent set of protections, a consistent set of preventions against ransomware attack, regardless of your application, regardless of where it sits, regardless of what continent you're on. That approach is very similar to the way the IoT industry has been updating and monitoring edge devices: Nest thermostats, Tesla cars, those types of things. That's the thinking that needs to come to storage, and that's the foundation on which we built Pure as-a-Service. >>So that implies, or at least I infer, that you've obviously got control of the experience on-prem, but you're extending that into AWS, Google, Azure, which suggests to me that you have to hide the underlying complexity of the primitives and APIs in that world. And then eventually, actually today, because you're treating everything like the edge, out to the edge itself, maybe a mini Pure at some point in time. I call that supercloud: the abstraction layer that floats above all the clouds and on-prem, adds a layer of value, and is this singular experience you're talking about, pushing policy throughout. Is that the right way to think about it, and how does it impact the ability to deliver true storage as a service? >>Oh, that's absolutely the right way of thinking about it. The things you think about from an abstraction standpoint fall into three buckets. First, you need management: how do you ensure a consistent management experience, creating volumes, deleting volumes, creating buckets, creating files, creating directories, management of objects, with a consistent API across the entire landscape? The second one is monitoring: how do you measure utilization and performance obligations, capacity obligations, or policy violations, wherever you're at?
And then the third one is more of a business one: procurement, because you can't do this independent of procurement. Meaning: what happens when you run out? Do you need to increase your reserve commits? Do you want to go on demand? How do you integrate into a company's procurement models, so that they can use what they need without every change order being a request to procurement? That would break an as-a-service delivery model. So to get embedded in a customer's landscape where they don't have to worry about storage, you have to provide that consistency of management, monitoring, and procurement across the stack. And yes, these are deep technology problems, whether it's running our storage on AWS or Azure, running it on-prem, or, at some point in the future, maybe even a Pure mini at the edge. <laughs> All of those things are tied to our Pure as-a-Service delivery. >>Yeah, technically non-trivial, but hey, you guys are on it. Well, we've got to leave it there. Prakash, thank you. Great stuff. Really appreciate your time. >>All right. Thanks for having me, man. >>You're very welcome. Okay, in a moment, Steve McDowell from Moor Insights & Strategy is going to give us the analyst perspective on as-a-service. You're watching theCUBE, the leader in high-tech enterprise coverage. >>Why are customers making the change to Pure as-a-Service? Other vendors offering flexible consumption models will promise you the world. On the surface, it's just what you need, but then you notice the asterisk, that dreaded fine print that turns "just what you need" into long-term commitments, disruptive upgrades, and unpredictable costs. Pure Storage launched Pure as-a-Service to provide the flexibility to respond to your ever-changing needs, with clear per-unit costs, no large upfront purchases, and no asterisks. A usage-based model should be simple, innovative, and adapt with the changing market.
Unlike other vendors, Pure is offering exactly that, with options for service tiers and short-term contracts in a single unified subscription that allows you to improve your discounts over time. Pure makes sure you can grow and upgrade without ever taking your environment offline, and without the constant worry of hidden costs. With complete billing transparency, unlike any other, you only pay for what you use, and Pure1 helps track and predict demand from day to day, making sure you never outgrow your storage. So why are customers making the change to Pure as-a-Service? Convenient solutions with unlimited potential, without the dreaded fine print. It's as simple as that. >>We're back with Steve McDowell, principal analyst for data and storage at Moor Insights & Strategy. Hey Steve, great to have you on. Tell us a little bit about yourself; you've got a really interesting background, kind of a blend of engineering and strategy. What's your research focus? >>Yeah, my focus area is data and storage and all the things around that, whether it's on-prem or cloud or software as a service. My background, as you said, is a blend. I grew up as an engineer; I started off as an OS developer at IBM, came up through the ranks, and shifted over into corporate strategy, product marketing, and product management. I've been working as an industry analyst at Moor Insights & Strategy for about five years now. >>Steve, how do you see this playing out in the next three to five years? Cloud got it all started, and it's snowballing; however you look at it, what percent of spending on storage do you think is going to land in as-a-service? How do you see the evolution here? >>I think buyers are looking at as-a-service, consumption-based, as a natural model.
It extends the data center and brings all of the flexibility, all of the goodness that I get from public cloud, but without the downside and uncertainty around cost and security and things like that which also come with a public cloud, and it's delivered by technology providers that I trust, that I know, and that I've worked with, in some cases for decades. So I don't know that we have hard data on how much adoption there is of the model, but we do know that it's trending up, and every infrastructure provider at this point has some flavor of offering in the space. So it's clearly popular with CIOs and IT practitioners alike. >>So Steve, organizations are at different levels of maturity in their transformation journeys, and as a result they're going to have different storage needs aligned with their bottom-line business objectives. From an IT buyer perspective, and you may have data on this, even if it's anecdotal, where does storage as a service actually fit in, and can it be a growth lever? >>It can absolutely be a growth lever. It gives me the flexibility as an IT architect to scale my business over time without worrying about how much money I have to invest in storage hardware. So I get, again, that cloud-like flexibility in terms of procurement and deployment, but it gives me control by oftentimes being on-site, within my perimeter, and I manage it like a storage array that I own. So it's beautiful for organizations that are scaling, and it's equally nice for organizations that just want to manage and control cost over time. It's a model that makes a lot of sense, and it's certainly growing in adoption and popularity. >>How about from a technology vendor perspective? You've worked in the tech industry for several companies.
What do you think is going to define the winners and losers in this space? If you were running strategy for a storage company, what would you say? >>I think the days of a storage administrator managing RAID levels and recoveries and things of that sort are over. What organizations like Pure are delivering with their offerings is simplicity. It's a push-button approach to deploying storage to the applications and workloads that need it; it becomes storage as a utility. So it's not just the consumption-based economic model of as-a-service; it's also the manageability that comes with it, the flexibility of management. I can push a button and deploy bytes to a workload that needs them. It becomes very simple for the storage administrator, in a way that old-school on-prem storage can't really deliver. >>You know, I want to ask you about this, because a lot of companies are hopping on the as-a-service bandwagon. I feel like, in and of itself, that's not where the innovation lives; the innovation is going to come from making that singular experience from on-prem to the clouds, across clouds, maybe eventually out to the edge. Where do you see the innovation in as-a-service? >>Well, there are two levels of innovation. One is business-model innovation: I now have the organizational flexibility to build the infrastructure to support my digital transformation efforts. But on the product and offering side, it really is, as you said, about the integration of experience. Every enterprise today touches a cloud in some way, shape, or form.
I have data spread not just in my data center, but at the edge, oftentimes in a public cloud, maybe a private cloud. I don't know where my data is, and it really lands on the storage providers to help me manage that and deliver that manageability experience to the IT administrators. So when I look at innovation in this space, it's not just a storage array in a rack that I'm leasing; this is not another lease model. It's really fully integrated, end-to-end management of my data and all of the things around that. >>Yeah, so to your point about a lease model: if you're doing a lease, sure, you can shift CapEx to OpEx, but you're still committed; you have to over-provision. Whereas here, and I wanted to ask you about this, it's an interesting model, right? You've got to read the fine print, of course, and the fine print typically says you've got to commit to some level; then if you go over, you're charged for what you use, and you can scale that back down. That's got to be very attractive for folks. I wonder if we'll ever see true cloud-like consumption pricing. There are two edges to it, right? You see consumption-based pricing in some of the software models, and the lines of business maybe like it because they pay by the drink, but then procurement hates it because they don't have predictability. How do you see the pricing models? Do you see that maturing, or do you think we're sort of locked in where we're at? >>No, I do see that maturing. When you work with a company like Pure to understand their consumption-based and as-a-service offerings, it really is about sitting down and understanding where your data needs are going to scale. You buy in at a certain level, you have capacity planning, you can expand if you need to, you can shrink if you need to.
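The buy-in-then-expand-or-shrink structure Steve describes usually amounts to a reserved commit billed at a lower rate, with overage billed on demand. A hypothetical sketch (the rates are made up, not any vendor's price list):

```python
def monthly_bill(used_tib, commit_tib, commit_rate=15.0, on_demand_rate=22.0):
    """Commit is billed whether used or not; usage above it at the on-demand rate."""
    overage = max(0.0, used_tib - commit_tib)
    return commit_tib * commit_rate + overage * on_demand_rate

print(monthly_bill(80, 100))   # 1500.0 -> under commit, you still pay the floor
print(monthly_bill(100, 100))  # 1500.0 -> exactly at commit
print(monthly_bill(140, 100))  # 2380.0 -> 40 TiB overage at the higher rate
```

This is the two-edged sword Dave raises: the commit term gives procurement its predictability, while the overage term is what the lines of business consume by the drink.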
So it really does put more control in the hands of the IT buyer than traditional CapEx-based on-prem, certainly, but also more control than you would get working with an Amazon or an Azure. >>Okay, thanks Steve. We'll leave it there for now. I'd love to have you back. Keep it right there; At Your Storage Service continues in a moment. >>Some things are meant to last. Your storage should be one of them. Say hello to the Evergreen storage program. Say goodbye to refreshes and rebates. Forget planned downtime, performance impact, and data migrations. Forget forklift upgrades. Evergreen storage starts with your agile storage architecture and covers the entire life cycle of the array, from first purchase to ongoing use. And whenever it's time to modernize and grow, your satisfaction is covered: with an Evergreen subscription, you can get a full refund within 30 days for any reason. >>Our right-size guarantee lets you buy just the storage you need, never too much, never not enough. Your array software is all-inclusive, even future releases and features. Maintenance and support costs remain constant throughout the life of your array. Proactive expert support is a true white-glove experience. Evergreen maintenance ensures availability of any replacement components. Meet the demands of your business and protect your investment. Evergreen Gold includes controller upgrades every three years, and if something unplanned comes up, Evergreen Gold provides Upgrade Flex, the leading anytime-upgrade feature, to upgrade controllers whenever you need it. As you expand, Evergreen Gold provides credits to consolidate storage with denser, more modern flash. Evergreen is your subscription to continuous innovation, for storage that lasts 10 years or more. Some things are meant to last; make your storage one of them. >>We're back at At Your Storage Service. Emil Stan is here; he's the chief commercial officer and chief marketing officer of Open Line.
Thank you, Emil, for coming on theCUBE. Appreciate your time. >>Thank you, David. Glad to be here. >>Yes. So tell us about Open Line. You're a managed service provider; what's your focus? >>Yeah, we're actually a cloud managed service provider, and I do put "cloud" in front of the managed services, because it's not just the infrastructure that we manage; we have to manage the clouds as well nowadays. And unfortunately, everybody thinks there's only one cloud, but there are always multiple layers in the cloud, so we have a lot of work in integrating it all. We're a cloud managed service provider in the Netherlands, focusing on companies with their head office in the Netherlands, mainly in healthcare, local government, social housing, and logistics, and on mid-size companies between, say, 250 and 10,000 office employees. And that's what we do: we provide them with excellent cloud managed services, as it should be. >>Interesting. You know, early on in the cloud days, highly regulated industries like healthcare and government were somewhat afraid of the cloud, so I'm sure one of the ways you provide value to your customers is helping them become cloud proficient. Maybe you could talk a little bit more about the value prop. Why do customers do business with you? >>I think there are a number of reasons why they do business with us, or choose us as their managed service provider. First, of course, they're looking for stability and continuity, and, from a cost perspective, predictable costs. But nowadays there's also a shortage of personnel and knowledge, and it's not always easy for them to access those skill sets, because most IT people want a great variety in the work they're doing. As for local government, healthcare, and social housing:
they're actually sectors that are really in between, embracing the public cloud but also carrying a lot of legacy, and bringing together the best of both worlds is what we do. So we also bring them comfort: we understand what legacy needs from a management perspective, and we also know how to leverage the benefits of the public cloud. From a marketing perspective, we focus on using an ideal cloud, a mix of traditional and future-based cloud. >>Thank you. I'd like to get your perspective on this idea of the as-a-service economy that we often talk about on theCUBE. You work with a lot of different companies; we talked about some of the industries. Increasingly, it seems organizations are focused more on outcomes, on continuous value delivery via suites of services, and they're leaning into platforms versus one-off product offerings. Do you see that? How do you see your customers reacting to this as-a-service trend? >>Yeah, to be honest, sometimes it makes things more complex, because with services, look at your Android phone or iPhone: you can buy and download apps any way you want to. So people have a lot of apps, but how do you integrate them into one excellent workflow, something that works for you, David, or works for me? The difficulty sometimes lies in the easy access you have to those solutions, while nobody takes into account that they're all part of a chain, a workflow, a supply chain, and they're being hyped as well. So where we also spend a lot of time with our customers is on the tremendous feature push from technology providers and SaaS providers: if they provide 10 features, you only need one or two, but the other eight are very distracting from your prime core business.
So there's a natural way in which people are embracing SaaS solutions and cloud solutions, but what's not taken into account as much as we'd love to see is the way you integrate all those solutions into something that's workable for the person actually using them. It's seldom that somebody uses only one solution; there's always a chain of solutions. So yeah, there are a lot of opportunities, but also a lot of challenges, for us and for our customers. >>Do you see that trend toward as-a-service continuing? Or do you actually see, based on what you're just saying, the pendulum swinging back and forth: somebody comes out with a new feature or product and that changes the dynamic? Or does as-a-service really have legs? >>Ah, that's a very good question, David, because that's something that keeps us busy all the time. We do see a trend in as-a-service; we'll talk about Pure later on, and we also use Pure as-a-Service, more or less, and that really helps us. But you also see that sometimes people make a step too fast, too quick, not well thought out, and then you see what they call cloud repatriation: people go back to what they were doing, and then they stop innovating, stop leveraging the possibilities that are actually there. So from our consultancy, guidance, and architecture point of view, we try to help them as much as possible to think with a SaaS mindset, and not to use the cloud as just another data center. It's all about managing the maturity, on our side but on our customers' side as well.
>>So I'm interested in how your philosophy relates, I think, to how you work with Pure. How do you stay tightly in lockstep with your customers, so that you don't over-rotate and don't send them into over-rotating, but you're also not too late to the game? How do you manage all that? >>Oh, there's a world of interactions between us and our customers. A well-known concept is customer intimacy: it's very important for us to get to know our customers and be able to predict which way they're moving. But the thing we add to it is ecosystem intimacy: knowing the application and services landscape of our customers, knowing their primary providers, and working with those providers to create something that really fits the customer. We don't just look at it from our own silo as a cloud managed service provider; we actually work in the ecosystem with the primary providers. And with the average customer, I think in a month we have a great many interactions at the operational, technical, and strategic levels. >>We also bring our customers together, to jointly think about what we can do together that none of us could ever reach independently. And we involve our customers in defining our own strategy: we have something we call a customer involvement board, where we present a strategy and ask, does it make sense? Is this actually what you need? So we put a lot of effort into our customers, and we also make sure we understand their significant moments of truth. We are now in this broadcast, David, so you can imagine that at this moment, nothing can go wrong; if the internet stops, we have a problem. We actually know that this broadcast is going on for our customers, and we manage that.
It's always on, whereas at other moments in the week we might pay a little less attention; at this moment, we should be there. These moments of truth are something we really embrace, and we have them well described: everybody working at Open Line knows what the moments of truth are for our customers. So a big logistics provider, for instance, does not have to ask us for higher availability on Black Friday or Cyber Monday; we know that's the most important part of the year for them. Does that answer your question, David? >>Yes. We know as well: in the big game moments, you have to be on top of your game. You know, the other thing, Emil, about this as-a-service approach that I really like is that a lot of it is consumption based, and the data doesn't lie; you can see adoption daily, weekly, monthly. So I wonder how you're leveraging Pure as-a-Service specifically, and what kind of patterns you're seeing in the adoption. >>Yeah, Pure as-a-Service is mostly invisible to our customers. We provide storage services, storage solutions; storage is always part of a bigger thing, a server, an application. The real benefit toward our customers, of course, is that it's all-flash, so they have the fastest storage available. But for ourselves, we use far fewer resources to manage our storage; we have a near-maintenance-free storage solution now, because we have it as a service and we work closely together with Pure. Actually, the way we treat our customers is the way Pure treats us as well, and that's why there's a good click. So the real benefit, how we leverage it: normally we had a bunch of guys managing our storage.
Now we only have one, and given the shortage of IT personnel, the other people can be involved in other parts of our services or other parts of innovation. So that's simply great. >> You know, my takeaway, Emil, is that you've made infrastructure, at least the storage infrastructure, invisible to your customers, which is the way it should be; they don't have to worry about it. And you've also attacked the labor problem. You're not provisioning LUNs anymore, or tuning the storage with arms and legs. That's huge. And that gets me into the next topic, which is business transformation. It means I can now start to attack the operational model. I've got a different IT model; I'm not managing infrastructure the same way, so I have to shift those resources, and I'm presuming it now becomes a business transformation discussion. How are you seeing your customers shift those resources and focus more on their business as a result of this as-a-service trend? >> I don't know if they transform their business thanks to us; I think they can better leverage their own business. They have fewer problems, less maintenance, et cetera, et cetera. But we also add new certainties. The latest service we released was immutable storage, being the first in the Netherlands to offer this, thanks to the Pure technology. For customers it gives them a good night's rest, because we have some geopolitical issues in the world, there's a lot of hacking, people suffer a lot of ransomware attacks, and we just give them a good night's rest. So from a business transformation standpoint, does it transform their business? I think it gives them comfort in running their business, knowing that certain things are well arranged. You don't have to worry about that; we will do that.
We'll take it out of your hands and you just go ahead and run your business. So to me, it's not really a transformation; it's just using the right opportunities at the right moment. >> The immutable piece is interesting, because speaking of as-a-service, anybody can go on the dark web and buy ransomware as a service. We're seeing the as-a-service economy hit everywhere, the good and the not so good. So I presume that your customers are looking at immutability as another capability of the service offering and, maybe because of the recent ransomware attacks, really rethinking how they approach business continuance, business resilience, and disaster recovery. Do you see that? >> Yep, definitely, definitely. Though not all of them have immutable storage yet. It's like an insurance: when you have immutable storage and you suffer a ransomware attack, at least you have part of your data. If your hardware is broken, you can order new hardware; if your data is corrupted, you cannot order new data. Now we have that safe and sound, so we offer them the possibility to do the forensics and free up the data without a tremendous loss of time. But you also see that it raises the new baseline for other providers as well. The corporate information security officer, the CISO, they're all very happy with that, and they raise the baseline for us as well, so they can look at other security topics, say from a security operations center. Because now we can really focus on our prime business risks; from a technical perspective, we've got it covered. How can we manage the business risk, which is a combination of people, processes, and technology? >> Right. Makes sense. Okay. I'll give you the last word.
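The immutability model described above, where backups can still be read for forensics but cannot be overwritten or deleted until a retention window expires, can be sketched in a few lines. This is a toy WORM (write-once-read-many) model for illustration only; the class and its methods are hypothetical, not Pure's or any vendor's actual implementation.

```python
import time

class WormStore:
    """Toy write-once-read-many (WORM) store: once written, an object
    cannot be overwritten or deleted until its retention period expires.
    Reads are always allowed. Illustrative only."""

    def __init__(self, retention_seconds):
        self.retention_seconds = retention_seconds
        self._objects = {}  # name -> (data, locked_until)

    def put(self, name, data, now=None):
        now = time.time() if now is None else now
        if name in self._objects and now < self._objects[name][1]:
            raise PermissionError(f"{name} is immutable until retention expires")
        self._objects[name] = (data, now + self.retention_seconds)

    def get(self, name):
        return self._objects[name][0]  # reads (e.g. forensics) always work

    def delete(self, name, now=None):
        now = time.time() if now is None else now
        if now < self._objects[name][1]:
            raise PermissionError(f"{name} is immutable until retention expires")
        del self._objects[name]

store = WormStore(retention_seconds=30 * 24 * 3600)  # 30-day retention
store.put("backup-2022-06-01", b"clean snapshot", now=0)

# Ransomware trying to overwrite the backup an hour later is blocked:
try:
    store.put("backup-2022-06-01", b"encrypted garbage", now=3600)
except PermissionError as exc:
    print("blocked:", exc)

# The clean copy is still readable for forensics and restore:
print(store.get("backup-2022-06-01"))
```

The point of the model is the asymmetry Emil describes: if hardware breaks you can order new hardware, but if data is corrupted you cannot order new data, so the clean copy has to stay unalterable for the retention period.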
Talk about your relationship with Pure, and where you want to see that going in the future. >> I hope we'll be working together for a long time. I experience them as very involved. It's not "we've done the sale and now it's all up to you"; we work closely together. I know if I talk to my prime architect, Marcel Height, he is very happy, and it looks, more or less, as if working with Pure is like working with colleagues, not with a supplier and a customer. And the whole Pure concept is fascinating. I had the opportunity to visit the San Francisco head office, and they told me the vision behind how they launched Pure: if you wanted to implement it, the manual had to fit on one credit card. Just the simple thought of putting that up as your big, hairy, audacious goal, to make the simplest implementable storage available. For us, it sets the expectation that there will be a lot more surprises from Pure in the near future. And what we as a provider really look forward to is that these new developments will not be new migrations; they will be a gradual growth of our services, our storage services. That's what I expect and what we look forward to. >> Yeah, that's great. Thank you so much, Emil, for coming on theCUBE and sharing your thoughts, and best of luck to you in the future. >> Thank you. Thanks for having me. >> You're very welcome. Okay. In a moment, I'll be back to give you some closing thoughts on At Your Storage Service. You're watching theCUBE, the leader in high tech enterprise coverage.
>> Welcome to Evergreen, a place where organizations grow and thrive, rooted in the modern data experience. In Evergreen, people find a seamless, simple way to leverage data through market-leading sustainable technology, financial flexibility, and effortless management, allowing everyone to innovate with data confidently. Welcome to Pure Storage.
Now, if you're interested in hearing more about Pure's growing portfolio of technology and services and how they're transforming the enterprise data experience, be sure to register for Pure Accelerate techfest22. The digital event is also taking place as an in-person event on June 8th. You can register at purestorage.com/accelerate. You're watching theCUBE, the leader in enterprise and emerging tech coverage.
Brian Schwarz, Google Cloud | VeeamON 2022
(soft intro music) >> Welcome back to theCUBE's coverage of VeeamON 2022. Dave Vellante with David Nicholson. Brian Schwarz is here. We're going to stay on cloud. He's the director of product management at Google Cloud. The world's biggest cloud, I contend. Brian, thanks for coming on theCUBE. >> Thanks for having me. Super excited to be here. >> Long time infrastructure as a service background, worked at Pure, worked at Cisco, Silicon Valley guy, techie. So we're going to get into it here. >> I love it. >> I was saying before, off camera. We used to go to Google Cloud Next every year. It was an awesome show. Guys built a big set for us. You joined, right as the pandemic hit. So we've been out of touch a little bit. It's hard to... You know, you got one eye on the virtual event, but give us the update on Google Cloud. What's happening generally and specifically within storage? >> Yeah. So obviously the Cloud got a big boost during the pandemic because a lot of work went online. You know, more things kind of being digitally transformed as people keep trying to innovate. So obviously the growth of Google Cloud, has got a big tailwind to it. So business has been really good, lots of R&D investment. We obviously have an incredible set of technology already but still huge investments in new technologies that we've been bringing out over the past couple of years. It's great to get back out to events to talk to people about 'em. Been a little hard the last couple of years to give people some of the insights. When I think about storage, huge investments, one of the things that some people know but I think it's probably underappreciated is we use the same infrastructure for Google Cloud that is used for Google consumer products. So Search and Photos and all the public kind of things that most people are familiar with, Maps, et cetera. Same infrastructure at the same time is also used for Google Cloud. So we just have this tremendous capability of infrastructure. 
Google's got nine products that have a billion users, most of which people know. So we're pretty good at storage, pretty good at compute, pretty good at networking. Obviously a lot of that shines through on Google Cloud for enterprises bringing their applications: lift and shift and/or modernize, build new stuff in the Cloud with containers and things like that. >> Yeah, hence my contention that Google has the biggest cloud in the world, like I said before. It doesn't have the most IaaS revenue, 'cause that's a different business. You can't comment, but I've got Google Cloud running at a $12 billion a year run rate. So a lot of times people go, "Oh yeah, Google, they're in third place, going for the bronze." But that is a huge business; there aren't a lot of $10-12 billion infrastructure companies. >> In a rapidly growing market. >> And if you do some back-of-napkin math, give me 10, 15, let's call it 15% of that, to storage: you've got a big storage business. I know you can't tell us how big, but it's big. And if you add in all the stuff that's not in GCP, you do a lot of storage. So you know storage, you understand the technology. So what is the state of the technology? You have a background in Cisco, clearly a networking company, though they used to do some storage stuff sort of on the side; we used to say they were going to buy NetApp, of course that never happened, and it would've made no sense. Pure Storage obviously knows storage, but they were essentially a disk array company. Cloud storage, what's different about it? What's different in the technology? How does Google think about it? >> You know, I always like to tell people there are some things that are the same and familiar to you, and some things that are different. If I start with some of the differences: object storage in the Cloud is just fundamentally different.
Object storage on-prem has been around for a while, often used as kind of a third tier of storage: maybe a backup target, compliance, something like that. In the cloud, object storage is Tier one storage. A public reference for us is Spotify, which uses object storage for all the songs out there. And increasingly we see a lot of growth in-- >> Well, how are you defining Tier one storage in that regard? Again, are you thinking streaming service? Okay. Fine. Transactional? >> Spotify goes down and I'm pissed. >> Yeah. This is true. (Dave laughing) Not just you, maybe a few million other people too. One axis is importance, business importance: Tier one applications are critical to the business, business-down type stuff. But even if you look at it for performance, for capabilities, object storage in the cloud is a different thing than it was. >> Because of the architecture that you're deploying? >> Yeah, and the applications that we see running on it. Obviously, huge growth in our business in AI and analytics, and Google's pretty well known in both spaces: BigQuery, obviously, on the analytics side, big massive data warehouses, and obviously-- >> Gets very high marks from customers. >> Yeah, very well regarded, super successful, super popular with our customers in Google Cloud. And then obviously AI as well. A lot of AI is about getting structure from unstructured data: autonomous vehicles getting pictures and videos from around the world; speech recognition, where audio is a fundamentally analog signal. You're trying to train computers to deal with analog things, and it's all stored in object storage, with machine learning on top of it creating the insights, frankly, things that computers can deal with, getting structure out of the unstructured data. So you just see in its performance, capabilities, and importance that it's really Tier one storage, much like file and block have always been. >> Depending on the importance, right? Because I mean, it's a fair question, right? We're used to thinking, "Oh, you're running your Oracle transaction database on block storage. That's Tier one." But Spotify's a pretty important business. And again, BigQuery is a cloud-native, born-in-the-cloud database; a lot of the cloud databases aren't, right? And that's one of the reasons why BigQuery is-- >> Google's really had a lot of success taking technologies that were built for the consumer services we build and turning them into cloud-native Google Cloud offerings. Like HDFS, which we were talking about: the open source technology came originally from the Google File System. Now we have a new version of it that we run internally, called Colossus, incredible cloud-scale technologies that you can use to build things like Google Cloud Storage. >> I remember at one of the early Hadoop Worlds, I was talking to a Google engineer and saying, "Wow, that's so cool that Hadoop came out of that. You guys were the mainspring of it." He goes, "Oh, we're way past Hadoop now." And this was the early days of Hadoop. (laughs) >> It's funny, whenever Google says consumer services, consumer usually indicates "just for me." But no, a consumer service for Google is at a scale that almost no business needs at a point in time. So you're not taking something and scaling it up-- >> Yeah. They're Tier one services-- for sure. >> Exactly. You're more often paring it down so that a Fortune 10 company can (laughs) leverage it. >> So let's dig into data protection in the Cloud, disaster recovery in the Cloud, ransomware protection, and then let's get into why Google. Maybe you could give us the trends that you're seeing, how you guys approach it, and why Google. >> Yeah. One of the things I always tell people is that certain best practices and principles from on-prem are still applicable in the Cloud, and one of them is the fundamentals around recovery point objective and recovery time objective.
You should know, for your apps, what you need: you should tier your apps, get best practices around them, and think about those in the Cloud as well. The concepts of RPO and RTO don't just magically go away because you're running in the Cloud; you should think about these things. And it's one of the reasons we're here at the VeeamON event. Veeam obviously has tremendous skill and technology in helping customers implement the right RPO and RTO for their different applications, and they also help do that in Google Cloud. So we have a great partnership with them, with two main offerings in Google. One is integration for their on-prem products, to use Google as a backup target or DR target; and then for cloud-native backups they have technologies like Veeam Backup for Google. And obviously they also bought Kasten a while ago, because they got excited about the container trend, and those are great technologies for customers to use in Google Cloud as well. >> So RPO and RTO are kind of IT terms, right? But we think of them as the business requirement; here's the business language. How much data are you willing to lose? The business person says, "What? I don't want to lose any data." Oh, how big's your budget, right? Okay. That's RPO. RTO is how fast you want to get it back. "How fast do you want to get it back if there's an outage?" "Instantly." "How much money do you want to spend on that?" "Oh." Okay. Your application value will determine that. So that's what RPO and RTO are, for those who may not know. Sometimes we get into the acronyms too much. Okay. Why Google Cloud? >> Yeah. When I think about some of the infrastructure Google has, and why it matters to a customer of Google Cloud, the first couple of things I usually talk about are networking and storage. Compute's awesome, and we can talk about containers and Kubernetes in a little bit, but if you just think about core infrastructure: networking. Google's got one of the biggest networks in the world, obviously to service all these consumer applications. There are two things I often tell people about the Google network. One, just tremendous backbone bandwidth across the regions. One of the things to think about with data protection is that it's a large data set; when you're going to do recoveries, you're often pushing lots of terabytes, and big pipes matter. It helps you hit the right recovery time objective, because if you want to do a restore across the country, you need good networks, and obviously Google has a tremendous network. I think we have something like 20 subsea cables that we've built underneath the world's oceans to connect the world on the internet. >> Awesome. >> The other thing that I think is really underappreciated about the Google network is how quickly you get into it. One of the reasons all the consumer apps have such good response time is that there's a local access point to get into the Google network somewhere close to you almost anywhere in the world. I'm sure you can find some obscure place where we don't have an access point, but look, Search and Photos and Maps and Workspace all work so well because you get into the Google network fast, through local access points, and then we can control the quality of service. And that underlying substrate is the same substrate we have in Google Cloud. So the network is number one. The second one is storage: we have some really incredible capabilities in cloud storage, particularly around our dual-region and multi-region buckets. The multi-region bucket, the way I describe it to people, is a continent-sized bucket: a single bucket name, strongly consistent, that basically spans a continent. It's in some senses a little bit of the Nirvana of storage. No more DR failover, right? In a lot of places, traditionally on-prem but even in other clouds, it's two buckets, failover, orchestration, setup. Whenever you do orchestration, the DR is a lot more complicated; you've got to do more fire drills to make sure it works. We have this capability to have a single namespace that spans regions, with strong read-after-write consistency: everything you drop into it, you can read back immediately. >> Say I'm on the west coast and I still have a little bit of an on-premises data center, and I'm using Veeam to back something up, and I'm using storage within GCP. Trace out exactly what you mean by that in terms of a continent-sized bucket. Updates are going to the recovery volume, for lack of a better term, in GCP. Where is that physically? If I'm on the west coast, what does that look like? >> Two main options; it depends again on what your business goals are. The first option is you pick a regional bucket: multiple zones in a Google Cloud region are going to store your data. It's resilient, 'cause there are three zones in the region, but it's all in one region. And then your second option is this multi-region bucket, where we're basically taking a set of the Google Cloud regions from around North America and storing your data in the continent, multiple copies of your data. And that's great, because if you want to protect yourself from a regional outage, right, an earthquake, a natural disaster of some sort, this multi-region basically gives you DR protection for free. Well, it's not free, 'cause you have to pay for it of course, but it's free from a failover perspective. Single namespace, your app doesn't need to know: you restart the app on the east coast, same bucket name. >> Right. That's good. >> Read and write instantly out of the bucket. >> Cool. What are you doing with Veeam? >> So we have this great partnership, obviously, for data protection and DR. And I often segment the conversation into two pieces.
One is for traditional on-prem customers who essentially want to use the Cloud as either a backup or a DR target. Traditional Veeam backup and replication supports Google Cloud targets. You can write to cloud storage. Some of these advantages I mentioned. Our archive storage, really cheap. We just actually lowered the price for archive storage quite significantly, roughly a third of what you find in some of the other competitive clouds if you look at the capabilities. Our archive class storage, fast recovery time, right? Fast latency, no hours to kind of rehydrate. >> Good. Storage in the cloud is overpriced. >> Yeah. >> It is. It is historically overpriced despite all the rhetoric. Good. I didn't know that. I'm glad to hear. >> Yeah. So the archive class store, so you essentially read and write into this bucket and restore. So it's often one of the things I joke with people about. I live in Silicon Valley, I still see the tape truck driving around. I really think people can really modernize these environments and use the cloud as a backup target. You get a copy of your data off-prem. >> Don't you guys use tape? >> Well, we don't talk a lot about-- >> No comment. Just checking. >> And just to be clear, when he says cloud storage is overpriced, he thinks that a postage stamp is overpriced, right? >> No. >> If I give you 50 cents, are you going to deliver a letter cross country? No. Cloud storage, it's not overpriced. >> Okay. (David laughing) We're going to have that conversation. I think it's historically overpriced. I think it could be more attractive, relative to the cost of the underlying technology. So good for you guys pushing prices. >> Yeah. So this archive class storage, is one great area. The second area we really work with Veeam is protecting cloud-native workloads. So increasingly customers are running workloads in the Cloud, they run VMware in the Cloud, they run normal VMs, they run containers. 
Veeam has two offerings in Google that essentially help customers protect that data and hit their RPO and RTO objectives. Another thing that is not different in the Cloud is the need to meet your compliance regulations, right? So having a product like Veeam makes it easy to show your auditor or your regulator that you have copies of your data and that you can hit an appropriate recovery time objective if you're in finance or healthcare or energy. So there are some really good Veeam technologies that work in Google Cloud to protect applications that actually run in Google Cloud, all in. >> To your point about the tape truck, I was kind of tongue in cheek, but I know you guys use tape. The point is you shouldn't have to call the tape truck, right? You should go to Google and say, "Okay. I need my data back." Now, having said that, sometimes the highest bandwidth in the world is putting all this stuff on a truck. Is there an option for that? >> Again, it gets back to this networking capability that I mentioned. Yes, people do like to joke that trucks and trains can have a lot of bandwidth, but big networks can push a lot of data around, obviously. >> And you got a big network. >> We got a huge network. So if you want to push... I've seen statistics: you can do terabits a second to a single Google Cloud Storage bucket, supercomputing-type performance inside Google Cloud. From a scale perspective, whether it be network or compute, these things scale. If there's one thing that Google's really, really good at, it's really high scale. >> Which most companies can't afford to. >> Yeah, and if you're that sensitive, avoid moving the data altogether: have your recovery capability be in GCP. >> Yeah. Well, and again-- >> So that when you're recovering, you're not having to move data. >> It's proximate, yeah. That's the point. >> Recover in GCVE, fail over your VMware cluster. >> Exactly.
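The "big pipes matter" point is easy to put numbers on: recovery time for a large restore is bounded below by dataset size divided by usable bandwidth. A quick sketch, where the dataset size and bandwidth figures are hypothetical examples rather than measured Google numbers:

```python
def transfer_hours(dataset_tb, usable_gbps):
    """Lower bound on restore time: size divided by bandwidth.
    Uses decimal units: 1 TB = 8e12 bits, 1 Gbps = 1e9 bits/s."""
    bits = dataset_tb * 8e12
    seconds = bits / (usable_gbps * 1e9)
    return seconds / 3600

# Restoring a hypothetical 100 TB backup cross-country:
print(round(transfer_hours(100, usable_gbps=1), 1))    # 222.2 hours on 1 Gbps
print(round(transfer_hours(100, usable_gbps=100), 1))  # 2.2 hours on 100 Gbps
```

Which is why, as suggested above, the most sensitive workloads keep the recovery capability in the same cloud as the backup copy and avoid the transfer entirely.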
>> And use the cloud as a DR target. >> We've got very little time, but can you just give us a rundown of your portfolio in storage? >> Yeah. So Cloud Storage, for object storage, has a bunch of regional options and classes of storage, like I mentioned, including archive storage. Our first-party offerings in the file area are Filestore Basic, Enterprise, and High Scale, which is really for highly concurrent, parallelized applications. Persistent Disk is our block storage offering; we also have a very high performance cache block storage offering and local SSDs. So those are the main food groups of storage: block, file, and object. We're increasingly doing a lot of work in data protection and in transfer, and in distributed cloud environments where the edge of the cloud is pushing outside the cloud regions themselves. But those are our products. Also, we spend a lot of time with our partners, 'cause Google's really good at building and open sourcing and partnering at the same time, hence with Veeam, and obviously with file, where we partner with NetApp and Dell and a bunch of folks. So there are a lot of partnerships that are important to us as well. >> Yeah. You know, we didn't get into Kubernetes, a great example of open source, Istio, Anthos, and we didn't talk about the on-prem stuff. So Brian, we'll have to have you back to chat about those things. >> I look forward to it. >> To quote my friend Matt Baker, it's not a zero-sum game out there, and it's great to see Google pushing the technology. Thanks so much for coming on. All right. And thank you for watching. Keep it right there. Our next guest will be up shortly. This is Dave Vellante for Dave Nicholson. We're live at VeeamON 2022 and we'll be right back. (soft beats music)
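One capability from the segment above worth making concrete is the multi-region bucket: a single, strongly consistent namespace spanning regions, so failover needs no orchestration step. The toy model below is an illustrative simulation under that assumption; it is not the Google Cloud Storage implementation or API, and all names are made up.

```python
class MultiRegionBucket:
    """Toy model of one strongly consistent namespace spanning regions:
    a write is immediately visible to readers in every region, so an app
    restarted on the other coast uses the same bucket name with no
    failover orchestration. Illustrative simulation only."""

    def __init__(self, name, regions):
        self.name = name
        self.regions = set(regions)
        self._objects = {}  # one logical namespace shared by all regions

    def write(self, key, data):
        # Strong read-after-write consistency: once write() returns,
        # every region sees the new data.
        self._objects[key] = data

    def read(self, key, region):
        if region not in self.regions:
            raise ValueError(f"bucket not served from {region}")
        return self._objects[key]

bucket = MultiRegionBucket("backups", regions=["us-west", "us-east"])
bucket.write("veeam/restore-point-001", b"backup data")

# App fails over from west coast to east coast: same bucket name,
# same data, no DR failover step.
assert bucket.read("veeam/restore-point-001", region="us-east") == b"backup data"
print("failover read OK")
```

In the two-bucket pattern this model contrasts with, a restart on the other coast would first require failing over to a second bucket name and verifying that replication had caught up.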
Breaking Analysis: Technology & Architectural Considerations for Data Mesh
>> From theCUBE Studios in Palo Alto and Boston, bringing you data driven insights from theCUBE and ETR, this is Breaking Analysis with Dave Vellante. >> The introduction and socialization of data mesh has caused practitioners, business technology executives, and technologists to pause, and ask some probing questions about the organization of their data teams, their data strategies, future investments, and their current architectural approaches. Some in the technology community have embraced the concept, others have twisted the definition, while still others remain oblivious to the momentum building around data mesh. Here we are in the early days of data mesh adoption. Organizations that have taken the plunge will tell you that aligning stakeholders is a non-trivial effort, but necessary to break through the limitations that monolithic data architectures and highly specialized teams have imposed over frustrated business and domain leaders. However, practical data mesh examples often lie in the eyes of the implementer, and may not strictly adhere to the principles of data mesh. Now, part of the problem is lack of open technologies and standards that can accelerate adoption and reduce friction, and that's what we're going to talk about today. Some of the key technology and architecture questions around data mesh. Hello, and welcome to this week's Wikibon CUBE Insights powered by ETR, and in this Breaking Analysis, we welcome back the founder of data mesh and director of Emerging Technologies at Thoughtworks, Zhamak Dehghani. Hello, Zhamak. Thanks for being here today. >> Hi Dave, thank you for having me back. It's always a delight to connect and have a conversation. Thank you. >> Great, looking forward to it. Okay, so before we get into the technology details, I just want to quickly share some data from our friends at ETR.
You know, despite the importance of data initiatives since the pandemic, CIOs and IT organizations have had to juggle, of course, a few other priorities. This is why in the survey data, cyber and cloud computing are rated as the two most important priorities. Analytics and machine learning, and AI, which are kind of data topics, still make the top of the list, well ahead of many other categories. And look, a sound data architecture and strategy is fundamental to digital transformations, and much of the past two years, as we've often said, has been like a forced march into digital. So while organizations are moving forward, they really have to think hard about the data architecture decisions that they make, because it's going to impact them, Zhamak, for years to come, isn't it? >> Yes, absolutely. I mean, we are slowly moving from rule-based, logical, algorithmic computation to model-based computation and decision making, where we exploit the patterns and signals within the data. So data becomes a very important ingredient, of not only decision making, and analytics and discovering trends, but also the features and applications that we build for the future. So we can't really ignore it, and as we see, the existing challenge around getting value from data is no longer access to computation; it is actually access to trustworthy, reliable data at scale.
Domain driven ownership, data as product, self-served data platform and federated computational governance. So I want to start with self-serve platform and some of the data that you shared recently. You say that, "Data mesh serves autonomous domain oriented teams versus existing platforms, which serve a centralized team." Can you elaborate? >> Sure. I mean the role of the platform is to lower the cognitive load for domain teams, for people who are focusing on the business outcomes, the technologists that are building the applications, to really lower the cognitive load for them, to be able to work with data. Whether they are building analytics, automated decision making, intelligent modeling. They need to be able to get access to data and use it. So the role of the platform, I guess, just stepping back for a moment, is to empower and enable these teams. Data mesh by definition is a scale out model. It's a decentralized model that wants to give autonomy to cross-functional teams. So at its core, it requires a set of tools that work really well in that decentralized model. When we look at the existing platforms, they try to achieve a similar outcome, right? Lower the cognitive load, give the tools to data practitioners to manage data at scale. Because today, the centralized data teams, their job isn't really directly aligned with one or two different, you know, business units and business outcomes in terms of getting value from data. Their job is to manage the data and make the data available for those cross-functional teams or business units to use the data. So the platforms they've been given are really centralized, or tuned to work with this team structure, a centralized team. Although on the surface, it seems that why not? Why can't I use my, you know, cloud storage or computation or data warehouse in a decentralized way? You should be able to, but some changes need to happen to those underlying platforms.
As an example, some cloud providers simply have hard limits on the number of, like, storage accounts that you can have. Because they never envisaged you'd have hundreds of lakes. They envisaged one or two, maybe 10 lakes, right. They envisaged really centralizing data, not decentralizing data. So I think we see a shift in thinking about enabling autonomous independent teams versus a centralized team. >> So just a follow up if I may, we could be here for a while. But so this assumes that you've sorted out the organizational considerations? That you've defined what a data product is and a sub-product. And people will say, of course we use the term monolithic as a pejorative, let's face it. But the data warehouse crowd will say, "Well, that's what data marts did. So we got that covered." But the premise of data mesh, if I understand it, is whether it's a data mart or a data warehouse, or a data lake or whatever, a Snowflake warehouse, it's a node on the mesh. Okay. So don't build your organization around the technology, let the technology serve the organization, is that-- >> That's a perfect way of putting it, exactly. I mean, for a very long time, when we look at decomposition of complexity, we've looked at decomposition of complexity around technology, right? So we have technology and that's maybe a good segue to actually the next item on that list that we looked at. Oh, I need to decompose based on whether I want to have access to raw data and put it on the lake. Whether I want to have access to model data and put it on the warehouse. You know I need to have a team in the middle to move the data around. And then try to fit the organization into that model. So data mesh really inverts that, and as you said, it looks at the organizational structure first. Then the boundaries around which your organization and operation can scale. And then the second layer, look at the technology and how you decompose it. >> Okay.
So let's go to that next point and talk about how you serve and manage autonomous interoperable data products. Where code, data, and policy, you say, are treated as one unit. Whereas your contention is existing platforms of course have independent management and dashboards for catalogs or storage, et cetera. Maybe we double click on that a bit. >> Yeah. So if you think about that functional, or technical, decomposition, right? Of concerns, that's one way, that's a very valid way of decomposing complexity and concerns. And then build solutions, independent solutions, to address them. That's what we see in the technology landscape today. We will see technologies that are taking care of your management of data, bring your data under some sort of a control and modeling. You'll see technology that moves that data around, will perform various transformations and computations on it. And then you see technology that tries to overlay some level of meaning. Metadata, understandability, discovery, and then policy, right? So that's where your data processing kind of pipeline technologies versus data warehouse, storage, lake technologies, and then the governance come into play. And over time, we decompose and we compose, right? Deconstruct and reconstruct this back together. But, right now that's where we stand. I think for data mesh really to become a reality, as in independent sources of data and teams can responsibly share data in a way that can be understood right then and there, can impose policies right then when the data gets accessed, in that source, and in a resilient manner, like in a way that changes to the structure of the data or changes to the schema of the data don't cause those downstream downtimes, we've got to think about this new nucleus or new unit of data sharing. And we need to really bring back transformation and governing data and the data itself together around these decentralized nodes on the mesh.
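The "one unit" idea — data, the code that produces it, and the policy that governs it traveling together inside each node on the mesh — can be sketched in a few lines of Python. This is a toy illustration only, not any real data mesh framework; every name here is invented:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch: a data product bundles its records, the transformation
# code that shapes them, and the access policy that governs them -- one unit,
# rather than three separately managed tools.

@dataclass
class DataProduct:
    domain: str
    records: list
    transform: Callable[[dict], dict]   # code travels with the data
    policy: Callable[[str], bool]       # governance travels with it too

    def read(self, consumer: str) -> list:
        # Policy is enforced at the point of access, inside the product's
        # own boundary, not by an external central governance dashboard.
        if not self.policy(consumer):
            raise PermissionError(f"{consumer} may not read {self.domain} data")
        return [self.transform(r) for r in self.records]

orders = DataProduct(
    domain="orders",
    records=[{"id": 1, "total": 42.0, "card": "4111-..."}],
    transform=lambda r: {k: v for k, v in r.items() if k != "card"},  # mask PII
    policy=lambda consumer: consumer in {"analytics", "ml-platform"},
)

print(orders.read("analytics"))  # masked records for an allowed consumer
```

The point of the sketch is only the packaging: a schema change or policy change ships with the node itself, so downstream consumers never see data that escaped its governance.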
So that's another, I guess, deconstruction and reconstruction that needs to happen around the technology to formulate ourselves around the domains. And again the data and the logic of the data itself, the meaning of the data itself. >> Great. Got it. And we're going to talk more about the importance of data sharing and the implications. But the third point deals with how operational and analytical technologies are constructed. You've got an app DevStack, you've got a data stack. You've made the point many times actually that we've contextualized our operational systems, but not our data systems, they remain separate. Maybe you could elaborate on this point. >> Yes. I think this, again, has a historical background and beginning. For a really long time, applications have dealt with features and the logic of running the business, and encapsulating the data and the state that they need to run that feature or run that business function. And then, for anything analytical driven, which required access to data across these applications and across the longer dimension of time around different subjects within the organization, this analytical data, we had made a decision that, "Okay, let's leave those applications aside. Let's leave those databases aside. We'll extract the data out and we'll load it, or we'll transform it, and put it under the analytical kind of data stack, and then downstream from it, we will have analytical data users, the data analysts, the data scientists and, you know, the portfolio of users that are growing, use that data stack." And that led to this real separation of dual stacks with point to point integration. So applications went down the path of transactional databases or, you know, document stores, using APIs for communicating, and then we've gone to, you know, lake storage or data warehouse on the other side. And that, again, enforces the silo of data versus app, right?
So if we are moving to the world where our ambitions are around making applications more intelligent, making them data driven, these two worlds need to come closer. As in, ML analytics gets embedded into those applications themselves. And data sharing, as a very essential ingredient of that, gets embedded and becomes closer to those applications. So, if you are looking at this now cross-functional, app-data based team, right? Business team. Then the technology stacks can't be so segregated, right? There has to be a continuum of experience from app delivery, to sharing of the data, to using that data, to embedding models back into those applications. And that continuum of experience requires well integrated technologies. I'll give you an example, which actually in some sense, we are somewhat moving to that direction. But if we are talking about data sharing or data modeling, applications use one set of APIs, you know, HTTP compliant, GraphQL or REST APIs. And on the other hand, you have proprietary SQL, like connect to my database and run SQL. Like those are two very different models of representing and accessing data. So we kind of have to harmonize or integrate those two worlds a bit more closely to achieve those domain oriented cross-functional teams. >> Yeah. We are going to talk about some of the gaps later and actually you look at them as opportunities, more than barriers. But they are barriers, but they're opportunities for more innovation. Let's go on to the fourth one. The next point, it deals with the roles that the platform serves. Data mesh proposes that domain experts own the data and take responsibility for it end to end and are served by the technology. Kind of, we referenced that before. Whereas your contention is that today, data systems are really designed for specialists. I think you use the term hyper specialists a lot. I love that term.
And the generalists are kind of passive bystanders waiting in line for the technical teams to serve them. >> Yes. I mean, if you think about the, again, the intention behind data mesh was creating a responsible data sharing model that scales out. And I challenge any organization that has scaled ambitions around data or usage of data that relies on small pockets of very expensive specialist resources, right? So we have no choice but upskilling, cross-skilling the majority population of our technologists, we often call them generalists, right? That's a shorthand for people that can really move from one technology to another technology. Sometimes we call them paint drip people, sometimes we call them T-shaped people. But regardless, like we need to have the ability to really mobilize our generalists. And we had to do that at Thoughtworks. We serve a lot of our clients and like many other organizations, we are also challenged with hiring specialists. So we have tested the model of having a few specialists really conveying and translating the knowledge to generalists and bringing them forward. And of course, platform is a big enabler of that. Like what is the language of using the technology? What are the APIs that delight that generalist experience? This doesn't mean no code, low code, where we throw away good engineering practices. I think good software engineering practices remain to exist. Of course, they get adapted to the world of data to build resilient, you know, sustainable solutions. But specialty, especially around kind of proprietary technology, is going to be a hard one to scale. >> Okay. I'm definitely going to come back and pick your brain on that one. And, you know, your point about scale out and the examples, the practical examples of companies that have implemented data mesh that I've talked to.
I think in all cases, you know, there's only a handful that I've really gone deep with, but it was their Hadoop instances, their clusters wouldn't scale, they couldn't scale the business around it. So that's really a key point of a common pattern that we've seen now. I think in all cases, they went to like the data lake model on AWS. And so that maybe has some violation of the principles, but we'll come back to that. But so let me go on to the next one. Of course, data mesh leans heavily toward this concept of decentralization, to support domain ownership over the centralized approaches. And we certainly see this, the public cloud players, database companies as key actors here with very large install bases, pushing a centralized approach. So I guess my question is, how realistic is this next point where you have decentralized technologies ruling the roost? >> I think if you look at the history of places in our industry where decentralization has succeeded, they heavily relied on standardization of connectivity, you know, across different components of technology. And I think right now you are right. The way we get value from data relies on collection. At the end of the day, collection of data. Whether you have a deep learning, machine learning model that you're training, or you have, you know, reports to generate. Regardless, the model is bring your data to a place that you can collect it, so that we can use it. And that leads to a natural set of technologies that try to operate as a full stack, integrated, proprietary, with no intention of, you know, opening data for sharing. Now, conversely, if you think about internet itself, web itself, microservices, even at the enterprise level, not at the planetary level, they succeeded as decentralized technologies to a large degree because of their emphasis on the open net and openness and sharing, right? API sharing.
We don't talk about, in the API world... like we don't say, you know, "I will build a platform to manage your logical applications." Maybe to a degree, but we actually moved away from that. We say, "I'll build a platform that opens around applications to manage your APIs, manage your interfaces." Right? Give you access to APIs. So I think the shift needs to... That definition of decentralized there means really composable, open pieces of technology that can play nicely with each other, rather than a full stack that has control of your data yet is only somewhat decentralized within the boundary of my platform. That's just simply not going to scale if data needs to come from different platforms, different locations, different geographical locations; it needs a rethink. >> Okay, thank you. And then the final point is, data mesh favors technologies that are domain agnostic versus those that are domain aware. And I wonder if you could help me square the circle, 'cause it's nuanced and I'm kind of a 100 level student of your work. But you have said, for example, that the data teams lack context of the domain, and so help us understand what you mean here in this case. >> Sure. Absolutely. So as you said, data mesh tries to give autonomy and decision making power and responsibility to people that have the context of those domains, right? The people that are really familiar with different business domains and naturally the data that that domain needs, or the data that that domain shares. So if the intention of the platform is really to give the power to people with the most relevant and timely context, the platform itself naturally becomes, as a shared component, domain agnostic to a large degree. Of course those domains can still... The platform is a (chuckles) fairly overloaded word.
As in, if you think about it as a set of technology that abstracts complexity and allows building the next level solutions on top, those domains may have their own set of platforms that are very much domain aware. But as a generalized, shareable set of technologies or tools that allows us to share data, that piece of technology needs to relinquish the knowledge of the context to the domain teams and actually become domain agnostic. >> Got it. Okay. Makes sense. All right. Let's shift gears here. Talk about some of the gaps and some of the standards that are needed. You and I have talked about this a little bit before, but this digs deeper. What types of standards are needed? Maybe you could walk us through this graphic, please. >> Sure. So what I'm trying to depict here is that if we imagine a world where data can be shared from many different locations, for a variety of analytical use cases, naturally the boundary of what we call a node on the mesh encapsulates internally a fair few pieces. It's not just the data; the node on the mesh includes the data itself that it's controlling and updating and maintaining. It's of course the computation and the code that's responsible for that data. And then the policies that continue to govern that data as long as that data exists. So if that's the boundary, then if we shift that focus from implementation details, that we can leave for later, what becomes really important is the seam, or the APIs and interfaces, that this node exposes. And I think that's where the work needs to be done and the standards are missing. And we want the seam and those interfaces to be open, because that allows, you know, different organizations with different boundaries of trust to share data. Not only to share data to kind of move that data to, yes, another location, but to share the data in a way that distributed workloads, distributed analytics, distributed machine learning models can happen on the data where it is.
So if you follow that line of thinking around the decentralization and connection of data versus collection of data, I think the very, very important piece of it that needs really deep thinking, and I don't claim that I have done that, is how do we share data responsibly and sustainably, right? In a way that is not brittle. If you think about it today, the ways we share data, one of the very common ways is around, I'll give you a JDBC endpoint, or I'll give you an endpoint to your, you know, database of choice. And now, as a user, you can have access to the schema of the underlying data and then run various queries or SQL queries on it. That's very simple and easy to get started with. That's why SQL is an evergreen, you know, standard or semi standard, pseudo standard that we all use. But it's also very brittle, because we are dependent on an underlying schema and formatting of the data that's been designed to tell the computer how to store and manage the data. So I think the data sharing APIs of the future really need to think about removing these brittle dependencies. Think about sharing not only the data, but what we call metadata, I suppose. Additional sets of characteristics that are always shared along with data to make the data usage, I suppose, ethical and also friendly for the users. And also, I think we have to... That data sharing API, the other element of it, is to allow kind of computation to run where the data exists. So if you think about SQL again, as a simple primitive example of computation, when we select and when we filter and when we join, the computation is happening on that data. So maybe there is a next level of articulating distributed computation on data that simply trains models, right? Your language primitives change in a way to allow sophisticated analytical workloads to run on the data more responsibly, with policies and access control enforced.
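The brittleness just described — consumers coupled directly to the producer's physical schema through a raw JDBC/SQL-style endpoint — can be made concrete with a toy contrast in Python. The names and the contract shape here are invented for illustration; this is the idea of a published "output port," not any proposed standard:

```python
# Physical rows as the producer happens to store them today.
_rows = [{"cust_nm": "Ada", "amt_usd": 120.0}]

# Brittle sharing: the consumer hard-codes the producer's internal column
# names, as it would against a raw database endpoint. An upstream rename
# of amt_usd breaks this consumer with no warning.
def brittle_total():
    return sum(r["amt_usd"] for r in _rows)

# A published contract: stable public names mapped to internal columns,
# each carrying a human-readable description (a stand-in for the richer
# metadata a real sharing API would attach).
CONTRACT_V1 = {
    "customer": ("cust_nm", "Customer display name"),
    "amount":   ("amt_usd", "Order amount in USD"),
}

def share(contract):
    # Translate internal columns to the published names at the boundary,
    # so consumers never see the physical schema.
    return [{public: row[internal] for public, (internal, _) in contract.items()}
            for row in _rows]

print(share(CONTRACT_V1))  # [{'customer': 'Ada', 'amount': 120.0}]
```

With the contract in place, the producer can rename `cust_nm` internally and update only the mapping; consumers of the published names are untouched, which is the resilience the transcript is arguing for.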
So I think that output port that I mentioned simply is about next generation data sharing, responsible data sharing APIs, suitable for decentralized analytical workloads. >> So I'm not trying to bait you here, but I have a follow up as well. So, you know, schema, for all its good, creates constraints. No schema on write, that didn't work, 'cause it was just a free for all and it created the data swamps. But now you have technology companies trying to solve that problem. Take Snowflake for example, you know, enabling data sharing. But it is within its proprietary environment. Certainly Databricks is doing something, you know, trying to come at it from its angle, bringing some of the best of the data warehouse to the data science world. Is your contention that those remain sort of proprietary and de facto standards? And then what we need is more open standards? Maybe you could comment. >> Sure. I think the two points: one is, as you mentioned, open standards that allow... actually make the underlying platform invisible. I mean, my litmus test for a technology provider to say, "I'm a data mesh," (laughs) kind of compliant is, "Is your platform invisible?" As in, can I replace it with another and yet get the similar data sharing experience that I need? So part of it is that. Part of it is open standards, they're not really proprietary. The other angle for kind of sharing data across different platforms, so that, you know, we don't get stuck with one technology or another, is around APIs. It is around code that is protecting that internal schema. So where we are on the curve of evolution of technology, right now we are exposing the internal structure of the data. That is designed to optimize certain modes of access. We're exposing that to the end client and application APIs, right? So the APIs that use the data today are very much aware that this database was optimized for machine learning workloads.
Hence you will deal with a columnar storage of the file, versus this other API is optimized for a very different, report type access, relational access, and is optimized around rows. I think that should become irrelevant in the API sharing of the future. Because as a user, I shouldn't care how this data is internally optimized, right? The language primitive that I'm using should be really agnostic to the machine optimization underneath that. And if we did that, perhaps this war between warehouse or lake or the other will become actually irrelevant. So we're optimizing for the best human experience, as opposed to the best machine experience. We still have to do that, but we have to make that invisible. Make that an implementation concern. So that's another angle of what should... If we daydream together, the best experience and resilient experience in terms of data usage would be these APIs that are agnostic to the internal storage structure. >> Great, thank you for that. We've wet our ankles now on the controversy, so we might as well wade all the way in, I can't let you go without addressing some of this. Which you've catalyzed, which I, by the way, see as a sign of progress. So this gentleman, Paul Andrew, is an architect and he gave a presentation I think last night. And he teased it as quote, "The theory from Zhamak Dehghani versus the practical experience of a technical architect, AKA me," meaning him. And Zhamak, you were quick to shoot back that data mesh is not theory, it's based on practice. And some practices are experimental. Some are more baked and data mesh really avoids by design the specificity of vendor or technology. Perhaps you intended to frame your post as a technology or vendor specific implementation. So touché, that was excellent. (Zhamak laughs) Now you don't need me to defend you, but I will anyway.
You spent 14 plus years as a software engineer and the better part of a decade consulting with some of the most technically advanced companies in the world. But I'm going to push you a little bit here and say, some of this tension is of your own making because you purposefully don't talk about technologies and vendors. Sometimes doing so is instructive for us neophytes. So, why don't you ever, like, use specific examples of technology for frames of reference? >> Yes. My role is to push us to the next level. So, you know, everybody picks their fights, picks their battles. My role in this battle is to push us to think beyond what's available today. Of course, that's my public persona. On a day to day basis, actually, I work with clients and existing technology, and I think at Thoughtworks, we gave a case study talk with a colleague of mine, and I intentionally got him to talk about (indistinct), to talk about the technology that we used to implement data mesh. And the reason I haven't really embraced, in my conversations, the specific technology... One is, I feel the technology solutions we're using today are still not ready for the vision. I mean, we have to be in this transitional step. No matter what, we have to be pragmatic, of course, and practical, I suppose. And use the existing vendors that exist, and I wholeheartedly embrace that, but that's just not my role, to show that. I've gone through this transformation once before in my life. When microservices happened, we were building microservices-like architectures with technology that wasn't ready for it. Big application servers, web application servers that were designed to run these giant monolithic applications. And now we're trying to run little microservices onto them. And the tail was wagging the dog. The environmental complexity of running these services was consuming so much of our effort that we couldn't really pay attention to that business logic, the business value.
And that's where we are today. The complexity of integrating existing technologies is really overwhelming, capturing a lot of our attention and cost, money and effort, as opposed to really focusing on the data products themselves. So that's just the role I have, but it doesn't mean that, you know, we have to rebuild the world. We've got to do with what we have in this transitional phase until the new generation, I guess, technologies come around and reshape our landscape of tools. >> Well, impressive public discipline. Your point about microservices is interesting because a lot of those early microservices weren't so micro, and for the naysayers, look, past is not prologue, but Thoughtworks was really early on in the whole concept of microservices. So I'll be very excited to see how this plays out. But now there were some other good comments. There was one from a gentleman who said the most interesting aspects of data mesh are organizational. And that's how my colleague Sanjeev Mohan frames data mesh versus data fabric. You know, I'm not sure; I think we've sort of scratched the surface today, but data mesh is more. And I still think data fabric is what NetApp defined as software defined storage infrastructure that can serve on-prem and public cloud workloads, back whatever, 2016. But the point you make in the thread that we're showing you here is that you're warning, and you referenced this earlier, that segregating different modes of access will lead to fragmentation. And we don't want to repeat the mistakes of the past. >> Yes, there are comments around... Again, going back to that original conversation that we had, we've got this at a macro level. We've got this tendency to decompose complexity based on technical solutions. And, you know, the conversation could be, "Oh, I do batch or you do a stream and we are different." They create these bifurcations in our decisions based on the technology, where I do events and you do tables, right?
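The "I do batch, you do stream" bifurcation just mentioned can be collapsed into a single model by making the window of time a parameter: batch then becomes just an unbounded window over the same stream. A plain-Python toy of that idea, with invented names — this is not Apache Beam or any real stream processor:

```python
import math

def windowed_sums(events, window):
    """Sum values per time window.

    events: iterable of (timestamp_seconds, value) pairs.
    window: seconds per window; math.inf collapses everything
            into one all-time window, i.e. a batch job.
    """
    sums = {}
    for ts, value in events:
        # Bucket each event by which window it falls into.
        key = 0 if math.isinf(window) else int(ts // window)
        sums[key] = sums.get(key, 0) + value
    return sums

events = [(1, 10), (65, 5), (70, 5), (130, 1)]

print(windowed_sums(events, 60))        # streaming view: per-minute sums
print(windowed_sums(events, math.inf))  # batch view: one all-time sum
```

One function serves both modes, which is the continuum-of-experience point: the tooling choice narrows or widens the window rather than forcing two separate technology branches.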
So that sort of segregation of modes of access causes accidental complexity that we keep dealing with. Because every time you create a new branch in this tree, you create a new set of tools that then somehow need to be point-to-point integrated, and you create new specialization around that. So the fewer branches we have, the better; think really about the continuum of experiences that we need to create, and about technologies that simplify that continuum of experience. So one of the things, for example, to give you a past experience: I was really excited about the papers and the work that came out around Apache Beam, and generally flow-based programming and stream processing. Because basically they were saying, whether you are doing batch or whether you're doing streaming, it's all one stream. Sometimes the window of time over which you're computing narrows, and sometimes it widens, but at the end of the day you are just doing stream processing. It is those sorts of notions that simplify and create a continuum of experience that resonate with me personally, more than creating these tribal fights of this type versus that mode of access. So that's why data mesh naturally selects kind of this multimodal access to support end users, right, the personas of end users. >> Okay. So the last topic I want to hit: this whole discussion, the topic of data mesh, it's highly nuanced, it's new, and people are going to shoehorn data mesh into their respective views of the world. And we talked about lakehouses, and there's three buckets. And of course, the gentleman from LinkedIn with Azure; Microsoft has a data mesh community. See, you're going to have to enlist some serious army of enforcers to adjudicate. And I wrote some of this stuff down. I mean, it's interesting: Monte Carlo has a data mesh calculator. Starburst is leaning in. ChaosSearch sees themselves as an enabler. Oracle and Snowflake both use the term data mesh.
And then of course you've got big practitioners: JPMC, we've talked to Intuit, Zalando, HelloFresh has been on, Netflix has this event-based sort of streaming implementation. So my question is, how realistic is it that the clarity of your vision can be implemented and not polluted by really rich technology companies and others? (Zhamak laughs) >> Is it even possible, right? Is it even possible? That's a... yes. That's why I'm a practitioner; this is why I practice things. Because I think it's going to be hard. What I'm hopeful for is that the socio-technical framing, and I've mentioned that this is a socio-technical concern, or solution, not just a technology solution, hopefully always brings us back to, you know, the reality, because vendors will try to sell you snake oil that solves all of your problems. (chuckles) All of your data mesh problems. It's just going to cause more problems down the track. So we'll see; time will tell, Dave, and I count on you as one of those (laughs), you know, folks that will continue to share their platform, to go back to the roots: ask why in the first place. I mean, I dedicated a whole part of the book to "Why?", because, as you said, we get carried away with vendors and technology solutions trying to ride a wave, and in that story we forget the reason for which we're even making this change and spending all of these resources. So hopefully we can always come back to that. >> Yeah. And I think we can. I think you have really given this some deep thought, and as we pointed out, this was based on practical knowledge and experience. And look, we've been trying to solve this data problem for a long, long time. You've not only articulated it well, but you've come up with solutions. So Zhamak, thank you so much. We're going to leave it there, and I'd love to have you back. >> Thank you for the conversation. I really enjoyed it. And thank you for sharing your platform to talk about data mesh. >> Yeah, you bet. All right.
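The Apache Beam point Zhamak made earlier in this conversation, that batch and streaming are one stream with a narrower or wider window of time, can be sketched in a few lines. This is plain Python rather than actual Apache Beam, and the event data is invented purely for illustration:

```python
from collections import defaultdict

def windowed_sum(events, window_seconds):
    """Group (timestamp, value) events into fixed windows and sum each window.

    With a small window this behaves like streaming aggregation; with a
    window wider than the whole data set it degenerates into a batch job.
    Same computation, only the window of time changes.
    """
    windows = defaultdict(int)
    for ts, value in events:
        windows[ts // window_seconds] += value
    return dict(windows)

events = [(0, 1), (30, 2), (70, 4), (3000, 8)]
print(windowed_sum(events, 60))     # streaming-style: {0: 3, 1: 4, 50: 8}
print(windowed_sum(events, 10**9))  # batch-style, one giant window: {0: 15}
```

Shrinking or widening `window_seconds` moves the identical computation along the continuum between streaming-style and batch-style behavior, which is the continuum of experience being described.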
And I want to thank my colleague, Stephanie Chan, who helps research topics for us. Alex Myerson is on production and Kristen Martin, Cheryl Knight and Rob Hoff on editorial. Remember all these episodes are available as podcasts, wherever you listen. And all you got to do is search Breaking Analysis Podcast. Check out ETR's website at etr.ai for all the data. And we publish a full report every week on wikibon.com, siliconangle.com. You can reach me by email david.vellante@siliconangle.com or DM me @dvellante. Hit us up on our LinkedIn post. This is Dave Vellante for theCUBE Insights powered by ETR. Have a great week, stay safe, be well. And we'll see you next time. (bright music)
Loris Degioanni | AWS Startup Showcase S2 Ep 1 | Open Cloud Innovations
>>Welcome to theCUBE's presentation of the AWS Startup Showcase: Open Cloud Innovations. This is season two, episode one of the ongoing series covering exciting hot startups from the AWS ecosystem. Today's episode, one of the season two themes, is open source community and open cloud innovations. I'm your host, John Furrier of theCUBE. And today we're excited to be joined by Loris Degioanni, who is the CTO, chief technology officer, and founder of Sysdig, founded in his backyard with some wine and beer. Great to see you. We're here to talk about Falco, finding cloud threats in real time. Thank you for joining us, Loris. Thanks. Good to see you. >>Love that your company was founded in your backyard. Classic startup story. You have been growing very, very fast. And the key point of the showcase is to talk about the startups that are making a difference and that are winning and doing well. You guys have done extremely well with your business. Congratulations, but thank you. The big theme is security, and as organizations have moved their business-critical applications to the cloud, the attackers have followed. This is really important in the industry. You guys are in the middle of this. What's your view on this? What's your take? What's your reaction? >>Yeah. As we as an ecosystem are moving to the cloud, as more and more we are developing cloud native applications, relying on CI/CD, relying on orchestration and containers, security is becoming more and more important, and I would say more and more complex. I mean, we're reading every day in the news about attacks, about data leaks and so on. There's rarely a day when there's nothing major happening that we can see in the press from this point of view. And definitely things are evolving, things are changing in the cloud. For example, Sysdig just released a cloud native security and usage report a few days ago.
And the main things that we found among our user base: for example, 66% of containers are running as root. So still many organizations are adopting a relatively relaxed way to deploy their applications. Not because they like doing it, but because it tends to be, you know, easier, with a little bit less friction. >>We also found that 27% of users have unnecessary root access, and 73% of the cloud accounts have public S3 buckets. This is all stuff that is, you know, fine, but it can generate consequences when you make a mistake. Typically, your data leaks not because of super sophisticated attacks, but because somebody in your organization forgets maybe some data on a public S3 bucket, or because some credentials that are not restrictive enough maybe are leaked to another team member or a Git, you know, repository or something like that. So as infrastructure and software become more sophisticated and more automated, there are also, at the same time, more risks and opportunities for misconfigurations, which then tend to be, you know, very often the sources of issues in the cloud. >>Yeah, those self-inflicted wounds definitely come up. We've seen people leaving S3 buckets open. You know, it's user error, but those are small little things that get taken care of pretty quickly. That's just hygiene, it's just discipline. You know, most of the sophisticated enterprises are moving way past that, but now they're adopting more cloud native, right? And as they get into the critical apps, securing them has been challenging. We've talked to many CEOs and CISOs, and they say that to us: yeah, it's very challenging, but we're on it. I have to ask you, what should people worry about when securing the cloud? Because they know it's challenging, and they'll have the opportunity on the other side. What are they worried about?
What do you see people scared of or addressing? Or, what should I be worried about when securing the cloud? >>Yeah, definitely. Sometimes when I'm talking about security, I like to compare, you know, the old data center and the old monolithic applications to a castle, you know, a Middle Ages castle. So what did you do to protect your castle? You used to build very thick walls around it, and then a small entrance, and be very careful about the entrance, you know, protect the entrance very well. So what we used to do in the data center was protect everything, you know, the whole perimeter, in a very aggressive way, with firewalls, and making sure that there was only a very narrow entrance to our data center, and, you know, as much as possible, active security there, like firewalls and this kind of stuff. Now we're in the cloud. Now everything is much more diffused, right? Our users, our customers are coming from all over the planet, every country, every geography, every time, but also our internal team is coming from everywhere, because they're all accessing a cloud environment. You know, they're often working from home or from different offices, again, from every different geography, every different country. So in this configuration, the metaphor that I like to use is an amusement park, right? You have a big area with many important things inside, and the users and operators are coming in through entrances that you cannot really block; you know, you need to let everything come in and operate together. In these kinds of environments, the traditional protection is not really effective. It's overwhelming, and it doesn't really serve the purpose that we need. We cannot build a giant moat around our amusement park; we need people to come in. So what we're finding is that understanding, getting visibility, and, if you will, reacting is much more important.
So it's more like we need to replace the big walls with a granular network of security cameras that allow us to see what's happening in the different areas of our amusement park. And we need to be able to do that in a way that is real time and allows us to react in a smart way as things happen, because in the modern world of cloud, five minutes of delay in understanding that something is wrong means that you're already being, you know, attacked, and your data's already being... >>Well, I also love the analogy of the amusement park. And of course, for certain rides you need to be a certain height to ride the rollercoaster; that, I guess, is credentials, or security credentials, as we say. But in all seriousness, the perimeter is dead. We all know that. Also, moats were relied upon as well in the old days: you know, you secure the firewall, nothing comes in or goes out, and then once you're in, you don't know what's going on. Now that's flipped. There's no walls, there's no moats, everyone's in. And so you're saying this kind of security camera model is key. So again, this topic here is securing in real time. Yeah, how do you do that? Because it's happening so fast, it's moving, there's a lot of movement. It's not at rest; there's data moving around fast. What's the secret sauce to identifying real-time threats in an enterprise? >>Yeah. In our opinion, there are some key ingredients. One is granularity, right? You cannot really understand the threats in your amusement park if you're just watching it from a satellite picture. So you need to be there, you need to be granular, you need to be located in the areas where stuff happens. This means, for example, for security in the cloud, for runtime security, it's important to have sensors that are distributed, that are able to observe every single endpoint. Not only that, but you also need to look at the infrastructure, right?
From this point of view, cloud providers like Amazon, for example, offer nice facilities. Like, for example, there's CloudTrail in AWS, which collects in a nice, opinionated, consistent way the data that is coming from multiple cloud services. So it's important, from one point of view, to go deep into the endpoint, into the processes, into what's executing, but also to collect this information, like the CloudTrail information, and be able to correlate it, because there's no full security without covering all of the basics.
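To make the CloudTrail idea above concrete, a correlation engine consumes records like the one below and evaluates simple conditions over them as they arrive, rather than polling an API once in a while. This is a minimal sketch, not Sysdig's or Falco's actual implementation; the bucket name, user ARN, and watchlist are invented, while the field names follow CloudTrail's documented JSON record format:

```python
import json

# One CloudTrail-style record; field names follow CloudTrail's JSON format,
# but the values here are made up for illustration.
RECORD = json.loads("""
{
  "eventSource": "s3.amazonaws.com",
  "eventName": "GetObject",
  "userIdentity": {"arn": "arn:aws:iam::123456789012:user/intern"},
  "requestParameters": {"bucketName": "payroll-exports", "key": "2021/salaries.csv"}
}
""")

SENSITIVE_BUCKETS = {"payroll-exports"}  # hypothetical watchlist

def is_sensitive_read(record):
    """Streaming-style check: flag S3 object reads against a sensitive bucket."""
    return (record.get("eventSource") == "s3.amazonaws.com"
            and record.get("eventName") == "GetObject"
            and record.get("requestParameters", {}).get("bucketName") in SENSITIVE_BUCKETS)

if is_sensitive_read(RECORD):
    print("ALERT:", RECORD["userIdentity"]["arn"],
          "read from", RECORD["requestParameters"]["bucketName"])
```

A real engine would run many such conditions, and correlate them with endpoint-level signals, against the full stream of records.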
You're going to be monitoring anything, but there's been trade offs in the past, overhead involved, whether you're monitoring or putting probes in the network or the different, there's all kinds of different approaches. How does the new technology with cloud and machine learning change the dynamics of the kinds of approaches? Because it's kind of not old tech, but you the same similar concepts to network management, other things, what what's going on now that's different and what makes this possible today? >>Yeah, I think from the friction point of view, which is one very important topic here. So this needs to be deployed efficiently and easily in this transparency, transparent as possible, everywhere, everywhere to avoid blind spots and making sure that everything is scheduled in front. His point of view, it's very important to integrate with the orchestration is very important to make use of all of the facilities that Amazon provides in the it's very important to have a system that is deployed automatically and not manually. That is in particular, the only to avoid blind spots because it's manual deployment is employed. Somebody would forget, you know, to deploy where somewhere where it's important. And then from the performance point of view, very much, for example, with Falco, you know, our open source front-end security engine, we really took key design decisions at the beginning to make sure that the engine would be able to support in Paris, millions of events per second, with minimal overhead. >>You know, they're barely measure measurable overhead. When you want to design something like that, you know, that you need to accept some kind of trade-offs. You need to know that you need to maybe limit a little bit this expressiveness, you know, or what can be done, but ease of deployment and performance were more important goals here. 
And, you know, it's not uncommon for us, Dave, to have users of Falco, or commercial customers, that have tens of thousands, hundreds of thousands of machines, and sometimes millions of containers. And in these environments, lightweight is key. You want depth, but you want overhead to be really minimal, and... >>Okay, so an amusement park, a lot of diverse applications. So integration, I get that; orchestration brings back the Kubernetes angle a little bit, and Falco, and overhead and performance at cloud scale. So all these things are working in favor, if I get that right. Is that... am I getting that right? You get the cloud scale, you get the integration, and open. >>Yeah, exactly. They are like ingredients, you know, and with these ingredients it's possible to bake a recipe, to have a dish that can be more usable, more effective, and more efficient. That's maybe the direction we're heading. >>Oh, so I've got to ask you about Falco, because it's come up a lot. We talked about it in our CUBE conversations already, on the internet; check that out, a great conversation there. You guys have close to 40 million-plus downloads of this. You also have AWS Fargate integration, so some significant traction. What does this mean? I mean, what is it telling us? Why is this successful? What are people doing with Falco? I see this as a leading indicator, and I know you guys were sponsoring the project, so congratulations; it's propelled your business. But there's something going on here. What is this a leading indicator of? >>Yeah. And for the audience, Falco is the runtime security tool of the cloud native generation, as such. And when we created Falco, we were inspired by the previous generation of, for example, network intrusion detection system tools and host protection tools and so on.
But we created essentially a unique tool that would really be designed for the modern paradigm of containers, cloud, CI/CD, and so on. Falco essentially is able to collect a bunch of granular information from your applications that are running in the cloud, and it has a rule engine that is based on policies that are driven by the community, essentially, that allow you to detect misconfigurations, attacks, and anomalous conditions in your cloud, in your cloud applications. Recently, we announced the extension of Falco to support cloud infrastructure runtime security, by parsing cloud logs like CloudTrail and so on. So now Falco can be used at the same time to protect the workloads that are running in virtual machines or containers, and also the cloud infrastructure. To give the audience a couple of examples: Falco is able to detect if somebody is running a shell in a redis container, or if somebody is downloading a sensitive file from an S3 bucket, all of this in real time. With Falco, we decided to go really community-first. Degas was one of the team members that started it, but we decided to go to the community right away, because this is one other ingredient. We were talking about the ingredients before, and there's no successful modern security tool without being able to leverage the community, empower the community to contribute to it, to use it, to validate it, and so on. And that's also why we contributed Falco to the Cloud Native Computing Foundation, so that Falco is a CNCF tool and is blessed by many organizations. We are also partnering with many companies, including Amazon. Last year we released the Fargate support for Falco, and that was a project done in cooperation with Amazon, so that we could have strong runtime security for the containers that are running in Fargate. >>Well, I've got to say, first of all, congratulations.
And I think that's a bold move, to donate, or not donate, contribute, to the open source community, because you're enabling a lot of people to do great things. And some people might be scared; they think they might be foreclosing a benefit in the future. But in reality, that is the new business model of open source. So I think that's worth calling out, and congratulations. This is the new commercial open source paradigm. And it kind of leads into my last question, which is: why is security well positioned to benefit from open source, besides the fact that the new model of getting people enabled, getting scale, and getting standards, like you're doing, makes everybody win? And again, that's a community model; that's not a proprietary approach. So again, open source: big part of this. Why does security benefit from open source? >>I am a strong believer. I mean, we are in a battle; we could say we are in a war, right? The good guys versus the bad guys. The internet is full of bad guys. And these bad guys are coordinated, are motivated, are sometimes well funded and well equipped. We win only if we fight this war as a community. So the old paradigm of vendors building their own ivory towers, you know, their own self-contained ecosystems, and us as users, as customers, having many different, you know, environments that don't communicate with each other, just doesn't take advantage of our capabilities. Our strength is as a community. So we are much stronger against the bad guys, and we have a much better chance of winning this war, if we adopt a paradigm that allows us to work together. Think only about, for example, I don't know, companies needing to train, you know, the workforce on the security best practices and the security tools.
It's much better to standardize on something, build the stack that is accepted by everybody, and then the talent can focus on learning the stack and becoming a master of the stack, rather than every single organization using a different tool. And then it's very hard to attract talent and to have the right, you know, people that can help you with your issues and with your goals. So the future of security is going to be open source. I'm a strong believer in that, and we'll see more and more examples like Falco of initiatives that really start with the community and for the community. >>Like we always say, open always wins. Turn the lights on, put the code out there. And I think the community model is winning. Congratulations, Loris Degioanni, CTO and founder of Sysdig. Congratulations on your success, and thank you for coming on theCUBE for the AWS Startup Showcase: Open Cloud Innovations. Thanks for coming on. Okay, this is theCUBE. Stay with us all day long, every day, with theCUBE; check us out at thecube.net. I'm John Furrier. Thanks for watching.
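The "shell in a container" detection mentioned in the examples above is expressed in Falco as a community-maintained YAML rule. The sketch below is a simplified illustration in Falco's rule syntax, not the exact rule shipped with the project:

```yaml
- rule: Terminal shell in container (simplified sketch)
  desc: A shell with an attached terminal was spawned inside a container
  condition: >
    spawned_process and container
    and proc.name in (bash, sh, zsh)
    and proc.tty != 0
  output: >
    Shell spawned in a container
    (user=%user.name container=%container.name shell=%proc.name)
  priority: NOTICE
  tags: [container, shell]
```

Rules like this are evaluated against the live event stream coming from the distributed sensors, rather than by polling, which is the real-time model described throughout the conversation.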
Network challenges in a Distributed, Hybrid Workforce Era | CUBE Conversation
>>Hello, welcome to this special CUBE conversation. I'm John Furrier, your host of theCUBE, here in Palo Alto, California. We're still remoting in, and getting great guests in; events are coming back. In the next few weeks we'll be at a bunch of different events, and you'll see theCUBE everywhere. But this conversation's about network challenges in a distributed, hybrid workforce era. We've got Atif, principal product manager, edge networking solutions at Dell Technologies, and Rob McBride, channel and partner sales engineer at Versa Networks. Gentlemen, thanks for coming on this CUBE conversation. >>John, thank you. >>So first of all, obviously with the pandemic, and now we're moving out of the pandemic, even with Omicron out there, we still see visibility into kind of back to work, and events, but it's clearly a hybrid environment: cloud, hybrid work. This has been a huge opening of everyone's eyes around network security and provisioning, you know, unexpected disruptions around everyone working at home. Nobody really forecasted that, the fact that the whole workforce would be remote coming in. So again, it put a lot of pressure on the network challenges over the past two years. How is it coming out of this different? What's your guys' take on this? >>Yeah. When we start looking at it, let's kind of focus a little bit on the challenges. You know, when this all kind of started off, obviously, as you stated, everyone was kind of taken by surprise, in a way, right? What do we do? We don't know what to do at this moment. And, you know, I go back and I remember a customer giving me a call when they were at first looking at, you know, a traditional WAN transformation, wanting to change their branches to do something from an SD-WAN perspective. And then the pandemic hit. And their question to me was: Rob, what do I do? What do I need to start thinking about now, all of a sudden? To your point, right?
Everyone now is no longer in the office, and how do I get them to connect? And more importantly, now that I can maybe figure out a way to connect them, how do I actually see what they're doing, and be able to control what they're actually now accessing? Because I no longer have that level of control as when they were coming into the office. And so a lot of customers, you know, were beginning to develop kind of homegrown solutions, looking at various different things as kind of quick hot patches, if you will, to address the remote workers coming in, and things of that nature. And we've been seeing a kind of progression through all this, as opposed to just solving for getting a user to connect into an environment that can provide, you know, continuity for them. They started coming up with other challenges, to the point of security. I had other customers calling me up and saying, you know, I've now got a ransomware problem, right? So, you know, what do I do about that? And what are the things I need to consider with respect to the fact that I'm now much more vulnerable, because my branch estate has basically become much more diversified? And in the solutions and things that they're looking for, obviously around security and connectivity, they've been challenged with addressing how they unify their levels of visibility without over-encumbering themselves, and how they actually manage this now much more distributed kind of network, if you will, right? So things around, you know, acronyms like ZTNA, or, you know, cloud security, and all this fun stuff start coming into play. But what it points to is that the biggest challenge is: how do they converge networking and security together, and provide an equitable and uniform policy architecture to identify their users, to connect to and access the applications that are relevant to the business, and have that uniformity whether it's at the branch or them being remote? And that's part of what we've seen as this progression over the last two years, and the kinds of solutions that they're looking for to kind of help them address that. >>It's almost like
But what it points to is the biggest challenge: how do they converge networking and security together, and provide an equitable, uniform policy architecture to identify their users, to connect and access the applications that are relevant to the business, and to have that uniformity whether it's at the branch or when they're remote? That's part of the progression we've seen over the last two years, and the kind of solutions they're looking for to help them address that. >> It's almost like it's a good thing in a way. It actually opens up the kimono and says: hey, this is the real world, we've got to prepare for this next generation. Atif, I want to get your take, because remember the old days: we were like, oh yeah, we've got to prepare for these scenarios where maybe 30% will be dialing in on the VPN or remotely. It's not 30%, it was like 100%. So budgets are out of whack, and yet they want more resiliency at the edge, right? So one, they didn't budget for it, they didn't predict it, and it's got to be better, faster, cheaper, more secure. >> Yeah, so John, the difference is that Dell was already working towards this distributed model, right? The pandemic just accelerated that transformation. So when customers came to us and said, "We've got a problem with our workforce and our users being so geographically dispersed all of a sudden," we had some insight we could immediately lean on. We had already started working on solutions and building the platforms that can help them address those problems, right? Because we'd done studies before this. We had done studies on this whole work-from-home, remote-office scenario.
And the results were pretty unanimous: users were always complaining about application performance issues, connectivity issues, and things like that. So we kind of knew about this, and we were able to proactively start building solutions. So when a customer comes in, like Rob was talking about, whose infrastructure wasn't set up for everybody to suddenly move on day one and start accessing all the corporate resources, with the majority of the organization accessing corporate resources away from campus, we have solutions, we've been building solutions, and we have guidance to offer these customers as they try to modernize their network and address these problems. >> Well, that's a great segue to the next talk track: what is network modernization? So let's define that, if you don't mind. I've got you guys here, you're both pros, so give me that sound bite, and then let's get into the benefits and the outcomes of what it enables. So if you guys want to take a stab at it, what does network modernization mean? >> I think there are a lot of definitions, and it kind of depends on your point of view, where you're responsible within the network or within the stack. My take, obviously, coming from a vendor and the solutions that we provide: modernization is really around solutions that begin to look at more software-defined architectures and definitions, that begin a level of decoupling between points of control, hardware and software, and other points of visibility and automation, to the point where things are, let's put it in air quotes, more "digitized."
And in that sense, even in how we're looking at things from a consumerization perspective: looking at things in a much more cloud-aware, cloud-specific, cloud-native way, with built-in automation as well as built-in analytics, where things sit in a broader SDN kind of construct. That would be one form of a definition from a modernization perspective. Now, to the other element of your question, the benefits that come as a result of this: as customers over the last 24 months have looked at different solutions to address part of what we've been talking about, whether they're using a word like SASE to define it, or looking for ZTNA-based solutions or cloud security to augment their overall needs, the benefits they're finding are simplicity of management, because they're now looking at more uniform solutions that can address secure access for remote workers in addition to their traditional office access, and better visibility. Because with the uniformity of this kind of architecture, they're now able to really see the level of context, right? I can see you, John: where you're coming in from, what you're accessing, what applications, on what devices. And now I have a means to apply a policy that matters to me as the business, from an IP perspective, to protect me as the business, but also to ensure that you're authorized and accessing things appropriately from an IT regulations perspective. So the benefits, in summary, are built-in automation, where things get done faster and things repair on their own as a result of automation, and greater visibility.
They now have much greater insight into what we're doing as users of the overall IT infrastructure, and better overall control that's been ultimately simplified as a result of consolidation and unification. >> That's awesome insight. Atif, what's your take on the benefits of network modernization? >> So I'd like to double down on something Rob said: the visibility. Enhanced visibility, in layman's terms, just means more insight. More insight means the ability to implement best practices around application usage and application performance. More insight means the control that IT departments need to manage and address security threats, right? To be able to identify an abnormal traffic pattern or unauthorized data movement, to be able to push updates and patches quickly. So it's really about that manageability; that level of control gives them the ability to offer a resilient and secure underlying networking infrastructure. And then, finally, one of the key benefits is cost savings; everybody is trying to be more efficient. And so from our perspective, it's really about building an open platform. We've built an x86-based platform, and we chose that because we wanted to tap into a mature ecosystem that customers can leverage as they build towards their modernization goals. So we're leveraging technologies like uCPE, universal customer premises equipment. That's really just an open hardware platform, but by consolidating your network functions, like routing, firewall, and WAN optimization, onto a single device, you get hardware cost savings. You get operational savings as well, right?
Common hardware infrastructure means a common deployment model, means streamlined operations, means fewer truck rolls, right? So there's a tremendous amount of benefit from the cost standpoint as well, because from our perspective, that's what customers are looking for: enterprise-grade solutions that can scale in a cost-effective manner. >> That's awesome. You guys mentioned SASE earlier. First of all, software as a service is very sassy, with the big modern application movement; I always joke about the term, because SaaS is software as a service, but for you guys SASE is secure access service edge, which is a huge growth category right now across security and networking. It's a huge discussion, and SD-WAN fits into it somehow, because it used to be campus networking before, and now everyone's world is the same, it's all connected. So SASE is huge. How does it fit with SD-WAN? What's the difference? Because SD-WAN has been booming over the past decade as well. How are you guys seeing those converge, and what's the difference? >> You know, I'd agree with you, SD-WAN has been booming the last couple of years, right? It's kind of the bread and butter of what we've been doing. But to your question about its linkage to SASE: as you articulated, SASE is secure access service edge, by the definition of the acronym. So it's probably good to first define it a little bit, maybe for those who may not be overly familiar with it.
I like to dumb it down a little bit to the point that SASE is really an architecture around the convergence of networking and security, put together in a uniform platform or service that is delivered from the cloud, as well as addressing traditional WAN requirements. Now, digging in, SASE is broken into two buckets: a network layer and a security layer. And by its definition, by a particular analyst, a big portion of that network component is SD-WAN. So SD-WAN, providing the value associated with dynamic link steering, automation, application awareness, and so on and so forth, is a core element of the foundation of the network layer associated with SASE. And then the other element of SASE is around the security bit. So they're very much intrinsically linked. For example, at Versa, just to mention it here, the SASE cloud that we built for our customers to leverage, for private access, public access, secure internet, CASB, and DLP type services, is built upon SD-WAN. And our customers on traditional WAN are using SD-WAN to connect to that cloud. So it's very much linked, and they go hand in hand, depending on your approach to the broader architecture. And another point I'll bring into that.
What it also highlights, whether it's around SASE or not, and pertinent to everything we've been talking about, is that the other thing coming along intrinsically and natively is security. Whether it's security at the branch, or some form of identity management, or a point of improving posture for the enterprise, to inspect traffic at the branch or remotely, the trend we're seeing, borne out by customer adoption of our own platform, is basically security and SD-WAN coming together, whether for traditional WAN transformation or as a result of SASE services, for hybrid connectivity needs: remote workers, a hybrid workforce going into the cloud for their connectivity needs and optimizations, in addition to the enterprise's branch transformations. >> I like that native aspect of it. We used to joke and call SD-WAN "SD-cloud," because we're all using cloud technologies. Talk about the security impact real quick, if you don't mind. I want to double-click on what you mentioned there, because I think cloudification plus the security piece seems to be a key part of this dynamic. Is that true, did I get that right? What does this all mean with cloudification? >> Yeah, and I agree with where you're leading with that. Look at all of us right now, exactly, in talking with you right now, John: as you stated at the beginning, we're all remote. And so from a business perspective, from an engagement perspective, we're accessing a cloud service. Now, what's critical for us, as enterprise employees, is that our means of accessing this cloud service needs to have some level of hardening. We need to protect, right?
Not only our own assets that we're using, our laptops or other machinery we use to connect to the network, but in addition, our company needs to protect itself, right? So how can we do that, in a very fast and distributed way? Sure, we can put security endpoints at every location, with every user, in every home, and that's one means of a solution. But to your point about cloud: now take all of that and bring it to the cloud, where you have a much more distributed, much more dynamically scalable approach to doing that level of inspection, posture, and enforcement. And so that's where the rubber meets the road: for us to access those cloud applications, the cloud we're using as a conduit for security as well as networking now also gives us connected and optimized paths to applications, like the one we're using right now to have this conversation. So that's where it all comes together. And the security element matters because we're so dispersed; we're much more vulnerable, right? My home network is arguably maybe not as secure as when I go into an office, right? >> Most people's, but not yours, because you work for Versa Networks. >> I can make that argument, yes. But for the average remote worker, our homes aren't as hardened, and so we're a point of risk, right? As we go to cloud apps, we're more connected to the internet. Being able to do this enforcement from a SASE concept helps provide that improved posture for enterprises to secure their traffic and get visibility into it. >> All my network engineer friends are secure, as you'd expect, and I always joke: the malware messed with the wrong network engineer.
Imagine going after their house, spearphishing, trying to get into their network. But I want to bring this back, because what we're bringing up here is that cloud is actually enabling more on-premises: you're working at home, and that's a premises, right? So the edge is also a premises. Edge and cloud kind of eliminate the whole notion of what is cloud and what is edge, because at the end of the day, the edge is where you are, right? So it's about having the performance, the security, and the partnership. Same with Dell; I know you guys have been on this for a while, because I've been covering it, but the notion of edge completely changes now. What does it even mean? The home is an edge, the data center is an edge, the cars are an edge, the telco monopoles are an edge. This is a big deal. This is all about unification, about making it all work. What's your take on this from the Dell perspective? >> Yeah, and I think you kind of summarized it, right? What does edge mean to you? Every time I have a conversation with somebody, I always start with: let's define what your edge is. And so from the Dell perspective, we believe in providing enterprise-grade infrastructure. We want to give our customers the right tools. And with this trend of a hybrid workforce, a geographically dispersed user base, we're seeing a tremendous need from IT departments for tools and solutions that can give them the control they can push out into their networks, to ensure safe and secure external access to corporate resources. Right?
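The centrally pushed control described here, an IT department keeping a dispersed fleet of branch and home-office edges on one uniform policy, can be sketched as follows. Site names and the version-number scheme are invented for illustration; no vendor API is implied.

```python
# Hypothetical sketch of centralized policy push: a controller brings every
# managed edge (branch or home office) up to one uniform policy version and
# reports which sites actually needed the update.

sites = {
    "branch-nyc": {"policy_version": 3},
    "home-john":  {"policy_version": 2},
    "home-rob":   {"policy_version": 3},
}

def push_policy(sites, latest):
    """Bring stale sites up to `latest` in place; return the updated names."""
    updated = []
    for name, state in sorted(sites.items()):
        if state["policy_version"] < latest:
            state["policy_version"] = latest
            updated.append(name)
    return updated

print(push_policy(sites, latest=3))   # → ['home-john']
```

One push, one resulting policy state everywhere, which is the kind of simplified control the speakers attribute to unified management.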
And so that's what we're committed to: making sure of that management layer, whether by developing the solutions in-house or bringing the right partners to the table, and ensuring that our customers have the right tools, because this trend, this new normal, is not going away. And so we have to adapt. >> So thanks for coming on. Rob, we'll give you the final word. What's changed the most, in your opinion, with customers' environments, around how they're handling their networks as we come out of the pandemic? This has kind of proven which projects are working, which ones aren't, where to double down, and what was screwed up. I mean, come on, we're kind of seeing it all play out. As we come through the pandemic and people come out of this, what's the big learning? >> That you need partners, right? And I don't even mean that from a vendor perspective. What I mean by partners is this: as we're finding, and as I think a lot of the customers I've engaged with have found, this isn't easy, even as much as we in the technology vendor market try to make things easier to do. There's a lot of technology, and enterprise IT recognizes they need a lot of these building blocks, whether around automation or other tools, as Atif was leading into. So we're finding that a lot of our interactions are really about identifying an appropriate partner who can not only talk to the technology, but help them actually understand all the various multi-colored Lego blocks they've got to put together, and then help them actually turn that into a realization.
Right, and then be able to give them the keys so they can eventually drive the car themselves. And so the learning we're seeing here is: there's a lot of tech, a lot of new tech, new approaches to existing technology and things they've already done, and they're looking for help. They're looking for, let's call it, trusted-advisor status: people who can explain the technology to them and then help them understand how to put it together, so they can ultimately accomplish their overall objectives from an IT perspective. And the other learning, I'll just say this and then I'll stop: SD-WAN isn't dead, right? SD-WAN is actually still thriving, and it's actually an impetus for a lot of other things the enterprise is doing, whether around SASE-oriented services, remote access, private access, and other things of that nature. >> I totally agree. I think with networking there's still going to be so much innovation going on, with the edge exploding as well. Really great, amazing stuff happening. Thanks for coming on this CUBE Conversation; great conversation. Taking it to the edge: network challenges in the distributed hybrid workforce era is about moving things around the internet and making them secure. I'm John Furrier, your host. Thanks for watching.
Sunil Potti & Lior Div | CUBE Conversation, October 2021
>> Hello, and welcome to this special CUBE Conversation. I'm Dave Nicholson, and this is part of our continuing coverage of Google Cloud Next 2021. I have two very special guests with me, and we are going to talk about the topic of security. I have Sunil Potti, who is vice president and general manager of Google Cloud Security, and who in a previous life had senior leadership roles at Nutanix and Citrix, along with Lior Div, the CEO and co-founder of Cybereason. Lior was formerly a commander in the much-famed Unit 8200, part of the Israel Defense Forces, where he was actually a Medal of Honor recipient. Very honored to have him here this morning. Sunil and Lior, welcome to theCUBE. Sunil, welcome back to theCUBE. >> Yeah, great to be here, David, and to be in the presence of a Medal of Honor recipient, by the way; a good friend of mine, Lior. >> Well, good to have both of you here. So I'm the kind of person who likes my dessert before my entree, so why don't we just get right to it? The two of you are here to announce something very, very significant in the field of security. Sunil, do you want to start us out? What are we here to talk about? >> Yeah, maybe just to set the context: as many of you know, about a decade ago a nation-sponsored attack actually got into Google, plus a whole bunch of tech companies. Operation Aurora was quite infamous for a certain period of time, and Google realized almost a decade ago that security can't just be a side thing, it has to be the primary thing, including one of the co-founders becoming, for lack of a better word, the chief security officer for a while. But one of the key takeaways from that whole incident was that you have to be able to detect everything and trust nothing, and the underpinning for at least one of those led to the zero trust architectures that everybody now knows about. But the other part, which
is not as popular in industry vernacular, but in many ways equally important and in some ways more important, is the fact that you need to be able to detect everything so that you can actually respond. And that led to the formation of a project internal to Google to say: let's democratize storage and make sure nobody has to pay for capturing security events. That led to the formation of a new industry concept called a security data lake, and Chronicle was born. Then, as we started evolving that into the enterprise segment, partnering with Cybereason created a one-plus-one-equals-three synergy between, say, what you detect from the endpoint on one hand; and, as Lior will tell you, the Cybereason technology happens to start with the endpoint, but the core tech is really around detecting events, and doing it in a smart way so you can respond to them in a much more contextual manner. But beyond just that synergy, between a world-class, planet-scale security data lake forming the foundation and integrating in a much more cohesive way with Cybereason's detection and response offering, the spirit was that this is the first step of a long journey to really hit the reset button, going from a reactive mode of security to a proactive mode of security, especially against nation-state-sponsored attack vectors. So maybe, Lior, you can speak a few minutes on that as well. >> Absolutely. So, as you said, I'm coming from a background of nation-state hacking, so for us at Cybereason it's not foreign what the Chinese are doing on a daily basis, or the growing ransomware cartels operating right now in Russia. When we looked at it, we said: Cybereason is very famous for our endpoint detection and response capability, but when we established Cybereason, we established it on a core, almost
fundamental idea of finding the malicious operation; we call it the MalOp. Basically, instead of looking for alerts, or just pieces of data, we want to find the hackers, we want to find the attack, we want to be able to tell the full story of what's going on. In order to do that, we built into Cybereason, basically from day one, the ability to analyze any data in real time in order to stitch it into the story of the MalOp, the malicious operation. But what we realized very quickly is that while our solution can process more than 27 trillion events a week, we cannot feed it fast enough just from the endpoint, and we are kind of blind when it comes to the rest of the attack surface. So we were looking, to be honest, for quite a while, for the best technology that can feed this engine and, as Sunil said, make one plus one equal three, or four, or five, to be able to fight against those hackers. In this journey we found Chronicle: the combination of the scale that Chronicle brings, the ability to feed the engine, and together the ability to find those hackers in real time, and real time is very, very important, and then to respond to those types of attacks. So basically, what is exciting here: we created a solution that is five times faster than any solution that exists right now in the market, and most importantly, it enables us to reverse the adversary advantage, to find them and push them out. So we're moving from just telling you a story to actually preventing hackers from being in your environment. >> So Lior, I want to double-click on that just a little bit. Can you give us a concrete example of this difference between simply receiving alerts and actually correlating, creating actionable, proactive intelligence? Can you give us an example of that working in the real world? >> Yeah, absolutely. We can start from a
simple example of ransomware. By the time I tell you there is ransomware in your environment and I send an alert, there will be five computers that are encrypted; by the time you look at the alert, it's going to be five thousand machines that are encrypted; and by the time you do something, it's already too little, too late. And this is just a simple example. So preventing that from happening, in a very timely manner, is critical in order to prevent the damage of ransomware. But if you step aside from ransomware and look, for example, at an attack like SolarWinds: the purpose of that attack was not to create damage, it was espionage. The Russians wanted to collect data on our government, and that was the main purpose of the attack. So the ability to say, "Right now there is a penetration, these are the steps they're taking, and there are five ways to push them out of the environment," and then actually doing it: this is something that today is done manually, and with the power of Chronicle and Cybereason we can do it automatically. That's the massive difference. >> Sunil, are there specific industries that should be really interested in this, or is this a broad set of folks that will be impacted? >> You know, in some ways the saying these days, to Lior's point on ransomware, is that if a customer or an enterprise has reasonable top-line revenue, you're a target, to some extent. So in that sense, especially given that this has moved from pure espionage, whether government-oriented or industrial, to financial fraud, it applies to a wide gamut of industries, not just financial services or critical infrastructure companies like oil and gas pipelines. Any company with any sort of IP that they feel drives
their top-line business is now a target for such attacks. >> So when you talk about the idea of partnership and creating something out of a collaboration, what's the meat behind this? What are you guys doing beyond saying, "Hey, Sunil and Lior, these guys really like each other and respect what the other is doing"? What's going on behind the scenes? What are you actually implementing here, moving forward? >> So, every partnership starts with love, so that's good. [Laughter] But then it needs to translate into real, pure value for our customers, and pure value comes from a deep integration when it comes to the product. So basically, what will happen is: every piece of data that Cybereason can collect from the endpoint, and every piece of data that Chronicle can collect from any log that exists in the world, together cover the whole attack surface. So first, we have access to every piece of information across the full attack surface. Then the main question is: okay, once you collect all this data, what are you going to do with it? And most companies, or all the companies today, don't have an answer. They're saying, "Oh, we're going to issue an alert, and we hope there's a smart person behind the keyboard who can understand what just happened and make a decision." With this partnership and this integration, we're not outsourcing the question of what to do to the user; we're giving them the answer. We're telling them: "This is the story of the attack, these are all the pieces going on right now," and in most cases we're going to say, "And by the way, we just stopped it, so you can prevent it in the future." >> When will people be able to leverage this capability in an integrated way? And by the way, restate how this is going to market as an integrated solution. What are we going to call this, moving forward? >> So basically, this is the Cybereason XDR powered by Chronicle, and
we are very, very happy about it. >> Yeah, and just to add to that, I would say, look, the meta strategy here, and the way it'll manifest in this offering that comes out in early 2022, is this: if you think about it, today a classical, quote-unquote, security pipeline is detect, analyze, and then respond. Obviously, just doing those three in a good way is hard; doing it in real time at scale is even harder. So just that itself is where Cybereason and Chronicle add real value, where we are able to collect a lot of events and react in real time. But to your original point of why this is probably going to be a game changer in the years to come: we're trying to change that from detect, analyze, respond to detect, understand, and anticipate. Because ultimately that's really how we can change the profile from being reactive, in a world of ransomware or anything else, to being proactive against nation-sponsored or nation-influenced attacks, because they're not going to stop, right? So the only way to do this, rather than just batten down the hatches, is to really change the profile of how you'll actually anticipate what they are probably going to do in six months or twelve months. And so the graph technology that powers the heart of Cybereason is going to be intricately woven in with the contextual information that Chronicle can get, so that the intermediate step is not just about analysis, but about truly understanding the overall strategy that has been employed in the past to predict what could happen in the future. Therefore, actions can be taken downstream, where you can now say: "Hey, most likely these five buckets have this kind of personal information data. There's a reasonable chance that if they're exposed to the internet, then as you create more such buckets in that project, you're going to be susceptible to
more ransomware attacks or some other attacks," right? And that's the kind of thinking, the transformation, that we're trying to bring about with this joint offering. >> So Lior, this concept of MalOps, and Cybereason itself: you weren't just born yesterday; you have thousands of customers around the globe. >> He does look like he was born yesterday. >> I know, I know. Well, you know, it used to be that the ideal candidate for CEO of a startup company was someone who dropped out of Stanford. I think it's getting to the point where it's people who refused admission to Stanford. So the dawn of the 14-year-old CEO is just around the corner. But Lior, do you get frustrated when you become aware of circumstances that would not have happened had they implemented your technology as it exists today? >> Yeah. This was a really frustrating year, starting with SolarWinds. If you analyze the code of SolarWinds, and we did it, but others did as well, basically the Russians were checking if Cybereason was installed on the machine, and if we were installed on the machine, they decided to stop the attack. First, it was a great compliment for us from, you know, our not-friend on the other side, that they decided to stop the attack. But on a serious note, we were pissed, because if people were using this technology, we know that they are not going to be attacked. When we analyzed it, we realized that we had three different ways to find the SolarWinds hackers. So this is just one example. And then the next example: in the Colonial Pipeline hack, we were the ones that identified DarkSide as the group behind the hacking. We were the first ones to release research on them, and we showed how we can prevent what they are doing with our technology. So when you see those types of examples, just two here, and we have many of them on a daily basis, we just
know that we have the technology to do that. Now, when we're combining the Chronicle technology with the technology that we already have, we basically can reverse the adversary's advantage. This is not something that you do in a single day, but it is something that really gives power to the defenders, to the communities of CISOs that exist across the US. And I believe that if we join forces, lean into this community, and basically push the solution out, our ability to fight against those cartels, specifically the ransomware cartels, is going to be massive. >> Sunil, this time next year, when we are at Google Cloud Next 2022, are you guys going to come back on and offer up the "we told you so" awards? Because once this is actually out there and readily available, the combination of Chronicle and Cybereason's technology, it's going to be hard for some CISOs to have an excuse. It may be uncomfortable to know that they could have kept the door secure but didn't. >> Yeah, is that bad business, to hand out awards for doing dumb things? I don't know; a version of the Darwin Awards probably doesn't make sense. But generally speaking, I do think we're all citizens in this, right? Because, you know, we talk about customers; I mean, Alphabet and Google is a customer in some ways, Cybereason is a customer, theCUBE is a customer, right? So I think the rubber hitting the road a year from now will be: we should do this again. I don't know if theCUBE does more than two folks at the same time, David, but we should; I'm sure we'll have enough to have at least half a dozen in the room to talk about the solution. Because, as you can imagine, this thing didn't just materialize. It's been being cooked for a while between your team and our team, and in fact it was inspired by feedback from some joint
customers out in the market, and all that good stuff. So a year from now, I think the best thing would be not just having customers talk about the solution, but to really talk about that transformation from respond to anticipate, and whether they feel better about their security posture. Because, and Lior should probably spend a few minutes on this, I think we're on the tip of the spear of this nation-state era, and what we've just seen in the last few years is what the nation-states have seen over two decades ago. They're going to run those playbooks on the enterprise for the next decade or so. >> Yeah, Lior, talk about that for a minute. >> Yeah, just to continue Sunil's thought, it's really about finding the unknown. Because what's happening on the other side, specifically China and Russia, and lately we saw Iran starting to gain power, is that their job is to become better and better, to innovate and create new types of attacks on a daily basis as technology evolves. So basically there is a very simple equation: as we use more technology and rely more on technology, the other side is going to exploit it in order to gain more power, conduct espionage, and create financial damage. But it's important to say that this evolution is not going to stop; this is just the beginning. And a lot of the capability that belonged just to the government-against-government fight has basically leaked in the past few years, and now criminals are starting to use it as well. So in a sense, if you think about what's happening right now, there is basically a cold war that nobody is talking about between the giants, where everybody is hacking everybody, and in the crossfire we see all of those enterprises across the world. It was not a surprise that after the Biden and Putin meeting, suddenly it was quiet; there was no ransomware for six weeks. And after something changed in the politics, suddenly we can see a growing kind
of attack when it comes to ransomware, which we know was directed from Russia in order to create pressure on the US economy. >> Sunil, wrap us up. What are your final thoughts, and what's the big takeaway here? >> I think the key thing for everyone to know is, look, we are going into an era of state-sponsored threat vectors, not espionage as much as threat vectors, that affect every business. And so in many ways the chief information security officer, the chief risk officer, and in many ways the CEO and the board, now have to pay attention to this topic, much like they paid attention to mobile 15 years ago as a transformation thing, or maybe cloud 10 years ago. I think cyber is one of those; it's sort of like the wireless era, David: it existed in the '90s, but didn't really break out until the iPhone hit, until the world of consumerization really took off, right? And I think we're at the tip of the spear of cyber really becoming like the era of mobile 15 years ago. So if there's a big takeaway: yes, there are lots of solutions, and the good news is great innovations are coming through companies like Cybereason working with proven providers like Google and so forth, so there's a lot of support in the ecosystem. But if there was one takeaway, it's that everybody should just be ready and internalize it. We don't have to be paranoid about it, but we should anticipate that this is going to be a long game that we'll have to play together. >> Well, with that, taking off my journalist hat for a moment and putting on my citizen hat, it's reassuring to know that we have really smart people working on this, because when we talk about critical infrastructure and control systems being under threat, that's more significant than simply having your Social Security number stolen in a data breach. So with that, I'd like to thank you, Sunil and Lior, thank
you so much for joining us on this special CUBE Conversation. This is Dave Nicholson, signing off from our continuing coverage of Google Cloud Next 2021. (music)
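As a rough illustration of the "detect, understand, anticipate" shift discussed in this conversation, the sketch below turns the five-buckets example into a toy rule: new storage buckets inherit risk from past incidents in the same project. The project names, incident flags, and rule itself are invented for illustration; they are not how Cybereason's or Chronicle's graph technology actually works.

```python
# Toy model of "anticipate": infer that a new storage bucket is at risk
# when sibling buckets in the same project were exposed and held PII.
past_incidents = {
    ("project-alpha", "bucket-1"): ["public_exposure", "pii_present"],
    ("project-beta", "bucket-9"): [],
}

def anticipate(project, new_bucket):
    """Flag a new bucket when any sibling bucket in the same project
    has a history of public exposure plus sensitive data."""
    risky = any(
        "public_exposure" in flags and "pii_present" in flags
        for (proj, _), flags in past_incidents.items()
        if proj == project
    )
    return f"{new_bucket}: {'at risk' if risky else 'no known risk'}"

print(anticipate("project-alpha", "bucket-7"))  # bucket-7: at risk
print(anticipate("project-beta", "bucket-2"))   # bucket-2: no known risk
```

The point of the sketch is only the direction of inference: instead of alerting after an attack (respond), history is used to predict which future assets are likely targets (anticipate).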
Sandeep Lahane and Shyam Krishnaswamy | KubeCon + CloudNative Con NA 2021
>> Okay, welcome back everyone, to theCUBE's coverage here at KubeCon + CloudNativeCon 2021 in person. theCUBE is here. I'm John Furrier, host of theCUBE, with Dave Nicholson, my cohost and cloud analyst. Man, it's great to be back in person. We also have a hybrid event. We've got two great guests here, the founders of Deepfence: Shyam Krishnaswamy, co-founder and CTO, and Sandeep Lahane, founder. It's great to have you on. This is a super important topic. Cloud native has crossed over; everyone's talking about it mainstream. But security is driving the agenda, and you guys are in the middle of it, with a cutting-edge approach and news. >> Like we were talking about, John, we've been operating at the intersection of open source, security, and cloud native, essentially. And today's a super exciting day for us. We're launching something called ThreatMapper, Apache v2, completely open source. Think of it as an x-ray or MRI scan for your cloud: scan, you know, visualize the cloud at scale, across all of the modalities. Essentially, we look at cloud as a continuum. It's not a single modality; it's containers, it's Kubernetes, it's VMs, it's serverless, all of them coexisting side by side. That's how we look at it, and ThreatMapper essentially allows you to visualize all of this in real time. Think of ThreatMapper as something that takes over the baton from the CI side, when the shift-left gets over; that's when ThreatMapper comes into the picture. So yeah, super excited. >> It really gives the developer and the ops teams visibility into the health statistics of the cloud. But also, as you said, it's not just software mechanisms. The cloud is evolving, new services being turned on and off; no one even knows what's going on sometimes. This is a really hidden problem, right? >> Yeah, absolutely.
The basic problem is, I mean, I was just talking to a gentleman this morning: it's $270 billion-plus of public cloud spend, John. $270 billion-plus, even more than that they're saying, right, in projected revenue. And there is not even a single community tool to visualize all the clouds and all the cloud modalities at scale. Let's start there. That's what we decided: you know what, let's start with visualizing everything out there, and then look for known badness, which is the vulnerabilities, which still remain the biggest attack vector. >> Sure. Tell us what's under the hood. How does this all work at cloud scale? Is it a cloud service, a managed service, is it code? Take us through the product. >> Absolutely. So, before that, there's one small point that Sandeep mentioned that I'd like to elaborate on here, right? He spoke about the whole cloud spend being such a large volume. If you look at the way people build applications today, it's not just a single cloud anymore; it's multi-cloud, multi-region, across diverse platforms, right? Where is the solution to look at what my interests are at this point? That is the missing piece here, and that is what we're trying to tackle, and that is why we are going open source. Coming back to your question: how does this whole thing work? We have a completely on-prem model, where customers can download the code today and install it. They can build it; we give binaries too. And shortly, just as with the exciting announcement that came out today, you're going to see some more exciting announcements that are going to make it a lot easier for folks out there. >> So how does this all fit into security as a microservice, and your vision of that? >> Absolutely, absolutely.
You know, I'll tell you, this has to do with one of the continual conversations I had when I was trying to shape the whole vision, really. "Hey, what about security as a microservice?" I would go and ask people, and they'd say, that makes sense; everything is becoming a microservice. So what you're saying is: you're going to deploy one more microservice, just like I deploy all of my other microservices, and that's going to look after my microservices. That makes logical sense, essentially. That was the genesis of the terminology. So Deepfence essentially is deployed as a microservice. As you scale, it's deployed and operated just like your microservices. No code changes, no other toolchain changes. It is just yet another microservice that's going to look after your microservices. >> So there's one point I would like to add here, which is something very interesting, right? The whole concept of microservices came from, if you remember, the memo from Jeff Bezos saying everybody's going to build services, or be fired. That gave rise to a very unconventional way of thinking about applications. At Deepfence, we believe you should bring the same unconventional way of thinking to security. Your security today is all bottom-up; no, it has to start from the top. If your applications are microservices, your security should also be a microservice. >> So you need a microservice for the microservices, security for the security. You're starting to get into a paradigm shift, where you're starting to see the API economy, that Bezos and Amazon philosophy and their approach, go mainstream. So I've got to ask you, because this is a trend we've been watching and reporting on: the actual application development process is changing from the old school software-defined lifecycle to now, where you've got machine learning and bots, you have AI.
Now you have people building apps differently, and the speed at which they want to code is high, while other teams are slowing them down. I've heard developers complain that security teams hold them up for a couple of days: "Oh my God, I have to wait five days." It used to be five weeks, now it's five days, and they think that's progress. The developers want five minutes, real time. So this is a real optimization problem. >> Well, you know what? Shift left was a good thing, and it's still a good thing. It helps you figure out the issues early on in the development lifecycle, essentially, right? So you start weaving in security early on, and it stays with you. The problem is we iterate so frequently that you end up with a few hundred vulnerabilities every time you scan, oftentimes a few thousand, and then you go to runtime and you can't really fix all those thousand, you know? So there is a little bit of a gap there. If you look at the CI/CD cycle, the infinity loop that they show you, right: you've got the far left, which is where you have the SAST tools, Snyk and all of that; then you've got the center, which is where you hand off to ops; and then on the right side, you've got the ops side. Deepfence essentially starts in the middle and says: look, I know you have a thousand vulnerabilities, okay, but at runtime I see only one of those packages is loaded in memory, and only that one is getting traffic. You go and fix that one, because that's the one that's going to hurt. You see what I'm saying? So that gap is what we're bridging. You start with the left; we come in in the middle and stay with you throughout. >> That touches on a subject: what are the changes that we're seeing?
What are the new threats that are associated with containerization? And coupled with that, looking back on traditional security methods: how are traditional security methods failing us against the new requirements that come out of the microservices and containerized world? >> So, having been at FireEye, I'll tell you, I've worked on their Windows products, and at Juniper... >> And very, very deeply involved in the space. >> In fact, at one company we even sold a product to Palo Alto. So having been around the space, I think it's a foregone conclusion to say that attackers have become more sophisticated. Of course they have. It's not a single attack vector that gets you anymore, not a script kiddie somewhere sending one malicious HTTP request and exploiting it. No, these are multi-vector, multi-stage attacks. They evolve over time and space, you know? So you've got attacks evolving over time and space, one signal piling up on another. And on the other side, you've got the infrastructure, which is getting fragmented. What I mean by fragmented is that it's not one data center where everything would look and feel and smell similar; it's containers and Kubernetes and serverless, and all of that stuff is hackable, right? So you've got that big shift happening there, and you've got sophisticated attackers. How do you build visibility? In fact, initially we would go and speak with DevSecOps practitioners and ask: what is the pain point? Is it that you don't have enough scanners to scan? Is it at runtime? What is the main problem? It's the lack of visibility, the lack of observability, throughout the lifecycle as well as across the layers. It was an issue of aggregation.
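The runtime prioritization described in this conversation, fixing only the vulnerable packages that are actually loaded in memory and receiving traffic, can be sketched roughly as below. The data structures, field names, and all findings except the real Log4Shell and ImageMagick CVEs are illustrative assumptions, not Deepfence's actual schema.

```python
from dataclasses import dataclass

@dataclass
class Vulnerability:
    package: str     # affected package name
    cve_id: str      # e.g. "CVE-2021-44228"
    severity: float  # CVSS-style score, 0-10

def prioritize(scan_findings, loaded_packages, packages_with_traffic):
    """Keep only findings whose package is loaded in memory AND
    receiving traffic, then sort the survivors by severity."""
    active = [
        v for v in scan_findings
        if v.package in loaded_packages and v.package in packages_with_traffic
    ]
    return sorted(active, key=lambda v: v.severity, reverse=True)

# A scan may report hundreds or thousands of findings...
findings = [
    Vulnerability("log4j", "CVE-2021-44228", 10.0),
    Vulnerability("imagemagick", "CVE-2016-3714", 8.4),
    Vulnerability("left-pad-ish", "CVE-0000-0000", 5.0),  # made-up filler
]
# ...but at runtime only one package is loaded and serving traffic.
urgent = prioritize(findings, loaded_packages={"log4j"},
                    packages_with_traffic={"log4j"})
print([v.cve_id for v in urgent])  # ['CVE-2021-44228']
```

The design choice being illustrated: static scan results are a superset; intersecting them with runtime signals shrinks a thousand-item backlog to the handful that can actually hurt in production.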
Trying to land a plane that flew yesterday and you think it's landing tomorrow. It's all like lagging. Right? Exactly. So I got to ask you, because this has comes up a lot, because remember when we're in our 11th season with the cube, and I remember conversations going back to 2010, a cloud's not secure. You know, this is before everyone realized shit, the club's better than on premises if you have it. Right. So a trend is emerged. I want to get your thoughts on this. What percentage of the hacks are because the attackers are lazier than the more sophisticated ones, because you see two buckets I'm going to get, I'm going to work hard to get this, or I'm going to go for the easy low-hanging fruit. Most people have just a setup that's just low hanging fruit for the hackers versus some sort of complex or thought through programmatic cloud system, because now is actually better if you do it. Right. So the more sophisticated the environment, the harder it is for the hackers, AK Bob wire, whatever you wanna call it, what level do we cross over? >>When does it go from the script periods to the, the, >>Katie's kind of like, okay, I want to go get the S3 bucket or whatever. There's like levels of like laziness. Yeah. Okay. I, yeah. Versus I'm really going to orchestrate Spearfish social engineer, the more sophisticated economy driven ones. Yeah. >>I think, you know what, this attackers, the hacks aren't being conducted the way they worked in the 10, five years ago, isn't saying that they been outsourced, there are sophisticated teams for building exploiters. This is the whole industry up there. Even the nation, it's an economy really. Right. So, um, the known badness or the known attacks, I think we have had tools. We have had their own tools, signature based tools, which would know, look for certain payloads and say, this is that I know it. Right. 
Where things really start getting out of control is when you have so many different modalities running side by side, so many moving attack surfaces. They will evolve, and you never know that you've scanned enough, because you just pushed new code. >> Yeah. So we've been covering IronNet, retired General Keith Alexander's company. They have this Iron Dome concept where there's more collective sharing. How do you see that trend? Because I can almost imagine that the open-source community is going to love what you guys have got; they're going to probably feed on it like it's nobody's business. But then you start thinking: okay, we're going to be open, and you have a platform approach, not so much a tool-based approach. When do we cross over to the Nirvana of real security sharing, real-time telemetry data? >> I want to answer this in two parts. The first part is, really, that a lot of this wisdom lives only in the community. It's tribal knowledge; it's in the informal feeds, in GitHub tickets. What we're really doing with ThreatMapper is consolidating that and giving it out as a platform that you can use, and I'd like it to go for free. This is the part we are never going to monetize; we are certain about this. What we are monetizing instead is, like I said: you have the x-ray or MRI scan of the cloud, which tells you what the pain points are. This is free. This is a public, collective good. This is for free, period. >> It took this long to get to that point, by the way, in this discussion. >> Yeah. >> This timing's perfect. >> Security is a collective good, right? And if you're doing open source, community-based programs like this, it's for the collective good. Look, this whole ThreatMapper is going to be open source.
We're going to make it a platform, and our commercial version, which is called ThreatStryker, is where we have our core IP. Think about it this way, right: you figured out all the pain points using ThreatMapper, which was free, and now you want the remedy for that pain. You go to ThreatStryker: targeted defense, targeted quarantining of those specific workloads, and all that stuff. That's what our IP is. What we really do there is say: look, you figured out the attack surface using ThreatMapper; now use ThreatStryker to protect against attacks in progress. >> Free, not free? Or is that going to be paid? >> Oh, that's paid, okay. >> That's awesome. So you bring the goods to the party, share that collective good, see where that goes, and the Stryker on top is how you guys monetize. >> And that's where we do some unique things; I'd like to talk about that for 30 seconds or so. The unique thing we do in the industry is being able to monitor what comes in, what goes out, and what changes across time and space, because, look, most of the modern attacks evolve over time and space, right? So you've got to be able to see things like this: here's a particular workload which has a vulnerability; ThreatMapper told you that. ThreatStryker, what it does is it tells you: a bunch of these workloads have a vulnerability; and now it knows that somebody is sending a malicious HTTP request with a malicious payload; and you know what, tomorrow there's a file system change, and there's an outbound connection going to some funny place. That is the part that we're watching. >> Yeah. And you give away the tool to identify the threats, and sell the hammer. >> That's giving you protection. >> Yeah. Awesome. I love you guys, I love this product, I love how you're doing it. I've got to ask you to define it: what is security as a microservice?
>> So, security as a microservice is a deployment modality for us. Deepfence has one console. Deepfence is currently self-hosted by the customers within their infrastructure; going forward, we'll also be launching a SaaS version, the cloud version of it. What happens as part of this deployment is that they run the management console, which is the GUI, and then a tiny sensor, which collects telemetry; that sensor is deployed as a microservice, is what I'm saying. So if you've got 10 containers running, Deepfence's lightweight container is the eleventh, the microservice, and it utilizes eBPF, you know, for tracing and all that stuff. >> Awesome. Well, I think this is the beginning of a shift in the industry. You start to see DevOps and cloud native technologies become the operating model, not just dev; dev and ops are now in play, along with infrastructure as code, which is the ethos of the cloud generation. Security as code: that's what you guys are doing. Thanks for coming on, really appreciate it. Breaking news here on theCUBE; obviously great stuff. Open source continues to grow and win in the new model: collaboration. This is theCUBE bringing you all the coverage, day one of three days. I'm John Furrier, your host, with Dave Nicholson. Thanks for watching.
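The time-and-space correlation described for ThreatStryker earlier in the conversation, a vulnerable workload, then a malicious request, then a file system change, then an unexpected outbound connection, can be sketched as a toy escalation rule. The event names, weights, and threshold below are invented for illustration; the real product correlates far richer telemetry than this.

```python
from collections import defaultdict

# Invented suspicion weights per observed event type.
SUSPICION = {
    "vulnerable_package": 1,
    "malicious_http_payload": 2,
    "filesystem_change": 2,
    "unexpected_outbound_connection": 3,
}
ESCALATE_AT = 6  # arbitrary threshold for quarantining a workload

def assess(events):
    """Accumulate per-workload suspicion across events observed over time,
    returning only the workloads that cross the escalation threshold."""
    score = defaultdict(int)
    for workload, event in events:
        score[workload] += SUSPICION.get(event, 0)
    return {w: s for w, s in score.items() if s >= ESCALATE_AT}

observed = [
    ("cart-service", "vulnerable_package"),             # day 0: scan finding
    ("cart-service", "malicious_http_payload"),         # day 1: inbound exploit
    ("cart-service", "filesystem_change"),              # day 2: dropped file
    ("cart-service", "unexpected_outbound_connection"), # day 2: callback
    ("search-service", "vulnerable_package"),           # vulnerable but quiet
]
print(assess(observed))  # {'cart-service': 8}
```

Note the contrast with single-payload signatures: no individual event here is damning on its own; it is the accumulation over time on one workload that triggers action.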
Ajay Patel, VMware | VMworld 2021
(upbeat music) >> Welcome to theCUBE's coverage of VMworld 2021. I'm Lisa Martin. I've got a CUBE alum with me next. Ajay Patel is here, the SVP and GM of Modern Apps and Management at VMware. Ajay, welcome back to the program, it's great to see you. >> Well thank you for having me. It's always great to be here. >> Glad that you're doing well. I want to dig into your role as SVP and GM with Modern Apps and Management. Talk to me about some of the dynamics of your role and then we'll get into the vision and the strategy that VMware has. >> Makes sense. VMware has created a business group called Modern Apps and Management, with the single mission of helping our customers accelerate their digital transformation through software. And we're finding them leveraging both the edge and the multiple clouds they deploy on. So our mission here is helping them be the cloud agnostic manager for application development and management through our portfolio of Tanzu and vRealize solutions, allowing customers to both build and operate applications at speed across these edge, data center, and cloud deployments. And the big thing we hear is all the day two challenges, right, of managing costs, risks, security, performance. That's really the essence of what the business group is about: how do we speed idea to production and allow you to operate at scale. >> When we think of speed, we can't help but think of the acceleration that we've seen in the last 18 months, businesses transforming digitally to first survive the dynamics of the market. But talk to me about how the pandemic has influenced and catalyzed VMware's vision here. >> You can see in every industry, this need for speed has really accelerated. What used to be weeks and months of planning and execution has materialized into getting something out in production in days.
One great example I can remember is one of my financial services customers that was responsible for getting all the COVID payments out to the small businesses, and being able to get that application from idea to production in a matter of 10 days. It was just truly impressive to see the teams come together, come up with the idea, put the software together, and get it into production so that we could start delivering the financial funds the companies needed to keep them viable. So great social impact and great results in a matter of days. >> And again, that acceleration that we've seen there, there's been a lot of silver linings, I think, but I want to get in next to some of the industry trends that are influencing app modernization. What are you seeing in the customer environment? What are some of those key trends that are driving adoption? >> I mean, this move to cloud is here to stay, and most customers have a cloud first strategy, and we've rebranded this at VMware as the cloud smart strategy, but it's not just about one particular flavor of cloud. We're putting the best workload on the best cloud. But the reality, when I speak to many of the customers, is they're way behind on the path of digital transformation. And that's because it's not the simple choice of, you know, lift and shift or completely rewrite. There's no one size fits all, and they're struggling with how their hardware capability, their development teams, their IT assets, their applications are modernized across these three things. So we see modernization kind of fall in three categories: infrastructure modernization, the practice of development or DevOps modernization, and the application transformation itself. And we are starting to find out that customers are struggling with all three. While they want to leverage the best of cloud, they just don't have the skills or the expertise to do that effectively. >> And how does VMware help address that skills gap?
>> Yeah, so the way we've looked at it is we put a lot of effort around education. So, you know, everyone knows containers and Kubernetes is the future. They're looking to build these modern microservices architectures and applications. A lot of investment in just kind of putting in the effort to help customers learn these new tools and techniques and create best practices. So theCUBE academy and the effort and the investment we're putting in just enabling the ecosystem with the skills and capabilities is one big effort that VMware is making. But more importantly, on the product side, we're delivering solutions that help customers both build, design, deliver, and operate these applications on Kubernetes across the cloud of choice. I'm most excited about our announcement around this product we're just launching, called Tanzu Application Platform. It is what we call an application aware platform. It's about making it easy for developers to take their ideas and get into production. It's about bridging that gap that exists between development and operations. We hear a lot about DevOps, as you know; how do you bring that to life? How do you make that real? That's what Tanzu Application Platform is about. >> I'm curious of your customer conversations, how they've changed in the last year or so in terms of app modernization, things like security being board level conversations. Are you noticing that it's rising up the chain, that app modernization is now a business critical initiative for businesses? >> So what I'm finding is it's the means, not the end. If you think about the board level conversations about digital transformation: you know, I'm a financial services company, I need to provide mobile FinTech, I'm competing with this new age application, and I'm delivering the same service that they offer, digitally now, right? Like from a retail bank: I can't go to the store, the retail branch anymore, right.
I need to provide the same capability for payments processing all online through my mobile phone. So it's really the digitalization of the traditional processes that we're finding most exciting. In order to do that, we're finding that not all applications are born in the cloud, right? They have to take the existing financial applications and put a mobile frontend on them, or put some new business logic in, or drive some transformation there. So it's really a transformation around existing applications to deliver a business outcome. And we're focusing on it through our Tanzu Labs services, our capabilities of Tanzu Application Platform, all the way to the operations and management of getting these products in production, or these applications in production. So it's the full life cycle from idea to production that customers are looking for. They're looking to compress the cycle time, as you and I spoke about, through this agility they're looking for. >> Right, definitely a compressed cycle time. Talk to me about some of the other announcements that are being made at VMworld with respect to Tanzu and helping customers on the app modernization front, and how they align to the vision and mission that you talked about. >> Wonderful, I would say they're kind of, I put them in three buckets. One is what are we doing to help developers get access to the new technology. Back to the skills learning part of it, I'm most excited about Tanzu Community Edition and the Tanzu Mission Control starter pack. This is really about getting Kubernetes stood up in your deployment of choice and getting started building your application very quickly. We're also announcing Tanzu Application Platform that I spoke about; we're going to beta 2 for that platform, which makes it really easy for developers to get access to Kubernetes capability. It makes development easy.
We're also announcing marketplace enhancements, allowing us to take the best of breed ISV solutions and making them available to help you build applications faster. So one set of announcements around building applications, delivering value, getting them to market very quickly. On the management side, we're really excited about the broad management portfolio we've assembled. We're providing the customers a way to build a cloud operating model. And in the cloud operating model, it's about how do I do VMs and containers? How do I provide a consistent management control plane so I can deliver applications on the cloud of my choice? How do I provide intrinsic observability, intrinsic security, so I can operate at scale? So this combination of development tooling, platform operations, and day two operations, along with enhancements in our cost management solution with CloudHealth, and being able to take our universal capabilities for consumption, driving insight and observability, really makes it a powerful story for customers, either on the build or develop or deploy side of the equation. >> You mentioned a couple of things that are interesting. Consistency being key from a management perspective, especially given this accelerated time in which we're living, but also you mentioned security. We've seen so much movement on the security front in the last year and a half with the massive rise in ransomware attacks, ransomware now becoming a household word. Talk to me about the security factor and how you're helping customers from a risk mitigation perspective, because now it's not if we get attacked, it's when. >> And I think it really starts with, we have this notion of a secure software supply chain. We think of software as a production factory from idea to production. And if you don't start with known good artifacts to begin with, trying to wire in security after the fact is just too difficult.
So we started with secure content, curated image content catalogs that customers are setting up as best practices. We started with application accelerators. These are best practices codified with the right guardrails in place. And then we automate that supply chain so that you have checks in every process, every step of the way, whether it's in the build process, the deploy process, or in runtime production. And you have to do this at the application layer, because there is no kind of firewall or edge you can protect; the application is highly distributed. So things like application security and API security, another area where we announced a new offering at VMworld around API security, but everything starts with securing the API endpoint. So security is kind of woven into the design, build, deploy, and runtime operation. And we've wired this in intrinsically to the platform, with best of breed security partners now extending and evolving their solutions on top of us. >> What's been some of the customer feedback on some of the new technologies that you announced? I'm curious, I imagine knowing how VMware is very customer centric, customers were essential in the development and iteration of the technologies, but just give me some of the idea on customer feedback of this direction that you're going. >> Yeah, there's a great, exciting example where we're working with the army to create a software factory. You would've never imagined, right, the US Army being a software digital enterprise. We're partnering with what we call the US Army Futures Command in a joint effort to help them build the first ever software development factory, where army personnel are actually becoming true cloud native developers, where you're putting the soldiers to do cloud native development, everything in terms of the practice of building software, but also using the Tanzu portfolio in delivering best-in-class capability.
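Stepping back to the secure software supply chain Ajay described, the "checks at every step" idea can be sketched as a series of gates an artifact must pass before it ships. This is an illustrative toy in Python, not VMware's Tanzu implementation; the stage names, the curated-image allowlist, and the artifact fields are all invented for the example.

```python
# Toy supply-chain gating: an artifact is promoted only if every stage's
# check passes, mirroring "checks in every process, every step of the way".
# All names and policies here are hypothetical.

ALLOWED_BASE_IMAGES = {"curated/python:3.9", "curated/node:16"}

def build_check(artifact):
    """Build stage: only curated, known-good base images are allowed."""
    return artifact["base_image"] in ALLOWED_BASE_IMAGES

def deploy_check(artifact):
    """Deploy stage: block artifacts with known critical vulnerabilities."""
    return not any(v["severity"] == "critical" for v in artifact["vulnerabilities"])

def runtime_check(artifact):
    """Runtime stage: require a verified signature before serving traffic."""
    return artifact.get("signature_verified", False)

def promote(artifact):
    """Run every gate in order; the artifact ships only if all of them pass."""
    stages = [("build", build_check), ("deploy", deploy_check), ("runtime", runtime_check)]
    for stage, check in stages:
        if not check(artifact):
            return f"blocked at {stage}"
    return "promoted"

good = {"base_image": "curated/python:3.9", "vulnerabilities": [], "signature_verified": True}
bad = {"base_image": "random/image:latest", "vulnerabilities": [], "signature_verified": True}
print(promote(good))  # promoted
print(promote(bad))   # blocked at build
```

The design point is that a failure anywhere stops promotion; security is not a final audit bolted on after the fact but a property enforced at each hand-off.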
This is going to rival some of the top tech companies in Silicon valley. This is a five-year prototype project in which we're picking cohorts of soldiers, making them software developers and helping them build great capability, through both a combination of classroom based training, but also a strong technical foundation and expertise provided by our labs. So this is an example where, you know, the industry is working with the customer to co-innovate how we build software, but also driving the expertise of these personnel. As a soldier, you know what you need; what if you could start delivering solutions for the rest of your unit in a productive way? So very exciting. It's an example where we're leapfrogging and delivering the kind of Silicon valley type innovation to our standard practice. It's traditionally been a procurement driven model; we're trying to speed that up and drive it into a more agile delivery factory concept as well. So one of the most exciting projects that I've run into in the last six months. >> The army software factory, I love that. My dad was an army medic and combat medic in Vietnam. And I'm sure he probably wouldn't have been apt to become a software developer. But tell me a little bit about, it's a very cool project and so essential, talk to me a little bit about the impetus of the army software factory. How did that come about? >> You know, this came back with strong sponsorship from the top. I had an opportunity to be at the opening of the campus in partnership with the local Austin college. And as General Milley and team spoke about it, they just said the next battleground is going to be digital. It's something we're going to have to put our troops in place for and modernize, not just the army, but the way we deliver it through software. It speaks so much to the digital transformation we're talking about, right.
At the very heart of it is about using software to enable, whether it's medics, whether it's supplies, or real time intelligence on the battlefield to know what's happening. And we're starting to see how technology is going to dramatically drive, hopefully, the next war; we don't have to fight it, more of a defensive mode, but that capability alone is going to be significant. So it's really exciting to see how technology has become pervasive in all aspects, in every sector, including the US Army. And this partnership is a great example of thought leadership from the army command to deliver software as the innovation factory, for the army itself. >> Right, and for the army to rival Silicon valley tech companies, that's pretty impressive.
And so an application is a collection of services. So what I'm most excited about is all business capables being published as an API, had an opportunity to be part of a company called Sonos and then Apogee. And we talked about API management years ago. I see increasingly this need for being able to expose a business capability as an API, being able to compose these new applications rapidly, being able to secure them, being able to observe what's going on in production and then adjust and automate, you can scale up scale down or deploy the application where it's most needed in minutes. That's a dynamic future that we see, and we're excited that VM was right at the heart of it. Where that in our cloud agnostic software player, that can help you, whether it's your development challenges, your deployment challenges, or your management challenges, in the future of multi-cloud, that's what I'm most excited about, we're set up to help our customers on this cloud journey, regardless of where they're going and what solution they're looking to build. >> Ajay, what are some of the key business outcomes that the cloud is going to deliver across industries as things progress forward? >> I think we're finding the consistent message I hear from our customers is leverage the power of cloud to transform my business. So it's about business outcomes. It's less about technology. It's what outcomes we're driving. Second it's about speed and agility. How do I respond, adjust kind of dynamic contiuness. How do I innovate continuously? How do I adjust to what the business needs? And third thing we're seeing more and more is I need to be able to management costs and I get some predictability and able to optimize how I run my business. what they're finding with the cloud is the costs are running out of control, they need a way, a better way of knowing the value that they're getting and using the best cloud for the right technology. 
Whether may be a private cloud in some cases, a public cloud or an edge cloud. So they want to able to going to select and move and have that portability. Being able to make those choices optimization is something they're demanding from us. And so we're most excited about this need to have a flexible infrastructure and a cloud agnostic infrastructure that helps them deliver these kinds of business outcomes. >> You mentioned a couple of customer examples and financial services. You mentioned the army software factory. In terms of looking at where we are in 2021. Are there any industries in particular, maybe essential services that you think are really prime targets for the technologies, the new announcements that you're making at VM world. >> You know, what we are trying to see is this is a broad change that's happening. If you're in retail, you know, you're kind of running a hybrid world of digital and physical. So we're seeing this blending of physical and digital reality coming together. You know, FedEx is a great customer of ours and you see them as spoken as example of it, you know, they're continue to both drive operational change in terms of being delivering the packages to you on time at a lower cost, but on the other side, they're also competing with their primary partners and retailers and in some cases, right, from a distribution perspective for Amazon, with Amazon prime. So in every industry, you're starting to see the lines are blurring between traditional partners and competitors. And in doing so, they're looking for a way to innovate, innovate at speed and leverage technology. So I don't think there is a specific industry that's not being disrupted whether it's FinTech, whether it's retail, whether it's transportation logistics, or healthcare telemedicine, right? The way you do pharmaceutical, how you deliver medicine, it's all changing. It's all being driven by data. 
And so we see a broad application of our technology, but financial services, healthcare, telco, government tend to be a kind of traditional industries that are with us but I think the reaches are pretty broad. >> Yeah, it is all changing. Everything is becoming more and more data-driven and many businesses are becoming data companies or if they're not, they need to otherwise their competition, as you mentioned, is going to be right in the rear view mirror, ready to take their place. But that's something that we see that isn't being talked about. I don't think enough, as some of the great innovations coming as a result of the situation that we're in. We're seeing big transformations in industries where we're all benefiting. I think we need to get that, that word out there a little bit more so we can start showing more of those silver linings. >> Sure. And I think what's happening here is it's about connecting the people to the services at the end of the day, these applications are means for delivering value. And so how do we connect us as consumers or us employees or us as partners to the business to the operator with both digitally and in a physical way. And we bring that in a seamless experience. So we're seeing more and more experience matters, you know, service quality and delivery matter. It's less about the technologies back again to the outcomes. And so very much focused in building that the platform that our customers can use to leverage the best of the cloud, the best of their people, the best of the innovation they have within the organization. >> You're right. It's all about outcomes. Ajay, thank you for joining me today, talking about some of the new things that the mission of your organization, the vision, some of the new products and technologies that are being announced at VM world, we appreciate your time and hopefully next year we'll see you in person. >> Thank you again and look forward to the next VMWorld in person. >> Likewise for Ajay Patel. 
You're very welcome. For Ajay Patel, I'm Lisa Martin, and you're watching theCUBE's coverage of VMworld 2021. (soft music)
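Ajay's "every business capability published as an API" future can be illustrated with a tiny composition sketch: a new application assembled purely by calling existing capability endpoints. The capability names, payloads, and the in-process registry standing in for real HTTP calls are all hypothetical.

```python
# Toy API composition: a "new" application is built by composing existing
# published business capabilities rather than rewriting them. The registry
# below stands in for real network endpoints; everything is illustrative.

CAPABILITIES = {
    "inventory.lookup": lambda sku: {"sku": sku, "in_stock": 3},
    "pricing.quote": lambda sku: {"sku": sku, "price": 19.99},
}

def call(capability, *args):
    """Stand-in for an HTTP call to a published capability API."""
    return CAPABILITIES[capability](*args)

def product_page(sku):
    """A new application composed purely from existing capability APIs."""
    stock = call("inventory.lookup", sku)
    quote = call("pricing.quote", sku)
    return {"sku": sku, "available": stock["in_stock"] > 0, "price": quote["price"]}

print(product_page("ABC-123"))  # {'sku': 'ABC-123', 'available': True, 'price': 19.99}
```

The design choice this illustrates is that the composing application never touches the capabilities' internals; it depends only on their published contracts, which is what makes rapid composition, independent scaling, and per-API security possible.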
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Lisa Martin | PERSON | 0.99+ |
Ajay Patel | PERSON | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
Sonos | ORGANIZATION | 0.99+ |
Silicon valley | LOCATION | 0.99+ |
FedEx | ORGANIZATION | 0.99+ |
Vietnam | LOCATION | 0.99+ |
Apogee | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Tanzu | ORGANIZATION | 0.99+ |
10 days | QUANTITY | 0.99+ |
2021 | DATE | 0.99+ |
Ajay | PERSON | 0.99+ |
third | QUANTITY | 0.99+ |
Second | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
Cloud 3.0 | TITLE | 0.99+ |
one | QUANTITY | 0.99+ |
first | QUANTITY | 0.99+ |
next year | DATE | 0.99+ |
two challenges | QUANTITY | 0.99+ |
today | DATE | 0.98+ |
third act | QUANTITY | 0.98+ |
Raghu | PERSON | 0.98+ |
both | QUANTITY | 0.98+ |
last year | DATE | 0.98+ |
Tazu | ORGANIZATION | 0.97+ |
VMworld 2021 | EVENT | 0.97+ |
Austin | LOCATION | 0.97+ |
VMWorld | EVENT | 0.97+ |
Kubernetes | TITLE | 0.97+ |
first strategy | QUANTITY | 0.96+ |
three | QUANTITY | 0.95+ |
US | ORGANIZATION | 0.95+ |
VMworld | ORGANIZATION | 0.95+ |
this year | DATE | 0.95+ |
VRealize | ORGANIZATION | 0.95+ |
single mission | QUANTITY | 0.95+ |
five-year prototype | QUANTITY | 0.95+ |
Modern Apps and Management | ORGANIZATION | 0.94+ |
beta 2 | OTHER | 0.93+ |
prime | COMMERCIAL_ITEM | 0.93+ |
three buckets | QUANTITY | 0.91+ |
last six months | DATE | 0.89+ |
SVP | PERSON | 0.87+ |
Modern Apps | ORGANIZATION | 0.86+ |
three things | QUANTITY | 0.84+ |
two | QUANTITY | 0.84+ |
three categories | QUANTITY | 0.83+ |
cloud 2.0 | TITLE | 0.83+ |
last year and a half | DATE | 0.8+ |
VMWorld of 2021 | EVENT | 0.78+ |
pandemic | EVENT | 0.78+ |
day | QUANTITY | 0.77+ |
one set | QUANTITY | 0.76+ |
theCUBE | ORGANIZATION | 0.76+ |
US army | ORGANIZATION | 0.75+ |
theCUBE academy | ORGANIZATION | 0.73+ |
COVID | OTHER | 0.73+ |
last 18 months | DATE | 0.72+ |
CUBE | ORGANIZATION | 0.71+ |
telco | ORGANIZATION | 0.67+ |
VM world | EVENT | 0.66+ |
Gil Geron, Orca Security | AWS Startup Showcase: The Next Big Thing in AI, Security, & Life Sciences
(upbeat electronic music) >> Hello, everyone. Welcome to theCUBE's presentation of the AWS Startup Showcase. The Next Big Thing in AI, Security, and Life Sciences. In this segment, we feature Orca Security as a notable trend setter within, of course, the security track. I'm your host, Dave Vellante. And today we're joined by Gil Geron, who's the co-founder and Chief Product Officer at Orca Security. And we're going to discuss how to eliminate cloud security blind spots. Orca has a really novel approach to cybersecurity problems, without using agents. So welcome, Gil, to today's session. Thanks for coming on. >> Thank you for having me. >> You're very welcome. So Gil, you're a disruptor in security, and cloud security specifically, and you've created an agentless way of securing cloud assets. You call this side scanning. We're going to get into that and probe a little bit into the how and the why agentless is the future of cloud security. But I want to start at the beginning. What were the main gaps that you saw in cloud security that spawned Orca Security? >> I think that the main gaps that we saw when we started Orca were pretty similar in nature to gaps that we saw in legacy infrastructures, in more traditional data centers. But when you look at the cloud, when you look at the nature of the cloud, the ephemeral nature, the technical possibilities and the disruptive way of working with a data center, we saw that the usage of traditional approaches like agents in these environments is lacking; it's not only not working as well as it was in the legacy world, it's also providing less value.
And in addition, we saw that the friction between the security team and the IT, the engineering, the DevOps teams in the cloud is much worse than it was, and we wanted to find a way for them to work together, to bridge that gap, and to actually allow them to leverage the cloud technology as it was intended, to gain superior security than what was possible in the on-prem world. >> Excellent, let's talk a little bit more about agentless. I mean, maybe we could talk a little bit about why agentless is so compelling. I mean, it's kind of obvious it's less intrusive. You've got fewer processes to manage, but how did you create your agentless approach to cloud security? >> Yes, so I think the basis of it all is around our mission and what we try to provide. We want to provide seamless security because we believe it will allow the business to grow faster. It will allow the business to adopt technology faster and to be more dynamic and achieve goals faster. And so we've looked at what are the problems, what are the issues that slow you down? And one of them, of course, is the fact that you need to install agents, that they cause performance impact, that they are technically segregated from one another, meaning you need to install multiple agents and they need to somehow not interfere with one another. And we saw this friction causes organizations to slow down their move to the cloud or slow down the adoption of technology. In the cloud, it's not only having servers, right? You have containers, you have managed services, you have so many different options and opportunities. And so you need a different approach on how to secure that.
And so when we understood that this is the challenge, we decided to attack it using three principles: one, trying to provide complete security and complete coverage with no friction; two, trying to provide comprehensive security, which is taking a holistic approach, a platform approach, and combining the data in order to provide you visibility into all of your security assets; and last but not least, of course, is context awareness, meaning being able to understand and find the 1% that matters in the environment, so you can actually improve your security posture and improve your security overall. And to do so, you have to have a technique that does not involve agents. And so what we've done, we've found a way that utilizes the cloud architecture in order to scan the cloud itself. Basically, when you integrate Orca, you are able within minutes to understand, to read, and to view all of the risks. We are leveraging a technique that we are calling side scanning that uses the API. So it uses the infrastructure of the cloud itself to read the block storage device of every compute instance, and every instance in the environment, and then we can deduce the actual risk of every asset. >> So that's a clever name, side scanning. Tell us a little bit more about that. Maybe you could double click on how it works. You've mentioned it's looking into block storage, and leveraging the API is very, very clever, actually quite innovative. But help us understand in more detail how it works and why it's better than traditional tools that we might find in this space. >> Yes, so the way that it works is that by reading the block storage device, we are able to actually deduce what is running on your computer, meaning what kind of OS, packages, and applications are running. And then by combining the context, meaning understanding what kind of services you have connected to the internet, what is the attack surface for these services? What will be the business impact?
Will there be any access to PII or any access to the crown jewels of the organization? You can not only understand the risks, you can also understand the impact, and then understand what should be our focus in terms of security of the environment. The differentiating factor is the fact that we are doing it using the infrastructure itself: we are not installing any agents, we are not sending any packets. You do not need to change anything in your architecture or design of how you use the cloud in order to utilize Orca. Orca is working in a pure SaaS way. And so it means that there is no impact, not on cost and not on performance of your environment, while using Orca. And so it reduces any friction that might happen with other parties of the organization when you improve your security in the cloud. >> Yeah, and no process management intrusion. Now, I presume, Gil, that you eat your own cooking, meaning you're using your own product. First of all, is that true? And if so, how has your use of Orca as a chief product officer helped you scale Orca as a company? >> So it's a great question. I think that something that we understood early on is that there is quite a significant difference between the way you architect your security in the cloud and the way that things reach production, meaning there's a gap, like in everything in life, between how you imagine things will be and how they are in real life, in production. And so, even though we have amazing customers that are extremely proficient in security and have thought of a lot of ways of how to secure the environment, we, of course, are trying to secure our environment as much as possible. We are using Orca because we understand that no one is perfect. We are not perfect. My engineers might make mistakes, like every organization. And so we are using Orca because we want to have complete coverage.
We want to understand if we are making any mistakes. And sometimes the gap between the architecture and a hole in your security can take years to appear, so you need a tool that will constantly monitor your environment. That's why we've been using Orca all along, from day one: not to find bugs or to do QA, but because we need security for our own cloud environment that provides these values. We've also passed compliance audits like SOC 2 and ISO using Orca, and it expedited those processes and allowed us to complete them extremely fast, because we had all of these guardrails and metrics in place. >>Yeah, so, okay. So you recognized that you potentially had, and did have, the same problem your customers have. It has obviously helped you scale as a company, but how? >>It helped us scale as a company by increasing the level of trust customers have in Orca. It allowed us to adopt technology faster, meaning we need much less diligence or exploration before using a technology, because we have these guardrails. So we can use the richness of the technology that we have in the cloud without needing to stop, to install agents, to re-architect the way we are using the technology. We simply use the technology that the cloud offers, as it is. And so it allows rapid scalability. >>It allows you to move at the speed of cloud. Now, I'm going to ask you as a co-founder, since you've got to wear many hats: first the co-founder and the leadership component, and also chief product officer. You've got to go out and get early customers, but even more importantly, you have to keep those customers. So maybe you can describe how customers have been using Orca. What was the aha moment you've seen customers react to when you showcase the product?
And then how have you been able to keep them as loyal partners? >>I think we are very fortunate; we are blessed with our customers. Many of our customers are vocal about what they like about Orca, and something that comes up a lot is that this is the solution they have been waiting for. I can't express how many times I get on a call and a customer says, "I must share: this is the solution I've been looking for." And I think that in that respect, Orca is creating a new standard for what is expected from a security solution, because we are transforming security in the company from an inhibitor to an enabler. You can use the technology. You can use new tools. You can use the cloud as it was intended. For example, one of these cases is a customer that has a lot of data, and they were all very scared about using S3 buckets. We've all heard about these incidents of S3 buckets being breached, of people connecting to an S3 bucket and downloading the data. So they had a policy saying, "S3 buckets should not be used. We do not allow any use of S3." But obviously you do need to use S3; it's a powerful technology. And so the engineering team in that customer's environment simply installed a VM, installed an FTP server on it, with a very simple password. And obviously, two years later, someone also put all of the customer databases on that FTP server, open to the internet, open to everyone. I think it was a hard moment for him, and for us as well. He had planned for no data to be leaked, but what actually happened was far worse: the data was open to the world, over a technology that has existed for a very long time and is probably being scanned by attackers all the time. But after that, he not only allowed the team to use S3 buckets, because he knew that now he can monitor them.
Now he can see that they are using the technology as intended, that they are using it securely. It's not open to everyone; it's open in the right way, and there is no PII on that S3 bucket. The way he described it is that now, when he comes to a meeting about things that need to be improved, people are waiting for this meeting, because he actually knows more about the environment than they do. And I see it really so many times: a simple mistake, or something that looks benign, but when you look at the environment in a holistic way, when you look at the context, you understand that there is a huge gap that could be the breach. Another good example was a case where a customer allowed access from a third-party service that everyone trusts to the crown jewels of the environment, and he did it in a very traditional way: he allowed a certain IP to access the environment. Overall that sounds like the correct way to go; you allow only a specific IP to access the environment. But what he failed to notice is that everyone in the world can register for free for this third-party service and access the environment from that same IP. So even though it looks like you have access only from a trusted third-party service, when it's a SaaS service it can actually mean that everyone can use it to access the environment. Using Orca, you saw the access, and the risk, immediately. And I see it time after time: people simply use Orca to monitor, to guardrail, to make sure that the environment stays safe over time, and to communicate better in the organization, to explain the risk in a very easy way.
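The context-aware prioritization running through both stories, exposure plus data sensitivity plus raw severity, can be mimicked with a toy scoring function. The fields and weights below are invented for illustration, not Orca's actual model:

```python
def risk_score(asset: dict) -> int:
    """Combine vulnerability severity with business context so the
    '1% that matters' (exposed + sensitive) floats to the top."""
    score = asset.get("severity", 0)   # base CVSS-like severity, 0-10
    if asset.get("internet_exposed"):
        score *= 3                     # reachable attack surface
    if asset.get("has_pii"):
        score *= 2                     # crown-jewel data behind it
    return score

def prioritize(assets: list) -> list:
    """Order assets so the riskiest combination of context comes first."""
    return [a["name"] for a in sorted(assets, key=risk_score, reverse=True)]
```

Note how a medium-severity finding on an internet-facing, PII-holding host (like the FTP server in the story) outranks a high-severity finding on an isolated one.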
And I would say the statistics show that within a few weeks, more than 85% of the different alerts and risks are fixed, and I think that goes to show how effective it is in improving your posture, because people are taking action. >>Those are two great examples, and of course we have often said that the shared responsibility model is often misunderstood. Those two examples underscore that: you hear and see all this press about S3, but it's up to the customer to secure the endpoint components, et cetera, to configure it properly, is what I'm saying. So what an unintended consequence. But Orca plays a role in helping the customer with their portion of that shared responsibility; obviously AWS is taking care of its side. Now, as part of this program, we ask everybody a bit of a challenging question, because look, as a startup, you want to do well, you want to grow a company, you want your employees to grow and to help your customers, and that's great, and grow revenues, et cetera. But we feel like there's more. And so we're going to ask you, because the theme here is all about cloud scale: what is your defining contribution to the future of cloud at scale, Gil? >>I think that the cloud has enabled a revolution in the data center: the way that you build services, the way that you allow technology to be more adaptive, dynamic, ephemeral, accurate. And you see it being adopted across all vendors, all types of industries, across the world. I think that Orca is the first company that allows you to use this technology to secure your infrastructure in a way that was not possible in the on-prem world, meaning that when you're using cloud technology and you're using technologies like Orca, you're actually gaining superior security to what was possible in the pre-cloud world.
And I think that, in that respect, Orca is going hand in hand with that evolution, and actually revolutionizes the way you expect to consume security, the way you expect to get value from security solutions, across the world. >>Thank you for that, Gil. We're at the end of our time, but we'll give you a chance for a final wrap-up. Bring us home with your summary, please. >>I think Orca is building the cloud security solution that actually works. With its innovative agentless approach to cybersecurity, you gain complete coverage, a comprehensive solution, and the complete context of the 1% that matters among your security challenges across your data centers in the cloud. We are bridging the gap between the security teams and the business's need to grow, and doing so at the pace of the cloud. I think the approach of being able to install a security solution within minutes and get a complete understanding of your risk goes hand in hand with the way you expect to adopt cloud technology. >>That's great, Gil. Thanks so much for coming on. You guys are doing awesome work. Really appreciate you participating in the program. >>Thank you very much. >>And thank you for watching this AWS Startup Showcase. We're covering the next big thing in AI, Security, and Life Science on theCUBE. Keep it right there for more great content. (upbeat music)
LIVE Panel: FutureOps: End-to-end GitOps
>>And hello, we're back. I've got my panel, and we are doing things real time here, so sorry for the delay, a few minutes late. The reason we're here: we're going to go around the room and introduce everybody. I've got three special guests here: Ivan, John, and Nirmal. And we're going to talk about GitOps. I called it FutureOps just because I want to think about what's the next thing for it; at the end, we're going to talk about our ideas for what's next for GitOps, because we're all just starting to get into GitOps now, but of course a lot of us are always thinking about what's next, what's better, how can we make this thing better? So we're going to take your questions. That's the reason we're here: to take your questions and answer them, or at least the best we can, for the next hour. All right, so let's go around the room and introduce yourselves. My name is Brett, I am streaming from Virginia Beach, Virginia, United States. I talk about things on the internet, and I sell courses on Udemy about Docker and Kubernetes. Ivan, introduce yourself. >>How's it going, everyone? I'm a software engineer at Axel Springer, currently based in Berlin, and I happen to be Brett's teaching assistant. >>All right, that's right, we're in our courses together almost every day. John? >>Hey everyone, my name is John Harris. I used to work at Docker; I now work at VMware as a staff field engineer. >>And Nirmal? >>I'm Nirmal Mehta. I'm a distinguished engineer with Booz Allen, and I'm also a Docker Captain. It's good to see everyone, and it's good to see you again, John, it's been a little while. >>It has, the pre-COVID times, right? You were up here in Seattle. >>Yeah, it feels like an eternity ago.
Yeah, John's shirt looks red, and reminds me of the Austin T-shirt. We all have this old limited-edition DockerCon tee. >>That's a classic. >>Yeah, I scored that one last year. Sometimes with these old conference shirts you have to go into people's closets. I'm not saying I did that, but you have to find ways to get the swag. >>Post-COVID, if you ever come to my place, I'm going to have to lock the closets. >>That's right, that's right. >>I think it was the second floor of the Docker HQ in San Francisco where they kept all the T-shirts, just boxes and boxes, floor to ceiling. So every time I went to HQ you'd grab as many as you could fit in your luggage. I think I have about 10 of these. >>Bring an extra piece of luggage just for your shirt grab. All right, so I'm going to start scanning questions, and you all are welcome to help with that. I'm going to start us off with the topic, so let's define the parameters. We can talk about anything DevOps here, and we can go down plenty of rabbit holes, but the goal here is to talk about GitOps. GitOps, if you haven't heard about it, is essentially using versioning systems like Git, which we've all gotten used to as developers, to track your infrastructure changes, not just your code changes, and then automating that with a bunch of tooling so that the robots take over. Essentially you have Git as a central source of truth, and the Git log as a central source of history, and then there's a bunch of magic little bits in the middle, and then supposedly everything is wonderful and automatic. The reality is, it's often quite messy, quite tricky to get everything working, and the edges of this are not perfect. So it is a relatively new thing.
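The "Git log as a central source of history" idea is easy to demonstrate: the commit log of a GitOps repo doubles as a change log for the environment. A minimal sketch, assuming only the `git` CLI and a tab-separated pretty format (the repo path and field names are illustrative):

```python
import subprocess

def parse_git_log(raw: str) -> list:
    """Turn tab-separated `git log` output into change records."""
    changes = []
    for line in raw.splitlines():
        commit, author, date, subject = line.split("\t", 3)
        changes.append({"commit": commit[:8], "author": author,
                        "date": date, "change": subject})
    return changes

def recent_infra_changes(repo_path: str, count: int = 5) -> list:
    """Answer 'what changed in this environment?' straight from the GitOps repo."""
    fmt = "%H%x09%an%x09%ad%x09%s"  # hash, author, date, subject
    raw = subprocess.run(
        ["git", "-C", repo_path, "log", f"-{count}",
         f"--pretty=format:{fmt}", "--date=short"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_git_log(raw)
```

This is the "go look at that repo" workflow: three commits today, probably three changes happened.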
It's probably three, maybe four years old as an official thing from Weaveworks. So we're going to get into it. Let's go around the room like we did before, and, not to put you on the spot or anything: what is one of the things you either like or hate about GitOps? For me, I really love that I can point people to a repo that is, hopefully, if they look at the log, a simplistic tracking of what might have changed in that part of the world or the environment. I remember many years past where an executive or some mid-level manager wanted to see what the changes were, or someone outside my team wanted to see what we just changed, and it was: okay, they need access to this system, and that dashboard, and that spreadsheet, and then this thing, and it was always so complicated. Now, in a world where we're using GitHub or Bitbucket or whatever, you can just say: hey, go look at that repo. If there were three commits today, probably three changes happened. I love that particular part about it. Of course, it's always more complicated than that. But Ivan, I know you've been getting into this stuff recently, any thoughts? >>Yeah, I think
Um so I'm still, I still feel like I'm very new to john you anything. >>Yeah, it's weird getups is that thing which kind of crystallizes maybe better than anything else, the grizzled veteran life cycle of emotions with the technology because I think it's easy to get super excited about something new. And when I first looked into get up, so I think this is even before it was probably called getups, we were looking at like how to use guest source of truth, like everything sounds great, right? You're like, wait, get everyone knows, get gets the source of truth, There's a load of robust tooling. This just makes a sense. If everything dies, we can just apply the get again, that would be great. Um and then you go through like the trough of despair, right? We're like, oh no, none of this works. The application is super stateless if this doesn't work and what do we do with secrets and how do we do this? Like how do we get people access in the right place and then you realize everything is terrible again and then everything it equalizes and you're kind of, I think, you know, it sounds great on paper and they were absolutely fantastic things about it, but I think just having that measured approach to it, like it's, you know, I think when you put it best in the beginning where you do a and then there's a magic and then you get C. Right, like it's the magic, which is >>the magic is the mystery, >>right? >>Magic can be good and bad and in text so >>very much so yeah, so um concurrence with with john and ever uh in terms of what I like about it is the potential to apply it to moving security to left and getting closer to a more stable infrastructures code with respect to the whole entire environment. Um And uh and that reconciliation loop, it reminds me of what, what is old is new again? Right? 
Well, quote unquote old um in terms of like chef and puppet and that the reconciliation loop applied in a in a more uh in a cleaner interface and and into the infrastructure that we're kind of used to already, once you start really digging into kubernetes what I don't like and just this is in concurrence with the other Panelist is it's relatively new. It has um, so it has a learning curve and it's still being, you know, it's a very active um environment and community and that means that things are changing and constantly and there's like new ways and new patterns as people are exploring how to use it. And I think that trough of despair is typically figuring out incrementally what it actually is doing for you and what it's not going to solve for you, right, john, so like that's that trough of despair for a bit and then you realize, okay, this is where it fits potentially in my architecture and like anything, you have to make that trade off and you have to make that decision and accept the trade offs for that. But I think it has a lot of promise for, for compliance and security and all that good stuff. >>Yeah. It's like it's like the potentials, there's still a lot more potential than there is uh reality right now. I think it's like I feel like we're very early days and the idea of especially when you start getting into tooling that doesn't appreciate getups like you're using to get up to and use something else and that tool has no awareness of the concept so it doesn't flow well with all of the things you're trying to do and get um uh things that aren't state based and all that. So this is going to lead me to our first question from Camden asking dumb questions by the way. No dumb questions here. Um How is get apps? Not just another name for C. D. Anybody want to take that as an answer as a question. How is get up is not just another name for C. D. I have things but we can talk about it. I >>feel like we need victor foster kids. Yeah, sure you would have opinions. 
Yeah, >>I think it's a very yeah. One person replied said it's a very specific it's an opinionated version of cd. That's a great that's a great answer like that. Yeah. >>It's like an implement. Its it's an implementation of deployment if you want it if you want to use it for that. All right. I realize now it's kind of hard in terms of a physical panel and a virtual panel to figure out who on the panel is gonna, you know, ready to jump in to answer a question. But I'll take it. So um I'll um I'll do my best inner victor and say, you know, it's it's an implementation of C. D. And it's it's a choice right? It's one can just still do docker build and darker pushes and doctor pulls and that's fine. Or use other technologies to deploy containers and pods and change your, your kubernetes infrastructure. But get apps is a different implementation, a different method of doing that same thing at the end of the day. Yeah, >>I like it. I like >>it and I think that goes back to your point about, you know, it's kind of early days still, I think to me what I like about getups in that respect is it's nice to see kubernetes become a platform where people are experimenting with different ways of doing things, right? And so I think that encourages like lots of different patterns and overall that's going to be a good thing for the community because then more, you know, and not everything needs to settle in terms of only one way of doing things, but a lot of different ways of doing things helps people fit, you know, the tooling to their needs, or helps fit kubernetes to their needs, etcetera. 
Yeah. >>I think, as one person replied, it's a very specific, opinionated version of CD. That's a great answer. >>It's an implementation of deployment, if you want to use it for that. All right, I realize now it's kind of hard, in a half-physical, half-virtual panel, to figure out who's ready to jump in on a question, but I'll take it. I'll do my best inner Viktor and say it's an implementation of CD, and it's a choice, right? One can still just do docker build, docker push, and docker pull, and that's fine, or use other technologies to deploy containers and pods and change your Kubernetes infrastructure. But GitOps is a different implementation, a different method, of doing that same thing at the end of the day. >>I like it, and I think that goes back to your point about it still being early days. To me, what I like about GitOps in that respect is that it's nice to see Kubernetes become a platform where people are experimenting with different ways of doing things. That encourages lots of different patterns, and overall that's going to be a good thing for the community, because not everything needs to settle into only one way of doing things; a lot of different ways of doing things helps people fit the tooling to their needs, or fit Kubernetes to their needs, et cetera.
Like getting an idea or a code change to environments promoting it. It's very kind of pipeline driven um and it's very imperative driven, right? Like our existing CD tools are a lot of the ways that people think about Cd, it would be triggered by an event, maybe a code push and then these other things are happening in sequence until they either fail or pass, right? And then we're done. Getups is very much sitting on the, you know, the reconciliation side, it's changing to a pull based model of reconciliation, right? Like it's very declarative, it's just looking at the state and it's automatically pulling changes when they happen, rather than this imperative trigger driven model. That's not to say that there aren't city tools which we're doing pull based or you can do pull based or get ups is doing anything creatively revolutionary here, but I think that's one of the main things that the ideas that are being introduced into those, like existing C kind of tools and pipelines, um certainly the pull based model and the reconciliation model, which, you know, has a lot in common with kubernetes and how those kind of controllers work, but I think that's the key idea. Yeah. >>Um This is a pretty specific one Tory asks, does anyone have opinions about get ops in a mono repo this is like this is getting into religion a little bit. How many repos are too many repose? How um any thoughts on that? Anyone before I rant, >>go >>for it, go for it? >>Yeah. How I'm using it right now in a monitor repo uh So I'm using GIT hub. Right, so you have what? The workflow and then inside a workflow? Yeah, mo file, I'll >>track the >>actual changes to the workflow itself, as well as a folder, which is basically some sort of service in Amman Arepa, so if any of those things changes, it'll trigger the actual pipeline to run. So that's like the simplest thing that I could figure out how to, you know, get it set up using um get hubs, uh workflow path future. Yeah. 
And it's worked for me for writing, you know? That's Yeah. >>Yeah, the a lot of these things too, like the mono repo discussion will, it's very tool specific. Each tool has various levels of support for branch branching and different repos and subdirectories are are looking at the defense and to see if there's changes in that specific directory. Yeah. Sorry, um john you're going to say something, >>I was just going to say, I've never really done it, but I imagine the same kind of downsides of mono repo to multiple report would exist there. I mean, you've got the blast radius issues, you've got, you know, how big is the mono repo? Do we have to pull does the tool have to pull that or cashier every time it needs to determine def so what is the support for being able to just look at directories versus you know, I think we can get way down into a deeper conversation. Maybe we'll save it for later on in the conversation about what we're doing. Get up, how do we structure our get reposed? We have super granular repo per environment, Perper out reaper, per cluster repo per whatever or do we have directories per environment or branches per environment? How how is everything organized? I think it's you know, it's going to be one of those, there's never one size fits all. I'll give the class of consultant like it depends answer. Right? >>Yeah, for sure. It's very similar to the code struggle because it depends. >>Right? >>Uh Yeah, it's similar to the to the code problem of teams trying to figure out how many repose for their code. Should they micro service, should they? Semi micro service, macro service. Like I mean, you know because too many repose means you're doing a bunch of repo management, a bunch of changes on your local system, you're constantly get pulling all these different things and uh but if you have one big repo then it's it's a it's a huge monolithic thing that you usually have to deal with. 
Path-based issues with tools that only need to look at a specific directory; and, yeah, it's a culture thing, I keep going back to this. What does your team prefer? What's painful for everyone, and what's the loudest pain you need to deal with? Is it repo management, or is it that everyone's in one place and it's really hard to keep too many cooks out of the kitchen, which is a monorepo problem? How do we handle security? This is a great one from Tory again, another great question, back to back, and that's the first time we've done that: security as it pertains to GitOps; anyone who can commit can change the infrastructure, yes? >>Yes. So the tooling that you have for your Git repo, the authentication, authorization, and permissions that you apply to the repo using a Git server like GitHub or GitLab or whatever your flavor of the day is, is how security is handled with respect to changes in your GitOps configuration repository. So that is completely specific to your implementation, to how one handles the Git repositories that the GitOps tooling is looking at to reconcile changes. With respect to the permissions of the, for lack of a better term, robot itself: the GitOps tooling, like Flux or Argo CD, would create a user, a service account, or other kinds of authentication measures, and you limit the permissions for that service account, which needs to be able to read the repos, send commits, et cetera. So that is well within the realm of what you already have for your Git repo. Yeah.
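The point that the Git server's permission model *is* the GitOps security model can be illustrated with a toy branch-protection check. The roles and rules are invented for the example; real enforcement lives in GitHub or GitLab settings, not in application code:

```python
def can_change_infra(user: str, branch: str, protections: dict) -> bool:
    """Decide whether a push to `branch` is allowed, mimicking
    branch-protection rules on a GitOps configuration repo."""
    rule = protections.get(branch)
    if rule is None:
        return True  # unprotected branch: anyone with repo access may push
    return user in rule["allowed_users"]
```

Because the production cluster only syncs from the protected branch, "who can deploy to prod" reduces to "who can get a commit onto that branch", including the reconciler's own service account.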
A related question is from AG. What they like about GitOps: if done nicely, a newbie can get stuff done easily. What they dislike: when you have too many Git repos, it becomes just too complicated. And I agree. I was joking with a team the other week that the developer used to just make one commit to a feature branch or whatever and pass it on to a QA team that would eventually merge it to master. But now they make a commit, they make a PR for their code, then they go make a PR on the Helm chart to update things, and then they go make a PR in the GitOps repo for Argo. We talked about how it's probably four or five PRs just to get their code into production. But we were talking about the negative; the reality is, it's just four or five PRs of the same type. It's a repetitive action, but it's not five different systems with five different methodologies and tooling, one on the web, one on a client, one on a command line I don't remember. So it's got pros and cons. >>I think when you get to the scale where those kinds of issues are a problem, then you're probably at the scale where you can afford to invest some time into automating them, right? When I've seen this in larger customers or larger organizations, if they're at the stage where apps are coming up all the time and there's a 10x or 100x ratio of developers to operations folks who may be creating Git repos and setting up permissions, then that stuff gets automated, maybe through ticket-based systems or whatever. A developer says: I need a new app.
It templates things, or, more often, uses the same model of reconciliation and operators, and the horrific abuse of CRDs that we're seeing in the Kubernetes community right now: a developer can create a custom resource which just says, hey, I'm creating a new app called app A, and then a controller picks up that app A definition. It will programmatically create a Git repo; it will look up the developers and add the right permissions so they can get to that repo; and it will automatically create and template the namespaces it needs in the clusters and environments it needs, depending on some metadata it might read. So I think those are definite problems, and they're definitely a teething, growing-pain thing, but once you get to that scale, you kind of need to step back and say: look, we just need to invest time into the operational aspect of this and automate this pain away. >>Yeah, and that ultimately ends in custom tooling, which is hard to avoid at scale. I mean, there are almost two conversations here, right? There is what I call the solo admin, solo DevOps; I bought that domain, solodevops.com, because whenever I'm at DockerCon in the real world, I ask people to raise hands, I don't know how we can raise hands here, and see how many are the sole person responsible for deploying the app that their team makes, and like a quarter of the room would raise their hand. So I call that solo DevOps. That person can't build all the custom tooling in the world, so they really need Docker-like solutions, where it's opinionated, the workflow is built in, and they don't have to wrangle things together with a bunch of glue, you know, in other words, Bash.
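The controller pattern described above, a developer declares an app and automation fans that out into repos, permissions, and namespaces, can be sketched as a pure function from an app definition to the resources a controller would provision. All the naming conventions here are hypothetical:

```python
def onboarding_plan(app: str, team: list, envs=("dev", "staging", "prod")) -> dict:
    """Expand a declared app into the resources a GitOps controller
    might provision: a config repo, team permissions, and namespaces."""
    return {
        "repo": f"gitops-{app}",
        "permissions": {member: "write" for member in team},
        "namespaces": [f"{app}-{env}" for env in envs],
    }
```

In a real cluster this function's body would be the reconcile logic of an operator watching an `App`-style custom resource; the point is that onboarding becomes data, not tickets.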
And so this kind of comes to a conversation, starting with this question from Lee, who's asking: how do you combine GitOps with CI/CD, especially the continuous bit? How do you avoid having a human — this is the complaint the team I was working with had — how do you avoid a human editing and git-committing for every single deploy? They've settled on customized templates and a script for routine updates. So as a seed for this conversation, instead of that specific question, because it's a little open-ended, tell me whether you agree with this. I kind of look at the image — the Docker image, or container image in general — as an artifact, and that thing going into the registry with the right tag, to me, is one of the great demarcation points of: we're kind of done with CI and we're now into the deployment phase. It doesn't necessarily mean the tooling has a clean cut there, but that artifact is being shipped in a specific way, or promoted, as we sometimes say. What do you think? Does anyone have opinions on that? I don't even know if that's the right opinion to have. >> So I think what you're getting at is that GitOps models can trigger the reconciliation loop off of different events. One way is if it notices an image change in the registry; the other is if there's a commit event on a specific repo and branch. And it's up to the person implementing their GitOps model which event to trigger that reconciliation loop off of — you can do both, or one or the other. It also depends on the templating engine you're using on top of Kubernetes, such as Helm or the other ones out there — or, if you're not even doing that, then straight YAML.
So it kind of just depends, but those are typically the two options one has, or a combination of the two, to trigger that event. You can also just trigger it manually, right? You can go to the command line and force a scan — a new reconciliation loop — to occur. So, I don't want to say this, but: it depends on what you're trying to do and what makes sense in your pipeline. If you're set up where you're doing it based off of image tags, then you probably want to use GitOps in a way that uses the image tags and the pattern you've established there. If you're not really doing that, and you're more around different branches mapped to different environments, then trigger off of the correct branch. And that's where permissions also come into play: if you don't want someone to touch production, and you've got GitOps for your production cluster based off of, say, a main branch, then whoever can push a change to that main branch has the authority to push that change to production, right? So that's your authentication and permissions system — same for the registry itself. >> Yeah. Sorry, anyone else have any thoughts on that? I was about to go to the next topic. >> I was going to say, I think certain tools dictate the approach. Like, if you're using Argo CD — correct me if I'm wrong, but I think the only way to use it right now is through manifest changes: it looks at a specific directory, and if anything changes, it does its thing and synchronizes the cluster with whatever's in git. >> Yeah, Flux has both. So it kind of depends. I think you can make Argo CD do that too, but — this is back to what we were saying at the beginning — these things are changing, right?
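The two trigger styles the panel describes — a new image tag landing in the registry, or a commit on a watched branch — can be sketched as a simple event filter in front of the reconciliation loop. The event shapes below are invented for illustration; real agents like Flux or Argo CD define their own:

```python
# Sketch: decide whether an incoming event should kick off a GitOps
# sync. A registry push always triggers; a git push triggers only on
# the branch mapped to this environment. Event fields are made up.

WATCHED_BRANCH = "main"  # hypothetical branch-to-environment mapping

def should_reconcile(event: dict) -> bool:
    if event.get("type") == "image-pushed":
        return True                                    # any new tag syncs
    if event.get("type") == "git-push":
        return event.get("branch") == WATCHED_BRANCH   # only the deploy branch
    return False                                       # ignore everything else
```

This also illustrates the permissions point above: if only the main branch triggers production syncs, then write access to main *is* deploy access to production.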
So that might be what it is right now in terms of triggering the reconciliation loops in GitOps tooling, but there might be other events in the future that trigger it. And it's not completely standalone, because you still need your tooling to do any kind of testing or whatever else is in your specific pipeline. So oftentimes you're bolting GitOps into some other part of a broader CI/CD solution, if that makes sense. >> We've got a lot of questions about secrets — people are asking about secrets. >> So my tongue-in-cheek answer to the secrets question is: what are the best practices for Kubernetes secrets? It's the same thing for secrets with GitOps. Last time I checked, and last time I was running this stuff, GitOps has nothing to do with secrets in that sense — it's just there to get your stuff running on Kubernetes. So there's probably a really good session on secrets at DockerCon. >> I would agree with you. Yeah, I mean, every project of mine handles secrets differently. And — I was talking to someone recently about this — I'm very bullish on GitHub Actions; I love GitHub Actions. It's not great for deployments yet, but we do have this new thing, GitHub Environments, I think it's called. It at least lets me store secrets per environment, which it didn't have the concept of before. Because if any of you are running Kubernetes out there, when you start running Kubernetes, you typically end up with more than one Kubernetes — you're going to end up with a lot of clusters at some point, at least multiple, more than two.
And so if you're trying to store secrets somewhere — there's a discussion happening in chat right now where people are talking about Sealed Secrets, which, if you haven't heard of it, go look it up and be versed on what it is, because it's a fantastic concept for how to store secrets in public. I love it because I'm a big PKI nerd, but it's not the only way and it doesn't fit all models. I have clients that use AWS Secrets Manager because they're in AWS, and then they just use the Kubernetes external secrets integration. But again, that doesn't really affect GitOps. GitOps is just applying whatever Helm charts or YAML or images you're deploying; GitOps is more about the approach of when the changes happen, and whether it's a push or pull model, like we're talking about. >> I would say there's a bunch of prerequisites to GitOps, secrets being one of them, because the risk of putting a secret into your git repo, if you haven't figured out your Kubernetes secrets architecture before you start diving into GitOps, is high. And removing secrets from git repos could be its own industry, right? >> It's a thing. >> How do I hide this? How do I obscure this commit that's already on a dozen machines? >> So there are some prerequisites in terms of when you're ready to adopt GitOps — I think that's the right way of answering it — secrets being one of them. >> I think secrets were the thing that, two or three years ago, gave me the aha moment when it came to GitOps. The premier thing everyone used to say about GitOps, about why it was great, was that it's the single source of truth: there's no state anywhere else, you just need to look at git. And then with secrets, you maybe realize — along with a bunch of other things down the line — that that is not true and will never be true.
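Why raw secrets and git repos don't mix comes down to one detail worth spelling out: a Kubernetes Secret manifest stores its values base64-encoded, and encoding is not encryption. A trivial demonstration (the password string is obviously made up):

```python
import base64

# A Kubernetes Secret manifest holds values base64-encoded. Anyone who
# can read the manifest can recover the value -- which is why tools
# like Sealed Secrets and SOPS encrypt before the commit ever happens.

encoded = base64.b64encode(b"s3cr3t-password").decode()
recovered = base64.b64decode(encoded)

assert recovered == b"s3cr3t-password"  # trivially reversible
```

So a Secret manifest committed to a repo, public or not, is effectively plaintext; that's the "figure out your secrets architecture first" prerequisite in concrete terms.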
So as soon as you can lose the dogmatism about everything being in git — it's fantastic, as long as you've understood that everything is *not* going to be in git. There are things which will absolutely never be in git; some tools just don't deal with that, they need to own their own state, especially in Kubernetes — some controllers own their own state. You know, Sealed Secrets, and other projects like SOPS — I think there are two or three others — are a great way of dealing with secrets if you want to keep them in git. But projects like Vault are more what I would call production-grade secrets strategies, right? And if you're in AWS or another cloud, you're more likely to be using their secrets service. Your secrets policy is maybe not even dictated by you — in large organizations it might be dictated by a CISO or a security team. So I think if you're trying to adopt GitOps, or you're thinking about it, get the dogmatism of git as the single point of truth out of your mind, and think about GitOps more as a philosophy and a set of best-practice principles — then you'll be in much better stead. >> Right. Yeah. >> People are asking more questions in chat, like: is infrastructure as code plus CD essentially GitOps — or CI, rather? These are all great questions and part of the debate. I'm actually going to throw this up on screen and put it in chat, but this, to me, is the source, right? Weaveworks — they coined the term. If we talk about the history for a minute — and tell me if I'm getting this right — a lot of us were trying to automate all these different parts of the puzzle, but some things might have been infrastructure as code and some weren't.
Some things were sort of settings-as-code — like you're going into Jenkins and typing in secrets and settings, or typing a certain thing into the settings of Jenkins — and then it wasn't really in git. And so what Weaveworks was trying to go for was a way to have, eventually, a two-way state understanding, where git might change your infrastructure, but your infrastructure might also change and needs to be reflected in git, if git is trying to be the single source of truth. And like you're saying, the reality is that you're never going to have one repo with all of your infrastructure in it — you'd have to have all your Terraform and anything else you're spinning up. But anyway, I'm going to put this link in chat. One of the things this guide talks about is what GitOps is *not*, so it's kind of great to read through the different requirements. And like I was saying a while ago: having CI, having infrastructure as code, and then trying a little bit of continuous deployment is probably a prerequisite for GitOps. It's hard to just jump into it when you don't already have infrastructure as code, because a machine doing stuff on your behalf means you have to have things documented somewhere in a git repo. But let me put this in the chat. >> I would like to know if the other panelists agree, but I think GitOps is — I would say it's a moderate-level, not a beginner-level, Kubernetes thing; moderate to a little more advanced. One can start off using it, but you definitely have to have some prereqs in place, or some understanding of a pattern in place. So what do the other folks think about that opinion? >> I think if you're trying to use GitOps before you know what problem you have, you're probably going to be in trouble, right?
It's like having a solution to a problem you don't have yet, right? I mean, if it's just you and you're just typing `kubectl apply`, you're one person — GitOps doesn't seem like a big jump. Like, why would I do that? Instead of a git commit, I'm typing `kubectl apply`. But I think one of the rules from Weaveworks is that none of your developers and none of your admins can have kubectl access to the cluster. Because if you do have access and you can just apply something, then that's just infrastructure as code, that's just continuous deployment — that's not really GitOps. GitOps implies that the only way things get into the cluster is through the GitOps automation you're using — you know, Flux, Argo CD... and we haven't talked about, what's the other one Viktor Farcic talks about? By the way, people are asking about Viktor, because Viktor would love to talk about this stuff, but he's on my next live show, so come back in an hour and a half or whatever, and Viktor will be talking sysadmin stuff with me. >> You've got to ask him nothing but GitOps questions in the next one. Confuse him. >> But anyway, it's hard to understand without having tried it; I think conceptually it's a little challenging. >> One thing with GitOps, especially based off the Weaveworks blog post that you just put up: it's an opinionated way of doing something — an opinionated way of delivering changes to your Kubernetes environment. We're often not used to seeing things that are this opinionated in the ecosystem, but GitOps is an opinionated thing. It's one way of doing it.
There are ways to change it, and there are options — like what we were talking about in terms of the events that trigger it — but the way it's structured is opinionated, both from a tooling perspective, like using git, etcetera, but also from a DevOps cultural perspective, right? Like you were talking about: not having anyone use kubectl to change the cluster directly. That's a philosophical opinion that GitOps forces you to adopt; otherwise it kind of breaks the model. I just want everyone to understand that it is very opinionated in that sense. >> Pulumi is another thing — infrastructure as code. Someone's mentioning Pulumi in chat. Self-plug: my live show, bret.live, go there, I'm on YouTube every week doing the same thing with these, my friends — and we had Pulumi on in the last couple of weeks, and we talked about their infrastructure-as-code solution, where you're actually writing code instead of YAML. That's an interesting take on developer teams owning the infrastructure through code rather than YAML as a data language. I don't really have an opinion on it yet, because I haven't used it in production or anything in the real world, and I'm not sure how much they're trying to go towards the GitOps stuff. I will do a plug for Solomon Hykes, who had a talk at the beginning of the day — it's already happened, so you can go back and watch it. It's called "Rethinking application delivery with CUE and BuildKit." Go look this up. This is the co-founder of Docker and former CTO, Solomon Hykes. He has a tool called Dagger. I'm not sure why the title of the talk is about delivering with CUE and BuildKit, because the tool he shows off in there for an hour is called Dagger.
And it's an interesting idea on how to apply a lot of this opinionated, automated stuff to deployment — it's GitOps-based, and you use the CUE language, a graph-based configuration language. I watched most of it and it was a really interesting take. I'm excited to see if that takes off, because it's another way to get a little more advanced with your git-driven deployments without having to stick everything in YAML, which is kind of where we are today with Helm charts and whatnot. All right, more questions about secrets. I think we're not going to do a whole lot more on secrets; basically: put secrets in your cluster to start with, in Kubernetes' encrypted store, and then as it gets harder, you find another solution — when you have five clusters, you don't want to do it five times; that's when you go for Vault and AWS Secrets and all that. >> Right. Put it on a Post-it note and cram it into the cluster. Just kidding. >> Yes, there are recordings of this — these will all be on YouTube later. Someone's saying detect-secrets or GitGuardian are absolute requirements; I think that's in reference to the secrets comment earlier. Camel's asking about Kubernetes dropping support for Docker — this isn't the place for that, but basically it's a non-event: Mirantis has just made that same plugin available in a different repo. So if you want to keep using Docker with Kubernetes, you can; it's no big deal. Most of us aren't using Docker in our clusters anyway — we're using containerd or whatever our provider gives us. Thank you so much for all these comments; these are great people helping each other in chat. I feel like we're just here to make sure the chat's available so people can help each other.
>> I want to pick up on something from when you mentioned Pulumi. We're talking about GitOps, but I think the origination of it was deploying applications to clusters, right — picking up deployment manifests. But with Pulumi, and obviously Terraform and things that have been around a long time, folks are starting to apply this more broadly — I think I found one earlier called Kubestack, a Terraform GitOps framework. And also with the advent of things like Cluster API in the Kubernetes space, where you can declaratively build the infrastructure for your clusters and build the cluster itself — we're not just talking about deploying applications. Cluster API will talk to AWS, spin up VPCs, spin up machines; it does the same kinds of things Terraform and those other tools do. I think applying GitOps principles to the infrastructure spin-up — the proper infrastructure-as-code stuff, constantly applying Terraform plans or whatever, constantly applying Cluster API resources, spinning up stuff in those clouds — is a super interesting extension of this area. I'd be curious what the folks think about that. >> Yeah, that's why I picked this topic as one of my three — I got to pick the topics — because it's one of the most bleeding-edge, exciting things. We haven't figured all this out yet, we as an industry, so I think we're going to see more ideas on it. What's the one with the popsicle as the icon, that Viktor talks about all the time? It's another GitOps-like tool, but for more than Kubernetes... we'll have to look it up. >> You're talking about Crossplane. >> So — my wife is over here with the sound effects, and that's the first sound effect of the day she chooses to use.
>> All right, can we pick another? Let's find another question, Bret. >> I'm searching — so many of them. All right, one really quick one: is GitOps only for Kubernetes? The main two tools we're talking about, Argo CD and Flux, are mostly geared toward Kubernetes deployments, but it seems like they're organized so there's a clean abstraction between the agent that's doing the deployment and the tooling it can interact with. So I would imagine that in the future — and this might already be true — GitOps could be applied to other types of deployments. But right now it's mostly focused on Kubernetes, treating Kubernetes, or the tooling on top of it like Helm, as a first-class citizen. Back to you, Bret.
He was using the demo and he was showing it apply deploying something on S three buckets, employing internet wifi and deploying it on google other things beyond kubernetes and saying that it's all getups approach. So I think we're just at the very beginning of seeing because it all started with kubernetes and now there's a swarm one, you can look up swarm, get office and there's a swarm, I can't take the name of it. Swarm sink I think is what's called swarm sink on git hub, which allows you to do swarm based getups like things. And now we're seeing these other tools coming out. They're saying we're going to try to do the get ups concepts, but not for kubernetes specifically and that's I think, you know, infrastructure as code started with certain areas of the world and then now then now we all just assume that you're going to have an infrastructure as code way of doing whatever that is and I think get off is going to have that same approach where pretty soon, you know, we'll have get apps for all the clouds stuff and it won't just be flexor Argo. And then that's the weird thing is will flex and Argo support all those things or will it just be focused on kubernetes apps? You know, community stuff? >>There's also, I think this is what you're alluding to. There is a trend of using um kubernetes and see rDS to provision and control things that are outside of communities like the cloud service providers services as if they were first class entities within kubernetes so that you can use the kubernetes um focus tooling for things that are not communities through the kubernetes interface communities. Yeah, >>yeah, even criticism. >>Yeah, yeah, I'm just going to say that sounds like cross plane. >>Yeah, yeah, I mean, I think that's that's uh there were, you know, for the last couple of years, it's been flux and are going back and forth. 
They're like frenemies, you know, and they've been going back and forth iterating on these ideas of how to manage this complicated thing that is many Kubernetes clusters. Like, I don't know if Flux v2 can do this, but Argo CD can manage multiple clusters now from one cluster — you can manage other clusters, technically external things, from a single entity. Originally Flux couldn't do that, but I'm going to say that v2 can — I don't actually know. >> I think all that is going to consolidate in the future, in terms of the common feature set. What do you both think? >> I mean, I think it's already begun, right? Didn't they collaborate on a common engine? I don't know whether it's finished yet, but I think they were working towards a common GitOps engine, and then they'll just layer features on top. But I think that's interesting, because of where it runs and what it interacts with. If we're talking about a pull-based model, it's decentralized to a certain extent, right? We need git, and we need the agent which is pulling. If we're saying there's something else orchestrating, then we start to fuzzy the model, right? Like, is this state living somewhere else? I thought Flux was completely decentralized, but I know you install Argo CD somewhere — Argo CD has a server as well — though it's been a while since I've looked in depth at them. Does that muddy the agent-only pull model? >> I'm reading— >> Yeah, I would say there's a process of natural selection going on as the CNCF landscape evolves and grows bigger, and a lot of divide-and-conquer right now.
But I think as certain things get more prominent and popular, they start to trend, they inspire other things, and then it starts to aggregate back into a unified core. Like, for instance, Crossplane — I feel like it shouldn't even really have to exist as a Kubernetes add-on; it should be built into Kubernetes. Why doesn't this exist already? >> For, like, controlling a cloud? >> Yeah, just having this interface with the cloud provider and being able to— >> Yeah, exactly. And that kind of happens: when you start talking about storage providers and networking providers, there are very specific implementations of operators, or just individual controllers, that do operate and control other resources in the cloud — but certainly not universally, right? Not every feature of AWS is available to Kubernetes out of the box. And one of the challenges with Crossplane is you've got to have Kubernetes before you can deploy with Crossplane. There's a chicken-and-egg issue: if you're going to use Crossplane for your other infrastructure, it has to run on Kubernetes — so who creates that first Kubernetes cluster in order for you to put Crossplane on it? And Viktor talks about this in one of his videos — the same problem exists with Flux and Argo CD: you can't deploy Argo CD itself with GitOps. There has to be that initial "I'm a human and I typed some commands on a server and things happened." They don't really have an easy deployment method for getting Argo CD up and running using nothing but a git push to an existing system.
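The chicken-and-egg problem described here is really a dependency-ordering question: something has to exist before the GitOps agent can manage everything else. A topological sort over some illustrative components (the names below are invented, not any real product's bootstrap sequence) makes the one unavoidable manual step explicit:

```python
# Sketch: order the "day one" bootstrap as a dependency graph.
# Each key maps a component to the set of things it depends on.
from graphlib import TopologicalSorter  # Python 3.9+ standard library

deps = {
    "bootstrap-cluster": set(),            # the human-typed, day-one step
    "argo-cd": {"bootstrap-cluster"},      # the agent installed onto it
    "prod-cluster": {"argo-cd"},           # then provisioned via GitOps
    "workload-apps": {"prod-cluster"},     # finally, the actual apps
}

# static_order() emits each node only after all its dependencies.
order = list(TopologicalSorter(deps).static_order())
```

Whatever the real component names are, the root of the graph is always a manual (or externally scripted) step — which is exactly the "who is the first one to get started?" problem.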
So it's an interesting problem of day-one infrastructure — and that's only day one; I think day two is way more interesting and hard — but how can we spin these things up if they all depend on each other, and who is the first one to get started? >> I mean, it's true of everything, though; at the end of the day you need some kind of big-bang function to start everything. >> Without going off on a tangent, I was going to say: if folks have heard of kind, which is Kubernetes in Docker — a mini Kubernetes cluster you can run in a Docker container, where each container runs as a node — that's been a really good way to bootstrap things like Cluster API. You bootstrap a local kind cluster, install the manifests, it goes and spins up a full-sized cluster, transfers its resources over there, and then deletes itself, right? So it's bootstrapping itself. And a couple of folks in the community — Jason DeTiberus, I think he works for Equinix Metal — have experimented with something even more minimal: just an API server. We're really just leveraging the Kubernetes ideas of a reconciliation loop and a controller; we just need something to bootstrap with those CRDs, get something going, and then go away again. So I think that's going to be a pattern that comes up more and more. >> Yeah, for sure. And a quick answer to the question Angel asked: what are your thoughts on GitOps being niche to git, versus other VCS tools? Well, if I knew anyone using anything other than git, I would say — you know, GitOps is a horrible name; it should just be VCS ops or whatever, but that doesn't roll off the tongue, so someone had to come up with the GitOps phrase.
But absolutely, it's all about version control solutions used for infrastructure, not just code. There's another great question about infrastructure and code and the lines being blurred — how much infrastructure do developers need to know? Essentially, they're having to know all the things. We won't have time for it, but maybe people can reply in chat with what they think. So, unfortunately — like every panel here today with this great community — we've got way more questions than we can handle in this time, so we're going to have to wrap it up and say goodbye. Go to the next live panel: I believe the next one is on developer-specific setups — that's going to be Peter running that panel, something about development in containers, and I'm sure it's going to be great, just like this one. So let's go around the room: where can people find you on the internet? I'm @BretFisher on Twitter; that's where you can usually find me most days. You? >> Yeah, I'm on Twitter too — I'll put it in the chat; it's kind of confusing because of the "tsr7." >> Okay, yeah. You can also look below the video — our faces are there, and if you click on them it shows our Twitter and LinkedIn and stuff. John? >> John Harris, @johnharris85, pretty much everywhere — GitHub, Twitter, Slack, etc. >> And Normal Faults — or just, you know, living on YouTube Live with Bret. >> Yeah, we're all on the Twitter, so go check us out there. Thank you so much for joining, and thank you all for being here — I really appreciate you taking time out of your busy schedules to join me for a little chit-chat. All the cheers! >> And I think this GitOps loop has been declaratively reconciled. >> Yeah, there we go. And with that, ladies and gentlemen, we bid you adieu — we will see you in the next round, coming up next with Peter. >> Bye.
Donnie Berkholz, Docker | DockerCon 2021
>>Welcome back to theCUBE's coverage of DockerCon 2021 Virtual. I'm John Furrier, host of theCUBE. Got a great CUBE segment here with Donnie Berkholz, VP of products at Docker, industry veteran who's seen all the waves of innovation, now heading product at Docker. Donnie, great to see you. >>It's great to see you again too, John. >>Hey, great program this year; DockerCon is pushing the envelope again. The world's changed significantly over the past few years, and this past year has been pretty crazy. Last year we were virtual at the beginning of the pandemic, the watershed moment, DockerCon 2020, with a virtual event, and now an action-packed keynote, four tracks (Run, Share, Build, Accelerate), a CUBE track, live hits, community rooms, global reach, huge growth in the developer community around Docker. Kubernetes is now well understood by everyone, and the general consensus is everyone's in production with it, moving like a fast train, cloud native at the center of the action. KubeCon is very operational, very operators; DockerCon is very development focused. So this is a key developer event in the CNCF cloud native world. What's going on at Docker? Give us the update. >>Yeah. And I think you made a fantastic point there, John, which is the developer focus. I joined Docker back in October of last year, and one of the first things that I did was make sure that we were going out there listening to our customers, having a lot of fresh conversations with them, and using those as the core of our product strategy. As we were talking to customers, what we learned fell into three big buckets around building, sharing, and running modern applications.
So we've used those to create a product strategy based on solving problems that our customers and developers using Docker care about, rather than the kind of product strategy I've often come across as an analyst and as a leader on the enterprise side, which is very feature-factory driven: here's the thing, we can ship it, we'll kind of shove it in your face and try to sell it to you. So I'm really excited about what we're doing at Docker: delivering things developers really care about, based on problems they have told us are really valuable to solve, problems where when we win, we win together. We're focused on helping developers really accelerate their application delivery. So what are we doing? There's so much stuff, and if you've seen the keynote already, you'll see more and more of it. We announced four really big things and a lot of smaller things as well. Things like the Docker Verified Publisher program, which brings more trusted content; Docker Dev Environments, which help teams collaborate more effectively; Docker Desktop on Apple silicon, bringing environments to the latest and greatest machines everybody is trying to get a hold of, especially now that CPUs are harder to come by; as well as some of the little things, like scoped personal access tokens, which make it easier for people to use a CI pipeline without having to give it full write privileges and be concerned that if the CI pipeline gets hacked, they get hacked too. We're trying to help them defend against those kinds of cases. >>It's funny, the Apple silicon comment made me think of the supply chain threats you've seen in hardware. And even here I'm hearing the phrase kicked around; the CTO of Docker used the words supply chain, software supply chain. So again, you bring up this idea of supply chain, you mentioned trust.
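The scoped personal access tokens mentioned above are aimed squarely at that CI case: give the pipeline a read-only credential instead of full account access, so a compromised pipeline cannot push or delete images. As a rough sketch of how that plays out in practice, here is a hypothetical CI workflow logging in with a read-only token; the secret names and pinned action versions are illustrative assumptions, not details from the announcement:

```yaml
# Hypothetical GitHub Actions job: authenticate to Docker Hub with a
# read-only scoped access token stored as a repository secret, rather
# than an account password carrying full write privileges.
name: ci
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          # A personal access token created with read-only scope;
          # if it leaks, it cannot be used to push or delete images.
          password: ${{ secrets.DOCKERHUB_READONLY_TOKEN }}
      # Pulls succeed with the read-only token; a push would be denied.
      - run: docker pull myorg/myapp:latest
```

The design point is blast-radius reduction: the CI system only ever holds the narrowest credential the job actually needs.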
I can almost see the dots connecting in real time, the audience out there saying: okay, you've got trust, supply chain, hardware, software, containers; there's no perimeter in clouds; you have to have a kind of unit-level security. This is kind of a big deal. Can you unpack this trend? Because security is kind of everywhere now, and, not to lean on a buzzword, supply chain actually hits home here. Talk about that: what does all this mean? >>Yeah, I think Docker is in a really interesting position in terms of how development teams and enterprises are adopting it, because it's been around long enough that enterprises have come to trust Docker, and it's really gotten in there in a way that a lot of brand new technologies have not. And yet we're still pushing the boundaries of innovation at the same time. So when we think about where Docker fits in for developers: we've got Docker Official Images, which are probably the default for anything you're going to do in a container; you go and get a Docker Official Image and start doing it. But then what, right? You pull a bunch of those, you start building applications, you start pulling other libraries, you build your own code on top, on your dev environment, where you're probably running Docker Desktop to do so. And so we've got content coming from a trusted source, we've got Docker running on the developer laptop, and then we've got everything else: where else does it go from there? There's a ton of both problem and opportunity in bringing all that complex, spaghetti-pipeline mess together and providing people with a path they can have confidence in. It's interesting because it's different for developers than it is for ops and security teams, very different in terms of what they care about. >>So talk about the automation impact, because I can see two things happening.
One is the trusted environment, more containers everywhere. And then you have more developers coming on board, right? So actually more people writing code, not just bots; machines and humans. So you have more people flooding in writing code, more containers everywhere that need to be trusted. What's the impact to the environment? How does the developer experience get easier and simpler when that's happening? >>We see that as you get more and more content, the long tail continues to extend, right? More and more community-generated, third-party content, people publishing their own applications on Docker Hub and all across the internet. And that makes the ability to discover things you can trust, things you can incorporate without worrying about what might be inside, all the more important. So we've got Docker Official Images, and today we announced the Docker Verified Publisher program. All of these are things we're doing to make it easier for developers to find the good stuff, use it, not worry about it, and just move on with their lives. >>What's your vision, and what's Docker's take, on the collaboration aspect of coding? I think it's one of the key themes here. Where does that fit in? What's the story with collaboration? >>Yeah, we see this as an area that really has been left behind in the adoption of containers and Kubernetes. The focus has been so much on the pipeline, the path to production, and production container orchestration, where we watched the generation of Kubernetes arise, and most of the vendors in the space were doing some kind of top-down infrastructure deal, right, selling to the VP of Ops or something along those lines.
And so the development of those applications really was left by the wayside, because that's not a problem the VP of Ops cares about. But it's a very interesting problem as we think about Docker being focused on developers now, helping those teams collaborate, because no application is built in a closet. Every single application is built in partnership with other developers, with product managers, with designers, all these people who need to somehow work together to review not only the source code but the application as a whole. >>What does the product evolution look like? As Justin Cormack and I were talking about, developer productivity, simplification, containers as APIs: what is the priority? How do you look at that? Because security is front and center, and there's a variety of security partners here in the ecosystem. Where are the priorities on the roadmap? If someone asked you, hey Donnie, what's the bottom line, what's the product strategy? >>Yeah, our priority is the team. First and foremost, it is not optimizing for the single developer; it is optimizing for that team working together effectively. We feel the developer team as a unit is a very underserved audience. If you look at everybody in the container space, like I said, they're all kind of focused on operations, production, and cloud environments, not on that team. So we see a great opportunity to solve really important problems that nobody else is doing a great job of solving today. >>I gotta ask you about the team formation. The general consensus, in a lot of my interviews here at DockerCon and outside in the industry, is that the monolithic organization building monolithic applications has been disrupted. Engineering teams now look like they own end-to-end workloads, full visibility end to end, with SREs on the team, everyone built into these teams.
With platform engineering kind of flexing in between. So you don't have that siloed organization; that's certainly been discussed for a while, but this seems to be the standard now. What's your take on this? Is that what you mean by teams? Could you share your view on how people are organizing teams? Because certainly GitHub and a lot of other leaders are saying, yeah, we see it the same way: these teams have threaded leaders and/or fully baked team members inside them. >>Yeah, we definitely see that team as a cross-functional team. It's not your old world, where you've got the development team here, the QA team here, the operations team there. It's completely not that. That team has developers on it; if there are dedicated testers or software engineers in test, they're on it; if they need a DevOps person or an SRE, they're on it as well. It's all part of the same team, and that team is building on top of the platforms exposed by other teams. That's the big shift that has been in the works for probably a decade at this point: the rotation of responsibilities. It used to be that devs owned the dev environment and dev/test, and ops owned prod and everything about prod. Now it's much more that there are platforms spanning every environment, with a platform team responsible for each of those components, delivering it in a self-service way. And then there are teams building on top of that who own their application all the way from development through to production; they support it, they're on call for it. This is how we work internally: our development teams, or rather our product development teams, because they're cross-functional, really take ownership of their applications, and it's a super powerful imperative.
It gives people the ability to iterate much more quickly by taking away a lot of those gatekeepers. And as a matter of fact, when I was at an enterprise before I joined Docker, it's the same thing we did: a big part of our strategy was creating these self-service platforms so that product teams could move quickly. >>I remember that interview on theCUBE, it was awesome. Great concept. Go back and look at that tape; well, it's not exactly tape, it's on disk, but great concept. Let me ask you one more question on that, because one thing that's clear, coming up even in the universities, is that engineering and DevOps have brought much more focus on the SRE, which used to be an ops role but is now becoming a developer role. I mean, it's DevOps; as you said, it's been going on for over a decade, but it's much clearer now that this site reliability engineering role is key. And with that, I've always thought Docker and containers are a perfect integration capability. Why not? That's one of the benefits of containers: you can containerize things. So if you play out what you just said, the team integration is huge. Talk about how you see that evolving as a product person. >>Yeah, I think, as you say, the integration is huge. One way that I look at it is that the application itself, or the service itself, is defined by a container or a set of containers. The product development team cares about what's inside that set of containers, up to that container layer or that group-of-containers layer, whether that's the Dockerfile for its containers, Docker Compose, those kinds of things. And then there might be a platform team responsible for running a great Kubernetes environment, whether they're using a cloud platform or running it in-house, and they care about everything outside the containers, up to the containers as that interface.
So when we think about those focuses, Docker is all about that application inwards, and a lot of the more production-oriented container vendors are containers outwards. It's very different when we think about the kinds of problems we want to solve: it's about making that application definition really easy and portable, and enabling a clean handoff to the SRE teams who may be responsible for running that app in production. >>You brought up trusted content, trusted containers, modern applications earlier. What do trusted containers mean to you? Obviously it means security built in, but there's a lot of movement there with containers: containers coming in and out of clusters all the time, being orchestrated, being used with stateful and stateless data. What does trusted content mean? >>Really, for us, the focus is an interesting one, because when we think about building, sharing, and running applications for developers, "run" means we want to give developers a great interface into the production environment; we don't want to provide the production environment. So some of those problems are ones we deeply care about: developers making sure they've got a trusted, secure, verifiable path for the content they're incorporating into their app, all the way to production or to a point of handoff. If there is a point of handoff, once it gets to production it becomes the job of different products and different vendors to make it really easy for those same enterprises to effectively secure that application and product. >>What about containers as an API? Is that just the classic Docker reference approach, or is there a new definition of containers as APIs? >>Yeah, I think the question becomes really interesting when you start thinking about what's inside each one of those containers, and how you might be able to use them as building blocks.
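That "application inwards" boundary, and the building-blocks idea, is easy to picture as a Compose file: everything inside the definition belongs to the product team, while the ports and images it declares are the interface handed off to whoever runs it. A minimal sketch, with illustrative service names and images (not from the interview):

```yaml
# docker-compose.yml for a hypothetical two-service app.
# The product team owns everything inside these containers; the
# published port is the hand-off interface to the platform/SRE team.
services:
  api:
    build: ./api              # built from the team's own Dockerfile
    ports:
      - "8080:8080"           # the service's externally visible contract
    environment:
      DATABASE_URL: postgres://db:5432/app
    depends_on:
      - db
  db:
    image: postgres:14        # a Docker Official Image, pulled as trusted content
    environment:
      POSTGRES_DB: app
```

Everything above the `db` image line is the team's own code and configuration; the official image underneath is exactly the kind of trusted building block being discussed.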
Even thinking about trends on the rise, like low-code/no-code development: how could you imagine incorporating containers, or a service composed of a group of containers, into one of those contexts? To do so, you have to have a clean API that you can define and publish, describing how a different component would interface with each of those containers. What are the ports? What are the protocols? What are the formats? Every one of those things is important to creating an API. >>So I gotta ask you, Donnie, to put you on the spot, because you've been on many sides of the table: analyst, Docker, and at an enterprise doing some hardcore DevOps. Say I'm a customer out there, a classic main street enterprise: hey Donnie, I'm pushing my teams, we're kicking ass, we've been kicking the tires, we're in the cloud, the pandemic's given us a little lift, we know what to double down on, we feel good about where we're going. But I've got a couple of clouds out there; I'm all in on one, I've got another one going, and I'm going hybrid all the way. I don't even know what multi-cloud is yet, but hybrid means edge and ultimately distributed computing. What do I do? What's the Docker playbook? What do you say to me? How do you keep me calm and motivated? >>Yeah,
But in the end, the company as a whole has to figure out: how do I support that, how do I make it all work together effectively, and how do I deal with not just the different levels of expertise across these environments, but the different levels of performance and latency to expect when applications may need to run across all of them? You know, I used to work in the travel industry, and you might have somebody trying to book a flight that's bouncing across a cloud to a data center, to a different cloud, to a service provider and back, and you can imagine very quickly: how do you solve those latency problems, which we know are correlated to user experience, and in an e-commerce context correlated with revenue, because people bounce if they can't get a good response? It's complicated. The fact is, it's a hard problem to solve. Containers can definitely help solve part of it by providing a consistent platform that lets you take your applications from place to place, that lets you build a consistent set of expertise, so that a container here is like a container there is like a container over there, and you can work with them in a fairly consistent way. But there are always going to be differences. I think it's very dangerous to assume that because you have a container in multiple places, you'll get the same level of guarantees. We had a lot of these conversations back in the early 2010s, when private cloud was really starting to pick up steam, and we said, oh, let's make compatible storage layers. And it was true to a point: you could provide API compatibility, but you had to run as hard as you could to keep up with the changes, and you couldn't provide the same level of resiliency, the same level of data protection, the same level of performance and global footprint. What does the API mean to a developer using it? It's all of those things, regardless of whether they're in an API spec somewhere. >>That's a great call-out: things are moving so fast, and you've just got to keep up. It's almost like you want some peacetime philosophy. So I gotta ask you, because you have a unique perspective: running product at Docker puts you at the front lines, looking at the whole cloud native marketplace, and you've also been an analyst. What does success look like? Because as the world changes, it's not always obvious until you see it, and people are trying different approaches. How do you tell the winners from the losers, the better approaches from the ones that struggle? Is there a pattern you're seeing emerge from the pandemic, as a team, as a tech organization? What's the pattern of success you see development teams and organizations deploying that's working, and what's a sign of bad things? >>Yeah, I think one of the biggest patterns is the ability to iterate quickly and learn fast. If there's nothing else you can do, just think about the basic principles that let you be agile, and not just as a development team: agile as a company, getting from those ideas and that customer feedback all the way through the loop, to build the thing, test it with your customers before you ship it, and get it out there. Maybe you use some kind of modern deployment practice to decrease your risk as you're doing so, right? It's canary, it's rolling releases, it's blue-green, all those things. How do you de-risk, how do you experiment while you're doing so, and how do you stay agile so that you're able to provide customer value as fast as possible?
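One concrete, widely used form of those risk-reducing release practices (rolling, blue-green, canary) is Kubernetes' built-in rolling update, which swaps pods gradually and gates traffic on health checks. A sketch, with illustrative names and numbers rather than anything from the interview:

```yaml
# Hypothetical Deployment using a rolling-update strategy: at most one
# extra pod is created and at most one is taken down at a time, so
# capacity never drops sharply while a new version rolls out.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 4
  selector:
    matchLabels:
      app: myapp
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1            # one pod above the desired count during rollout
      maxUnavailable: 1      # at most one pod down during rollout
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myorg/myapp:2.0.0
          readinessProbe:    # new pods receive traffic only once healthy
            httpGet:
              path: /healthz
              port: 8080
```

A full canary or blue-green flow layers traffic-splitting tooling on top, but the same idea applies: ship to a fraction of users, watch the signals, and roll back cheaply if they degrade.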
Almost every failure pattern you see happens because you're not listening to your customers effectively and often enough, and you're not iterating quickly enough, so you're building in a direction that is not what they wanted or needed. >>You know, looking at DockerCon 2021 this year, look at the calendar, the CUBE tracks in there, which I'm excited to do a bunch of coverage on; it's always fun. You've got the classic Build, Share, Run, which is the ethos of Docker, but you've also got a new track called Accelerate, and there's an acceleration coming out of the pandemic more than ever. It's been pretty cool; you're seeing a lot more action in all areas. But talk about the acceleration with containers, what you're seeing on the landscape side of the industry, and how that's impacting customers. What specifically is this acceleration all about? >>Yeah, when I think about what acceleration means to me, it's about how you avoid building and finding things you don't need to spend your time on. How can you pick things up and incorporate them into your workflows and your applications so that you don't have to build them yourself? Every time you accelerate, it's because somebody else built something you can reuse and build on top of: application components, SaaS or apps, developer services, or pre-integrated pipelines, so you've already got plugins and tools that work. Every one of those things is an accelerator, and a lot of them are delivered by all kinds of different vendors all over the map. So if they don't integrate well together, if there aren't open APIs, if there aren't pre-integrated offerings, it's not going to be an accelerator; it's going to be exactly the opposite. It's going to be: I want to get this thing in, let me bring in five or six different consulting teams to start trying to piece all this stuff together.
A big, big slowdown. So the pre-integrated solutions and the open APIs: those are the kinds of things that really are going to accelerate people. >>I can't agree with you more on this whole slowdown thing. One of the hardest things to do is insert new team members, or new kinds of rules and process, into already accelerated momentum. This is a hard new kind of cloud native dynamic, where scale and speed are critical, right? It's actually a benefit, but if you don't rein it in a little bit, how do you balance it? What's your advice to folks? This is a common problem: on one hand it can get away from you, but if you slow down too much, it's gridlock and you misfire. What are your thoughts on this? >>Yeah, that balance of scale and speed, and it definitely is a balance. I think there's always a danger of over-architecting for your current state of reality. One of the things I've learned over the years is that you've got to scale your process and your architecture to where you're at and where you're going to be soon. If you start designing for five or ten years down the road, it's going to slow you down in the short term, and you might never get to where you thought you'd be in five or ten years anyway. You've got to build for where you're at and where you're going soon, not for the distant future. This ties into ideas like evolutionary architecture: how do you build in a way that makes change easy, because things are always going to change? Some of the recent trends, like project-to-product, play so well into this, right? It's not like a project team comes together, builds the solution, walks away, and the solution works untouched for years or decades.
Instead, it's that agile approach where the product team is long-lived. They own what they're building, they support it, and they continue to enhance it going forward, improving their ability to meet their customers' needs over time. >>Yeah, and I think that's a super important point. The magical product team that just scales infinitely by itself while you're sleeping is a myth; again, the team formation is an indicator of that. So I think this whole next level of agility is really all about a series of these teams, micro teams, microservices. I mean, again, monolithic applications yielded monolithic organizations; microservices bring in kind of this open source ethos, this, I hate to use the term, two-pizza team, because it's an Amazonian thing, but it kind of applies here, right? So you've got to have these teams with end-to-end focus, taking ownership, whether it's product, platform, or project; at the end of the day, you're still serving customers. Final question for you while I've got you here: end-user experience, you brought this up earlier, and it's a hugely important piece. I think last year you and I talked about this briefly in our interview. As developers come to the front lines of the business, many of them don't have MBAs, and some of the best engineers shouldn't go to business school, in my opinion. But they have to learn the vernacular of complex topics, understand quality, and bring craft into the software, with more and more developers on the front lines, closer and closer to the customer, as companies go direct. This is a huge change from just five or ten years ago. What are your thoughts on this? And what do you tell people when they say, hey Donnie, how should I posture toward the customer? What can I do to get better? >>Yeah, it's a great question.
And it's one that I think a lot of companies are struggling to solve: how do we bring developers closer to the customers, and what does that mean? One of the things we do regularly at Docker is bring our developers along on customer interviews. Our product managers are constantly out there, kind of beating the virtual street, talking to developers and customers, and regularly they'll bring developers from the same team along. This is super valuable in helping our developers build a real understanding of the customers they're building for. It may not even be about the specific thing they're building that day, but it's about understanding the customer's needs and internalizing that in the way they think about how to solve problems and design solutions, so that what they do is much more likely to resonate with customers. Do they have an MBA? No. But where do you start? You've got to start somewhere, and you start by bringing people into the conversation. We don't expect them to lead an interview; we expect them to come along, learn, and ask questions. And what happens so often is that the business side in other companies might say, yeah, developers, they're just these tech people, we'll give them a set of requirements and they'll deliver stuff. But bringing them along for the ride and letting them interact with the customers using their product is an amazing and exciting experience for developers. We consistently hear super excited feedback. >>It's clearly the trend. The best performing teams have the business and developers working together. It's a really interesting phenomenon, and I think it's going to take that end-to-end approach to a whole other level. Donnie, great to have you on. Great to see you. Final question.
Um, take a minute to put a plug in for the product team over there. What are you working on? What are you most excited about? Give a quick plug. >> You know, I am super excited about what we're doing in both trusted content and around team collaboration. Um, I think both of those are just going to be amazing opportunities to improve how developers are working on their microservices. It's so fragmented, it's so complicated, that helping make that easier is going to be a really important and valuable area for development teams to focus on. >> DockerCon 2021 Virtual, Donnie Berkholz, VP of Products at Docker, good friend of theCUBE and the industry as well. Donnie, thanks for that. Great insight and sharing some gems you dropped there. Thanks. >> All right. Thank you. >> All right. DockerCon coverage, I'm John Furrier, your host of theCUBE, theCUBE track here at DockerCon 2021 Virtual. Thanks for watching.
Akanksha Mehrotra, Dell Technologies | Dell Technologies World 2021
(upbeat music) >> Welcome back to DTW 2021, theCUBE's continuous coverage of Dell Technologies World, the virtual version. My name is Dave Vellante and for years we've been looking forward to the day that the on-premises experience was substantially similar to that offered in the public cloud. And one of the biggest gaps has been subscription based experiences, pricing and simplicity and transparency with agility and scalability, not buying and installing a box but rather consuming an outcome based service to support my IT infrastructure needs. And with me to talk about how Dell is delivering on this vision is Akanksha Mehrotra, Vice President of Marketing for APEX at Dell Technologies. Welcome Akanksha, great to see you. >> Thank you, thanks for having me. >> It's our pleasure. So we're going to dig into APEX. We know that Dell has been delivering cloud-based solutions for a long time now, but it seems like there's a convergence happening in all these areas. And it's generating a lot of talk in the industry. What are your customers asking you to deliver and how is Dell responding? >> Yeah, there's a few trends that we're seeing and they've been in place for a while, but they have accelerated certainly over the past year. The first one is organizations all over the world want to become more digital in order to modernize their operations and foster innovation on behalf of their customers, and they've been striving for years to use digital transformation to do so. That in and of itself isn't necessarily new, but the relative complexity of driving digital transformation, for example when they're bringing on a predominantly or fully remote workforce, as well as the relative pace of change, for example the remarkable spike in the consumption of digital content over the past year. And because of that the need for agility has gone up. The other trend that we see is that there's a clear preference for a hybrid cloud approach.
Customers tell us that they need on-prem cloud resources to help mitigate risk for applications that need dedicated fast performance, as well as, you know, in order to contain costs. But then they also tell us that public cloud is here to stay for the increased agility that it provides, the simplified operations, as well as the faster access to innovation. And so what's really clear is that both private cloud and public cloud have their strengths, and picking one you're inevitably trading off the benefits of the other. And so organizations want the flexibility to be able to choose the right path that best meets their business objectives. And IT as a service, delivered at the location of your choice, is one way to do that. As you know, we talk a lot to analysts like yourself and they tend to agree with us. IDC predicts that by 2024 a good portion of data center infrastructure is going to be consumed as a service. At Dell Technologies, we're beginning to see the shift happen already. As you said, we've been providing flexible consumption and as-a-service solutions for well over a decade. However, what's different now is that we're radically simplifying that entire technology experience to deliver this at scale to our entire install base, and that's what APEX is all about. >> Great, thank you. So I know Dell is very proud of the, I think I got this ratio right, the do-to-say ratio, right? The numerator's bigger than the denominator. And you've got a good track record in this regard. You announced project APEX in October and provided a preview of what was coming then, and today you're fully unveiling APEX, no more project, just APEX. What's APEX all about and what customer benefits specifically does APEX deliver? >> Yeah, so you're right. We announced this as a vision back in October and now we're kind of taking away the project and it's generally available. So you can kind of refer to it as APEX going forward.
APEX represents our portfolio of as-a-service offerings. These help simplify digital transformation for our customers by increasing their IT agility and their control. We believe it's a solution that helps bridge this divide between public and private cloud by delivering as a service wherever it's needed to help organizations meet the needs of their digital transformation agenda. Talking to our customers, in terms of customer benefits we've centered around three areas, and they are simplicity, agility, and control as the key benefits that APEX is going to provide to our customers. So let me unpack these one by one and kind of demonstrate how we're going to deliver on these promises. Let's start with simplicity. APEX represents a fundamental shift in the way that we deliver our technology portfolio. And obviously we do this to simplify IT for our customers. Our goal is to remove complexity from every stage of the customer journey. So for example, with the APEX offers that I'll get into in a bit, we take away that complexity, the pain, and frankly the undifferentiated work of managing infrastructure so that organizations can focus on what they do best, right? Adding value to their organizations. Another way in which we simplify is streamlining the procurement process. So we allow customers to just specify a simple set of outcomes that they're looking for and subscribe to a service using an easy web based console, and then we'll take it from there. We will pick the technology and the services that best deliver on that set of outcomes and then we'll deliver it for them. So as a result, organizations can kind of take advantage of the technology that best meets their needs but without all the complexity of life cycle management, whether it's at the beginning or at the end, you know, the decommissioning part of the life cycle. Next, let's talk about agility.
This is an area that's been top of mind for our customers as I said, certainly over the past year, and frankly, it's been one of the main driving factors behind the as-a-service revolution. Again, with APEX we aim to deliver agility at every stage of the customer journey. So for example, with APEX, our goal is to get customers started on projects faster than they ever have before within their data center. We target a 14 day time to value from order to activation, or from subscription to activation, within the location of their choice. Another driver for agility is having access to technology when you need it without costly over provisioning. So with APEX, you can dynamically scale your resources up and down based on changing business requirements. And then the third barrier to agility, and this is a serious one, is forecasting costs and containing them. And with APEX, our promise is that you're paying for technology only as it's used, using a clear, consistent and transparent rate. So you're never guessing what you're going to pay. There's no overage charges and you're not paying to access your own data. And then finally, from a control standpoint, often business and IT leaders are forced to make difficult trade offs between the simplicity and the flexibility they want and the control, the performance and the data locality that perhaps they need. APEX will help bridge this divide, and so we're not going to make them make this kind of false trade off between them. It'll enable organizations to take control of their operations, from where resources are located to how they are run to who can access them.
So for example, by dictating where they want to run their resources, in a colo or at the Edge or within their data center, you know, IT teams can take charge of their compliance obligations and simplify them. By using role-based permissions to limit access, IT organizations can choose who can access certain functionality for configuring APEX services and thereby kind of reduce risk and simplify those security obligations. So, those are some examples of, you know, how we deliver simplicity, agility and control to our customers with APEX. >> You know, I'll give you a little aside here if I may, you know, you said the trade-offs and I've been working on this scenario of how we're going to come back from the pandemic. And you're seeing this hybrid approach where organizations are having to fund their digital transformation. They're having to support a hybrid workforce, and their headquarters investments, their traditional data center investments, have been neglected. And the other thing is there's very clearly a skills gap, a shortage of talent. So to the extent that you have something like APEX where I don't have to be provisioning LUNs and spending all my time both waiting and provisioning and tuning, that allows me to free up talent and really deliver on some of those problematic areas that are forcing me today to do a trade-off. So I think that really resonates with me, Akanksha. >> You're exactly right, and we're talking about refactoring applications, learning new skill sets, hiring new people. If the part that resonates with you is that agility and simplicity, you know, why not have it where it makes sense with the skill sets you have? >> So APEX is a new way of thinking, I mean, certainly for Dell, in terms of how you deliver and the way customers consume. Can you be specific on some of the offerings that we can expect from DTW this year? >> Yes, we've got a variety of announcements, let me talk about those. Let's start with the APEX console.
This is a unified experience for the entire APEX journey. It provides self-service access to our catalog of APEX services. As I mentioned, customers simply select the outcomes that they're looking for and subscribe to the technology services that best meet their needs, and then we'll take it from there. From a day two operations standpoint the console will also give customers insight and oversight into other aspects of the APEX experience. For example, they can limit access to functionality by role. They can view their subscriptions and then modify them. They can engage in kind of provisioning-type tasks. They can see costs transparently, review billing and payment information each month and use it for things like showback or chargeback to, you know, various business units within their organization. Over time, we will also be integrating the console with common procurement and provisioning systems so that they can further streamline approval workflows, as well as publish APIs for further integration by developers on the customer side. So net-net, the console will be the single place for customers to procure, operate and monitor APEX services, and we think it's going to become an important way for our customers, as well as our partners, to interact with Dell Technologies going forward. >> Yes, please, carry on, thanks. >> The next announcement is APEX data storage services. This one is the first in a series of outcome-based turnkey services in the APEX portfolio. This essentially delivers storage resources to customers at the location that they prefer. When subscribing to this, there are four parameters that customers need to think about: what type of data services they're looking for, file, block, and soon it'll be object.
What performance tier the application that the customer is going to run on these resources needs, and they can pick from three levels; what base capacity they want, where they can start at 50 terabytes; and then the time length that they're looking for, the subscription length. We also announced a partnership with Equinix. So if a customer wants, they can deploy these resources at Equinix's data centers all around the world and still get a unified bill from us, and that's it. Once they make those four selections, they subscribe to the service and we take it from there. There's no selecting what product you want, what configuration on that product, etc. You know, we take care of all of that, include the right services and then kind of deliver it to them. So it's really an outcome-based way of procuring technology, as easily as you would provision resources in a public cloud. >> Awesome, so again console, data storage, cloud services, which are key... >> Now, the cloud services. >> And then the partner piece with Equinix for latency and proximity, speed of light type stuff, okay, cool. >> Exactly. Cloud services, very quickly, are integrated solutions to help simplify cloud adoption, and they support both cloud native as well as traditional workloads. Customers can subscribe either to a private cloud offer or a hybrid cloud offer depending on the level of control that they're looking for and the operational consistency that they need. And again, similar to storage services, they pick from kind of four simple steps and we'll deliver it to them within 14 days. And then finally, we've got something called custom solutions. These are for customers who are looking for a more flexible as-a-service environment. They're available right now in over 30 countries, also available to our partner network.
Comes in two flavors. APEX Flex On Demand, which takes anything within our broad infrastructure portfolio, servers, storage, data protection, you name it, and we can turn that into a pay-per-use environment. You can also select what services you'd like to include. So if a customer wants it managed, we can manage it for them. If they don't want it managed, again, you know, we include it without those services. And essentially they can configure their own as-a-service experience. And the data center utility takes it to the next level and offers even more customization options, etc. So that's kind of a quick summary of the announcements in the APEX portfolio. >> Okay, I think I got it. Five buckets: the console, which gives you that full life cycle, that self-service, the storage piece, the cloud services, the Equinix partnership and the partners, that's a whole nother conversation, and then the custom piece if you really want to customize it for your... >> And storage services. >> All right, good, okay, you guys have been busy. So you announced project APEX last fall and so I presume you've been out talking to customers about this, prototyping it, testing it out. Maybe you could share some examples of customers who've tried it out and what the feedback has been and the use cases. >> Yeah, let me give you a couple of examples. We'll start with APEX data storage services. As I said, this one's now generally available. At Dell we believe in drinking our own champagne. So our own IT team has been engaged in a private beta of this service for the past several months and their feedback has helped shape the offer. The feedback that they've given us is that they really like that simple life cycle management. You know, they tell us that it frees up their folks to do a lot of other things, kind of higher-order tasks if you will, versus managing the infrastructure.
They're seeing greater efficiencies in performance management, they like not having to worry about building a capacity pipeline, and they like being able to build out a chargeback process that will allow them to bill internal users based on what's being used. And so they think it's going to be a game changer for them. And, you know, that's the feedback that they've given us, and of course they've given us lots of feedback that we've also put into building the product itself; in short, they really liked the flexibility of it. Let me give you maybe a customer example and then a partner example as well. APEX cloud services. This is one where more and more customers are realizing that for compliance, regulatory or performance reasons, maybe public cloud doesn't really work for them. And so they've been looking for ways to get that experience within their data center. APEX hybrid cloud enables this; using it as a foundation, customers are quickly able to extend workloads like VDI into these different environments. A global technology consulting firm wanted to focus on their business of providing consulting services versus, you know, managing their infrastructure. And so what they also really liked was the pay-per-use model and the ability to scale up without having to engage in renegotiating terms. They also appreciated and liked the cost transparency that we provided, and their feedback to us was that it was sort of unmatched with other solutions that they'd seen, and they liked the cost-containment benefits because it gives them much more control over their budget. And then from a partner standpoint, APEX custom solutions, as I said, is available in over 30 countries today through our vast partner network. We've got a series of lucrative partner options for them. A recent win that we saw in the space was with a healthcare provider. This particular healthcare provider was constantly challenging their IT team to improve service delivery.
They wanted to onboard customers faster and drive services deployment while ensuring the compliance of their healthcare data; as you I'm sure know, there are some strict requirements in this space. With Flex On Demand they were able to dramatically cut that onboarding time from months to days, and they were able to be just as agile while simplifying their compliance with industry regulations for data privacy and sovereignty. And so their feedback was that they were able to be just as agile and just as cost effective as a cloud solution, but without the concerns over data residency. So those are a few use cases and real customer examples of customers that have tried out these services. >> Awesome, thanks for that. And the real transformation for the partners as well. I think actually if partners leaned in they can make a lot of money doing this. >> It means so much in profitability. >> Yeah, well, hey, that's what the channel cares about. I mean, it's different from the past of selling boxes, that was, okay, I know you've got my margin there, but this I think actually opens huge opportunities to get deeper into the customer, add value in so many other different ways; the channel is undergoing tremendous transformation. I have to ask you, so you have flexible consumption, you've had that for a number of years. I think the first time I saw these types of models emerge was like the late '90s or early 2000s. So can you explain how APEX differs from your past as-a-service offerings? And I got another sort of second part of the question after that. >> Yeah, you're right. We've offered these solutions for a while, and very successfully so I should add; certainly over the past year our business has seen tremendous momentum. And if you listen to our earnings you've probably heard that. What's different here is, think of APEX as a two-tiered portfolio. So we've been doing that.
We're going to continue doing that, but what I talked about in APEX custom solutions is what we've been delivering for a while. And of course, we continue to improve it as we get customer feedback on it. What we're doing here on the turnkey side is that we're taking not a product-based, not a service-based, but really an outcome-based approach, and what's different there, and what I mean by that, is we're truly looking to bypass complexity throughout the entire technology life cycle. We're truly kind of looking to figure out where we can remove a significant amount of time and effort from IT teams by delivering them an offer that's simple from the get-go. Each of these offers has been designed from the ground up to provide not just the innovative technology that our customers have known us for, but to do so with greater simplicity, to deliver greater agility while still retaining the control that we know our customers want. That is what is different. And by doing that, by making this consistently available in a very kind of simple way, we believe we can scale that experience. That, backed up with our services, our scale, and the supply chain leadership that we've had for a while, built on our industry-leading portfolio, the broadest in the industry, then delivering it with unmatched time to value at whatever location the customer is looking for. By doing these three things we believe we're combining the agility that our customers want with the control that they need, and putting it all together in the simplest way possible and delivering it with our partners. So I think that's what's different with what we're doing now, and frankly that's also our commitment going forward.
So you can imagine, today I talked to you about our cloud solutions and our infrastructure solutions, but imagine going forward all of our solutions, server, storage, data protection, workload, end user devices, telecom solutions, edge solutions, gaming devices, all of them kind of delivered in this way. And, you know, in a way that only Dell Technologies and our partner community can. >> When I hear you say outcome based, a lot of people may say, well, what's that? I'll tell you what I think it is. The outcome I want is, I want my IT to be fast, I want it to be reliable, I want it to be at a fair price. I don't want to run out of storage, for example, and if I need more, I want it fast and I want it simple. I mean, that's the outcome that I want. Is that what you mean by outcome based? >> Absolutely, those are exactly the types of, you know, it's a combination, like you've said, of business as well as technology outcomes that we're targeting. But those are exactly it: availability, uptime, performance, you know, time to value. Those are exactly the types of outcomes that we're targeting with these offers, and that's what our services are designed from the ground up to do. >> Okay, last question, second part of my other question is, I mean, essentially you've got the cloud model. You're bringing that to on-prem, you've got other on-prem competitors, what's different with Dell from the competition? >> Yeah, so I would say from a competitive standpoint, as you've said, we certainly have a series of competitors in the on-prem space, and then we've got another set of competitors in the cloud space. And what we are truly trying to do is, you know, bring the best of that experience to wherever our customers want to deploy these resources. From an on-prem standpoint I think our differentiation always has been and will continue to be the breadth of our portfolio.
You know, the technology that we provide, and bringing this APEX experience in a very simple and consistent way across that entire breadth of products. The other differentiation that I believe we have is frankly our pricing model, right? You mentioned it a few times, I talked a little bit about it earlier as well. If I use storage as an example, we are not going to charge you a penalty if you need to scale up and down. We understand and realize that businesses, you know, need to have that flexibility to be able to go up and down, and having a simple, clear, consistent rate that they understand very clearly upfront, that they have visibility to, that, you know, charges them in kind of a fair way, is another point of differentiation. So not having that kind of surge pricing, if you will. And then finally, the third difference is our services, our scale, our supply chain leadership, and then just our say-do ratio, right? When we say something we're going to do it and we're going to deliver it. From a cloud standpoint it's really interesting. You know, I talk about this trade off that our customers often have to make. You have to give up control to get this simplicity and agility, and we're not going to make you do that, right? As IT, you manage, you know, you've got full control of that infrastructure while still getting the benefits of the agility and the simplicity that today you often have to go to public cloud for. Again, from a pricing standpoint, the other differentiation that we have is you're not going to be paying to access your own data. You pay a clear rate and it stays consistent; there's no egress or ingress charges. There's no retraining of your workforce. There's no refactoring of the application to move it there. There's all these kind of unspoken costs that go into moving an application into public cloud that you're not going to see with us.
And then finally, from a performance standpoint, we do believe that the performance that we have with the APEX solution is significantly better. You know, just the fact that you've got dedicated infrastructure, you're not running into issues with noisy neighbors, for example, as well as just the underlying quality of the technology that we deliver. I mean, the experience that we've had, not just in this space, but in delivering to, you know, hundreds of thousands of customers across hundreds of thousands of locations. Others are very good at optimizing a few locations for hundreds of thousands of customers, but we've been delivering this experience for years across the world, across hundreds of thousands of data centers, and the expertise that our services, our supply chain, and in fact our product teams have built out I think will serve us well. >> Great, a lot of depth there Akanksha, thanks so much. And congratulations for formally giving birth to APEX, and best of luck. Really appreciate you coming on theCUBE and sharing. >> Thanks Dave, thank you for having me. >> And it was really our pleasure. And thank you for watching everybody. This is theCUBE's ongoing coverage of Dell Tech World 2021, we'll be right back. (upbeat music)
Ali Golshan, Red Hat | KubeCon + CloudNativeCon Europe 2021 - Virtual
>> Announcer: From around the globe, it's theCUBE with coverage of KubeCon and CloudNativeCon Europe 2021 virtual, brought to you by Red Hat, the Cloud Native Computing Foundation and ecosystem partners. >> Hello, and welcome back to theCUBE's coverage of KubeCon and CloudNativeCon 2021 virtual. I'm John Furrier, host of theCUBE, here with a great guest I'm excited to talk to. His company, where he was founding CTO, was bought by Red Hat. Ali Golshan, Senior Director of Global Software Engineering at Red Hat, formerly CTO of StackRox. Ali, thanks for coming on, I appreciate it. Thanks for joining us. >> Thanks for having me, excited to be here.
>> Yeah, you guys got a great story. Obviously cloud native applications are rocking and rolling. You guys were in early as serverless emerged, then Kubernetes, and then security in what I call the real-time developer workflow. Ones that are building really fast, pushing code. Now it's called day two operations. So cloud native day two operations kind of encapsulates this new environment, and you guys were right in the sweet spot of that. So this became quite the big deal; Red Hat saw an opportunity to bring you in. What was the motivation when you guys did the deal? Was it like, "wow," this is a good fit? How did you react? What was the vibe at StackRox when this was all going down? >> Yeah, so I think there's really three areas you look for anytime a company comes up and sort of starts knocking on your door. One is really, is the team going to be the right fit? Is the culture going to be the right environment for the people? For us, that was a big part of what we were taking into consideration. We found Red Hat's general culture, how they approach people and sort of the overall approach to the community, was very much aligned with what we were trying to do. The second part of it was really the product fit. So we had from very early on started to focus purely on the Kubernetes components and doing everything we could; we call it sort of our product approach, built in versus bolted on, and this is sort of a philosophy that Red Hat had adopted for a long time, and it's a part of a lot of their developer tools, part of their shift-left story, as well as part of OpenShift. And then the third part of it was really the larger strategy of how do you go to market. So we were hitting that point where we were in triple-digit customers and we were thinking about scalability and how to scale the company. And that was the part that also fit really well, which was, obviously, Red Hat more and more hearing from their customers about the importance and the criticality of security.
So that last part happened to be one part we ended up spending a lot of time on, and it ended up being sort of three out of three matches that made this acquisition happen. >> Well congratulations, always great to see startups in the right position. Good hustle, great product, great market. You guys did a great job, congratulations. >> Thank you. >> Now, the big news here at KubeCon, the Linux Foundation open-source event: you guys are announcing that you're open-sourcing StackRox. This is huge news, and obviously you now work for an open-source company, so that was probably a part of it. This is the top story here for this segment, so take us through the open-source news. >> Yeah, so traditionally StackRox was a proprietary tool. We do have open-source tooling, but the entire platform in itself was a proprietary tool. This has been a number of discussions that we've had with the Red Hat team from the very beginning, and it sort of aligns around a couple of core philosophies. One is obviously Red Hat at its core being an open-source company, being very much plugged into the community and working with users, developers and engineers to be able to get feedback and build better products. But I think the other part of it is that a lot of us, from a historic standpoint, have viewed security to be a proprietary thing, as we've always viewed the sort of magic algorithms or black boxes or some magic under the hood that really moved the needle. And that happens not to be the case anymore, also because StackRox's philosophy was really built around Kubernetes and built-in. We feel like one of the really great messages around open-sourcing a security product is that you build trust with the community by being able to expose: here's how the product works, here's how it integrates, here are the actions it takes, here are the ramifications or repercussions of some of the decisions you may make in the product.
Those all, I feel, make for very good stories of how you build connection, trust and communication with the community and actually get feedback on it. And obviously at its core, the company is very much focused on Kubernetes, developer tools, service mesh; these are all open-source toolings, obviously. So for us it was very important to talk the talk and walk the walk, and this was sort of an easy decision at the end of the day for us, to take the platform open-source. And we're excited about it, because I think most customers still want a productized, supported commercial product. So while it's great to have some of the tip-of-the-spear customers look at the open-source, adopt it and be able to drive it themselves, we're still hearing from a lot of the customers that what they do want is really that support and that continuous management, maintenance and improvement around the product. So we're actually pretty excited. We think it's only going to increase our velocity and momentum into the community. >> Well, I got some questions on how it's going to work, but I do want to get your comment, because I think this is a pretty big deal. I had a conversation about 10 years ago with Doug Cutting, who was the founder of Hadoop, and he was telling me a story about a company he worked for: you know, all this coding, they went under, and the IP was gone, the software was gone. It was a story to highlight that proprietary software sometimes can never see the light of day, and it doesn't continue. Here, you guys are going to continue the story, continue the code. How does that feel? What are your expectations? How's that going to work? I'm assuming you're going to open it up, which means that anyone can download the code. Is that right? Take us through how it works: first of all, do you agree that this is going to stay alive, and how's it going to work?
>> Yeah, I mean, I think as a founder one of the most fulfilling things is to have something you build become sustainable and stand the test of time. And especially in today's world, open-source is a tool that is in demand, and in a market that's growing it is really a great way to do that, especially if you have a sort of established user base and customer base. And then backing that with the thousands of customers and users that come with Red Hat itself gives us a lot of confidence that it's going to continue and only grow further. So the decision wasn't a difficult one, although transparently, I feel like even if we had pushed back, I think Red Hat was pretty determined about open-sourcing it anyway; but that's to say we actually were in agreement to go down that path. I do think that there are a lot of details to be worked out, because obviously there are a lot of nuances in how you build a product, manage it and maintain it, and then how you introduce community feedback and community collaboration as part of open-source projects is another big part of it. I think the part we're really excited about is that it's very important to have really good community engagement, maintenance and response. And for us, even though we actually discussed this particular strategy at StackRox, one of the hindering aspects was really the resources required to manage and maintain such a massive open-source project. So having Red Hat behind us, with a lot of this experience, was very relevant. I think, as a startup, to start proprietary and suddenly open it up and try to change your entire business model, go-to-market strategy and commercialization, and change the entire culture of the company, can sometimes create a lot of headwind. And as a startup, I feel like every year you're just trying not to die until you create that escape velocity.
So those were, I think, some of the risk items that Red Hat was able to remove for us, and as a result it made the decision that much easier. >> Yeah, and you got the mothership with Red Hat; they've done it before, they've been doing it for generations. You guys, you're in the startup, things are going crazy. It's like whitewater rafting, everything's happening so fast. And now you've got the community behind you, because you're going to have the CNCF at KubeCon. I mean, it's a pretty great community, the support is amazing. I think the only thing the engineers might want to worry about is going back into the code base and cleaning things up a bit; as you start to see the code you're like, wait a minute, their names are on it. So it's always a fun time. And in all seriousness now, this is a big story on DevSecOps, and I want to get your thoughts on this, because Kubernetes is still emerging, and DevOps is awesome, we've been covering that for all of the life of theCUBE, for the 11 years now, and the greatness of DevOps, but now DevSecOps is critical and Kubernetes-native security is what people are looking at. When you look at that trend only continuing, what's your focus? What do you see? Now that you're in Red Hat, former CTO of StackRox and now part of Red Hat, it's going to get bigger and stronger: Kubernetes-native and shifting left toward DevSecOps. What's your focus? >> Yeah, so I would say our focus is really around two big buckets. One is Kubernetes native; a different way to think about it, as we think about our roadmap planning and go-to-market strategy, is that it goes hand in hand with being infrastructure native. That's how we think about it, and as a startup we really had to focus on an area, and Kubernetes was a great place for us to focus because it was becoming the dominant orchestration engine. Now that we have the resources and the power of Red Hat behind us, the way we're thinking about this is infrastructure native.
So, thinking about cloud native infrastructure, where you're using composable, reusable constructs and objects: how do you build potential offerings, features or security components that don't rely on third-party tools or components anymore? How do you leverage the existing infrastructure itself to be able to conduct some of these traditional use cases? One example we use for this particular scenario is networking. The way firewalling and segmentation was typically done was, people would tweak iptables, or they would install, for example, a proxy or a container that would terminate mTLS or sit inline, and it would create all sorts of operational and risk overhead for users and for customers. And one of the things we're really proud of, as sort of the company that pioneered this notion of cloud native security, is that if you just leverage network policies in Kubernetes, you don't have to be inline, you don't have to have additional privileges, and you don't have to create additional risks or operational overhead for users. So we're taking those sort of core philosophies and extending them, the same way we did with Kubernetes, all the way through service mesh; we're doing the same sorts of things with Istio. A lot of the things people are traditionally doing through, for example, proxies at layers six and seven, we want to do through Istio. And in the same way, for example, we introduced a product called GoDBledger, which was an open-source tool that would basically look at YAML and Helm charts and give you best-practices responses, and it's something we want to bring, for example, to your Git repositories. We want to take those sorts of principles, enabling developers, giving them feedback, allowing them not to break their existing workflows, and leveraging components in existing infrastructure, to be able to push security into cloud native.
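The network-policy point can be sketched concretely. The following is a minimal illustration, not StackRox code, with a hypothetical namespace and pod labels: the same pod-to-pod restriction an inline proxy might enforce is instead declared as a Kubernetes NetworkPolicy manifest (built here as a plain dict) and enforced by the cluster's own CNI, with no extra privileges.

```python
# Minimal sketch of a Kubernetes NetworkPolicy manifest, built as a plain
# dict. The namespace and labels ("shop", app=payments, app=frontend) are
# hypothetical. It allows ingress to "payments" pods only from "frontend"
# pods; the cluster network plugin enforces it, with no inline proxy needed.
import json

def allow_only_from(namespace: str, target: dict, allowed: dict) -> dict:
    """Build a NetworkPolicy restricting ingress to `target`-labeled pods."""
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": "allow-frontend-to-payments", "namespace": namespace},
        "spec": {
            "podSelector": {"matchLabels": target},   # pods being protected
            "policyTypes": ["Ingress"],
            "ingress": [{"from": [{"podSelector": {"matchLabels": allowed}}]}],
        },
    }

policy = allow_only_from("shop", {"app": "payments"}, {"app": "frontend"})
print(json.dumps(policy, indent=2))
```

The point of the sketch is the design choice Ali describes: the policy is declarative data the cluster enforces, rather than a privileged component sitting in the data path.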
And really the two pillars we look at are ensuring we can get users and customers up and running as quickly as possible, and reducing as much as possible the operational overhead for them over time. So we feel these two are really at the core of open-sourcing and building into the infrastructure, which has given us momentum over the last six years, and we feel pretty confident that with Red Hat's help we can expand that even further. >> Yeah, you bring up a good point, and certainly as you get more scale with Red Hat and the customer base, it's not only dealing with threat detection around containers and cloud native applications; you've got to build into the life cycle and figure out, okay, it's not just Kubernetes anymore, it's something else. And you've got advanced cluster security with Red Hat, they've got the OpenShift cloud platform, you're going to have managed services, so this means you're going to have scale, right? So how do you view that? Because now you guys are at the center of the advanced cluster security paradigm for Red Hat. That's a big deal for them, and they've got a lot of emerging technologies developing around that; we covered that in depth. So when you start to get into advanced cluster, it's compliance too, it's not just threat detection. You've got insights, telemetry, data acquisition, so you have to kind of be part of that now. How do you guys feel about that? Are you up for the task? >> Yeah, I hope so. It's early days, but we feel pretty confident about it; we have a very good team. As part of advanced cluster security we also work very closely with the advanced cluster management team in Red Hat, because it's not just about security, it's about how you operationalize it, how you manage it and maintain it, and, to your point, run it long-term at scale. The compliance part of it is a very important part.
I still feel like that's in its infancy, and these are a lot of conversations we're having internally at Red Hat. We all feel that compliance is going to move from the standard benchmarks you have from CIS, or particular compliance requirements like PCI or NIST, into: how do you create more flexible and composable policies, through a unified language, that allow you to create more custom, more useful things specific to your business? So this is actually an area where we're doing a lot of collaboration with the advanced cluster management team, which is: how do you bring to light a really easy way for customers to describe and abstract policies, and then at the same time be able to actually enforce them? We think that's really the next key point of what we have to accomplish, to be able to not only gain scale, but to take this notion of not only detection and response, but to actually build in what we call declarative security into your infrastructure. And what that means is to be able to really dictate how you want your applications, your services and your infrastructure to be configured and run, and then anything that conflicts with that is auto-responded to. I think that's really the larger vision that, with Red Hat, we're trying to accomplish. >> And that's a nice posture to have: you build it in, get it built in, you have the declarative models, then you kind of go from there and let the automation kick in. You've got insights coming in from Red Hat. So all these things are kind of evolving. It's still early days, and I think it was a nice move by Red Hat, so congratulations.
Final question for you: as you prepare to go to the next-generation KubeCon, we're also seeing a lot more end-user participation; cloud native is going mainstream, and when I say mainstream, I mean seeing beyond the hyperscalers and the early adopters. Kubernetes and other infrastructure control planes are coming in, and you start to see the platforms emerge. Nobody wants another security tool; they want platforms that enable applications, not more tools. As it gets more complicated, what's going to be the easy button in security cloud native? What's the approach? What's your vision on what's next? >> Yeah, so I don't know if there is an easy button in security, and part of it is that there's just such fragmentation in use cases and designs and infrastructure, especially if you're dealing with such a complex stack, and not just a complex stack, but use cases that not only span runtime but deal with your deployment and your development life cycle. So the way we think about it is more this notion, which has been around for a long time, of the shared responsibility model. Security is not security's job anymore, especially because security teams probably cannot really keep up with the learning curve. They have to understand containers, then they have to understand Kubernetes and Istio and Envoy and cloud platforms and APIs, and there's just too much happening. So the way we think about it is: if you deal with security in a declarative fashion, and if you can state things in a way where how infrastructure is run is properly configured, so it's more about safety than security, then what you can do is push a lot of these best practices back as part of your Git process. Involve developers, engineers, the right product security teams that are responsible for day-to-day managing and maintaining this. And the example we think about is CVEs.
There are plenty of, for example, vulnerability tools, but CVEs are still an unsolved problem, because: where are they, what is the impact, are they actually running, are they being exploited in the wild? All these things have different ramifications as you span the life cycle. So for us, it's understanding context, understanding assets, ensuring how the infrastructure has to handle that asset, and then ensuring that the response is routed to the right team so they can address it properly. And I think that's really our larger vision: how can you automate this entire life cycle, so the information is routed to the right teams, the right teams are applying it to the application, and in the future our goal is not just to harden the workload or the compute environment, but to use this information to actually harden the applications themselves? And that creates that additional agility and scalability. >> Yeah, it's built into the life cycle right from the beginning: more productivity, more security, and then letting everything take over on the automation side. Ali, congratulations on the acquisition deal with Red Hat, a buyout that was great for them and for you guys. Take a minute to quickly answer a final question for the folks watching here. The big news is you're open-sourcing StackRox, so that's the big news here at KubeCon. What can people do to get involved? Just share a quick commercial for what people can do to get involved. What are you guys looking for? A pledge to the community? >> Yeah, what we're looking for is more involvement and direct feedback from our community, from our users, from our customers. There are a number of ways: obviously the StackRox platform itself is being open-sourced, and we have other open-source tools like KubeLinter. What we're looking for is feedback from users as to what are the pain points that they're trying to solve for.
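The context-aware CVE triage described above can be sketched in a few lines. This is a hedged illustration, not StackRox's actual logic; the team names and thresholds are hypothetical. The point is that routing a finding depends on deployment context (deployed, reachable), not severity alone.

```python
# Hypothetical sketch of context-aware CVE routing: severity alone does not
# decide the response; whether the component is actually deployed and
# exposed does. Team names and the 7.0 threshold are illustrative only.
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    severity: float        # CVSS score, 0-10
    deployed: bool         # present in a running workload?
    internet_exposed: bool # reachable from outside the cluster?

def route(finding: Finding) -> str:
    """Return the hypothetical team that should own the response."""
    if not finding.deployed:
        return "backlog"            # fix at build time, no runtime urgency
    if finding.internet_exposed and finding.severity >= 7.0:
        return "incident-response"  # running and reachable: act now
    return "app-team"               # running but contained: owners patch

print(route(Finding("CVE-2021-44228", 10.0, deployed=True, internet_exposed=True)))
# -> incident-response
```

The same critical-severity finding would route to "backlog" if the vulnerable component were never deployed, which is the life-cycle distinction being made in the interview.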
And then give us feedback as to how we're not addressing those, or how we can better design our systems. I mean, this is the sort of feedback we're looking for, and naturally, with more resources, we can be a lot faster in response. So send us feedback, good or bad. We would love to hear it from our users and our customers and get a better sense of what they're looking for. >> Innovation out in the open, love it. Got to love open-source going next gen. Ali Golshan, Senior Director of Global Software Engineering, the new title at Red Hat, former CTO and founder of StackRox, which Red Hat acquired in January 2021. Ali, thanks for coming on, congratulations. >> Thanks for having me. >> Okay, that's theCUBE's coverage of KubeCon and CloudNativeCon 2021. I'm John Furrier, your host. Thanks for watching. (soft music)
SUMMARY :
In this segment, John Furrier talks with Ali Golshan, founding CTO of StackRox and now Senior Director of Global Software Engineering at Red Hat, which acquired StackRox in January 2021. They discuss the news that Red Hat is open-sourcing the StackRox platform, the team, product and go-to-market fit behind the acquisition, Kubernetes-native and infrastructure-native security, DevSecOps and shifting left, advanced cluster security and compliance, and how users can get involved by sending feedback on the open-source tools.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Ali Golshan | PERSON | 0.99+ |
January, 2021 | DATE | 0.99+ |
John Furrier | PERSON | 0.99+ |
Doug Cutting | PERSON | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
January | DATE | 0.99+ |
John Furrie | PERSON | 0.99+ |
StackRox | ORGANIZATION | 0.99+ |
Ali | PERSON | 0.99+ |
11 years | QUANTITY | 0.99+ |
one part | QUANTITY | 0.99+ |
three | QUANTITY | 0.99+ |
KubeCon | ORGANIZATION | 0.99+ |
third part | QUANTITY | 0.99+ |
second part | QUANTITY | 0.99+ |
Global Software Engineering | ORGANIZATION | 0.99+ |
three matches | QUANTITY | 0.98+ |
One | QUANTITY | 0.98+ |
Kubernetes | TITLE | 0.98+ |
today | DATE | 0.98+ |
KubeCon | EVENT | 0.98+ |
two operations | QUANTITY | 0.98+ |
two | QUANTITY | 0.98+ |
two pillars | QUANTITY | 0.97+ |
DevSecOps | TITLE | 0.97+ |
one example | QUANTITY | 0.97+ |
one | QUANTITY | 0.96+ |
Hadoop | ORGANIZATION | 0.96+ |
three areas | QUANTITY | 0.95+ |
StackRox | TITLE | 0.95+ |
Red Hat | TITLE | 0.93+ |
GoDBledger | TITLE | 0.93+ |
three big areas | QUANTITY | 0.92+ |
Sequoya | ORGANIZATION | 0.92+ |
Istio | TITLE | 0.91+ |
RedHat | ORGANIZATION | 0.91+ |
OpenShift | TITLE | 0.9+ |
Kube Con cloud native Con 2021 | EVENT | 0.88+ |
DevOps | TITLE | 0.88+ |
Istio | ORGANIZATION | 0.87+ |
thousands of customers | QUANTITY | 0.86+ |
Cloud Native Con 2021 | EVENT | 0.85+ |
theCUBE | ORGANIZATION | 0.84+ |
last six years | DATE | 0.83+ |
Cloud Native Con Europe 2021 | EVENT | 0.82+ |
KubeLinter | TITLE | 0.82+ |
10 years ago | DATE | 0.81+ |
Kubecon | ORGANIZATION | 0.81+ |
two big buckets | QUANTITY | 0.8+ |
CloudNativeCon Europe 2021 | EVENT | 0.8+ |
Envoy | TITLE | 0.79+ |
Linux | ORGANIZATION | 0.79+ |
APAC LIVE RT
>> Good afternoon and welcome back to our audience here in Asia Pacific. This is Sandeep again, from my home studio in Singapore. I hope you found the session to be insightful. The key takeaway for me was how the world is going through a massive transformation, driven by workload-optimized solutions wrapped around by 360-degree security, as Neil McDonald talked about, and underpinned by scale, whether you're on exascale compute, public cloud or on the edge; that's what's underpinning the digital transformation our customers are going to go through. I have two special guests with me. Let me just quickly introduce them: Santosh, who is the Managing Director for Intel in APJ, and Dorinda Kapoor, Managing Director for HPE in Asia Pacific. So, good afternoon, both you gentlemen. >> Good afternoon. >> So Santosh, my first question is to you. First of all, a comment: the passion with which Pat Gelsinger talked through the four superpowers was amazing. I could see that passion come through the screen, and I think everybody in the audience could relate to it. We are, as you know, on the verge of the launch of the Gen10 Plus powered by the Ice Lake processor from Intel. What are you seeing, and what improvements should our customers expect, especially with regard to business outcomes? >> Yeah, so first of all, thank you so much for having me in this session, and as you said, Sandeep, you could really see how energized we are, and you heard that from Pat as well. So we launched the third-gen Intel Xeon processors, or Ice Lake, about a couple of weeks ago, and there are lots of benefits that you get in these new products. But I thought what I'll do is try and summarize them in three key buckets.
The first one is about the performance benefits that these new products bring. The second is the value of the platform, and the last piece is about the partnerships and how they make deployment really easy and simple for our customers. Let me start with the first one, which is about performance, and the big jump that we're seeing: it's about a 46% performance increase generation over generation. It's flexible, optimized performance from the edge to the cloud, where you would see about 1.5 to 1.7x improvements on key workloads like cloud, 5G, IoT, HPC and AI that are so critical all around us. It's probably the only data center processor that has built-in AI acceleration, which helps with faster analytics. It's got security optimizations with Intel SGX, which basically gives you a secure enclave when sensitive data is being transacted, and it also has crypto acceleration that reduces any performance impact from the pervasive encryption that we have all around us. Now, the second key benefit is about the platform, and if you remember when we launched Skylake in 2017, we laid out a strategy that said we are here to help customers move, store and process data. So it's not just the CPU that we announced with the third-gen Xeon announcements. We also announced products like the Optane persistent memory 200 series, which gives you about 32% higher memory bandwidth and six terabytes of memory capacity per socket, the Optane SSDs, the Intel Ethernet 800 series adapter, which gives you about 200 Gbps per port, which means you can move data much faster, and the Intel Agilex FPGAs, which give you about double the fabric performance, which means if there are key workloads that you want to offload, then you have the FPGAs that can really help you there. Now, what does the platform do for our customers?
It helps them build higher application and system-level performance that they can all benefit from. The last P, which is the partnerships area, is a critical one, because we've had decades of experience of solution delivery with a broad ecosystem, and with partners like HPE we build elements like the Intel Select Solutions and market-ready solutions that make it so much easier for our customers to deploy. With over 50 million Xeon Scalable processors shipped around the world and a billion Xeon cores powering the cloud since 2013, customers have a proven solution that they can work with. So in summary, I want you to remember the three key Ps that can really help you be successful with these new products: the performance uplift you get generation over generation; the platform benefits, so it's not just the CPU but the things around it that make the system and the application work way better; and then the partnerships that give you peace of mind, because you can deploy proven solutions in your organization and serve your customers better. >> Thanks, Santosh, for clearly outlining the three Ps; that really resonates well. So let me just turn over to you, Dorinda. There are a lot of new solutions and new technologies that Santosh talked about, security, and a lot of performance benefits, and yet our customers have to go through a massive amount of change from a digital transformation perspective in order to take all those advantages and stay competitive. How is HPE addressing the needs and challenges of our customers, and how are we really helping them accelerate their transformation journey? >> Yeah, sure. Sandeep, thanks a lot for the question. And you are right, most businesses actually need to go through digital transformation in order to stay relevant in the current times.
And in fact, COVID-19 has further accelerated the pace of digital transformation for most of our customers. Digital transformation is all about delivering differentiated experiences and outcomes at the edge, by converting data collected from multiple different sources into insights and actions. So we at HPE believe that the enterprise of the future is going to be edge-centric, data-driven and cloud-enabled, and with our strategy of providing an edge-to-cloud platform, and a complete portfolio of software, networking, compute and storage solutions both at the edge and the core to, of course, collect, transmit, secure, analyze and store data, I believe we are in the best position to help our customers start and execute on their transformation journey. Now, the reality is various enterprises are at different stages of their transformation journey. We at HPE are able to help customers who are at an early stage, or just starting the transformation journey, to build their transformation roadmaps with the help of our advisory teams, and after that help them execute on the same with our professional services team. For the customers who are already midway in the transformation journey, we have been helping them differentiate themselves by delivering workload-optimized solutions which provide the latency, flexibility and performance they need to turn data into insights and innovations to help their business. Now, speaking of workload-optimized solutions, HPE has actually doubled down in this area with the help of our partners like Intel, which powers our latest Gen10 Plus platform. This brings more compute power, memory and storage capacity, which our customers need as they process more data and solve more complex challenges within their business. >> Thank you. Thanks, Dorinda, I think that's really insightful.
Hopefully our customer base who have joined us here can hear that and take advantage of how HPE is helping fast-track that transformation. Santosh, I come back to you on something you touched on about expanding capacity. We saw news about Intel investing $20 billion or so in terms of adding manufacturing capacity. I'd like to hear from your perspective how this investment Intel is making is a game changer, and how it will shape the industry as we move forward. >>Yeah, as we all know, there is accelerated demand for semiconductors across the world. Digitization, especially in the environment that we're going through, has really made computing pervasive, and it's becoming the foundation of every industry and our society. The world just needs more semiconductors. Intel is in a unique position to rise to that occasion and meet the growing demand for semiconductors, given the advanced manufacturing scale that we have. So Intel Foundry Services, which you mentioned, is part of Intel's new IDM 2.0 strategy that Pat announced, which is a differentiated, winning formula that will really deliver the new era of innovation, manufacturing, and product leadership. We will expand our manufacturing capacity, as you mentioned, with that $20 billion investment, building two fabs in Arizona, and there's more to come in the years ahead. These fabs will support the expanding requirements of our current products and also provide committed capacity for our foundry customers. Our foundry customers will also be able to leverage our leading-edge process, 3D packaging technology, and world-class IP portfolio. So I'm really, really excited. I think it's a truly exciting time for our industry. The world requires more semiconductors, and Intel is stepping in to help build them. >>Fantastic, fantastic. Thank you.
Santosh, it is really heartening to know, and we really cherish the long partnership HPE and Intel have together. With this Gen10 Plus launch and the partnership going forward, I look forward to even more motivation to work together. Really appreciate your taking the time, and thank you very much for joining us. >>Thank you. >>Thanks. >>Okay, so with that, I will move on to our second segment and invite another special guest. This is Pete Chambers, who is the managing director for AMD APJ. Good afternoon, Pete, can you hear us well? >>I can. Thank you, Sandeep, great to be here. >>Good, and thanks for joining me. I thought I'd just open up with a comment around the 19 world records AMD and HPE have together; it's a testament to the joint working model, the relationship, and the collaboration, so again, really thank you for the partnership we have. Let me just quickly get to the first question. Listening to what Antonio and Lisa were discussing, there's a huge flow of data, and the technology and the compute need to be closer to where the data is being generated. How is AMD helping leverage some of those technologies to bring features and benefits and drive outcomes for customers here in Asia? >>Yeah, as Lisa mentioned, we're now in a high-performance computing megacycle, driven by cloud computing, digital transformation, and 5G and AI, which means that everyone needs and wants more compute. IDC predicts that by 2023, 65 percent of APAC GDP will be digitized.
So there's an inflection coming, with digital transformation at the fore. Businesses are increasingly looking for trusted partners like HPE and AMD to help them address and adapt to these complex emerging technologies while keeping their IT infrastructure highly efficient. AMD is helping enable this transformation by bringing leadership performance, such as high core densities, high IPC, and increased I/O, while at the same time offering the best efficiency and performance per watt. All third-gen EPYC CPUs support 128 lanes of superfast PCIe Gen 4 connectivity, up to four terabytes of memory, and multiple layers of security. We've heard from our customers that security continues to be a key consideration, and AMD continues to listen. With third-gen EPYC we're providing a multitude of security features, such as secure root of trust at the BIOS level, which we work very closely with HPE on, secure encrypted virtualization, secure memory encryption, and secure nested paging, really giving customers confidence. When designing EPYC, we looked very closely at the key workloads our customers would be looking to enable, and we designed EPYC from the ground up to deliver a superior experience. High-performance computing is growing in this region, and our leadership per-socket core density of up to 64 cores, along with leading I/O and high memory bandwidth, provides a compelling solution to help solve customers' most complex computational problems faster. The new HPE Apollo 6500 Gen10 systems featuring third-gen EPYC are also optimized for artificial intelligence capabilities, to improve training and increase the accuracy of results, and we now support up to eight AMD Instinct accelerators in each of these systems.
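As a rough aside, on a Linux host you can see whether a CPU advertises the memory-encryption features mentioned here (SME, SEV, SEV-ES) by inspecting its feature flags. A minimal sketch; the sample flag line below is a truncated illustration, not real EPYC output:

```python
def memory_encryption_features(cpuinfo_text: str) -> dict:
    """Parse a /proc/cpuinfo-style dump and report which AMD
    memory-encryption flags the CPU advertises."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.lower().startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return {
        "SME (Secure Memory Encryption)": "sme" in flags,
        "SEV (Secure Encrypted Virtualization)": "sev" in flags,
        "SEV-ES (encrypted state)": "sev_es" in flags,
    }

# Illustrative flag line; a real host would list many more flags.
sample = "flags\t: fpu msr pae sme sev sev_es sse4_2 avx2"
features = memory_encryption_features(sample)
```

In practice you would read the real `/proc/cpuinfo` instead of a sample string; the parsing is the same.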
Hyper-converged infrastructure continues to gain momentum in today's modern data center, and our superior core density helps deliver more VMs per CPU, supported by a multitude of security and virtualization features to provide peace of mind. AMD works very closely with industry leaders in HCI like HPE, but also Nutanix and VMware, to help simplify customers' infrastructure. And in recent times we've seen VDI have a resurgence as companies have looked to empower their remote employees. Third-gen EPYC enables more VDI sessions per CPU, providing a more cost-optimized solution. Simply put, EPYC's higher core density per CPU means customers need fewer servers. That means less space required, lower power and cooling expenditure, and, as a result, a tangibly lower total cost of ownership. Add to this the fact, as you mentioned, that AMD EPYC with HPE holds 19 world records across virtualization, energy efficiency, decision support, database workloads, and server-side Java, and it all adds up to a very strong value proposition to encourage CIOs to embark on their next upgrade cycle with HPE and EPYC. >>Interesting. Thank you, Pete, that was really quite insightful. Let me hand this question over to Narendra. Pete talked about great new technologies, new solutions, and new areas that are going to benefit from these technology enhancements. At the same time, if I'm a customer, every time we talk about technology, you need to invest, and the bigger concern for customers is always where this money will come from. So I would like you to share your insights on how HPE is actually helping customers implement these technology solutions, giving them financial flexibility so that they can drive business outcomes. >>Yes, that's a very important point about how HPE is able to help our customers with their transformation.
Now, the reality is that most traditional enterprises are being challenged by new digital-born businesses, who have no dearth of funding and very low expectations of profitability. But in reality, the majority of the capital of these traditional enterprises is tied up in their existing businesses, as they need to keep current operations running while starting their digital transformation at the same time. This, of course, creates real challenges in funding their transformation. Now, with HPE GreenLake cloud services, we are able to help customers fund their transformation journey. Instead of buying up front, customers pay only for what they consume as they scale. We are not only able to offer a flexible consumption model for new investments, but are also able to help our customers monetize the capital which is tied up in their old IT infrastructure, because we can buy back that old infrastructure and convert it into a consumption offering, so customers can continue to use those assets to run their current business. The reality is that HPE is the leader in this as-a-service space, and probably the only vendor able to offer an as-a-service offering across all of our portfolio. If you look at the IDC predictions, 70 percent of applications are not ready for the public cloud and will continue to run in private environments. In addition, everybody talked about the need for AI and HPC, as well as the edge, and more and more workloads are actually moving to the edge, where the public cloud will have less and less of a role to play. But when you look at the customers, they are more and more looking for a cloud-like business model for all the workloads that they're running outside the public cloud. Now, with our GreenLake offering, we are able to take away all the complexity from customers, allowing them to run their workloads wherever they want.
That means at the edge, in the data center, or in the cloud, and consume it in the way they want. In other words, we're able to provide a cloud-like experience anytime, anywhere to our customers. And of course, all these GreenLake offerings are powered by the latest compute capabilities that HPE has to offer. >>Thank you. Thank you, Narendra, that's really very insightful. I have a minute or two, so let me try to squeeze in another question for you, Pete. AMD has just now introduced the third generation of EPYC, congratulations on that. How do you see EPYC helping you accelerate growth in the APAC geography? >>Sure, great question. As I mentioned, third-gen EPYC from AMD once again delivers industry-leading solutions, bending the curve on performance, efficiency, and TCO, helping more than ever to deliver, along with HPE, the right technologies for today and tomorrow. In the server space, it's not just about what you can offer today; you need to be able to predictably deliver innovation over the long term, and we are committed to doing just that. AMD's strategy is to focus on the customer. We continue to see strong growth, both globally and in APAC, in HPC, cloud and web tech, manufacturing, FSI, telco, and the public and government sectors. Our growth plan is focused on getting closer to our customers, directly engaging with HPE, our partners, and the end customer, to help guide them to the best solution and assist them in solving their computing pain points cost-effectively. A recent example of this is our partnership with the Pawsey Supercomputing Centre in Australia, where HPE and AMD will be helping to provide some 200,000 cores across 1,600 nodes and over 750 Radeon Instinct accelerators, empowering scientists to solve today's most challenging problems.
We have doubled our sales and FAE teams in the region over the past year, and we'll continue to invest in additional customer-facing sales and technical people through 2021. AMD has worked very closely with HPE to co-design and co-develop the best technologies for our customers' needs. We joined forces over seven years ago to prepare for the first generation of EPYC at launch, and fast forward to today, it's great to see that HPE now has a very broad range of AMD EPYC servers, spanning from the edge to exascale. So we are truly excited about what we can offer the market in partnership with HPE, and feel that we offer a very strong foundation of differentiation for our channel partners to address their customers' need to accelerate their digital transformation. Thank you, Sandeep. >>Thank you. Thanks, Pete. It's been amazing partnering with AMD here, and thanks for your sponsorship. Together we want to work with you to create another 19 world records, right from here in Asia. Absolutely. So with that, we are coming to the end of the event. Really, thanks for coming, Pete, and to our audience here, because it has been a great couple of hours. I hope you all found these sessions very insightful. You heard from our worldwide experts where the world is moving in terms of transformation, and what HPE is bringing with our compute workload-optimized solutions, which work regardless of what scale of computing you're using, wrapped around with 360-degree security, and offered as a true as-a-service experience. But before you drop off, I would like to request you to please scan the QR code you see on your screen and fill in the feedback form. We have a lucky draw for $50 worth of vouchers for five lucky winners today. So please pick up your phone, spend a minute or two, and give us your feedback. Thank you very much again for this wonderful day.
And I wish everybody a great day. Thank you.
KC6 Ali Golshan V1
>> Announcer: From around the globe, it's theCUBE, with coverage of KubeCon and CloudNativeCon Europe 2021 Virtual, brought to you by Red Hat, the Cloud Native Computing Foundation, and ecosystem partners. >> Hello, and welcome back to theCUBE's coverage of KubeCon and CloudNativeCon 2021 Virtual. I'm John Furrier, host of theCUBE, here with a great guest I'm excited to talk to. His company, where he was founding CTO, was bought by Red Hat: Ali Golshan, Senior Director of Global Software Engineering at Red Hat, formerly CTO of StackRox. Ali, thanks for coming on, I appreciate it. Thanks for joining us. >> Thanks for having me, excited to be here.
>> Yeah, you guys got a great story. Obviously cloud native applications are rocking and rolling. You guys were in early serverless emerges, Kubernetes and then security in what I call the real time developer workflow. Ones that are building really fast, pushing code. Now it's called day two operations. So cloud native did two operations kind of encapsulates this new environment. You guys were right in the sweet spot of that. So this became quite the big deal, Red Hat saw an opportunity to bring you in. What was the motivation when you guys did the deal Was it like, "wow" this is a good fit. How did you react? What was the vibe at the StackRox when this was all going down? >> Yeah, so I think there's really three areas you look for, anytime a company comes up and sort of starts knocking on your door. One is really, is the team going to be the right fit? Is the culture going to be the right environment for the people? For us, that was a big part of what we were taking into consideration. We found Red Hat's general culture, how they approach people and sort of the overall approach the community was very much aligned with what we were trying to do. The second part of it was really the product fit. So we had from very early on started to focus purely on the Kubernetes components and doing everything we could, we call it sort of our product approach built in versus built it on and this is sort of a philosophy that Red Hat had adopted for a long time and it's a part of a lot of their developer tools, part of their shift left story as well as part of OpenShift. And then the third part of it was really the larger strategy of how do you go to market. So we were hitting that point where we were in triple digit customers and we were thinking about scalability and how to scale the company. And that was the part that also fit really well which was obviously, RedHat more and more hearing from their customers about the importance and the criticality of security. 
So that last part happened to be one part. We ended up spending a lot of time on it, ended up being sort of the outer three matches that made this acquisition happen. >> Well congratulations, always great to see startups in the right position. Good hustle, great product, great market. You guys did a great job, congratulations. >> Thank you. >> Now, the big news here at KubeCon as Linux foundation open-source, you guys are announcing that you're open-sourcing at StackRox, this is huge news, obviously, you now work for an open-source company and so that was probably a part of it. Take us through the news, this is the top story here for this segment tickets through open-source. Take us through the news. >> Yeah, so traditionally StackRox was a proprietary tool. We do have open-source tooling but the entire platform in itself was a proprietary tool. This has been a number of discussions that we've had with the Red Hat team from the very beginning. And it sort of aligns around a couple of core philosophies. One is obviously Red Hat at its core being an open-source company and being very much plugged into the community and working with users and developers and engineers to be able to sort of get feedback and build better products. But I think the other part of it is that, I think a lot of us from a historic standpoint have viewed security to be a proprietary thing as we've always viewed the sort of magic algorithms or black boxes or some magic under the hood that really moved the needle. And that happens not to be the case anymore also because StackRox's philosophy was really built around Kubernetes and Built-in, we feel like one of the really great messages around wide open-source of security product is to build that trust with the community being able to expose, here's how the product works, here's how it integrates here are the actions it takes here's the ramifications or repercussions of some of the decisions you may make in the product. 
Those all I feel make for very good stories of how you build connection, trust and communication with the community and actually get feedback on it. And obviously at its core, the company being very much focused on Kubernetes developer tools, service manage, these are all open-source toolings obviously. So, for us it was very important to sort of talk the talk and walk the walk and this is sort of an easy decision at the end of the day for us to take the platform open-source. And we're excited about it because I think most still want a productized supported commercial product. So while it's great to have some of the tip of the spear customers look at it and adopt the open-source and be able to drive it themselves. We're still hearing from a lot of the customers that what they do want is really that support and that continuous management, maintenance and improvement around the product. So we're actually pretty excited. We think it's only going to increase our velocity and momentum into the community. >> Well, I got some questions on how it's going to work but I do want to get your comment because I think this is a pretty big deal. I had a conversation about 10 years ago with Doug Cutting, who was the founder of Hadoop, And he was telling me a story about a company he worked for, you know all this coding, they went under and the IP was gone, the software was gone and it was a story to highlight that proprietary software sometimes can never see the light of day and it doesn't continue. Here, you guys are going to continue the story, continue the code. How does that feel? What's your expectations? How's that going to work? I'm assuming that's what you're going to open it up which means that anyone can download the code. Is that right? Take us through how to first of all, do you agree with that this is going to stay alive and how's it going to work? 
>> Yeah, I mean, I think as a founder one of the most fulfilling things to have is something you build that becomes sustainable and stands the test of time. And I think, especially in today's world open-source is a tool that is in demand and only in a market that's growing is really a great way to do that. Especially if you have a sort of an established user base and the customer base. And then to sort of back that on top of thousands of customers and users that come with Red Hat in itself, gives us a lot of confidence that that's going to continue and only grow further. So the decision wasn't a difficult one, although transparently, I feel like even if we had pushed back I think Red Hat was pretty determined about open-source and we get anyway, but it's to say that we actually were in agreement to be able to go down that path. I do think that there's a lot of details to be worked out because obviously there's sort of a lot of the nuances in how you build product and manage it and maintain it and then, how do you introduce community feedback and community collaboration as part of open-source projects is another big part of it. I think the part we're really excited about is, is that it's very important to have really good community engagement, maintenance and response. And for us, even though we actually discussed this particular strategy during StackRox, one of the hindering aspects of that was really the resources required to be able to manage and maintain such a massive open-source project. So having Red Hat behind us and having a lot of this experience was very relevant. I think, as a, as a startup to start proprietary and suddenly open it and try to change your entire business model or go to market strategy commercialization, changed the entire culture of the company can sometimes create a lot of headwind. And as a startup, like sort of I feel like every year just trying not to die until you create that escape velocity. 
So those were I think some of the risk items that Red Hat was able to remove for us and as a result made the decision that much easier. >> Yeah, and you got the mothership with Red Hat they've done it before, they've been doing it for generations. You guys, you're in the startup, things are going crazy. It's like whitewater rafting, it's like everything's happening so fast. And now you got the community behind you cause you're going to have the CNC if you get Kubecon. I mean, it's a pretty great community, the support is amazing. I think the only thing the engineers might want to worry about is go back into the code base and clean things up a bit, as you start to see the code I'm like, wait a minute, their names are on it. So, it's always always a fun time and all serious now this is a big story on the DevSecOps. And I want to get your thoughts on this because kubernetes is still emerging, and DevOps is awesome, we've been covering that in for all of the life of theCUBE for the 11 years now and the greatness of DevOps but now DevSecOps is critical and Kubernetes native security is what people are looking at. When you look at that trend only continuing, what's your focus? What do you see? Now that you're in Red Hat as the CTO, former CTO of StackRox and now part of the Red Hat it's going to get bigger and stronger Kubernetes native and shifting left-hand or DevSecOps. What's your focus? >> Yeah, so I would say our focus is really around two big buckets. One is, Kubernetes native, sort of a different way to think about it as we think about our roadmap planning and go-to-market strategy is it's mutually exclusive with being in infrastructure native, that's how we think about it and as a startup we really have to focus on an area and Kubernetes was a great place for us to focus on because it was becoming the dominant orchestration engine. Now that we have the resources and the power of Red Hat behind us, the way we're thinking about this is infrastructure native. 
So, thinking about cloud native infrastructure where you're using composable, reusable, constructs and objects, how do you build potential offerings or features or security components that don't rely on third party tools or components anymore? How do you leverage the existing infrastructure itself to be able to conduct some of these traditional use cases? And one example we use for this particular scenario is networking. Networking, the way firewalling in segmentation was typically done was, people would tweak IP tables or they would install, for example, a proxy or a container that would terminate MTLS or become inline and it would create all sorts of sort of operational and risk overhead for users and for customers. And one of the things we're really proud of as sort of the company that pioneered this notion of cloud native security is if you just leverage network policies in Kubernetes, you don't have to be inline you don't have to have additional privileges, you don't have to create additional risks or operational overhead for users. So we're taking those sort of core philosophies and extending them. The same way we did to Kubernetes all the way through service manager, we're doing the same sorts of things Istio being able to do a lot of the things people are traditionally doing through for example, proxies through layer six and seven, we want to do through Istio. And then the same way for example, we introduced a product called GoDBledger which was an open-source tool, which would basically look at a yaml on helm charts and give you best practices responses. And it's something you we want for example to your get repositories. We want to take those sort of principles, enabling developers, giving them feedback, allowing them not to break their existing workflows and leveraging components in existing infrastructure to be able to sort of push security into cloud native. 
And really the two pillars we look at are ensuring we can get users and customers up and running as quickly as possible and reduce as much as possible operational overhead for them over time. So we feel these two are really at the core of open-sourcing in building into the infrastructure, which has sort of given us momentum over the last six years and we feel pretty confident with Red Hat's help we can even expand that further. >> Yeah, I mean, you bring up a good point and it's certainly as you get more scale with Red Hat and then the customer base, not only in dealing with the threat detection around containers and cloud native applications, you got to kind of build into the life cycle and you've got to figure out, okay, it's not just Kubernetes anymore, it's something else. And you've got advanced cluster security with Red Hat they got OpenShift cloud platform, you're going to have managed services so this means you're going to have scale, right? So, how do you view that? Because now you're going to have, you guys at the center of the advanced cluster security paradigm for Red Hat. That's a big deal for them and they've got a lot of R and D and a lot of, I wouldn't say R and D, but they got emerging technologies developing around that. We covered that in depth. So when you start to get into advanced cluster, it's compliance too, it's not just threat detection. You got insights telemetry, data acquisition, so you have to kind of be part of that now. How do you guys feel about that? Are you up for the task? >> Yeah, I hope so it's early days but we feel pretty confident about it, we have a very good team. So as part of the advanced cluster security we work also very closely with the advanced cluster management team in Red Hat because it's not just about security, it's about, how do you operationalize it, how do you manage it and maintain it and to your point sort of run it longterm at scale. The compliance part of it is a very important part. 
I still feel like that's in its infancy, and these are a lot of conversations we're having internally at Red Hat. We all feel that compliance is going to move from the standard benchmarks you have from CIS, or particular compliance requirements like PCI or NIST, into: how do you create more flexible and composable policies, through a unified language, that allow you to create more custom or more useful things specific to your business? So this is actually an area where we're doing a lot of collaboration with the advanced cluster management team, which is: how do you bring to light a really easy way for customers to describe and abstract policies, and at the same time be able to actually enforce them? We think that's really the next key point of what we have to accomplish, to not only gain scale, but to take this notion beyond detection and response and actually build in what we call declarative security into your infrastructure. And what that means is being able to really dictate how you want your applications, your services, your infrastructure to be configured and run, and then anything that conflicts with that is auto-responded to. I think that's really the larger vision that, with Red Hat, we're trying to accomplish. >> And that's a nice posture to have. You build it in, you have the declarative models, then you kind of go from there and let the automation kick in. You've got insights coming in from Red Hat. So all these things are evolving. It's still early days, and I think it was a nice move by Red Hat, so congratulations.
Final question for you is, as you prepare to go to the next generation, KubeCon is also seeing a lot more end-user participation. Cloud native is going mainstream, and when I say mainstream, I mean beyond the hyperscalers and the early adopters: Kubernetes and other infrastructure control planes are coming in, and you start to see the platforms emerge. Nobody wants another security tool, they want platforms that enable applications and handle tools. As it gets more complicated, what's going to be the easy button in cloud native security? What's the approach? What's your vision on what's next? >> Yeah, so I don't know if there is an easy button in security, and part of it is that there's just such fragmentation in use cases and designs and infrastructure, especially if you're dealing with such a complex stack, and not just a complex stack, but use cases that not only span runtime but deal with your deployment and your development life cycle. So the way we think about it is this notion that has been around for a long time, which is the shared responsibility model. Security is not security's job anymore, especially because security teams probably cannot keep up with the learning curve: they have to understand containers, then they have to understand Kubernetes and Istio and Envoy and cloud platforms and APIs, and there's just too much happening. So the way we think about it is, if you deal with security in a declarative fashion, and if you can state things in a way where how infrastructure is run is properly configured, so it's more about safety than security, then what you can do is push a lot of these best practices back as part of your Git process. Involve developers, engineers, the right product security teams that are responsible for day-to-day managing and maintaining this. And the example we think about is CVEs.
There are plenty of vulnerability tools, for example, but CVEs are still an unsolved problem, because: where are they, what is the impact, are they actually running, are they being exploited in the wild? And all these things have different ramifications as you span the life cycle. So for us, it's understanding context, understanding assets, ensuring how the infrastructure has to handle that asset, and then ensuring that the route for that response is sent to the right team, so they can address it properly. And I think that's really our larger vision: how can you automate this entire life cycle, so the information is routed to the right teams, the right teams are appending it to the application, and in the future our goal is not just to patch the workload or the compute environment, but to use this information to actually patch the applications themselves. That creates additional agility and scalability. >> Yeah, it's in the life cycle, built in right from the beginning: more productivity, more security, and then letting everything take over on the automation side. Ali, congratulations on the acquisition deal with Red Hat, that buyout was great for them and for you guys. Take a minute to answer a final question for the folks watching here. The big news is you're open-sourcing StackRox, so that's big news here at KubeCon. What can people do to get involved? Just share a quick commercial for what people can do to get involved. What are you guys looking for? >> Yeah, what we're looking for is more involvement and direct feedback from our community, from our users, from our customers. So there are a number of ways: obviously the StackRox platform itself being open-source, and we have other open-source tools like KubeLinter. What we're looking for is feedback from users as to what are the pain points that they're trying to solve for.
And then give us feedback as to how we're not addressing those, or how we can better design our systems. That's the sort of feedback we're looking for, and naturally, with more resources, we can be a lot faster in response. So send us feedback, good or bad. We would love to hear it from our users and our customers and get a better sense of what they're looking for. >> Innovation out in the open, got to love it, open-source going next gen. Ali Golshan, Senior Director of Global Software Engineering, his new title at Red Hat, former CTO and founder of StackRox, which Red Hat acquired in January 2021. Ali, thanks for coming on, congratulations. >> Thanks for having me. >> Okay, that's theCUBE's coverage of KubeCon + CloudNativeCon 2021. I'm John Furrier, your host. Thanks for watching. (soft music)
Zhamak Dehghani, ThoughtWorks | theCUBE on Cloud 2021
>> From around the globe, it's theCUBE, presenting theCUBE on Cloud, brought to you by SiliconANGLE. >> In 2009, Hal Varian, Google's chief economist, said that statistician would be the sexiest job of the coming decade. The modern big data movement really took off the following year, after the second Hadoop World, which was hosted by Cloudera in New York City. Jeff Hammerbacher famously declared to me and John Furrier in theCUBE that the best minds of his generation were trying to figure out how to get people to click on ads, and he said that sucks. The industry was abuzz with the realization that data was the new competitive weapon, and Hadoop was heralded as the new data management paradigm. Now, what actually transpired over the next 10 years? Only a small handful of companies could really master the complexities of big data and attract the data science talent necessary to realize massive returns. Back then, at the beginning of the last decade, cloud was in the early stages of its adoption, and as the years passed, more and more data got moved to the cloud and the number of data sources absolutely exploded. Experimentation accelerated, as did the pace of change. Complexity just overwhelmed big data infrastructures and data teams, leading to a continuous stream of incremental technical improvements designed to try and keep pace: things like data lakes, data hubs, new open-source projects, new tools, which piled on even more complexity. And as we've reported, we believe what's needed is a complete bit flip in how we approach data architectures. Our next guest is Zhamak Dehghani, the director of emerging technologies at ThoughtWorks. Zhamak is a software engineer, architect, thought leader, and adviser to some of the world's most prominent enterprises. She's, in my view, one of the foremost advocates for rethinking and changing the way we create and manage data architectures.
Favoring a decentralized over a monolithic structure, and elevating domain knowledge as a primary criterion in how we organize so-called big data teams and platforms. Zhamak, welcome to theCUBE. It's a pleasure to have you on the program. >> Hi, David. It's wonderful to be here. >> Well, okay, so you're pretty outspoken about the need for a paradigm shift in how we manage our data and our platforms at scale. Why do you feel we need such a radical change? What are your thoughts there? >> Well, I think if you just look back over the last decades, you gave us a summary of what happened since 2010. But even if we go before then, what we have done over the last few decades is basically repeat and, as you mentioned, incrementally improve how we've managed data, based on certain assumptions around centralization: data has to be in one place so we can get value from it. But if you look at the parallel movement of our industry in general since the birth of the Internet, we are actually moving towards decentralization. If today we said the only way the Web would work, the only way we'd get access to the various applications and web pages, was to centralize them, we would laugh at that idea, but for some reason we don't question that when it comes to data. So I think it's time to embrace the complexity that comes with the growth in the number of sources, the proliferation of sources and consumption models, and to embrace the distribution of sources of data that are not just within one part of the organization, not just within the bounds of the organization, but beyond the bounds of the organization.
And then look back and say, okay, if that's the trend of our industry in general, given the fabric of computation and data that we have put in place globally, then how do the architecture and technology and organizational structure and incentives need to move to embrace that complexity? And to me, that requires a paradigm shift, full stack: from how we organize our organizations and our teams to how we put technology in place, to look at it from a decentralized angle. >> Okay, so let's unpack that a little bit. I mean, you've spoken about and written that today's big data architecture is flawed, as you basically just mentioned. So I want to bring up, and I love your diagrams, a simple diagram. Guys, if you could bring up figure one. So on the left here, we're ingesting data from the operational systems and other enterprise data sets and, of course, external data. We cleanse it, you've got to do the quality thing, and then serve it up to the business. So what's wrong with that picture that we just described, granted, it's a simplified form? >> Yeah, quite a few things. I would flip the question maybe back to you or the audience. We said there are so many sources of the data, and actually the data comes from systems and from teams that are very diverse in terms of domains. Right? Domains. If you just think about, I don't know, retail: e-commerce versus order management versus customer, these are very diverse domains. The data comes from many different, diverse domains, and then we expect to put it under the control of a centralized team, a centralized system. And that centralization, probably, if you zoom out, it's centralized.
If you zoom in, it's compartmentalized based on functions, and we can talk about that. And we assume that the centralized model will serve: getting that data, making sense of it, cleansing and transforming it, and then satisfying the needs of a very diverse set of consumers, without really understanding the domains, because the teams responsible for it are not close to the source of the data. So there is a cognitive gap and a domain-understanding gap, without really understanding how the data is going to be used. When I came up with this idea, I talked to a lot of data teams globally, just to see what the pain points were and how they were doing it. And one thing that was evident in all of those conversations is that after they built these pipelines and put the data into the data warehouse tables or the lake, they actually didn't know how the data was being used. Yet they're responsible for making the data available for this diverse set of use cases. So a centralized system, a monolithic system, is often a bottleneck. What you find is that a lot of the teams are struggling to satisfy the needs of the consumers, struggling to really understand the data. The domain knowledge is lost; there is a loss of understanding in that transformation. Often, we end up training machine learning models on data that is not really representative of the reality of the business, and then we put them into production and they don't work, because the semantics and the syntax of the data get lost within that translation. And we're struggling to find people to manage a centralized system, because the technology is still, in my opinion, fairly low level and exposes the users of those technologies, let's say a warehouse, to a lot of complexity.
So in summary, I think it's a bottleneck that's not going to satisfy the pace of change, the pace of innovation, and the pace of availability of sources. It's disconnected and fragmented, even though it's centralized: disconnected and fragmented from where the data comes from and where the data gets used. And it's managed by a team of hyper-specialized people who are struggling to understand the actual value and the actual format of the data. So it's not going to get us where our aspirations and ambitions need to be. >> Yes. So the big data platform is essentially, I think you call it, context agnostic. And as data becomes more important in our lives, you've got all these new data sources injected into the system, and experimentation, as we said, becomes much, much easier with the cloud. So one of the blockers that you've cited, you just mentioned it, is that you've got these hyper-specialized roles: the data engineer, the quality engineer, the data scientist. And it's illusory. I mean, it's like an illusion. These guys seemingly are independent and can scale independently, but I think you've made the point that in fact they can't, that a change in the data source has an effect across the entire data life cycle, the entire data pipeline. So maybe you could add some color to why that's problematic for some of the organizations that you work with, and maybe give some examples. >> Yeah, absolutely. In fact, initially the hypothesis around that image came from a series of requests that we received from our clients, both large-scale and progressive, progressive in terms of their investment in data architectures. These were clients at larger scale, with diverse and rich sets of domains. Some of them were big technology companies, some of them were retail companies, big health care companies.
So they had that diversity of data and of the number of sources and domains. They had invested for quite a few years, across generations: they had multiple generations of proprietary data warehouses on-prem that they were moving to the cloud, and they had moved through the various revisions of Hadoop clusters and were moving those to the cloud. And the challenge they were facing was, simply, if I simplify it into one phrase, they were not getting value from the data they were collecting. They were continuously struggling to shift the culture, because there was so much friction across all three phases: consumption of the data from sources, transformation, and making it available and serving it to the consumers. That whole process was full of friction, and everybody was unhappy. So the bottom line is that you're collecting all this data, and there is delay, and there is a lack of trust in the data itself, because the data is not representative of the reality: it has gone through a transformation made by people who didn't really understand what the data was, and it got delayed along the way. So there is no trust, it's hard to get to the data, and ultimately it's hard to create value from the data, and people are working really hard, under a lot of pressure, but still struggling. So we often, as technologists, point to technology. We go, okay, this version of some proprietary data warehouse we're using is not the right thing; we should go to the cloud, and that certainly will solve our problems, right? Or, the warehouse wasn't a good one, let's do a lake version: so instead of extracting and then transforming and loading into the warehouse.
And that transformation is a heavy process, because you fundamentally made an assumption, using warehouses, that if I transform this data into this multidimensional, perfectly designed schema, then everybody can run whatever query they want, and that's going to solve everybody's problem. But in reality it doesn't, because you are delayed, and there is no universal model that serves everybody's needs. The diverse data users, the data scientists, don't necessarily like the perfectly modeled data; they're looking for both the signals and the noise. So then we've just gone from ETL to, let's say, the lake, which is, okay, let's move the transformation to the last mile: let's just load the data into the object stores, into semi-structured files, and let the data scientists use it. But they're still struggling, because of the problems that we mentioned. So then, what is the solution? Well, a next-generation data platform, let's put it on the cloud. And we saw clients that had actually gone through a year, or multiple years, of migration to the cloud. Eighteen months. I've seen nine-month migrations of the warehouse versus two-year migrations of the various data sources to the cloud. But ultimately, the result is the same: unsatisfied, frustrated data users and data providers, with a lack of ability to innovate quickly on relevant data, and to have the experience they deserve to have, a delightful experience of discovering and exploring data that they trust. And all of that was still missed. So something else, more fundamental, needed to change than just the technology. >> So then the linchpin to your scenario is this notion of context, and you pointed out, you made the other observation, that look, we've made our operational systems context aware.
But our data platforms are not. And with, say, a CRM system, the sales guys are very comfortable with what's in it; they own the data. So let's talk about the answer that you and your colleagues are proposing. You're essentially flipping the architecture, whereby those domain knowledge workers, the builders, if you will, of data products or data services, are now first-class citizens in the data flow, and they're injecting, by design, domain knowledge into the system. So I want to put up another one of your charts. Guys, bring up figure two there. It talks about convergence: you show distributed domain-driven architecture, self-serve platform design, and this notion of product thinking. So maybe you could explain why this approach is so desirable, in your view. >> Sure. The motivation and inspiration for the approach came from studying what has happened over the last few decades in operational systems. We had a very similar problem prior to microservices, with monolithic systems: monolithic systems were the bottleneck, and the changes we needed to make were always at odds with how the architecture was centralized. And we found a nice way out. I'm not saying this is the perfect way of decoupling a monolith, but for where we currently are in our journey to become data driven, it is a nice place to be: distribution, or decomposition, of your system as well as your organization. I think whenever we talk about systems, we've got to talk about the people and teams responsible for managing those systems. So, the decomposition of the systems and the teams and the data around domains, because that's how today we are decoupling our business, right? We're decoupling our businesses around domains, and that's a good thing. And what does that really do for us? It localizes change to the bounded context of that business.
It creates clear boundaries and interfaces and contracts between the rest of the universe of the organization and that particular team, so it removes the friction we often have in both managing change and serving data or capability. So the first principle of data mesh is: let's decouple this world of analytical data, to mirror the way we have decoupled our systems and teams and business. Why is data any different? And the moment you do that, the moment you bring the ownership to the people who understand the data best, then you get the question: well, how is that any different from the silos of disconnected databases that we have today, where nobody can get to the data? So the rest of the principles really address the challenges that come with that first principle of decomposition around domain context. The second principle is: well, we have to expect a certain level of quality, accountability, and responsibility from the teams that provide the data. So let's bring product thinking, and treating data as a product, to the data that these domain teams now share, and let's put accountability around it. We need a new set of incentives and metrics for domain teams to share the data, and we need a new set of quality metrics that define what it means for the data to be a product; we can go through that conversation perhaps later. So the second principle is, okay, the domain teams responsible for the analytical data need to provide that data with a certain level of quality and assurance; let's call that a product and bring product thinking to it. And then there's the next question you get asked, by CEOs or CTOs, or the people who build the infrastructure and spend the money.
They say, well, it's actually quite complex to manage big data, and now we want everybody, every independent team, to manage the full stack of storage and computation and pipelines and access control and all of that? Well, we have solved that problem in the operational world, and it requires a new level of platform thinking: providing infrastructure and tooling to the domain teams so they can manage and serve their big data themselves. And I think that requires reimagining the world of our tooling and technology. But for now, let's just assume that we need a new level of abstraction to hide away the ton of complexity that people unnecessarily get exposed to. That's the third principle: creating self-serve infrastructure to allow autonomous teams to build their domains. But then the last fundamental pillar is, okay, once you've distributed the problem into smaller problems, you find yourself with another set of problems: how am I going to connect this data? Insight happens and emerges from the interconnection of the data domains, right? It's not necessarily locked into one domain. So the concerns around interoperability and standardization, and getting value as a result of the composition and interconnection of these domains, require a new approach to governance. And we have to think about governance very differently, based on a federated model and based on a computational model. Once we have this powerful self-serve platform, we can computationally automate a lot of governance decisions, the security decisions and policy decisions, and apply them to this fabric of the mesh, not just a single domain, and not in a centralized way. So really, as you mentioned, the most important component of the mesh is the distribution of ownership and the distribution of architecture and data; the rest of the principles exist to solve all the problems that come with that.
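The self-serve and product-thinking principles just described can be pictured as a declarative data product specification that the platform provisions on the developer's behalf. The sketch below is a hedged illustration only: every field name, the domain, and the provisioning steps are invented for this example, not taken from any real data mesh platform.

```python
# A hypothetical declarative data product spec: the developer states what
# the product is; the platform decides how to provision what sits beneath.
product_spec = {
    "name": "orders.daily-summary",          # made-up product name
    "owner": "order-management-domain",
    "inputs": ["orders.events"],
    "output_schema": {"order_date": "date", "total": "decimal"},
    "policies": {"encryption": "aes-256", "access": ["analyst", "scientist"]},
    "slos": {"freshness_minutes": 60},
}

REQUIRED = {"name", "owner", "inputs", "output_schema", "policies", "slos"}

def plan_provisioning(spec: dict) -> list[str]:
    """Validate a declaration and return the platform's provisioning plan."""
    missing = REQUIRED - spec.keys()
    if missing:
        raise ValueError(f"incomplete declaration: {sorted(missing)}")
    return [
        f"allocate storage for {spec['name']}",
        f"wire pipeline from {', '.join(spec['inputs'])}",
        f"apply {spec['policies']['encryption']} encryption",
        f"register {spec['name']} in the mesh catalog",
    ]
```

The design point is the inversion Zhamak argues for: the domain team owns the semantics of the declaration, while the plan, the storage, the pipelines, and the catalog entry are the platform's concern.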
>>So very powerful. Guys, we actually have a picture of what Zhamak just described. Bring up figure three, if you would. Essentially, you're advocating for pushing the pipeline, and all its various functions, into the lines of business, and abstracting the complexity of the underlying infrastructure, which you show here in this figure: data infrastructure as a platform down below. And you know what I love about this, Zhamak, is to me it underscores that data is not the new oil, because I can put oil in my car or I can put it in my house, but I can't put the same quart in both places. I think you call it polyglot data, which is really different forms, batch or whatever, but the same data. Data doesn't follow the laws of scarcity; I can use the same data for many, many uses, and that's what this graphic shows. And then you brought in the really important sticking point, which is the governance, which is now not command and control, it's federated governance. So maybe you could add some thoughts on that. >> Sure, absolutely. It's one of those things, I think. I keep referring to data mesh as a paradigm shift, and it's not just to make it sound grand and exciting or important. It's really because I want to point out that we need to question every moment when we make a decision around how we're going to design security or governance or modeling of the data. We need to reflect and go back and say: am I applying some of my cognitive biases around how I have worked for the last 40 years and have seen it work, or do I really need to question the way we have applied governance? I think, at the end of the day, the role of data governance and its objective remain the same. I mean, we all want quality data accessible to a diverse set of users.
And these users now have different personas: data analyst, data scientist, data application user, very diverse personas. So at the end of the day, we want quality data, accessible to them and trustworthy, in an easily consumable way. However, how we get there looks very different. As you mentioned, the governance model in the old world has been very command and control, very centralized. They were responsible for quality, they were responsible for certification of the data, for making sure the data complies with regulations, and for making sure data gets discovered and made available. In the world of the data mesh, really, the job of the data governance function becomes finding the equilibrium between what decisions need to be made and enforced globally, and what decisions need to be made locally, so that we can have an interoperable mesh of data sets that can move fast and change fast. Instead of putting those systems in a straitjacket of staying constant and never changing, embrace change, and the continuous change of the landscape, because that's just the reality we can't escape. So I call the governance model federated and computational. And by that I mean every domain needs to have a representative in the governance team. So the role of the domain data product owner, who really understands the data of that domain well but also wears the hat of a product owner, is an important role that has to have representation in the governance. So it's a federation of domains coming together, plus the subject matter experts, the people who understand the regulations in that environment and understand the data security concerns. But instead of trying to enforce and do this as a central team.
They make decisions about what needs to be standardized and what needs to be enforced, and we push that, computationally and in an automated fashion, into the platform itself. For example, instead of being part of the data quality pipeline and injecting ourselves as people into that process, let's actually, as a group, define what constitutes quality: how do we measure quality? And then let's automate that and codify it into the platform, so that every data product will have a CI/CD pipeline, and as part of that pipeline those quality metrics get validated, and every data product publishes its SLOs, or service level objectives. So whatever we choose as a measure of quality, maybe it's the integrity of the data, the delay in the data, the liveliness of it, whatever the decisions are that you're making, let's codify that. So the objectives of the governance team stay the same, but how they do it is very, very different. I wrote a new article recently trying to explain the logical architecture that would emerge from applying these principles, and I put in a little table to compare and contrast how we do governance today versus how we will do it differently, just to give people a flavor of what it means to embrace decentralization, and what it means to embrace change and continuous change. So hopefully that can be helpful.
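The computational governance Zhamak describes, quality SLOs agreed by the federated group and validated in every data product's CI pipeline, might be sketched like this. The SLO names, thresholds, and metric shapes are assumptions made up for illustration.

```python
# Hedged sketch of computational governance: globally agreed quality SLOs,
# checked automatically against the metrics each data product publishes.
# SLO names and thresholds are hypothetical.
GLOBAL_SLOS = {
    "freshness_minutes": 60,    # data must be no older than an hour
    "completeness_pct": 99.0,   # share of required fields that are non-null
}

def validate_slos(published_metrics: dict) -> list[str]:
    """Return the list of SLO violations; an empty list lets the CI pass."""
    failures = []
    if published_metrics["freshness_minutes"] > GLOBAL_SLOS["freshness_minutes"]:
        failures.append("freshness SLO violated")
    if published_metrics["completeness_pct"] < GLOBAL_SLOS["completeness_pct"]:
        failures.append("completeness SLO violated")
    return failures
```

A check like this is how the governance function stops being a manual gate: the federated group edits the thresholds, and every product's pipeline enforces them without a person in the loop.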
And where does all of that live in the business, and who's responsible for it? >> Yeah, I'm glad you're asking this question, because I truly believe we need to reimagine that world. I think there are many pieces that we can use as utilities and foundational pieces, but I can see for myself a five-to-seven-year roadmap of building this new tooling. In terms of the question around ownership, that remains with the platform team: a domain-agnostic, technology-focused team that provides infrastructure as a product. And the users of those products are data product developers, the data domain teams that now have really high expectations in terms of low friction, in terms of lead time to create a new data product. So we need a new set of tooling, and I think the language needs to shift from "I need a storage bucket," "I need a storage account," "I need a cluster to run my Spark jobs," to "Here's the declaration of my data product. This is where the data for it will come from. This is the data that I want to serve. These are the policies that I need to apply, in terms of perhaps encryption or access control. Go make it happen, platform; go provision everything." So that as a data product developer, all I focus on is the data itself, the representation of the semantics and the syntax, and making sure the data meets the quality that I have to assure and is available. The provisioning of everything that sits underneath has to be taken care of by the platform. That's what I mean by "requires a reimagination." And in fact, the data platform teams that we set up for our clients themselves have a fair bit of complexity.
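The declarative shift described here, from requesting buckets and clusters to declaring a data product and letting the platform provision everything underneath, might look something like this sketch. The spec fields and the provisioning steps are hypothetical, invented to illustrate the idea, and are not any particular platform's API.

```python
from dataclasses import dataclass, field

@dataclass
class DataProductSpec:
    """A hypothetical declarative data product manifest."""
    name: str
    inputs: list                                 # upstream sources the product consumes
    outputs: list                                # datasets the product serves
    encryption: str = "aes-256"                  # policy applied by the platform
    access: list = field(default_factory=list)   # roles allowed to read

def provision(spec: DataProductSpec) -> list:
    """Simulate the platform expanding a declaration into concrete resource steps.
    A real platform would create storage, pipelines, and policies here."""
    steps = [f"create storage for {out}" for out in spec.outputs]
    steps += [f"wire ingestion from {src}" for src in spec.inputs]
    steps.append(f"apply encryption policy: {spec.encryption}")
    steps += [f"grant read to {role}" for role in spec.access]
    return steps
```

The point of the design is that the developer states only what the product is, where its data comes from, and which policies apply; everything operational is derived from the declaration by the platform.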
Internally, they divide into multiple teams, multiple planes. So there would be a plane, as in a group of capabilities, that satisfies the data product developer experience, and there would be a set of capabilities that deal with the underlying utilities. I call them utilities at this point because, to me, the level of abstraction of the platform has to go higher than where it is. What we call the platform today is a set of utilities: we will continue using object storage, we will continue using relational databases, and so on. So there will be a plane and a group of people responsible for that. There will also be a group of people responsible for capabilities that enable the mesh-level functionality, for example being able to correlate, connect, and query data from multiple nodes; that's a mesh-level capability, as is being able to discover and explore the mesh's data products. So there would be a set of teams as part of the platform, again with strong platform product thinking and product ownership embedded into it, to satisfy the experience of these now business-oriented domain data teams. So we have a lot of work to do. >> I could go on, but unfortunately we're out of time. First I want to tell people there are two pieces that you've put out so far. One is how to move beyond a monolithic data lake to a distributed data mesh; you should read that. And data mesh principles and logical architecture is kind of part two. I guess my last question, in the very limited time we have, is: are organizations ready for this? >> I think the desire is there. I've been overwhelmed with the number of large and medium and small, private and public and government and federal organizations that have reached out to us globally. This is a global movement, and I'm humbled by the response of the industry. I think the desire is there.
The pains are real; people acknowledge that something needs to change. So that's the first step. I think the awareness is spreading, and organizations are more and more becoming aware. In fact, many technology providers are reaching out to us asking what they should do, because their clients are asking them; people are already saying, we need the data mesh, we need the tooling to support it. So that awareness is there, in terms of the first step of being ready. However, the ingredients of a successful transformation require top-down and bottom-up support. So it requires support from chief data and analytics officers or above. The most successful clients that we have with data mesh are the ones where the CEOs have made a statement that we want to change the experience of every single customer using data, and we're going to commit to this. So the investment and support exist from the top through all layers, the engineers are excited, and perhaps the traditional data teams are open to change. So a lot of ingredients of a successful transformation have to come together. Are we really ready for it? If you think about the innovation adoption curve, I think the pioneers, the innovators, and the early adopters and leaders are making moves toward it. And hopefully, as the technology becomes more available, organizations that are less engineering-oriented, that don't have the capability in house today but can buy it, would come next. Maybe those are the ones that aren't quite ready for it, because the technology is not readily available and requires internal investment today. >> I think you're right on. I think the leaders are going to lean in hard, and they're going to show us the path over the next several years.
And I think the end of this decade is going to be defined a lot differently than the beginning. Zhamak, thanks so much for coming on theCUBE and participating in the >> program. My pleasure.
>> All right, keep it right there, everybody. We'll be right back after this short break.
Manpreet Mattu & Michael Jackson, AWS | AWS re:Invent 2020 Public Sector Day
>> From around the globe, it's theCUBE with digital coverage of AWS re:Invent 2020. Special coverage sponsored by AWS Worldwide Public Sector. >> Hello, welcome back to theCUBE's coverage of AWS re:Invent 2020 virtual. This is theCUBE virtual, I'm John Furrier, your host. We're not there in person this year because of the pandemic, but we're doing the remote. This is special coverage of the public sector. We've got two great guests: Manpreet Mattu, who leads the Worldwide Public Sector Startups and Venture Capital team at AWS, and Michael Jackson, who's the general manager of Public Health, Venture Capital and Startups. Gentlemen, thanks for joining me. Thanks for coming on. >> It's my pleasure, thanks for having us. >> Welcome to theCUBE. I just want to say that Amazon never forgets the startups; that's where it was born and bred, it's been a startup. It's always day one, as the expression goes, but truly, even with the success, not just in the enterprise but also in public sector, it's still a startup-agility mindset. I just want to call that out and say congratulations. Okay, let's get into it. Tell us about your roles and your backgrounds and why you're here. >> Sure. So I'm the head of the AWS Public Sector VC and Startups team, and our mission really is to help our public sector customers adopt innovation that is built by startups. I've been with AWS for about two and a half years, and prior to that, I was in a similar role with Booz Allen, helping our public sector customers adopt innovation as well. >> Michael. >> Yeah, so I am the general manager of Public Health on the Venture Capital and Startups team. My career here at AWS began just over four years ago. I was brought on to the state and local government team, initially building the public health practice from inception, and I also built and led our U.S. elections business.
And I'm really excited now to transition into this global role, to lead our public health VC and startups practice and really democratize access to innovation for our startups in the healthcare space. >> Well, great journey. You guys are converging; the VC and startup teams are coming together. A lot of macro trends certainly are tailwinds for you guys. Obviously, the pandemic is forcing more accelerated modern applications in public sector, and we've been covering more and more success stories of the change happening quickly. As access to capital continues to be great, and agility with the cloud, how has that impacted your teams and your approach? Can you guys share how that's changed this year? Because there's more pressure now to be digital, there are more opportunities, and there's still capital flowing. How has it impacted your roles? >> So at the very high level, Amazon invests in companies because we want those companies to be successful, and AWS itself makes a substantial investment in the agility and success of our startup customers. We have things like service credits and business-nurturing programs that we have built over the course of the last seven, eight years. For example, over the past year alone, Amazon has provided more than a billion dollars in credits through the AWS Activate program to help startups grow and scale their businesses. And not only that: in total, it's more than three and a half billion dollars in credits to more than 140,000 startups over the last seven years, all through the course of the Activate program. More so on the healthcare side, I would certainly want MJ to also speak to the challenges that the health system has faced in the COVID times, and how AWS is helping healthcare providers and the startups really achieve success and help the patient populations, on that note.
>> Michael, weigh in on these new programs you guys are launching and the impact on healthcare. We're seeing the frontline workers; everyone's seeing it on TV and in the newspaper, and it's impacting friends and family. Give us the update. >> Absolutely. So we're here today to launch a new program; we call it the Healthcare Acceleration program. And basically there are two halves to the program, with a recurring undercurrent, I should say. Just really quickly, before I touch on that, I'd be remiss if I didn't make note of the fact that you're right, capital is still flowing, and it's a really big deal, particularly as healthcare and public health become such a priority. One of the strategic imperatives of our team's role, similar to the way we democratize access to innovation for startups, is that we also find it really important to democratize access to resources for founders, underrepresented founders, so that everyone can have a level playing field and equal access to those resources and funding and things of that nature. Getting back to some of the healthcare priorities in particular: I don't have to tell you about this pandemic, where we're in the third, and possibly the deadliest, wave, losing over 1,000 Americans per day. And so not only are we interested in helping our enterprise customers inject innovation from startups so that they can address clinical aspects of the pandemic and beyond, but there are underlying, rippling societal implications as well, things that have been exacerbated by the pandemic: things like mental health and behavioral health, including substance use and abuse, clinician burnout, and social determinants of health, which lead to disproportionately impacted demographics.
So there's a whole lot to unpack, and I'm sure we will, but at the highest level, that's what we're looking to help our enterprise customers address, with the help of our innovative, high-potential startups. >> Strategic focus. Just go a little bit further on how important this is, because programs are needed; there is burnout. >> Yeah. >> You have mental health, physical health, everything in between. What are you guys launching? What's new? What can people take away right now from AWS, and which startups? Because a lot of people are changing their focus. I'm seeing people leave their jobs to get on this new mission. They're seeing the pain; there's a lot of entrepreneurial energy happening right now here. Go further, please. >> So you touched right on it. There are two sides; I mentioned there are two halves and an underlying current. The two halves are the supply and the demand. The supply side is what we refer to as the startups: vetted, high-potential, high-growth startups in the health tech space whose go-to-market we can help accelerate. We can pair them with mentorship and resources; we call it the 4Cs. There's capital, mapping them to investors who are interested in accelerating their growth. There's code: technical support, whether it's CloudFormation templates or technical expertise. There's connections, such as other startups, incubators, accelerators, etcetera. And finally, there's mapping them to customers. So that's what's in it for the startups. And then on the other side, the enterprise side, again, there are so many enterprises, from payers to providers and others, who are looking to accelerate their efforts to digitally transform their enterprise. And so, by partnering with AWS and the Healthcare Acceleration program, they can trust that there are AWS-powered startups that are vetted and prepared to inject that sense of urgency, that sense of innovation.
And the underlying current, the dots being connected, is workforce modernization, or economic development, because in many cases, you're right, people are losing their jobs, and people are looking at ways they can modernize their workforces locally and leverage local talent. And so entrepreneurship is a great way to stimulate the local economy and help older workers, or workers who are looking to transition into more relevant occupations, do just that. So this is an all-encompassing program. >> Let's get into this health accelerator from AWS. This is something that is on the table, the AWS Health Accelerator. Who are the stakeholders, and what are the benefits of this program? >> Well, before we actually go to the accelerator, for me, I think this focus on healthcare as an industry, as a vertical, is very important to talk about. The industry is experiencing transformation; it is experiencing disruption, and the COVID-19 pandemic has only accelerated that. If anything, it has magnified some of the stressors which were already there in the system. Combine that with the technological undercurrent that MJ mentioned: the delivery of healthcare globally is going digital. So you see technologies like artificial intelligence, machine learning, big data, augmented reality, and IoT-based wearables. All of these technologies are coming together to enable applications such as remote diagnostics, patient monitoring, and predictive, prescriptive healthcare. And we truly feel this presents a tremendous opportunity to improve the patient experience and, more importantly, patient outcomes, using these technologies and the applications they newly enable. As an example, in the U.S. alone, there are 22 key healthcare AI use cases that are projected to grow to approximately $22 billion by 2025.
So in AWS, we are collaborating with a wide spectrum of healthcare providers, public health organizations, and government agencies all around the globe to support their efforts to cope with the rippling effects of COVID-19. And arguably, many of those effects are visible to us today, but I would argue that many, many have not yet even begun to be understood, by us or by our customers. So that is the reason why we want to put some emphasis on healthcare from a public sector standpoint. >> Yeah, that's a great call-out, Manpreet. I want to just highlight that and maybe get additional commentary, because in the old days it was just the institution, the hospital, and then you're done. Then it was, okay, hospital plus the caregivers, the doctors and the workers, and now the patient. So holistically, you're calling out the big picture: the patient care, their families, their environment, the caregivers, the institution, and now the supply chain, all of it integrated together. That's where the action is, and that's where the data comes in; that's where cloud scale can come in. Is that right? Am I getting that right there? >> Yeah, that's absolutely... sorry, Manpreet. >> Go on, MJ. >> I was going to say, you're absolutely right. In fact, we like to look at it almost like a bullseye. At the center of the bullseye, like you said, usually the first stakeholder that comes to mind is the provider, or the coordinator of care. Outside of there, you have the payer; outside of there, you have researchers; and even further outside still are your regulators, your healthcare agencies at the local, state, and federal levels, including military health. So it's a rippling effect of customers on that side. And as you asked about stakeholders on the startup side, there's also a bullseye of influence.
It starts with the founder herself, the founder and her executive team, moving out from there to the startup as an organization. Outside from there, we've got incubators and accelerators that are in place to help accelerate that growth as well, and then farther out you've got VCs and investors. So on both sides, supply and demand, we're looking to tap into and accelerate the growth, and make connections between the two. >> Yeah, (indistinct) back in real life, when we used to go to games, you'd walk into the stadium, buy your ticket with your phone, go to your seat, and the concessions guys deliver things right there for you; the fan experience, the players are there. I mean, why can't we have that in healthcare, where everything is happening right there, all for good? I think that's the nirvana, hopefully soon. >> We're working on it. >> Good stuff. I just love the vision; I think it's so relevant and super important. Now, let's get into this health accelerator. What's it all about? Let's get into that. >> So the health accelerator will be a multi-week, on-demand program where we're going to map high-potential, vetted startups to a number of resources. I mentioned before that there will be mentorship, and there will be technical experts who will be able to take these startups, which have established some presence, and accelerate their ability to go deeper specifically into public health, throughout the ecosystem that I just described: providers and coordinators, payers, researchers, regulators. We want to give them a way to go deep into this heavily regulated industry, so that they can not only have access to the innovation that many startups would not otherwise have, like the machine learning and AI Manpreet mentioned, but also have access to the resources to ensure their success. >> What kind of problems are you guys trying to solve with this?
I mean, is there a specific vetting process? Is there a criteria, a bar to clear? Share some specifics. >> Yeah, absolutely. So for the past few years, a lot of the major challenges for our public health customers have been the same, but they require a new approach, and I like to call our approach the HIGH FIVE. Some of those challenges that have been lingering for the past few years include social determinants of health. When we talk about social determinants, we refer not only to the nonclinical contributors to a person's overall wellness, so issues like food deserts, recidivism, homelessness, transportation, and access to care; all of that contributes. But there are also disparities in health outcomes when you think about socioeconomic differences, rural health, and ethnic and racial minorities, so that all factors into social determinants of health. Then there's aging. These are the strategic pillars that we're focusing on. When I mention aging: every day in the U.S., 10,000 people celebrate their 65th birthday. Many of those individuals are suffering from comorbidities, from hypertension, diabetes, cancer, and now the lingering impact of COVID-19. And so, as these aging individuals continue to live longer, the goal is to improve the quality of their lives as well, and many of them look to technology to age independently at home. So that's our second strategic pillar. The third is mental and behavioral health. When I talk about mental health, I mean everything from mild depression all the way through suicide prevention, and especially these days, with COVID-19, we see a lot of clinicians suffering from burnout.
And so it's important that we take care of the frontline workers, those healthcare providers. Even outside of COVID-19, think about the way the patient population has continued to expand while the pool of providers has not nearly expanded at the same rate. We've got people living longer, and we've got more people than ever insured. And so we need to leverage technology to help a stagnant number of providers treat a growing pool of patients without sacrificing the quality of care. And then finally, we've got environmental health, from air quality to water purity. It's important to understand the correlation between the environment and the health of our population. So those are the pillars. I know I mentioned the HIGH FIVE, and the fifth is not specific to healthcare. I touched on it a little bit earlier, but the fifth is democratizing access to innovation and resources, specifically for founders from underrepresented communities.
So, the startups looking on those sides, on those use cases of criteria. And then we have the provider side where, we want to ensure that the providers have the right set of technologies, the right set of solutions, right set of innovation, to help them where healthcare operations. You have all seen in COVID times, how the provider systems are getting overwhelmed. And that's where the healthcare operations comes into play. Clinical decision support. Now, many patients cannot get to the hospitals. So, how do we provide through our startup partners for startup customers, those solutions where remote diagnostics, remote imaging or remote health delivery could be provided. Things like predictive and prescriptive health solutions. How can we work with our startups to provide, those sort of solutions to the providers, to again, at the end, the better the outcome of the patients, right? So, that's what we were looking at. And that's what this program is all about. Working with public sector provider side of the house and the customers understanding, and helping them understand the need as well, and then bringing the right set of startup solutions, and help solve those challenges that they are facing, and the patients are facing as well. MJ, I'm sure you want to close it out, with some thoughts too. >> Okay. >> Absolutely, I would just close it with this, our goal, like Manpreet said, is to match the high potential startups, with the, the enterprises who are desiring those solutions, and success for us, we'll have three traits. It will be valuable, meaning that there will be a true alignment between what our startups offer and what the market needs. It will be measurable, so that we can quantify the improvement and outcomes. And finally, it will be sustainable. 
So beyond COVID-19, beyond the opioid crisis, beyond any situation or condition, we look to bring solutions to market through our startups that are going to truly sustain a transformative approach to modernizing public health enterprises. >> Great job again; it's important work, impacting healthcare in all kinds of ways, and it's super important work. I'm glad you guys are doing it, and it's going to develop out beautifully. And if I could give you a high five, Michael, I'd give you a high five in person, but remotely... >> Virtual. >> A virtual high five. Great program. We're going to spread the word. Good work. >> Thank you. >> Thanks for doing it, I appreciate it. >> Thank you very much for your time. >> Okay, this is theCUBE's virtual coverage; we are theCUBE virtual, bringing all the coverage. Super important work being done in public sector, with cloud enabling it, and great people. And of course, it's happening at re:Invent. Thanks for watching. (upbeat music)
Jay Snyder, New Relic | AWS re:Invent 2020
>> From around the globe, it's theCUBE, with digital coverage of AWS re:Invent 2020, sponsored by Intel, AWS and our community partners. >> Hello and welcome to theCUBE Virtual, here with coverage of AWS re:Invent 2020. I'm your host, Justin Warren, and today I'm joined by Jay Snyder, who is the chief customer officer at New Relic. Jay, welcome to theCUBE. >> It is fantastic to be back with theCUBE. It's one of my favorite things to do, and has been for years, so I appreciate you having me. >> Yes, a bit of a CUBE veteran, been on many times, so it's great to have you with us here again. So you've got some news about New Relic and an AWS strategic collaboration agreement, I believe. So maybe tell us a bit more about what that actually is and what it means. >> Yes. So we've been partners with AWS for years, but most recently, in the last two weeks, we've just announced a five-year strategic partnership that really expands on the relationship we already had. We had a number of integrations and competencies already in place, but this is a big deal to us, and, we believe, a big deal to AWS as well, so it really takes all the work we've done to what I'll call the next level. It's joint technology development, where we're initially going to be embedding New Relic One right into the AWS Management Console, for ease of use and real agility for anyone who's developing and implementing a cloud strategy. Big news as well on adoption relative to purchasing power: you can purchase straight through the AWS Marketplace and leverage your existing AWS spend. And then we're going to really be able to tap into the AWS Premier Partner ecosystem, so we get more skills, more scale, as we look to drive consulting and skills development in any implementation, for faster value realization and overall success in the cloud. So that's the high level.
Happy to get into more detail if you're interested around what I think it means to companies, but just setting the stage, we're really excited about it as a company. In fact, I just left a call with AWS to join this call, as we start to build out the execution plan for what the next five years look like. >> Fantastic. So for those who might be new to New Relic and aren't particularly across the field of observability, could you just give us a quick overview of what New Relic does? And then maybe talk about what the strategic partnership means for the nature of New Relic's business? >> Yes. So when I think about observability and what it means to us, as opposed to the market at large, I would say our vision around observability comes down to one word, and that word is simplification. You know, I talk to a lot of customers, that's what I do all the time, and every time I do, I would say there are three themes that come up over and over. It's the need to deliver a customer experience with improved uptime and ever-improving performance. It's the need to move more quickly to public cloud, to embrace the scale and efficiency public cloud services have to offer. And then it's the need to improve the efficiency and speed of their own engineering teams, so they can deliver innovation through software more quickly. And if you think about all those challenges and what observability is, it's the one common thread that cuts across all of them. It's taking all of the operational data that your system emits and using it to measure and improve the customer experience, your ability to move to public cloud (comparing that experience before you start to after you get there, and the effectiveness of your team before you deploy to after you get there), and all the processes around that. It helps you almost be there before you're there.
I mean, if that makes sense, right? You'll be able to troubleshoot before the event actually occurs. So our vision, like I talked about earlier, is all about simplification, and we've broken this down into literally three parts, three products. That's all we are. The first is about having as much data as you possibly can. I talked about emitting that transactional telemetry data, so we've created a Telemetry Data Platform, which rides on the world's most powerful telemetry database. And we believe that if we can take all of that data, all that infrastructure and application data, and bring it into that database, including open source data, and allow you to query it, analyze it, and take action against it, that's incredibly powerful. But that's only part one. Further, we have a really strong point of view that anybody who has the ability to break production should have the ability to fix production, and for us that's giving them Full-Stack Observability. It's the ability to act against all of that data that sits in the data platform. And then finally, we believe you need to have Applied Intelligence, because there are so many things happening in these complex environments. You want to be able to cut through the noise and reduce it, to find those insights and take action in a way that leverages machine learning, and that, for us, is AIOps. So really, for us, observability comes back to the simplification I talked about: we've simplified what is a pretty large market with a whole bunch of products down to three simple things. A data platform, the ability to operationalize and act against that data, and then, layered on top as the third layer of the cake, machine learning, so it can be smarter than you can be and see problems before they occur.
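As a loose illustration of that first layer, querying a unified telemetry store reduces to filtering events by type and time window, then aggregating. The event data below is invented, and the function only mimics the shape of a query such as NRQL's `SELECT average(duration) FROM Transaction SINCE ...`; it is not New Relic's actual engine:

```python
from statistics import mean

# Hypothetical in-memory stand-in for a telemetry store: each event has a
# type, a timestamp (seconds), and a duration in milliseconds.
EVENTS = [
    {"type": "Transaction", "ts": 100, "duration_ms": 120.0},
    {"type": "Transaction", "ts": 160, "duration_ms": 80.0},
    {"type": "Log",         "ts": 170, "duration_ms": 0.0},
    {"type": "Transaction", "ts": 40,  "duration_ms": 500.0},  # outside window
]

def average_duration(events, event_type, since_ts):
    """Average duration for one event type within a time window."""
    samples = [e["duration_ms"] for e in events
               if e["type"] == event_type and e["ts"] >= since_ts]
    return mean(samples) if samples else None

print(average_duration(EVENTS, "Transaction", since_ts=90))  # 100.0
```

The point is only that once metrics, events, logs and traces land in one queryable store, every question becomes a filter plus an aggregate over the same data.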
And that's what I would say observability is to us: the ability to do that horizontally and vertically across your entire infrastructure and your entire stack. I hope that makes sense. >> Yeah, there's a lot to dig into there. So let's start with some of that operational side of things, because I've long been a big believer in the idea of cloud being a state of mind rather than a particular location, and a lot of people have been embracing cloud, we know that, for about 10 or so years now, and the size of re:Invent has proven out how popular cloud can be. So, on those operational aspects you were talking about there, about the ability to react, I particularly liked that you were saying that anyone who can break production should be able to fix production. That's a very different way of working than what many organizations would be used to. So how is New Relic helping customers understand what they need to change about how they operate their business as they adopt these methods? >> Well, it's a great question. There are a couple of things we do. So we have an observability maturity framework that we deploy, and I don't want to bore the audience here, but needless to say, it's been built over the last year, year and a half, using hundreds of customers as test cases, to determine that there is a process most companies go through to get to benefits realization. And we break those benefit categories into two different areas: one around operational efficiency and agility, the other around innovation and digital experience. So you were talking about operational efficiency, and in there we have effectively three or four different areas, what I call boxes, where we would double-click and triple-click into a set of actions that would lead you to an operational outcome. So we have learned over time, and applied a methodology and approach, to measure that.
So depending on what you're trying to do, whether it's mean time to recover or mean time to detect, or if you've got hundreds of developers and you're finding that they're ineffective or inefficient and you want to figure out how to deploy those resources to different parts of the environment so you can get them to better use their time, it all depends on what your business outcome and business objective is. We have a way to measure that current state and your effectiveness, apply rigor to it, and then design a process, using New Relic One, to fill in those gaps. And it can take on the burden of a lot of those people. I hate to say it that way, because I'm not looking to replace any individual; it's really about freeing up their time to allow them to go do something in a more effective, efficient manner. So I don't know if that's answering the question perfectly, but... >> I don't think there is a perfect answer to it. Every customer is a bit different. >> So this is exactly why we developed the methodology, because every customer is a little different. The rationale, though, is that there are a lot of common themes. So what we've been able to develop over time with this framework is a catalog of use cases and experiences that we can apply to you. Depending on what your business objectives are and what you're trying to achieve, we're able to determine, and really auger in there and assess: what is your maturity level in being able to deliver against these? Are you even using the platform to the level of maturity that would allow you to gain this benefit realization? And that's where we're adding a massive amount of value, and we see that every single day with our customers, who are actually quite surprised by the power of the platform. I mean, if you think traditionally, back not too far, two or even three years, people thought of New Relic as an APM company.
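The mean-time-to-recover measurement mentioned here is, at its core, just an average over incident durations. A minimal sketch, using invented incident records rather than anything from New Relic's actual framework:

```python
from datetime import datetime, timedelta

# Hypothetical incident log: (detected, recovered) timestamp pairs.
incidents = [
    (datetime(2020, 11, 1, 9, 0),  datetime(2020, 11, 1, 9, 45)),   # 45 min
    (datetime(2020, 11, 3, 14, 0), datetime(2020, 11, 3, 14, 15)),  # 15 min
    (datetime(2020, 11, 7, 2, 0),  datetime(2020, 11, 7, 3, 0)),    # 60 min
]

def mttr_minutes(incidents):
    """Mean time to recover, in minutes, across all incidents."""
    total = sum((end - start for start, end in incidents), timedelta())
    return total.total_seconds() / 60 / len(incidents)

print(mttr_minutes(incidents))  # 40.0
```

Baselining a number like this before a change, and re-measuring after, is the kind of before/after comparison the maturity assessment relies on.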
And I think with the launch this past July of New Relic One, we've really pivoted to a platform company. So while a lot of companies love New Relic for APM, they're now starting to see the power of the platform and what we can do for them by operationalizing those use cases around agility and effectiveness, to drive down cost and make people more useful and purposeful with their time, so they can create better software. >> Yeah, I think that's something people are realizing a lot more lately than they were previously. I think there was a lot of TCO analysis done on a replacement-of-FTEs basis, but many organizations have realized that, actually, that doesn't mean those people go away. They get re-tasked to do new things. So with these efficiencies, you start with efficiency, and it turns out actually to be about business agility, about doing new things with the same people you have, who now don't have to do some of these more manual and fairly boring tasks. >> Yeah. Justin, if this CUBE interview thing doesn't work out for you, we're hiring some value engineers right now. It sounds like you've got the talk track down perfectly, because that's exactly what we're seeing in the marketplace. So I agree. >> So give us some examples, if you can, of maybe one or two things you've seen, where customers have used New Relic to strip out some of that make-work, the things they don't really need to be doing, and then turned that into new agility and created something new. Have you got an example you could share with us? >> You know, it's funny, I just finished doing our global customer advisory boards, which is, roughly, about 100 customers around the world. We break it into three theaters, and we were just talking with a particular customer.
I don't want to give their name, but we broke the sessions into two different buckets, because I think every customer buys products like New Relic for one of two reasons: either to help them save money or to help them make money. So we actually split the sessions into those two areas, and I think you're asking about how we help them save money. This particular company, in the media industry, talked at great length about the fact that they are a massive news conglomerate with a whole bunch of individual business units. They were decentralized and non-standardized as it related to understanding how their software was getting created, and how they were defining and determining mean-time-to-recover and performance metrics. All these things were happening around them in a highly complex environment, just like we see with a lot of our customers, right? The complexity of environments today is really driving the need for observability. So one of the things we did with them is come in and apply the same type of approach we just discussed. We did a maturity assessment for them, and we found a variety of areas where they were immature or not using capabilities that existed within the platform. So we were able to light up a variety of things around Insights, and we were able to take in more data from a logging perspective. And again, I'm probably getting a little into the weeds for this particular session, but needless to say, we looked at the full gamut of metrics, events, logs and traces, which wasn't really being handled as an observability strategy, and deployed that across the entire enterprise. So we created a standard platform for all the data in this particular environment, across 14 different business units, and as a byproduct they were able to do a variety of things. One, the uptime for a lot of their customer-facing media applications improved greatly.
We actually started to pivot from driving cost savings to showing how they could, quote-unquote, make money, because the digital experience they were creating for a lot of their customers dramatically reduced the time-to-glass, if you will: the time from clicking a button to seeing the next page of whatever online app they were using. So as a byproduct of this, they were able to repurpose, to the point you made, Justin, dozens of resources off of what was traditionally maintenance mode and fighting fires in a reactive capacity, towards building new code and driving new innovation in the marketplace. And they gave a couple of examples of new applications they were able to bring to market without actually having to hire any net-new resources. So again, I don't want to give away the name of the company, and maybe it was a little too high-level, but it plays perfectly into exactly what you were describing. >> That is a good example. It's always nice to have a specific, concrete customer doing one of these kinds of things that you describe in generic terms; no, this is being applied very specifically to one customer. So we're seeing those sorts of things more and more. >> Yeah, and I thought about, in advance of this session, what is a really good example of what's happening in the world around us today? And I thought of a particular company we just recently worked with, which is Chegg. I don't know if you're familiar with Chegg, if you've heard of them, but they're an education technology company based in California, and they do digital and physical textbook rentals, online tutoring, and online customer services.
So, Justin, if you're like me or the rest of the world, and you have kids who are learning at home right now, think about the amount of pressure and strain that's now being put on this company, Chegg, to keep their platform operational 24/7, so that students can learn at pace and keep up, right? It's an unbelievable success story for us, and one that I love, because it touches me personally: I have three kids all doing online learning, in a variety of different manners, right now. And, you know, we talked about it earlier, the complexity of some of the environments today. This is a company you would never guess it of, but they run 500 microservices in a highly complex technical architecture, right? So we had to come in and help these folks, and we were able to reduce their mean time to recover, because they were having a lot of issues with their ability to provide seamless performance; you can imagine the volume of folks hitting them these days. We reduced that mean time to recover by 5x. So it's just another real-world example where we were able to reduce the time to recover and provide a better experience. And whether you want to call that saving money or making money, what I know for sure is that it's giving an incredible experience, so that the next generation of great minds are focused on learning instead of waiting to learn, right? So, very cool. >> That is very cool. And yes, I have gone through the whole teaching-kids-at-home thing, which was disruptive, not necessarily in a good way, but we all adapted and learned how to do it in a new way, and it was a lot easier towards the end than it was at the beginning. >> I'd say we're still getting there at the Snyder household.
Justin, we're still getting there. >> Well, practice makes perfect. So for organizations who might be looking at Chegg and thinking, that sounds like a bit of a success story, I want to learn more about how New Relic might be able to help me, how should they start? >> Well, there are a lot of ways they can start. One of the most exciting things about our launch in July was that we have a new free tier. So for anybody who's interested in understanding the power of observability, you can go right to our website, sign up for free, and start to play with New Relic One. I think once you start playing, we're going to find the same thing that happens to most of the folks who do that: they play more and more and more, and they start to really embrace the power. And there's an incredible New Relic University that has fantastic training online. So as you start to dabble in that free tier and see what the power and the potential is, you'll probably sign up for some classes, and the next thing you know, you're off and running. So that is one of the easiest ways to get exposed to it. Certainly check us out at our website, and you can find out all about that free tier and what observability could potentially mean to you or your business. >> And as part of the AWS re:Invent experience, are they able to engage with you in some way? >> They can definitely come by our booth, check us out virtually, and see what we have to say. We'd love to talk to them, and we'd be happy to talk about all the powerful things we're doing with AWS in the marketplace, to help meet you wherever you are in your cloud journey, whether it's pre-migration, during migration, post-migration, or even optimization. We've got some incredible statistics on how we can help you maximize and leverage your investment in AWS, and we're really excited to be a strategic partner with them. And, you know, it's funny.
It's interesting for me to see how observability and this platform can really touch every single facet of that cloud migration journey. You know, I was thinking originally, as I got exposed to this, that it would be really useful for entity relationship management at the pre-migration phase, and then possibly at the post-migration phase, as you try to baseline and measure results. But what I've come to learn, through our own process of moving our own business to the AWS cloud, is that there's tremendous value everywhere along that journey, and that's incredibly exciting. So not only are we a great partner, but I'm excited that we at New Relic will be, what I call, first and best customer of AWS ourselves, as we make our own journey to the cloud. >> Fantastic. And I encourage any customers who might be interested in New Relic to definitely go and check you out as part of the show. Thank you, Jay Snyder from New Relic. You've been watching theCUBE Virtual and our coverage of AWS re:Invent 2020. Make sure you check out all the rest of theCUBE's coverage of AWS re:Invent on your desktop, laptop, or phone, wherever you are. I've been your host, Justin Warren, and I look forward to seeing you again soon.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Justin Warren | PERSON | 0.99+ |
Jay Snyder | PERSON | 0.99+ |
Justin | PERSON | 0.99+ |
California | LOCATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
July | DATE | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
three kids | QUANTITY | 0.99+ |
J. J. Snyder | PERSON | 0.99+ |
one | QUANTITY | 0.99+ |
five year | QUANTITY | 0.99+ |
three | QUANTITY | 0.99+ |
5th | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
two reasons | QUANTITY | 0.99+ |
two areas | QUANTITY | 0.99+ |
three themes | QUANTITY | 0.99+ |
Three products | QUANTITY | 0.99+ |
J. Snyder | PERSON | 0.99+ |
three theaters | QUANTITY | 0.99+ |
three years | QUANTITY | 0.99+ |
first | QUANTITY | 0.99+ |
two different buckets | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
New Relic | ORGANIZATION | 0.98+ |
third layer | QUANTITY | 0.98+ |
Cube | COMMERCIAL_ITEM | 0.97+ |
about 100 customers | QUANTITY | 0.97+ |
Intel | ORGANIZATION | 0.97+ |
one customer | QUANTITY | 0.97+ |
today | DATE | 0.97+ |
new relic | ORGANIZATION | 0.97+ |
this summer | DATE | 0.95+ |
triple | QUANTITY | 0.95+ |
double | QUANTITY | 0.95+ |
five X. | QUANTITY | 0.95+ |
14 different business units | QUANTITY | 0.95+ |
Snyder | PERSON | 0.94+ |
hundreds of customers | QUANTITY | 0.93+ |
three piece parts | QUANTITY | 0.93+ |
J. | PERSON | 0.93+ |
part one | QUANTITY | 0.92+ |
Thio | PERSON | 0.92+ |
24 77 days a week | QUANTITY | 0.92+ |
three simple things | QUANTITY | 0.91+ |
500 micro services | QUANTITY | 0.89+ |
FTE | ORGANIZATION | 0.89+ |
Dozens of resource | QUANTITY | 0.88+ |
last year | DATE | 0.87+ |
one word | QUANTITY | 0.86+ |
four different ways | QUANTITY | 0.85+ |
A. W. S. | ORGANIZATION | 0.84+ |
hundreds of developers | QUANTITY | 0.83+ |
two different areas | QUANTITY | 0.83+ |
one common | QUANTITY | 0.82+ |
last two weeks | DATE | 0.82+ |
about 10 or so years | QUANTITY | 0.81+ |
aws | ORGANIZATION | 0.78+ |
year and a half | QUANTITY | 0.78+ |
New Relic | PERSON | 0.78+ |
P M. | ORGANIZATION | 0.78+ |
Aziz | PERSON | 0.76+ |
next five years | DATE | 0.75+ |
past July | DATE | 0.74+ |
Deepak Singh, AWS | AWS re:Invent 2020
>> From around the globe, it's theCUBE, with digital coverage of AWS re:Invent 2020, sponsored by Intel and AWS. >> Welcome back to theCUBE's live coverage of AWS re:Invent 2020. It's virtual this year, over three weeks, and we're here covering all the live action: hundreds of videos, wall-to-wall coverage. We're virtual, not in person this year, so we're bringing in all the interviews remotely. We have Deepak Singh, vice president of Compute Services, a range of things within Amazon's world. He's the container guy; he knows all of what's going on with open source. Deepak, great to see you again. Sorry we can't be in person, but that's the best we can do. Thanks for coming on and taking the time, with big keynote news; your DNA is everywhere in the keynote. Thanks for coming on. >> Yeah, no, thanks for having me again. It's always great to be on theCUBE, unfortunately not sitting in the middle of the floral arrangement, which I kind of miss. But it was a great morning for us. We had a number of announcements in the container space, and adjacent to that in the developer and operator experience space, about making it easy for people to adopt things like containers and serverless. So we're pretty excited about Andy's keynote today and the rest of re:Invent. >> It's interesting, you know, I've been following Amazon now since the start of re:Invent, and I've been using Amazon since EC2 started, telling that story. But look at the mainstream market right now: this is a wake-up call for cloud, mainly because the pandemic has been forced upon everybody. I talked to Andy about that, and he brought it up in the keynote, but you start to get into the meat on the bone here when you're saying, okay, what does it really mean? The containers, the serverless, the machine learning, all kind of tied together, with compute getting faster.
So you see an absolute focus on infrastructure as a service, which has been the bread and butter for Amazon Web Services. But now there's that connective tissue between that and where the machine learning kicks in, and this is where I see containers and Lambda and serverless really kicking ass and filling in the hole there, because that's really been the innovation story, and containers are all through that. And EKS Anywhere was, to me, the big announcement, because it shows Amazon's vision of taking AWS to the edge and to the data center. This is a big, important announcement. Could you explain EKS Anywhere? Because I think this is at the heart of where customers are looking to go; it's where the puck is going, and you're skating to where the puck is. Explain the importance of EKS Anywhere. >> Yeah, I'll actually step back and talk about a couple of things here, and I think some of the other announcements you heard today, like the smaller Outposts, you know, the 1U and 2U Outposts SKUs, are also part of that story. So if you look at it, AWS started thinking a few years ago about what it would take for us to be successful in customers' data centers, because customers still have data centers, and they're still running in them. Our first step towards that was Outposts. AWS in many ways benefits a lot from the way we build hardware, from what we do with Nitro all the way to the EC2 instance types that we have, and what we did with Outposts was ask: can we bring some of the core fundamental properties that AWS has into a customer data center? That then allowed ECS, EKS and other AWS services to be run on Outposts, because that's how we run today. But what we started hearing from customers was that that was not enough, for two reasons. One, not all of them have big data centers; they may want to run things, you know, in a much smaller location.
I like to think about things like oil rigs or point-of-sale locations. Or they may have existing hardware that they still plan to use, and intend to use for a very long time. With the foundational building blocks, EC2 and EBS, that gets difficult when we go onto hardware that is not AWS hardware, because we depend very much on our own. But with containers, we know it's possible. So we started thinking about what it would take for us to bring the best of AWS to help customers run containers in their own data centers. I'll start with Kubernetes. People very often pick Kubernetes because they start containerizing inside their own data centers, and the best solution for them there is Kubernetes. So they learn it very well, they understand it, and their organizations are built around it. But then they come to AWS and run EKS, and while Kubernetes is Kubernetes (if you're running upstream, something that runs on-prem will run on AWS), they end up in sort of two situations. One, they want to work with AWS; they want to get our support and our expertise. Second, most of them, once they start running EKS, realize that we have a really nice operational posture with EKS: it's very reliable, it scales, and they want to bring that same operational posture on-prem. So with EKS Anywhere, what we decided to do was start with the bits underlying EKS. The EKS Distro that we announced today is an open source Kubernetes distribution with some additional pieces, some of the items that we use, and it can be run anywhere. They're not dependent on AWS; you don't even have to be connected to AWS to use EKS Distro, but we will patch it, and we will update it. It's an open source project on GitHub. So that's a starting point that's available today.
Now, over the next several months, what we'll add is all of the operational tooling that we have for EKS, made available on premises, so that people can operate their Kubernetes clusters on-prem just the way they do on AWS. And then we also announced the EKS Dashboard today, which gives you visibility into your Kubernetes clusters on AWS, and we'll extend that so that any Kubernetes clusters you're running will end up on the dashboard, giving you a single view into what's going on. And that's the vision for EKS Anywhere: if you're running Kubernetes, we have our operational approach to running it, we have a set of tools that we have built, and we want everybody to have access to the same tools. Then moving from wherever you are to AWS becomes super easy, because you're using the same tooling. We did something similar with ECS as well, with ECS Anywhere, but we did it a little bit differently. In ECS there's a centralized control plane, and all we want from you is to bring CPU and memory; the demo for that actually runs on a bunch of Raspberry Pis. So as long as you can install the ECS agent and connect to an AWS region, you're good to go. Same problem, slightly different solutions, but our customers fall into both buckets. So that's the general idea: when we say anywhere, it means anywhere, and we'll meet you there.
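The "single view into what's going on" idea reduces to aggregating cluster records from multiple environments into one structure. A toy sketch with invented inventory data (this is not the actual EKS Dashboard API, just the shape of the idea):

```python
# Hypothetical inventory entries, mirroring one dashboard that lists
# Kubernetes clusters wherever they run: in AWS or on-prem via EKS Anywhere.
clusters = [
    {"name": "payments", "provider": "aws-eks",      "location": "us-east-1"},
    {"name": "pos-edge", "provider": "eks-anywhere", "location": "store-042"},
    {"name": "batch",    "provider": "aws-eks",      "location": "eu-west-1"},
]

def dashboard(clusters):
    """Group cluster names by provider for a single consolidated view."""
    view = {}
    for c in clusters:
        view.setdefault(c["provider"], []).append(c["name"])
    return view

print(dashboard(clusters))
# {'aws-eks': ['payments', 'batch'], 'eks-anywhere': ['pos-edge']}
```

The value of a consolidated view like this is exactly what Deepak describes: the same tooling and the same picture, wherever the cluster happens to run.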
That's the way the world was going, except that with containers and with several functions with panda. You got this new small building blocks that allow you to do it that much better. So you know you can break your application off. In the smaller and smaller pieces, you can have teams that own each of those individual pieces each other pieces. Each of these services can be built using architecture that you secret, some of them makes sense. Purely service, land and media gateway. Other things you may want to run on the C s and target. Ah, third component. You may have be depending on open source ecosystem of applications. And there you may want to run in communities. So what you're doing is taking up what used to be one giant down, breaking up into a number of constituent pieces, each of which is built somewhat independently or at least can be. The problem now is how do you build the infrastructure where the platform teams of visibility in tow, what all the services are they being run properly? And also, how do you scale this within an organization, you can't train an entire organ. Communities overnight takes time similar with similarly with server list eso. That's kind of what I was talking about. That's where the world is going. And then to address that specific problem we announced AWS proton, uh, AWS program is essentially a service that allows you to bring all of these best practices together, allows the centralized team, for example, to decide what are the architectures they want to support. What are the tools that they want to support infrastructure escort, continuous delivery, observe ability. You know all the buzzwords, but that's where the world's going and then give them a single framework where they can deploy these and then the developers can come into self service. It's like I want to build a service using Lambda. I don't even learn how toe put it all together. 
I'm just going to take my code and point it at this stack that my centralized team has built for me. All I need to do is put in a couple of parameters, and I'm off to the races, and Proton scales it end to end and gives you the ability to manage it as well. >> So it's really kind of the building blocks, pushing that out to the customer. I've got to ask you real quick on Proton: that's a fully managed service you created. Could you explain what that means for the developer customer? What's the bottom line? What's the benefit to them? >> So the biggest benefit to developers is they don't need to become an expert at every single technology out there. They can focus on writing application code, and not have to learn the underlying infrastructure, how pipelines are built, and what the best practices are. In modern companies, developers sometimes wear two hats: building the sort of underlying scaffolding, and building the applications themselves. Now all you have to do is write your application code, then go into Proton and say, this is the architecture I'm going to choose, self-service it, and you're off to the races. If there's any underlying component that's changing, or any updates coming, Proton will automatically take care of the updates for you, or give you a signal that says, hey, the stack has to be updated, it's time to redeploy your code. So you can do all of that in a very automated fashion. That's why everything is done as infrastructure as code. Infrastructure as code and continuous delivery are sort of the key foundational principles of Proton. And what it basically does is something that every company we talk to wants to do, but only a handful have the teams and the skill set to do. It takes a lot of work and it takes a lot of retraining, and now most companies don't need to do that, or at least not on their own.
So I think this is where the automation and manageability that Proton brings makes life a lot easier. >> Yeah, a lot of developers know Docker containers, they're very familiar with it, they want to use that, whatever their workflow. Quickly explain again to me, so I can understand fully, the benefit of the Lambda-container dynamic. Because what was the use case there? What's the problem that you solve? And what does it mean for the developer? What specifically is going on there? What's the benefit? Why would I care? >> Yeah, so I'll actually talk about one of the services that my team runs, called AWS Batch. AWS Batch has a front end that's completely serverless, it's Lambda and API Gateway, and its back end is ECS running on EC2. That's where the back-end services run, and our customers' jobs run in containers. Our customers are just like that. You know, we have many customers out there that are building services that are either completely serverless or fit that pattern. They are triggered by events: they're taking an event from something and then triggering a bunch of services, or they're triggering an action which is doing some data processing. And then they have these long-running services, which almost universally are running on containers. How do you bring all of this together into a single framework, as opposed to some people being experts on Lambda and some people being experts on containers? That's not how the real world works. So trying to put all of this, because these teams do work together, into a single framework was our goal, because that's what we see our customers doing, and I think they'll do it more. Related to that is the fact that Lambda now supports Docker images, container images, as a packaging format, because a lot of companies have invested in tooling to build container images, and now Lambda can benefit from that as well.
While customers get all the, you know, magic that Lambda brings. >> A couple of years ago on theCUBE, and I shared this tweet out earlier in the week, we pressed Andy at a previous launch, like, would you build Amazon on Lambda today? He says, we probably would. And then he announced, and I think you also mentioned in the keynote, that half of Amazon's new apps are built on Lambda. >> Yeah, that's right. >> This is a new generation of developers. >> Oh, absolutely. I mean, you should talk to the Lambda team today also, but even on the container side, almost half of the new container customers that we have on AWS in 2020 have chosen Fargate, which is serverless containers. They're not picking ECS or EKS and running it on EC2, they're running it on Fargate. And we see that trend on the container side as well, and actually it's accelerating: more and more new customers will pick Fargate over running containers on EC2. >> Deepak, great to chat with you. I know you've got to go. Thanks for coming on our program, breaking down the keynote analysis. You've got a great focus area that's only going to get hotter and grow faster, with a lot more controversy and goodness coming at the same time. So congratulations. >> Thank you, and always good to be here. >> Thanks for coming on. This is theCUBE Virtual, we are theCUBE Virtual. I'm John Furrier, your host. Thanks for watching.
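The event-driven pattern Deepak describes, a serverless front end that takes an event and fans it out to a data-processing step, can be sketched in a few lines of Python. The names here (`handler`, `process_record`) are illustrative only, not AWS Batch's actual API:

```python
# Minimal sketch of an event-driven, Lambda-style service.
# The handler is invoked with an event; each record in the event
# is handed to a processing step (in a real system, that step might
# kick off containerized jobs).

def process_record(record):
    # Stand-in for real data processing: double the value.
    return {"id": record["id"], "value": record["value"] * 2}

def handler(event, context=None):
    # Lambda-style entry point: receives an event payload and
    # returns the processed results.
    return [process_record(r) for r in event.get("records", [])]
```

Invoking it with a sample event, `handler({"records": [{"id": "a", "value": 2}]})`, returns `[{"id": "a", "value": 4}]`.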
Hemanth Manda, IBM Cloud Pak
(soft electronic music) >> Welcome to this CUBE Virtual Conversation. I'm your host, Rebecca Knight. Today, I'm joined by Hemanth Manda. He is the Executive Director, IBM Data and AI, responsible for Cloud Pak for Data. Thanks so much for coming on the show, Hemanth. >> Thank you, Rebecca. >> So we're talking now about the release of Cloud Pak for Data version 3.5. I want to explore it from a lot of different angles, but do you want to just talk a little bit about why it is unique in the marketplace, in particular in accelerating innovation, reducing costs, and reducing complexity? >> Absolutely, Rebecca. I mean, this is something very unique from an IBM perspective. Frankly speaking, this is unique in the marketplace, because what we are doing is bringing together all of our data and AI capabilities into a single offering, a single platform. And we have continued, as I said, we've made it run on any cloud, so we are giving customers the flexibility. So it's innovation across multiple fronts: it's consolidation, it's automation, it's infusing collaboration, and it's also helping customers basically modernize to the cloud-native world and pick their own cloud, which is what we are seeing in the market today. So I would say this is unique across multiple fronts. >> When we talk about any new platform, one of the big concerns is always around internal skills and maintenance tasks. What changes are you introducing with version 3.5 that help clients be more flexible and streamline their tasks? >> Yeah, it's an interesting question. We are doing a lot of things with respect to 3.5, the latest release. Number one, we are simplifying the management of the platform, we've made it a lot simpler. We are infusing a lot of automation into it. We are embracing the concept of operators that OpenShift has introduced into the market. So simple things such as provisioning, installation, upgrades, scaling it up and down, autopilot management.
So all of that is taken care of as part of the latest release. Also, what we are doing is making collaboration and user onboarding very easy, to drive self-service and user productivity. So overall, this helps basically reduce the cost for our customers. >> One of the things that's so striking is the speed of the innovation. I mean, you've only been in the marketplace for two and a half years, and this is already version 3.5. Can you talk a little bit about the sort of innovation that it takes to do this? >> Absolutely. You're right, we've been in the market for slightly over two and a half years, and 3.5 is our ninth release. So frankly speaking, for any company, even for startups, doing nine releases in 2.5 years is unheard of, and definitely unheard of at IBM. So we are acting and behaving like a startup, while leveraging the go-to-market and the reach of IBM. So I would say that we are doing a lot here. And as I said before, we're trying to address the unique needs of the market, the need to modernize to cloud-native architectures and move to the cloud, while also addressing the needs of our existing customers, because there are two things we are trying to focus on here. First of all, make sure that we have a modern platform across the different capabilities in data and AI, that's number one. Number two is also, how do we modernize our existing install base? We have a six-plus billion dollar business for data and AI across a significant install base. We're providing a platform through Cloud Pak for Data for that existing install base, for existing customers to modernize, too. >> I want to talk about how you are addressing the needs of customers, but I want to delve into something you said earlier, and that is that you are behaving like a startup. How do you make sure that your employees have that kind of mindset, that kind of experimental, innovative, creative, resourceful mindset, particularly at a more mature company like IBM?
What kinds of skills do you try to instill and cultivate in your team? >> That's a very interesting question, Rebecca. I think there's no single answer, I would say. It starts with listening to the customers, trying to pay detailed attention to what's happening in the market, how competitors are reacting, and looking at the startups themselves. What we did uniquely, which I didn't touch upon earlier, is that we are also building an open ecosystem here, so we position ourselves as an open platform. Yes, there's a lot of IBM-unique technology here, but we also leverage open source, and we have an ecosystem of 50-plus third-party ISVs. So by doing that, we are able to drive a lot more innovation, and a lot faster, because when you are trying to do everything by yourself, it's a bit challenging, but when you're part of an open ecosystem, infusing open source and third parties, it becomes a lot easier. In terms of culture, I just want to highlight one thing. I think we are making it a point to emphasize speed over being perfect, progress over perfection. And that, I think, is something net new for IBM, because at IBM, we pride ourselves on quality, scalability, trying to be perfect on day one. I think we didn't do that in this particular case. Initially, when we launched our offering two and a half years back, we tried to be quick to the market; our time to market was prioritized over being perfect. But now that is not the case anymore, right? I think we've made sure we are exponentially better, and those things have been addressed over the past two and a half years. >> Well, perfect is the enemy of the good, as we know. One of the things that your customers demand is flexibility when building machine learning pipelines. What have you done to improve IBM machine learning tools on this platform? >> So there's a lot of things we've done.
Number one, I want to emphasize that building AI is the initial problem most of our customers are concerned about, but in my opinion, that's 10% of the problem. Actually deploying those AI models, and managing and governing them at scale for the enterprise, is a bigger challenge. So what we have is very unique. We have the end-to-end AI lifecycle: we have tools all the way from building to deploying, managing, and governing these models. Second is we are introducing net new capabilities as part of the latest release. We have this new service called WMLA, Watson Machine Learning Accelerator, that addresses the unique challenges of deep learning capabilities, managing GPUs, et cetera. We are also making the AutoAI capabilities a lot more robust. And finally, we are introducing a net new concept called federated learning, that allows you to build AI across distributed datasets, which is very unique. I'm not aware of any other vendor doing this. So you can actually have your data distributed across multiple clouds, and you can build an aggregated AI model without actually looking at the data that is spread across those clouds. And this concept, in my opinion, is going to get a lot more traction as we move forward. >> One of the things that IBM has always been proud of is the way it partners with ISVs and other vendors. Can you talk about how you work with your partners and foster this ecosystem of third-party capabilities that integrate into the platform? >> Yes, it's always a challenge. I mean, for this to be a platform, as I said before, you need to be open and you need to build an ecosystem. And so we made that a priority since day one, and we have 53 third-party ISVs today. It's a chicken-and-egg problem, Rebecca, because you need to obviously showcase success and make it a priority for your partners to onboard and work with you closely. So we obviously invest, we co-invest with our partners, and we take them to market. We have different models.
We have a tactical relationship with some of our third-party ISVs, and we also have strategic relationships. So we partner with them depending on their ability to partner with us, and we go invest and make sure that we are not only integrating with them technically, but also integrating with them from a go-to-market perspective. >> I wonder if you can talk a little bit about the current environment that we're in. Of course, we're all living through a global health emergency in the form of the COVID-19 pandemic. So much of the knowledge work is being done from home; it is being done remotely. Teams are working asynchronously over different kinds of digital platforms. How have you seen these changes affect your team at IBM? What kinds of new capabilities and collaborations, what kinds of skills, have you seen your team have to gain, and gain quite quickly, in this environment? >> Absolutely. I think historically, IBM has had quite a portion of our workforce working remotely, so we are used to this, but not at the scale that the current situation has compelled us to. So we made a lot more investments earlier this year in digital technologies, whether it is Zoom and WebEx, or trying to use digital tools that help us coordinate and collaborate effectively. So part of it is technical, right? Part of it is also a cultural shift, and that came all the way from our CEO, in terms of making sure that we have the necessary processes in place to ensure that our employees are not getting burnt out, and that they're being productive and effective. And so a combination of, I would say, technical investments plus process and leadership initiatives helped us essentially embrace the changes that we've seen today. >> And I want you to close us out here. Talk a little bit about the future, both for Cloud Pak for Data, but also for the companies and clients that you work for.
What do you see in the next 12 to 24 months changing, in terms of how we have re-imagined the future of work? I know you said this was already the ninth release, and you've only been in the marketplace for not even three years. That's incredible innovation and speed. Talk a little bit about changes you see coming down the pike. >> So I think everything that we have done is going to get amplified and accelerated as we move forward: the shift to cloud, embracing AI, adopting AI into business processes to automate and amplify new business models, collaboration, and to a certain extent, consolidation of the different offerings into platforms. So all of this, I obviously see being accelerated, and that acceleration will continue as we move forward. And the real challenge I see with our customers and all the enterprises is, I see them in two buckets. There's one bucket which is resisting change and likes to stick to the old concepts, and there's one bucket of enterprises who are embracing the change, moving forward, and actually accelerating this transformation. I think the latter will be successful over the next one to five years. If you're in the other bucket and you're not embracing change, I think you're going to miss out, and that is getting amplified and accelerated as we speak. >> So for those in the bucket that are resistant to the change, how do you get them on board? I mean, this is classic change management that they teach at business schools around the world. But what is some advice that you would have for those who are resisting the change? >> So, again, frankly speaking, we at IBM are going through that transition, so I can speak from experience. >> Rebecca: You're drinking the Kool-Aid. >> Yeah. I think one way to address this is basically to take one step at a time, as opposed to completely revolutionizing the way you do your business.
You can transform your business one step at a time, while keeping the end objective as your end goal. And I just want to highlight that with Cloud Pak for Data, that's exactly what we are enabling, because what we do is enable you to actually run anywhere you like. So if most of your systems, most of your data, your models, and your analytics are on-premises, you can actually start your journey there, while you plan for the future of a public cloud or a managed service. So my advice is pretty simple: you start the journey, but you don't need to do it as a big bang. It can be a gradual transformation, but you need to start the journey today. If you don't, you're going to miss out. >> Baby steps. Hey, Hemanth Manda, thank you so much for joining us for this Virtual CUBE Conversation. >> Thank you very much, Rebecca. >> I'm Rebecca Knight, stay tuned for more of theCUBE Virtual. (soft electronic music)
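The federated learning idea Manda described, building an aggregated model across distributed datasets without centralizing the raw data, is commonly illustrated with federated averaging. This is a generic sketch, not IBM's implementation; `local_weights` is a stand-in for real local training:

```python
# Federated averaging sketch: each site computes model weights from
# its own data, and only those weights (never the raw rows) are sent
# to the aggregator, which averages them into one model.

def local_weights(dataset):
    # Stand-in for local training: the per-feature mean of the site's data.
    n = len(dataset)
    dims = len(dataset[0])
    return [sum(row[i] for row in dataset) / n for i in range(dims)]

def federated_average(sites):
    # Aggregate per-site weights; raw data never leaves a site.
    weights = [local_weights(data) for data in sites]
    dims = len(weights[0])
    return [sum(w[i] for w in weights) / len(weights) for i in range(dims)]
```

Real systems average actual trained parameters, weighted by dataset size, but the privacy property is the same: only model parameters cross site boundaries.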
Satyen Sangani, Alation | CUBEConversation
>> Narrator: From theCUBE studios in Palo Alto, in Boston, connecting with thought leaders all around the world. This is a CUBE Conversation. >> Hey, welcome back everybody, Jeff Frick here with theCUBE. We're coming to you today from our Palo Alto studios with theCUBE Conversation, talking about data, and we're excited to have our next guest. He's been on a number of times, many times, a CUBE alum, really at the forefront of helping companies and customers be more data centric in their activities. So we'd like to welcome onto the show Satyen Sangani. He is the co-founder and CEO of Alation. Satyen, great to see you. >> Great to see you, Jeff. It's good to see you again in this new world, a new format. >> It is a new world, a new format, and what's crazy is, in March and April we were talking about this light switch moment, and now we've just turned the calendar to October, and it seems like we're going to be doing this thing for a little bit longer. So, it is kind of the new normal, and even when it's over, I don't think everything's going to go back to the way it was, so here we are. But you guys have some exciting news to announce, so let's just jump to the news and then we'll get into a little bit more of the nitty gritty. So what have you got coming out today? >> Yeah, so what we are announcing today is basically Alation 2020, which is probably one of the biggest releases that we've had since I've been with the company. With it, we are releasing three things, so in some sense, there's a lot of simplicity to the release. The first thing that we're releasing is a new experience, around what we call the business user experience, which will bring in a whole new set of users into the catalog. The second thing that we're announcing is basically around Alation Analytics, and the third is around what we would describe as a cloud-native architecture.
In total, it brings a fully transformative experience, basically lowering the total cost of getting to a data management and data intelligence experience, much lower than had previously been the case. >> And you guys have a really simple mission, right? You're just trying to help your customers be more data, what's the right word? Data centric, use data more often, and to help people actually make that decision. And you had an interesting quote in another interview, you talked about trying to be the Yelp for information, which is such a nice kind of humanizing way to think about it, because data isn't necessarily that way. And I think you mentioned, before we turned on the cameras, that for a lot of people, maybe it's just easier to ignore the data: if I can just get the decision through on gut and intuition and get on to my next decision. >> Yeah, you know, it's funny. I mean, we live in a time where people talk a lot about fake news and alternative facts, and our vision is to empower a curious and rational world, and I always smile a little bit when I say that, because it's such a crazy vision, right? Like, how do you get people to be curious, and how do you get people to think rationally? But you know, to us, it's about, one, making the data really accessible, just allowing people to find the data they need when and as they want it. And the second is for people to be able to think scientifically, teaching people to take the facts at their disposal and interpret them correctly. And we think that if those two skills existed, just the ability to find information and interpret it correctly, people could make a lot better decisions. And so the Yelp analogy is a perfect one, because if you think about it, Yelp did that for local businesses, just like Amazon did it for really complicated products on the web, and what we're trying to do at Alation is, in some sense, very simple, which is to just take information and make it super usable for the people who want to use it.
>> Great, but I'm sure there are critics out there, right? Who say, yeah, we've heard this before, the promise of BI has been around forever, and I think a lot of people think it just didn't work: whether the data was too hard to get access to, whether it was too hard to manipulate, whether it was too hard to pull insights out, whether there was just too much scrubbing and manipulating. So, what is some of the secret sauce to take what is a very complex world, and again, you've got some very large customers with some giant data sets, and to, I don't want to say humanize it, but kind of humanize it and make it easier, more accessible for that business analyst, not just generally, but more specifically when I need it to make a decision? >> Yeah, I mean, it's so funny, because making something simple, data, like a lot of software, is death by 1,000 cuts. I mean, you look at something from the outside and it looks really, really, really simple, but then you kind of delve into any problem, and that can be CRM with something like Salesforce, or it can be something like ServiceNow with ITSM, but these are all really, really complicated spaces, and getting into the depths and the detail of it is really hard. And data is really no different; data is just the sort of exhaust from all of those different systems that exist inside of your company. So the detail around the data in your company is exhaustingly minute. And so, how do you make something like that simple? I think really the biggest challenge there is progressively revealing complexity, right? Giving people the right amount of information at the right time. So, one of the really clever things that we do in this business user experience is we allow people to search for and receive the information that's most relevant to them. And we determine that relevance based upon the other people in the enterprise that happen to be using that data.
And we know what other people are using in that company, because we look at the logs to understand which data sources are used most often and which reports are used most often. So right after that, when you get something, you just see the name of the report, and it could be around the revenues of a certain product line. But the first thing that you see is who else uses it. And that's something that people can identify with: you may not necessarily know what the algorithm was, or what the formula might be, or how the business glossary term relates to some data model or data artifact, but you know the person, and if you know the person, then you can trust the information. And so, a lot of what we do is spend time on design to think about what it is that a person expects to see and how they verify what's true. And that's what helps us really understand what to serve up to somebody so that they can navigate this really complicated, relevant data. >> That's awesome, 'cause there's really a signal to noise problem, right? And I think I've heard you speak before. >> Yeah. >> And of course this is not new information, right? There's just so much data, right? The increasing proliferation of data. And it's not that there's that much more data, we're just capturing a lot more of it. So your signal to noise problem just gets worse and worse and worse. And so what you're talking about is really kind of helping filter that down, to get through a lot of that noise, so that you can find the piece of information within the giant haystack that is what you're looking for at this particular time, in this particular moment. >> Yeah, and it's a really tough problem. I mean, it's true that we've been talking about this problem for such a long time.
And in some instance, if we're lucky, we're going to be talking about it for a lot longer, because it used to be that the problem was, back when I was growing up, you were doing research on a topic and you'd go to the card catalog and the Dewey decimal system, and in your elementary school or high school library, you might be lucky to find one, two, or three books that mapped to the topic that you were looking for. Now, you go to Google and you find 10,000 books. Now you go inside of an enterprise and you find 4,000 relational database tables and 200 reports about an artifact that you happen to be looking for. And so really the problem is, what do I trust? And what's correct? And getting to that level of accuracy around information, when there's so much information out there, is really the big problem of our time. And I think, for me, it's a real privilege to be able to work on it, because I think if we can teach people to use information better and better, then they can make better decisions, and that can help the world in so many different ways.
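The log-driven relevance Sangani describes, surfacing the assets that colleagues actually use most, can be sketched as a simple popularity ranking. The data and function names here are hypothetical; Alation's actual ranking is more involved:

```python
# Rank matching data assets by how often they appear in query logs,
# so the table everyone actually uses outranks a stale duplicate.
from collections import Counter

def rank_assets(query_log, matches):
    # query_log: asset names extracted from query logs (one per use)
    # matches: asset names that match the user's search
    usage = Counter(query_log)
    return sorted(matches, key=lambda name: usage[name], reverse=True)
```

With a log in which `"sales.revenue_by_product"` appears five times and `"tmp.revenue_scratch"` once, a search matching both returns the heavily used table first.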
Or is it fast because I'm on a bicycle going down a hill? And without context, data is just, it's just a number. It doesn't mean anything. So you guys, really, by adding this metadata around the data, are adding a lot more contextual information to help figure out kind of what that signal is from the noise. >> Yeah, you'll get facts from anywhere, right? Like, you've got a Hitchhiker's Guide, you've got a 55 or a 42, and you can figure out like what the meaning of the universe is, and apparently the answer is 42, and what does that mean? It might mean a million different things, and to me, that context is the difference between suspecting and knowing. And it's the difference between having confidence and basically guessing. And I think to the extent that we can provide more of that over time, that's what's going to make us an ever more valuable partner to the customers that we serve today. >> Right, well, I do know why 42 is always the answer, 'cause that's Ronnie Lott and that's always the answer. So, that one I know, that's an easy one. (both chuckle) But it is really interesting, and then you guys just came out, I heard Aaron Kalb, one of your co-founders, the other day, and we talked about this new report that you guys have sponsored, the Data Culture Report, and really putting some granularity on a Data Culture Index. And I thought it was pretty interesting, and I'm excited that you guys are going to be doing this longitudinally, because whether you do or do not necessarily agree with the method, it does give you a number, it does give you a score, it's a relatively simple formula. And at least you can compare yourself over time to see how you're tracking.
I wonder if you could share, I mean, the thing that jumps out right off the top of that report is something we were talking about before we turned the cameras on, that people's perception of where they are on this path doesn't necessarily map out when you go bottoms-up and add the score, versus top-down when I'm just making an assessment. >> Yeah, it's funny, it's kind of the equivalent of everybody thinks they're an above average driver, or everybody thinks they're above average in terms of, obviously, intelligence. And obviously that mathematically is not possible or true. But I think in the world of data management, we all talk about data, we all talk about how important it is to use data. And if you're a data management professional, you want people in your company to use more data. But ironically, the discipline of data management doesn't actually use a lot of data itself. It tends to be a very slow, methodical, process-driven, gut-oriented process to develop things like what data models exist, and how do I use my infrastructure, and where do I put my data, and which data quality is best? All of those things tend to be somewhat heuristic-driven or gut-driven, and they don't have to be. And a big part of our release actually is around this product called Alation Analytics. And what we do with that product is really quite interesting. We start measuring elements of how your organization uses data, by team, by data source, by use case. And then we give you transparency into what's going on with the data inside of your landscape and ecosystem. So you can start to actually score yourself, both internally, but also, as we reveal in our customer success methodology, against other customers, to understand what it is that you're doing well and what it is that you're doing badly. And so you don't necessarily need to have a ton of gut instinct anymore. You can look at the data of yourselves and others to figure out where you need to improve.
And so that's a pretty exciting thing. And I think this notion that says, look, you think you're good, but are you really good? I mean, that's fundamental to improvement in business process, and improvement in data management, and improvement in data culture, fundamentally, for every company that we work with. >> Right, right, and if you don't know, there's a problem, and if you're not measuring it, then there's no way to improve on it, right? 'Cause you don't know what you're measuring. >> Right. >> But I'm curious about the three buckets that you guys measured. So you measured data search and discovery, which was bucket number one; data literacy, what you do once you find it; and then data governance, in terms of managing it. It feels like the search and discovery, which it sounds like is what you're primarily focused on, is the biggest gap, because you can't get to those other two buckets unless you can find and understand what you're looking for. So does that jibe, or is that really not the problem? Is it more the manipulation of the data once you get it? >> Yeah, I mean, we focus on all three. And I think that, certainly, it's the case that it's a virtuous cycle. So if you think about kind of search and discovery of data, if you have very little context, then it's really hard to guide people to the right bit of information. But if I know, for example, that a certain data set is used by a certain team, and then a new member of that team comes on board, then I can go ahead and serve them with exactly that bit of data, because I know that the human relationships are quite tight in the context graph on the back end. And so that comes from basically building more context over time. Now, that context can come from a stewardship process implemented by a data governance framework. It can come from building better data literacy through having more analytics.
But however that context is built and revealed, there tends to be a virtuous cycle, which is you get more people searching for data. Then once they've searched for the data, you know how to build up the right context. And that's generally done through data governance and data stewardship. And then once that happens, you're building literacy in the organization, so people then know what data to search for. So that tends to be a cycle. Now, often people don't recognize that cycle, and so they focus on one thing, thinking that you can do one to the exclusion of the others, but of course that's not the case. You have to do all three. >> Great, and I would presume you're using some good Machine Learning and Artificial Intelligence in that process, to continue to improve it over time as you get more data, the metadata around the data in terms of the usage. And I think, again, I saw in another interview you talking about, where should people invest? What is the good data? What's the crap data? What's the stuff we shouldn't use, 'cause nobody ever uses it? Or what's the stuff maybe we need to look at and decide whether we want to keep or not, versus the stuff that's guiding a lot of decisions with Bob, Mary and Joe? That seems to be a good investment. So, it's a great application of applied AI and Machine Learning to a very specific process, to again get you in this virtuous cycle. That sounds awesome. >> Yeah, it is, and it's really helpful to, I mean, it's really helpful to think about this, I mean, one of the biggest problems with data is that it's so abstract, but it's really helpful to think about it just in terms of use cases. Like if I'm using a customer dataset and I want to join that with a transaction dataset, just knowing which other transaction datasets people joined with that customer dataset can be super helpful.
If I'm an analyst coming in to try to answer a question or ask a question. And so context can come in different ways, just in the same way that Amazon has their "people who bought this product also bought that product." You can have all of the same analogies exist: people who use this product also use that product. And so being able to generate all that intelligence on the back end, to serve up a simple-seeming experience on the front end, is the fun part of the problem. >> Well, I'm just curious, 'cause there's so many pieces of this thing going on. What's kind of the aha moment when you're in with a new customer, and you finish the install, and you've done all the crawling of where all the datasets are, and you've got some baseline information about who's using what? I mean, what is kind of the "oh, my goodness" when they see this thing suddenly delivering results that they've never had at their fingertips before? >> Yeah, it's so funny, 'cause you can show Alation as a demo, and you can show it to people with data sets that are fake. And so we have this medical provider data set that we've got in there, and we've got a whole bunch of other data sets that are in there, and people look at it, and interestingly enough, a lot of the time they're like, oh yeah, I can kind of see it work, and I can kind of understand that. And then you turn it on against their own data, the data they have been using every single day, and literally their faces change. They look at the data and they say, oh my God, this is a dataset that Steven uses, I didn't even know that Steven thought that this data existed. And, oh my God, people are using this data in this particular way, they shouldn't be using that data at all, I thought I deprecated that dataset two years ago. And so people have all of these interesting insights, and it's interesting how much more real it gets when you turn it on against the company's systems themselves.
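That "people who used this dataset also used that dataset" pattern falls straight out of usage logs. As a rough illustration, not Alation's actual implementation, here is a minimal Python sketch with invented analyst and dataset names that counts co-usage and ranks related datasets:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical usage log: (analyst, dataset) pairs mined from query history.
usage_log = [
    ("ana", "customers"), ("ana", "transactions"),
    ("ben", "customers"), ("ben", "transactions"),
    ("cho", "customers"), ("cho", "web_clicks"),
]

def co_usage(log):
    """Count how often two datasets are used by the same person."""
    by_user = defaultdict(set)
    for user, dataset in log:
        by_user[user].add(dataset)
    pairs = defaultdict(int)
    for datasets in by_user.values():
        for a, b in combinations(sorted(datasets), 2):
            pairs[(a, b)] += 1
    return pairs

def also_used(log, dataset):
    """'People who used this dataset also used...' ranked by co-usage."""
    pairs = co_usage(log)
    scores = defaultdict(int)
    for (a, b), n in pairs.items():
        if a == dataset:
            scores[b] += n
        elif b == dataset:
            scores[a] += n
    return sorted(scores, key=scores.get, reverse=True)

print(also_used(usage_log, "customers"))  # ['transactions', 'web_clicks']
```

A real catalog would weight this by recency and query type, but even this tiny co-occurrence count surfaces the "also used" recommendation the conversation describes.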
And so that's been a really fun thing that I've seen over and over again, over the course of multiple years, where people just turn on the product and all of a sudden it changes their view of how they've been doing it all along. And that's been really fun and exciting. >> That's great, yeah, 'cause it means something to them, right? It's not numbers on a page, it's actually, it's people, it's customers, it's relationships, it's a lot of things. That's a great story. And I'm curious too, in that process, is it more often that they just didn't know that there were these other buckets of reports and other buckets of data? Or was it more that they just didn't have access to it? Or if they did, they didn't really know how to manipulate it or to integrate it into their own workflow? >> Yeah, it's kind of funny, and it's somewhat role dependent, but it's kind of all of the above. So, if you think about it, if you're a data management professional, often you kind of know what data sources might exist in the enterprise, but you don't necessarily know how people are using the data. And so you look at data and you're like, oh my God, I can't believe this team is using this data for this particular purpose. They shouldn't be doing that, they should be using this other data set, I deprecated that data set like two years ago. And then sometimes, if you're a data scientist, you find, oh my gosh, there's this new database that I otherwise didn't realize existed. And so now I can use that data and I can process that for building some new machine learning algorithms. In one case we had a customer where they had the same data set procured five different times. It was a data set that cost multiple hundreds of thousands of dollars. They were spending $2 million overall on a data set where they could have been spending literally one fifth of that amount.
And then you had sort of another case, finally, where you're basically just looking at it and saying, hey, I remember that data set. I knew I had that dataset, but I just don't remember exactly where it was. Where did I put that report? And so it's exactly the same way that you would use Google. Sometimes you use it for knowledge discovery, but sometimes you also use it for just remembering the thing you forgot. >> Right, but the thing, like I remember when people were trying to put Google search inside companies just to find records, not necessarily to support data efforts, and the knock was always, you didn't have enough traffic to drive the algorithm to really have effective search, say, across a large enterprise that has a lot of records but not necessarily a lot of activity. So, that's a similar type of problem that you must have. So is it really extracting that extra context of other people's usage that helps you get around kind of that problem of not having big numbers? >> Yeah, I mean, that kind of is fundamentally the special sauce. I mean, I think a lot of data management has been this sort of manual brute force effort, where I get a whole bunch of consultants or a whole bunch of people in the room and we do this big documentation session. And all of a sudden we hope that we've kind of painted the Golden Gate Bridge, as it were. But knowing that three to six months later, you're going to have to go back and repaint the Golden Gate Bridge all over again, if not immediately, depending on the size and scale of your company. The one thing that Google did to sort of crawl the web was to really understand, oh, if a certain webpage was linked to super often, then that web page is probably a really useful webpage. And when we crawl the logs, we basically do the exact same thing. And that's really informed getting a really, really specific day one view of your data without having to have a whole bunch of manual effort.
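The link-analysis analogy can be made concrete. As a hedged sketch, with a query log format and table names invented purely for illustration, counting how often queries reference each table gives a first-pass popularity signal, much like counting inbound links to a page:

```python
import re
from collections import Counter

# Hypothetical SQL query log; in practice this would come from the
# warehouse's own audit or query-history tables.
query_log = [
    "SELECT * FROM sales.orders o JOIN sales.customers c ON o.cust_id = c.id",
    "SELECT count(*) FROM sales.orders WHERE region = 'west'",
    "SELECT * FROM hr.payroll",
]

def rank_tables(log):
    """Rank tables by how often queries reference them: the
    'frequently linked page is probably useful' heuristic."""
    refs = Counter()
    for query in log:
        for table in re.findall(r"(?:FROM|JOIN)\s+([\w.]+)", query, re.I):
            refs[table] += 1
    return refs.most_common()

print(rank_tables(query_log))  # sales.orders comes out on top
```

A naive regex like this misses subqueries and CTEs; a production crawler would use a real SQL parser, but the ranking idea is the same.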
And that's been really just dramatic. I mean, it's allowed people to really see their data very quickly and in new, different ways. And I think a big part of this is just friction reduction, right? We'd all love to have an organized data world. We'd love to organize all the information in a company, but for anybody who has an email inbox, organizing your own inbox, let alone organizing every database in your company, just seems like a Sisyphean effort. And so being able to focus people on what's the most important thing has been the most important thing. And that's kind of why we've been so successful. >> I love it, and I love just kind of the human factors overlay that you've done, to add the metadata with the knowledge of who is accessing these things and how they're accessing it. And the other thing I think is so important, Satyen, is, we talk about innovation all the time. Everybody wants more innovation, and they've got DevOps so they can get software out faster, et cetera, et cetera. But I fundamentally believe in my heart of hearts that it's much more foundational than that, right? That if you just get more people access to more information, and then the ability to manipulate and glean knowledge out of that information, and then actually take action and have the power and the authority to take action, and you have that across everyone in the company, or an increasing number of people in the company, now suddenly you're leveraging all those brains, right? You're leveraging all that insight. You're leveraging all that kind of front-line experience to drive kind of a DevOps type of innovation with each individual person, as opposed to kind of classic waterfall with the Chief Innovation Officer doing PowerPoints in his office, on his own time, and then coming down from the mountain and handing it out to everybody to go build.
So it's really kind of a paradox that by adding more human factors to the data, you're actually making it so much more usable and so much more accessible, and ultimately more valuable. >> Yeah, it's funny, there's this new term of art called data intelligence. And it's interesting because there's lots of people who are trying to define it, and I think IDC has got a definition and you can go look it up. But if you think about the core word of intelligence, it basically boils down to the ability to acquire information or skills, right? And so if you then apply that to companies and data, data intelligence, it stands to reason, is sort of the ability for an organization to acquire information or skills leveraging their data. And that's not just for the company, but it's for every individual inside of that company. And we talk a lot about how much change is going on in the world, with COVID, and with wildfires here in California, and then obviously with the elections, and then with new regulations, and with preferences, 'cause now that COVID happened, everybody's at home. So what products and what services do you have to deliver to them? And all of this change is basically what every company has to keep up with to survive, right? If capitalism is creative destruction, the world's getting destroyed, unfortunately, more often than we'd like it to be. >> Right. >> And so then you're sitting there going, oh my God, how do I deal with all of this? And it used to be the case that you could just build a company off of being really good at one thing. Like you could just be the best logistics delivery company. But that was great yesterday, when you were delivering to restaurants. But since there are no restaurants in business, you would just have to change your entire business model and be really good at delivering to homes. And how do you go do that?
Well, the only way to really go do that is to be really, really intelligent throughout your entire company. And that's a function of data. That's a function of your ability to adapt to the world around you. And that's not just some CEO, 'cause literally by the time it gets to the CEO, it's probably too late. Innovation's got to be occurring on the ground floor, and people have got to repackage things really quickly. >> I love it, I love it. And I love the other human factor that we talked about earlier. It's just, people are curious, right? So if you can make it easy for them to fulfill their curiosity, they're going to naturally seek out the information and use it, versus if you make it painful, like a no-fun lesson, then people's eyes roll and they don't pay attention. So I think that it's such an insightful way to address the problem, and really the opportunity. And the other piece, I think, that's so different, going back to the card catalog analogy earlier, right, is there was a day when all the information was in that library. And if you went to the UCLA psych library, every single reference that you could ever find was in that library. I know, I've been there, it was awesome. But that's not the way anymore, right? You can't have all the information, and it's pulling your own information along with public information, and as much information as you can, where you start to build that competitive advantage. So I think it's a really great way to kind of frame this thing, where information in and of itself is really not that valuable. It's about the context, the usability, the speed of accessibility, and that democratization is where you really start to get these force multipliers and start using data, as opposed to just talking about data. >> Yeah, and I think that that's the big insight, right? Like if you're a CEO and you're kind of looking at your Chief Data Officer or Chief Data and Analytics Officer.
The real question that you're trying to ask yourself is, how often do my people use data? How measurable is it? What is the level at which people are making decisions leveraging data? And that's something that you can talk about in a board room, and you can talk about in a management meeting, but that's not where the question gets answered. The question really gets answered in the actual behaviors of individuals. And the only way to answer that question, if you're a Chief Analytics Officer or somebody who's responsible for data usage within the company, is by measuring it and managing it and training it, and making sure it's a part of every process and every decision, by building habits. And building those habits is just super hard. And that's, I think, the thing that we've chosen to be sort of the best in the world at, and it's really hard. I mean, we're still learning about how to do it, from our customers, and then taking that knowledge and kind of building on it over time. >> Right, well, that's fantastic. And if it wasn't hard, it wouldn't be valuable. So those are always the best problems to solve. So Satyen, really enjoyed the conversation. Congratulations to you and the team on the new release. I'm sure there's lots of sweat, blood and tears that went into that effort. So congrats on getting that out, and really great to catch up. Look forward to our next catch-up. >> You too, Jeff, it's been great to talk. Thank you so much. >> All right, take care. All right, he's Satyen, I'm Jeff, you're watching theCUBE. We'll see you next time. Thanks for watching. (ethereal music)
Chris Wright | Red Hat AnsibleFest 2020
>> If you want to innovate, you must automate at the edge. I'm Chris Wright, chief technology officer at Red Hat. And that's what I'm here to talk to you about today. So welcome to day two of AnsibleFest 2020. Let me start with a question: do you remember 3G, when you first experienced mobile data connections? The first time that internet on a mobile device was available to everyone? It took forever to load a page, but it was something entirely different. It was an exciting time. And then came 4G, and suddenly data connections actually became usable. Together with the arrival of smartphones, people were suddenly online all the time. The world around us changed immensely. Fast forward to today, things are changing yet again. 5G is entering the market. And it's an evolution that brings about fundamental change in how connections are made and what will be connected. Now it's not only the people anymore who are online all the time; devices are entering the stage: sensors, industrial robots, cars, maybe even the jacket you're wearing. And with this revolutionary change in telecommunications technology, another trend moves into the picture: the rise of edge computing. And that's what I'll be focusing on today. So what is edge computing, exactly? Well, it's all about data. Specifically, moving compute closer to the producers and consumers of data. Let's think about how data was handled in the past. Previously, everything was collected, stored and processed in the core of the data center. Think of server racks, one after the other. This was the typical setup. And it worked as long as the environment was similarly traditional. However, with the new way devices are connected and how they work, we have more and more data created at the edge and processed there immediately. Gathering and processing data takes place close to the application users, and close to the systems generating data.
The fact that data is processed where it is created means that the computing itself now moves out to the edge as well. Outside of the traditional data center barriers into the hands of application users. Sometimes, literally into the hands of people. Look at your smartphone next to you, is one good example. Data sources are more distributed. The data is generated by your mobile phone, by your thermostat, by your doorbell, and data distribution isn't just happening at home, it's happening in businesses too. It's at the assembly line, high on top of a cell tower, by a pump deep down in a well, and at the side of a train track, every few miles for thousands of miles. This leads to more distributed computing overall. Platforms are pushed outside the data center. Devices are spread across huge areas in inaccessible locations, and applications run on demand close to the data. Often even the ownership of the devices is with other parties. And data gathering and processing is only partially under our direct control. That is what we mean by edge computing. And why is this even interesting for us, for our customers? To say it with the words of a customer, edge computing will be a fundamental enabling technology within industrial automation. Transitioning how you handle IT from a traditional approach, towards a distributed computing model, like edge computing, isn't necessarily easy. Let's imagine how a typical data center works right now. We own the machines, create the containers, run the workloads and carefully decide what external services we connect to, and where the data flows. This is the management sphere we know and love. Think of your primary OpenShift cluster for example. With edge computing, we don't have this level of ownership, knowledge or control. The servo motors in our assembly line are black boxes controlled only via special APIs. 
The small devices next to our train tracks run an embedded operating system, which does not run our default system management software. And our doorbell is connected to a cloud, which we do not control at all. Yet we still need to be able to exercise control; our business processes suddenly depend on what is happening at the edge. That doesn't mean we throw away our ways of running the data centers; in fact, the opposite is true. Our data centers are the backbone of our operations. In the data center, we still tie everything together and run our core workloads. But with edge computing, we have more to manage. To do so, we have to leave our comfort zones and reach into the unknown. To be successful, we need to get data, tools and processes under management and connect it all back to our data center. Let's take train tracks as an example. We're in charge of a huge network: thousands of miles of tracks zig-zagging across the country. We have small boxes next to the train tracks every few miles, which collect data on the passing trains, take care of signaling and so on. These trackside boxes are extremely rugged devices, and they're doing their jobs in the coldest winter nights and the hottest summer days. One challenge in our operation is, if we lose connection to one box, we have to stop all traffic on this track segment: no signal, no traffic. So we reroute all of the traffic, passengers, cargo, you name it, via other track segments. And while those track segments now suddenly have unexpected traffic congestion and so on, we send a maintenance team to figure out why we lost the signal, do root cause analysis, repair what needs to be fixed and make sure it all works again. Only then can we reopen the segment. As you can imagine, just bringing a maintenance team out there takes time; finding the root issue and solving it also takes time. And all the while, traffic is rerouted. This can amount to a lot of money lost.
Now imagine these little devices get a new software update and are now able to report not only signals sent across the tracks, but also the signal quality. And with those additional data points, we can get to work. Subsequently, we can see trends. And the device itself can act on these trends. If the signal quality is getting worse over time, the device itself can generate an event, and from this event, we can trigger followup actions. We can get our team out there in time, investigating everything before the track goes down. Of course the question here is, how do you even update the device in the first place? And how do you connect such an event to your maintenance team? There are three things we need to be able to properly tie events and everything together to answer this challenge. First, we need to be able to connect through the last mile. We need to reach out from our comfort zones, down the tracks and talk to a device, running a special embedded OS on a chip architecture we don't have in our data center. And we have thousands of them. We need to manage at the edge in a way suited to its scale. Besides connecting, we need the skills to address our individual challenges of edge computing. While the train track example is a powerful image, your challenge might be different. Your boxes might be next to an assembly line or on a shipping container or a unit under an antenna. Finally, the edge is about the interaction of things. Without our data center or humans in the equation at all. As I mentioned previously, in the end, there is an event generated by the little box. We have to take the event and first increase the signal strength temporarily between this box and the other boxes on either side, to buy us some more time. Then we ask the corporate CMDB for the actual location of that box, put all this information into a ticket, assign the ticket to the maintenance team at high priority to make sure they get out there soon. 
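The chain of reactions described here, boost the neighboring boxes, look up the failing box in the CMDB, open a high-priority ticket, maps naturally onto an automation workflow. Below is a rough Ansible sketch of what such an event-triggered playbook could look like; the inventory names, the `signalctl` command and the REST endpoints are all hypothetical illustrations, not a real product integration:

```yaml
# Sketch of a playbook that a signal-degradation event could trigger.
# Host names, commands, URLs and payloads are invented for illustration.
- name: React to degraded signal quality on a track segment
  hosts: "{{ degraded_box }}"
  gather_facts: false
  tasks:
    - name: Temporarily boost signal strength on the neighboring boxes
      ansible.builtin.command: signalctl --boost 20
      delegate_to: "{{ item }}"
      loop: "{{ neighbor_boxes }}"

    - name: Look up the box's physical location in the corporate CMDB
      ansible.builtin.uri:
        url: "https://cmdb.example.com/api/boxes/{{ inventory_hostname }}"
        return_content: true
      register: cmdb
      delegate_to: localhost

    - name: Open a high-priority ticket for the maintenance team
      ansible.builtin.uri:
        url: "https://tickets.example.com/api/issues"
        method: POST
        body_format: json
        body:
          team: track-maintenance
          priority: high
          summary: "Signal quality degrading on {{ inventory_hostname }}"
          location: "{{ cmdb.json.location }}"
      delegate_to: localhost
```

The point of the sketch is the shape of the workflow: one event fans out into actions against the device, the CMDB and the ticketing system, all expressed in the same language.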
As you can see, our success here critically depends on our ability to create an environment with the right management skills and technical capabilities, one that can react decentrally in a secure and trusted way. And how do we do these three things? With automation. Yeah, it might not come as much of a surprise, right? However, there is a catch. Automation as a single technology product won't cut it. It's tempting to say that an automation product can solve all these problems. Hey, we're at a tech conference, right? But that's not enough. Edge computing is not simple. And the solution to the challenges is not simply a tool where we buy three buckets full and spread it across our data center and devices. Automation must be more than a tool. It must be a process, constantly evolving, iterating on and on. We only have a chance if we embed automation as a fundamental component of an organization, and use it as a central means to reach out to the last mile. And the process must not focus on technology itself, but on people: the people who are in charge of the edge IT, as well as the people in charge of the data center IT. Automation can't be a handy tool that is used occasionally; it should become the primary language for all people involved to communicate in. This leads to cooperation and common ground to further evolve the automation, and at the same time ensures that the people build and improve the necessary skills. But with the processes and the people aligned, we can shed light on the automation technology itself. We need a tool set that is capable of doing more than automating an island here and a pocket there. We need a platform powerful enough to provide the capabilities we need and support the various technologies, devices, and services out at the edge. If we connect these three findings, we come to a conclusion. To automate the edge, we need a cultural change that embraces automation in a new and fundamental way.
As a new language, integrating across teams and technology alike. Such a unified automation language speaks natively with the world out there as well as with our data centers, at any scale. And this very same language is spoken by domain experts, by application developers, and by us as automation experts, to pave the way for the next iteration of our business. And this language has the building blocks to create new interfaces, tools, and capabilities, to integrate with the world out there and translate the events and needs into new actions, being the driving motor of the IT at the edge and evolving it further. And yes, we have this language right here, right now. It is the Ansible language. If we come back to our train track one more time, it is Ansible that can reach out and talk to our thousands of little boxes sitting next to the train tracks. With the Ansible language, the domain experts for the boxes can work natively together with the train operations experts and the business intelligence people. Together, they can combine their skills to write workflows in a language they can all understand, and where the deep-down domain knowledge is encapsulated away. And the Ansible platform offers the APIs and components to react to events in a secure and trusted way. If there's one thing I'd like you to take away from this, it is that edge computing is complex enough. But luckily, we do have the right language, the right tools, and an awesome community here with you at our fingertips, to build upon and grow even further. So let's not worry about the tooling, we have that covered. Instead, let's focus on making that tool great. We need to be able to execute automation anywhere we need it: at the edge, in the cloud, in other data centers. In the end, just like with serverless functions, the location where the code is actually running should not matter to us anymore.
Let's hear this from someone who is right at the core of the development of Ansible, over to Matt Jones, our automation platform architect.