Webb Brown & Alex Thilen, Kubecost | AWS Startup Showcase S2 E1 | Open Cloud Innovations


 

>>Hi, everyone. Welcome to theCUBE's presentation of the AWS Startup Showcase: Open Cloud Innovations. This is season two, episode one of the ongoing series covering the exciting startups from the AWS ecosystem, and the theme of this episode is the open source community and open cloud innovations. I'm John Furrier, your host, and I've got two great guests: Webb Brown, CEO of Kubecost, and Alex Thilen, head of business development at Kubecost. Gentlemen, thanks for coming on theCUBE for the AWS Startup Showcase. >>Thanks for having us, John. Great to be back, really excited for the discussion we have here. >>You're CUBE alumni from many KubeCons ago. You guys are in a hot area right now, monitoring and reducing Kubernetes spend. So first of all, we know one thing for sure: Kubernetes is the hottest thing going on because of all the benefits. Take us through your macro view of this market. Kubernetes is growing; what's going on with the company? What is your company's role? >>Yeah, we've definitely seen this growth firsthand with our customers, in addition to the broader market, and we believe that's really indicative of the value that Kubernetes provides: faster time to market, more scalability, improved agility for developer teams, and more beyond that. It's a really exciting time for our company and for the broader cloud native community. What that means for our company is that we're scaling up quickly to meet and support our users. Every metric of our company has grown about 4x over the last year, including our team.
And the reason that one is the most important is that the more folks we have and the larger our company is, the better we can support our users and help them monitor and reduce those costs, which ultimately makes Kubernetes easier to use for customers and users out there in the market. >>Okay. So I want to get into why Kubernetes is costing so much. Obviously the growth is there, but before we get there, what's the background? What's the origination story? Where did Kubecost come from? Obviously you have a great name: cost, Kube, you reduce costs in Kubernetes. But what's the origination story? How did you get here? What itch were you scratching? What problem are you solving? >>Yeah, John, you guessed it; oftentimes the name is a dead giveaway. We build cost monitoring and cost management solutions for Kubernetes and cloud native. The backstory is that our founding team was at Google before starting the company, working on infrastructure monitoring, both on internal infrastructure and on Google Cloud. We had a handful of teammates join the Kubernetes effort in the early days, and we saw a lot of teams struggling with the problems we were solving internally at Google and are still solving today. To speak to those problems a little: you touched on how scale alone is bringing this to the forefront. There are now many billions of dollars being spent on Kubernetes, which is making this a business-critical question in lots of organizations. That, combined with the dynamic nature and complexity of Kubernetes, makes costs really hard to manage when you scale across a very large organization.
So teams turn to Kubecost today, thousands of them, to get monitoring in place, including alerts, recurring reports, and dynamic management insights or automation. >>Yeah. I know we talked at KubeCon before, Webb, and I want to come back to the problem statement, because when you have these emerging growth areas that are really relevant and enabling technologies, you move to the next point of failure. You're scaling these abstraction layers, more services are being turned on, and more Kubernetes clusters are out there. So I have to ask: what is the main cost-driver problem happening in the Kubernetes space that you're addressing? Is it sheer volume? Different classes of services? Different things working together, different monitoring tools? Is it not being a platform? Take us through the problem area as you see it. >>Yeah, the number one problem area is still what the CNCF FinOps survey highlighted earlier this year: approximately two thirds of companies still don't have even baseline visibility into spend once they move to Kubernetes. So even if you had a really sophisticated chargeback program in place when you were building all your applications on VMs, after moving to Kubernetes most teams can't answer these really simple questions. We're able to give them that visibility in real time, so they can start breaking the problem down. They can start to see that, okay, it's these Deployments or StatefulSets that are driving our costs, or no, it's actually these workloads talking to S3 buckets that are really driving egress costs. So it's first and foremost about getting the visibility, getting the eyes and ears.
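The kind of breakdown described above, joining metered resource usage with provider rates and rolling it up per workload, can be sketched in a few lines of Python. All workload names, quantities, and rates below are hypothetical, not real Kubecost data or real cloud prices:

```python
# Hedged sketch: join per-workload resource usage with (made-up) provider
# rates to get a cost breakdown per workload, the core of cost visibility.

# Hourly/unit rates, illustrative only; real rates come from billing data.
RATES = {"cpu_core_hours": 0.031, "ram_gb_hours": 0.004, "egress_gb": 0.09}

# Usage metered from the cluster, e.g. via Prometheus (values are made up).
usage = [
    {"workload": "statefulset/db", "cpu_core_hours": 720, "ram_gb_hours": 2880, "egress_gb": 5},
    {"workload": "deployment/api", "cpu_core_hours": 240, "ram_gb_hours": 480, "egress_gb": 900},
]

def cost_breakdown(usage_rows, rates):
    """Return {workload: total_cost}, summing each metered dimension times its rate."""
    out = {}
    for row in usage_rows:
        out[row["workload"]] = round(
            sum(qty * rates[dim] for dim, qty in row.items() if dim in rates), 2
        )
    return out

print(cost_breakdown(usage, RATES))
```

In this made-up example the API deployment's egress, not its CPU, dominates its cost, which is exactly the kind of non-obvious insight per-workload visibility is meant to surface.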
We're able to give that visibility to teams in real time, at the largest-scale Kubernetes clusters in the world. Again, most teams don't have it when they first start working with us, and not having it can have a whole bunch of downstream impacts, including not getting costs right, not getting performance right, et cetera. >>Well, let's get into those downstream problems and situations. But the first question, just to throw a naysayer comment at you: oh wait, I have all this cost monitoring stuff already. What's different about Kubernetes? Why isn't my existing tool going to work for me? How do you answer that? >>Yeah. First and foremost, containers are very dynamic: they're often complex, often transient, and consume variable cluster resources. As much as this lets teams construct powerful solutions, tracking the associated costs across those variables can be really difficult. That's why a solution like Kubecost, purpose-built for developers using Kubernetes, is really necessary; some of the older, traditional cloud cost optimization tools are just not fit for this space specifically. >>Yeah, I think that's exactly right, Alex, and I would add that the way software is being architected, deployed, and managed is fundamentally changing with Kubernetes. It is deeply impacting every part of the software delivery process, and through that, decisions are getting made and engineers are ultimately being empowered to make more cost-impacting decisions.
And so we've seen organizations that get real-time tooling built for Kubernetes and cloud native benefit massively throughout their culture, their costs, their performance, et cetera. >>Well, can you give a quick example? Because I think that's a great point: the architectures are shifting and changing, there are new things coming in, so it's not like you can take an old tool and just retrofit it; sometimes that's awkward. What specific changes with Kubernetes are these environments leveraging? >>Yeah. One would be all these Kubernetes primitives and concepts that didn't exist before. I'm not managing just a generic workload; I'm managing a StatefulSet, or three ReplicaSets. So having a language tailored to all of these Kubernetes concepts and abstractions matters. Secondly, we're seeing this very obvious push towards microservices, where you're typically shipping faster and teams are making more distributed or decentralized decisions, with no single point where you can gate-check everything. That's a great thing for innovation; we can move much faster. But for some teams, not using a tool like Kubecost means sacrificing having a safety net, or guardrails, in place to help manage and monitor this. And lastly, a solution like Kubecost, because it's built for Kubernetes, sits in your infrastructure. It can be deployed with a single Helm install, and you don't have to share any data remotely. Because it's listening to your infrastructure, it can give you data in real time.
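As a concrete sketch of that single Helm install: the repository URL and chart name below follow Kubecost's public Helm chart documentation, but treat the exact commands and resource names as illustrative rather than authoritative install docs:

```shell
# Add the Kubecost Helm repository (URL per Kubecost's public docs; verify before use)
helm repo add kubecost https://kubecost.github.io/cost-analyzer/
helm repo update

# Install the cost-analyzer chart into its own namespace;
# cost data stays inside the cluster
helm install kubecost kubecost/cost-analyzer \
  --namespace kubecost --create-namespace

# Port-forward the dashboard locally to start exploring allocations
kubectl port-forward -n kubecost deployment/kubecost-cost-analyzer 9090
```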
And so we're moving to a world where you can make real-time automated or manual decisions, as opposed to waiting for a bill a day, two days, or a week later, when it may already be too late to avoid the problem. >>Or you've got the extra costs, nobody wants that, and you have to fight for a refund because somebody threw a switch, or wasn't paying attention, or there was human error in code, since a lot of automation is going on. So I can see that as a benefit. I've got to ask about developer uptake, because you mentioned a good point: another key modern dynamic is that developers are in the moment, making decisions on security, on policy, on things to do in the CI/CD pipeline. So if I'm a developer, how do I engage with Kubecost? Can I just download something? Is it easy? What's the onboarding process for your customers? >>Yeah, great question. First and foremost, this gets to the roots of our company and the roots of Kubecost, which was born in open source; everything we do is built on top of open source. So the answer is you can go out and install it in minutes, like thousands of other teams have. The recommended or preferred route on our side is a Helm install. Again, you don't have to share any data remotely; you can fully lock down namespace egress on the Kubecost namespace, for example. And in minutes you'll have this visibility and can start to see really interesting metrics that, again, most teams we start working with either didn't have in place at all, or had only as a rough estimate, maybe from a Kubecost Grafana dashboard they had installed. >>How does Kubecost provide the visibility across the environment? How do you actually make it work?
>>Yeah, we sit in your infrastructure. We have integrations for on-prem custom pricing sheets, and with cloud providers we integrate with your actual billing data. We listen for events in your infrastructure, say a new node coming up or a new pod being scheduled, take that information, and join it with your billing data, whether on-prem or in one of the big three cloud providers. Then, in real time, we can tell you the cost of any dimension of your infrastructure, whether it's one of the backing virtual assets you're using or one of the application dimensions: a label, an annotation, a namespace, a pod, a container, you name it. >>Awesome. Alex, what's your take on the landscape with customers as they look at cost reductions? Everyone loves cost reductions, and I certainly love the safety-net comment Webb made, but at the end of the day Kubernetes is not so much a cost driver; it's more of an "I want the modern apps faster." So people buying Kubernetes usually aren't price sensitive, but they also don't want to get gouged on mistakes. Where is the customer path around Kubernetes cost management and reduction at scale? >>Yeah. One thing we're looking forward to this upcoming year, just like last year, is continuing to work with the various tools customers are already using and meeting those customers where they are. Some examples are working with CI/CD tools; we have a great integration with Armory Spinnaker to help customers take the insights from Kubecost and deploy them in a more efficient manner.
We're also working with a lot of partners, like Grafana to help customers visualize our data, and integrating with Rancher and other management platforms for Kubernetes. All of that is to bring cost more to the forefront of the conversation when folks are using Kubernetes, and to provide that data to customers in all the various tools they use across the ecosystem. So we really want to surface this and make cost more of a first-class citizen across the ecosystem and the community partners. >>What's your strategy on the biz dev side? As you look at a growing ecosystem with KubeCon and CNCF, as you mentioned earlier, the community keeps growing fast; the number of people entering is amazing. But now that the S-curve is kicking in, integration, interoperability, and openness are always a key part of company success. What's Kubecost's vision for how you're going to do biz dev going forward? >>Absolutely. Our product is open source, and that is deeply important to our company; we're always going to continue to drive innovation in our open source product. As Webb mentioned, we have thousands of teams using our product, most of them on the free tier, and we want to make sure that stays available for the community and that we continue that development for the community. Part of that is making sure we're working with folks not just on the commercial side but also on those open source products: Grafana is open source, Spinnaker is open source.
I think a lot of the biz dev strategy is sticking to our roots and making sure we continue to drive a strong open source presence and product for our community of users. >>Keep it open source and commercial, and keep it stable. Well, I've got to ask, obviously the wave is here. I always joke, going back, that I remember when the word Kubernetes was just being kicked around in the OpenStack days, many years ago; it's the luxury of being an old CUBE guy, eleven years doing theCUBE, all fun. But I remember talking in the early days: with Kubernetes, if it worked, the phrase was "a rising tide floats all boats." I'd say right now the tide is rising pretty well, and you're in a good spot with Kubecost. Are there areas you see coming where cost monitoring is going to expand more? What's the aperture of the cost monitoring space at your end that you think you can address? >>Yeah, John, I think you're exactly right. This tide has risen and it just keeps rising; the sheer number of organizations using Kubernetes at massive scale is mind-blowing at this point. What we see is a really natural pattern: teams start using a solution like Kubecost with, again, limited or no visibility; they get that visibility in place, and then develop an action plan from there. That could be different governance solutions like alerts, management reports, engineering team reports, et cetera. But it's really about phase two: taking that information and starting to do something with it. We are seeing, and expect to see, more teams turn to an increasing amount of automation to do that.
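Phase two, doing something with the data, often starts as simply as a recurring budget check. Here is a minimal sketch of that idea; the namespaces, budgets, and spend figures are hypothetical, and in a real setup the spend numbers would come from a cost-allocation API rather than a hard-coded dict:

```python
# Hedged sketch: flag namespaces whose month-to-date spend exceeds a budget.
# All figures are made up; real ones would come from an allocation API.

budgets = {"team-payments": 4000.0, "team-search": 2500.0}
spend_mtd = {"team-payments": 4612.40, "team-search": 1890.75}

def over_budget(spend, budgets):
    """Return [(namespace, overage)] for namespaces past budget, worst first."""
    offenders = [
        (ns, round(cost - budgets[ns], 2))
        for ns, cost in spend.items()
        if ns in budgets and cost > budgets[ns]
    ]
    return sorted(offenders, key=lambda item: item[1], reverse=True)

for ns, overage in over_budget(spend_mtd, budgets):
    # In practice this would be wired to Slack, email, or a pager.
    print(f"ALERT: {ns} is ${overage} over budget")
```

Run on a schedule, a check like this is the simplest form of the alerting and governance workflow described above; automated remediation builds on the same data.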
But ultimately that comes after you have this baseline, highly accurate visibility that you feel comfortable using to make potentially very critical decisions related to reliability and performance within your infrastructure. >>Yeah, getting it right is key. You mentioned baseline; let me ask a quick follow-up on that. How fast can companies get there? When you say baseline, there are probably levels of baseline, and obviously all environments are different. But anecdotally, what do you see as that baseline, and how fast do teams get there? Is there a certain minimum viable configuration or architecture? Take us through your thoughts. >>Yeah, great question. It definitely depends on organizational complexity, and it can depend on application complexity as well. But most important is the array of cost centers and departments, the complexity across the org, as opposed to the technology. For less complex organizations we've seen it happen in hours, or a day or less, because with one or two smaller engineering teams they can share that visibility really quickly, and they may be familiar with Kubernetes and just get it right away. For larger organizations we've seen it take up to 90 days, where it's really about infusing this into their DNA when there may not have been visibility or transparency before. The bulk of the time there is the cultural element: awareness building and buy-in throughout the organization. >>Awesome. Well, guys, great product. Congratulations. Final question for both of you: it's early days in Kubernetes, and even though the tide is rising, it keeps rising and more boats are coming in.
The harbor is getting bigger, whatever metaphor you want to use; it's really going great. You're seeing customer adoption, we're seeing cloud native, and I'm told my friends at Docker on the container side are going crazy as well. Everything's going great in cloud native. What's the vision on the innovation? How do you keep pushing the envelope on value, in open source and in the commercial area? >>Yeah, there are many areas here, and I know Alex will have more to add. One area I know is relevant to his world is just more really interesting integrations. He mentioned Kubecost insights powering decisions in, say, Spinnaker; I think we'll see more and more of this toolchain coming together and the benefits of all that interoperability. That, combined with more and more intelligence and automation being deployed; again, that comes only after teams are really comfortable with the information and the decisions being made from it. But increasingly we see the community ready to leverage this information in really powerful ways, because as teams scale there's just a lot to manage, and leveraging automation can supercharge them in really impactful ways. >>Awesome, great integrations. Alex, expand on that: a whole different kind of business development integration, when you have lots of toolchains, lots of platforms and tools coming together, sharing data, working together, automating together. >>Yeah. I think it's going to be super important to keep a pulse on the new tools, make sure we're on the forefront of what customers are using, and continue to meet them where they are. And a lot of that, honestly, is working with AWS too, right?
They have great services in EKS and managed Prometheus, so we want to make sure we continue to work with that team and support their services as they launch as well. >>Great stuff. I've got a couple of minutes left, so I'll throw in one more question, since I've got two great experts here. A little change of pace, more of an industry question with no wrong answer, but I'd love your reaction to the SaaS conversation. Cloud has changed what used to be SaaS. SaaS was software as a service; now that you have automation, horizontally scalable cloud and edge, and vertical machine learning with data-driven insights, a lot of things in the stack are changing. So the question is, what does the new SaaS look like? Is it the same as the old SaaS, or is it a new kind of refactoring of what SaaS is? What's your take? >>Yeah. Webb, please jump in wherever, but in my view it's a spectrum; there are customers on both ends. Some customers just want a fully hosted, fully managed product and would benefit from the luxury of not having to do any infrastructure management or patching; they just want to consume a great product. On the other hand, other customers are in more highly regulated industries or have security requirements, and they're going to need things deployed in their own environment. Right now Kubecost is self-hosted, but in the future we want to make sure we have versions of our product available for customers across that entire spectrum: if somebody wants the benefit of not managing anything, they can use a fully managed, multi-tenant SaaS, and other customers can use a self-hosted product.
And then there are customers in the middle, where certain components are okay to be SaaS or hosted elsewhere, but other components are really important to keep in their own environment. So it's really across the board; it will vary customer by customer, and it's important to have options for all of them. >>Great. Guys, is the new SaaS the same as the old SaaS? What's the SaaS playbook now? >>I think it is such a deep and interesting question, one that will touch so many aspects of software and of our lives. I predict that we'll continue to see this tension, a real trade-off, between convenience on the one hand and security, privacy, and control on the other. And as Alex mentioned, different organizations will make different decisions based on their relative trade-offs. I think it's going to be of epic proportions; we'll look back on this period and say this was one of the foundational questions of how to get it right. We ultimately view it as, again, wanting to offer choice, to make every choice great, and to let our users pick the right one given their own profile and needs. >>I think that's a great comment: choice. And you now have dimensions of implementations, right? Multi-tenant, custom, regulated, secure, with all these controls. No one SaaS rules the world, so to speak, so again, it's a great dynamic. But ultimately, if you want to leverage the data, is it horizontally addressable? Multi-tenant? Again, this is a whole other ball game we're watching closely, and you guys are in the middle of it with Kubecost as you create that baseline for customers. Congratulations, and great to see you, Webb. Thanks for coming on.
Appreciate it. Thank you so much for having us again. >>Okay, great conversation. The AWS Startup Showcase: Open Cloud Innovations, here. Open source is driving a lot of value as it goes commercial, on to the next generation. This is season two, episode one of the AWS startup series with theCUBE. Thanks for watching.

Published Date: Jan 26, 2022



Vijay Ramachandran, VMware | VMworld 2021


 

>>Welcome to theCUBE's coverage of VMworld 2021. I'm Lisa Martin. Vijay Ramachandran joins me next, VP of product management at VMware. Vijay, welcome back to the program. >>Thank you. >>We're going to be talking about disaster recovery and VMware Cloud DR. The world has had a lot of challenges with respect to cybersecurity in the last 18 months, so I'd like to get your thoughts on the disaster recovery as a service, or DRaaS, market. What are some of the key trends? Anything of particular interest that you've noticed in the last year and a half? >>Yeah, you're right. In the last year, since the pandemic, a lot of industries have wanted to deploy DR systems and protect themselves from ransomware and other threats. Analysts are predicting that the disaster recovery as a service market will reach about $10 billion by 2025. We introduced VMware Cloud Disaster Recovery at last VMworld, with the acquisition of a company called Datrium, and since then we've had tremendous success, largely driven by two key trends we see in the market. One is that a lot of our customers have regulatory mandates to have a DR plan in place. And second is ransomware; we'll talk about ransomware a lot more in this interview, but it is top of mind for a lot of customers. Those two trends combined are making a huge push to protect all the data against disasters. >>What types of customers, and any particular industries, do you see keenly adopting VMware Cloud DR? Anything you find interesting? >>Yeah, it's actually interesting: it's not a single vertical or size of customer. What we're finding is that a lot of the regulated industries have mandates to do DR, but their existing DR and data protection systems are extremely complex and not cost effective.
So customers are being asked to do more with less, and a lot of them are looking for a cost-effective way to protect all their data. And ransomware is not something that impacts any single vertical or any single size of customer; it impacts everyone. So we're seeing interest from all different verticals and sizes of customers across the board. >>Yeah, you're right, ransomware is a universal problem, and as we saw in the last few months, a problem of national public health, safety, and security concern. You mentioned customers with regulatory mandates, and those that need to implement DR against ransomware, as we talked about. You also mentioned that legacy solutions are costly and complex. Talk to me about some of the challenges with those legacy solutions that you're helping customers address with VMware Cloud Disaster Recovery. >>Yeah. There are a few trends emerging in the whole data protection space. One is that customers want to do more with their data. With legacy systems, customers are able to replicate the data, but it sits idle and unused, which is extremely expensive. Secondly, from an operational standpoint, backup and DR are merging into a single solution, and ransomware protection is becoming a critical use case, as we talked about. So customers are not looking to deploy different systems for different types of protection; they're looking for a single solution that lowers cost and gives them enough protection across all these different use cases.
>>And this is where VMware Cloud Disaster Recovery comes into play: we are able to use the data we protect for other uses, such as ransomware recovery, data protection, and disaster recovery. A single copy of data can be used for multiple use cases; that's number one. And secondly, it's a very expensive proposition to do on-prem-to-on-prem DR, with provisioned capacity just sitting idle. So where VCDR comes into play is that customers are able to protect the data into the cloud, store it in a cost-effective manner, and then use the data only when it's required, either for failover or during disasters and ransomware attacks. That's how we differentiate in the market today. >>Dig through some of those differentiators, if you will, one by one, because there's so much choice out there. There are a lot of backup solutions, some providing backup only, some also doing DR, depending on how customers have deployed and how they're using the technology. But when you're in customer conversations, what are the three things you articulate about VMware Cloud DR that really help it stand out above the pack? >>Number one is the cost. We're able to bring down the cost of disaster protection by 65%, and that's one big value proposition we highlight in our solution. Number two, a lot of our customers are becoming environmentally conscious, and because we're able to store the data in a more cost-effective, more efficient manner in the cloud, they're able to bring down their carbon footprint by 80% compared to a legacy disaster recovery and data protection solution.
And the third major value proposition from VMware is that we're able to integrate the BCDR solution, the disaster recovery and data protection solution, so well into our ecosystem that customers can operationally, easily recover data into VMware Cloud. So for the VMware ecosystem, it just becomes a natural, logical extension of their toolset. >>That's huge, having a console that you're familiar with. The whole point of backing up data, and the need to recover from a disaster, is to be able to restore the data in a timely fashion. I've talked with a lot of customers who were using legacy technologies, and that was one of the biggest challenges: backup windows weren't completing, or they simply couldn't recover data that was either lost in a ransomware attack or accidentally lost. That recovery is what it's all about, right? >>That's exactly right. And so at this VMworld we are introducing key enhancements and features that specifically speak to that pain point you just mentioned. We are bringing down the replication interval to 30 minutes; in other words, your delta is at a 30-minute interval now, compared to hours in a traditional backup system. And number two, we are extending VMware Cloud Disaster Recovery, which has always had any number of snapshots, with single file recovery. So especially for the ransomware use case, customers are quickly able to figure out which files need to be restored, and they're able to restore those files individually rather than restoring an entire VM or the entire data center. That becomes critical functionality for ransomware recovery.
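To make the 30-minute RPO figure concrete, here is a purely illustrative sketch (not VMware tooling) of the worst-case data-loss window under interval-based replication:

```python
from datetime import datetime, timedelta

def worst_case_data_loss(last_replication: datetime,
                         failure_time: datetime) -> timedelta:
    """Data written after the last completed replication is lost
    if disaster strikes before the next cycle finishes."""
    return failure_time - last_replication

# A 30-minute replication interval bounds the delta near 30 minutes;
# a nightly legacy backup can leave a window of many hours.
last = datetime(2021, 9, 27, 12, 0)
fail = datetime(2021, 9, 27, 12, 29)
print(worst_case_data_loss(last, fail))  # 0:29:00
```

The point of the sketch is simply that shrinking the replication interval shrinks the upper bound on lost data, which is what moving from nightly backups to 30-minute replication buys.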
And the other major announcement being made at VMworld is the integration into VMware Cloud, such that customers who are migrating workloads into VMware Cloud on AWS have the opportunity to protect that data easily with VCDR as well. >>Got it. I'd love to get an example of a customer that you helped to recover from ransomware. As we mentioned, it's on the rise; in fact, I was looking at some cybersecurity data in the last few weeks, and in the first half of calendar 2021 it was up nearly 11x, and the hockey stick looks like it's going to continue up and to the right. So give me an example of a customer that you helped recover after they were hit with ransomware. >>Yes. In fact, before I do, one statistic I just saw recently: every eleven seconds there is a ransomware attack somewhere in the world. So it is top of mind for a lot of CEOs across the globe. Now, I'll give you an example of one customer that we helped protect their data against ransomware. Merrick is the customer name; it's a public reference, it's on the VMware website. Just like we talked about before, they had legacy systems for protecting their data: they had backup systems and they had disaster recovery systems. And the big pain point was that they knew they needed to protect against ransomware, but they had two different systems, backup and disaster recovery, and their cost was high because they were replicating the live production data across different sites.
So they were looking to lower the cost of disaster recovery, but more importantly, they were looking to protect themselves against potential ransomware threats, and they were able to deploy VCDR. It holds multiple points in time in the cloud that allow them to go to any point after a ransomware attack and recover from it. And as I said, single file recovery was a huge benefit for them, because they can then figure out exactly which of those files required recovery. So they were able to lower their cost and protect their data, and at the same time meet the regulatory requirements and mandates to have DR protection in place. >>As you said, the data show one ransomware attack occurs every 11 seconds, and of course we only hear about the ones that make the news. For the most part our customers talk about, "Hey, we've had this problem." So it is no longer a question of if we get hit with ransomware; for every industry, like you were saying before, no industry is blind to this. It's when we get hit, we've got to be able to recover the data. It sounds like what you're talking about from a recovery perspective is very granular, so folks can go in and find exactly what they're looking for; they don't have to restore an entire VM, they can go down to the file level. >>That's exactly right. And you need that granularity of recovery, because you want to be able to quickly restore your data and get back to business. So we provide that granular recovery at the file level, so that you can quickly scan your data, figure out which files need to be recovered, and recover just those files.
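The point-in-time recovery described above — keep many snapshots, then roll back to one taken before the attack — can be sketched as follows; this is an illustrative outline, not the VCDR implementation:

```python
from datetime import datetime
from typing import List, Optional

def latest_clean_snapshot(snapshots: List[datetime],
                          compromise_time: datetime) -> Optional[datetime]:
    """Pick the newest snapshot taken strictly before the detected
    compromise; returns None if every point in time is tainted."""
    clean = [s for s in snapshots if s < compromise_time]
    return max(clean, default=None)

# Twice-daily snapshots over three days, attack detected mid-afternoon.
snaps = [datetime(2021, 9, d, h) for d in (25, 26, 27) for h in (0, 12)]
hit = datetime(2021, 9, 26, 15, 30)
print(latest_clean_snapshot(snaps, hit))  # 2021-09-26 12:00:00
```

Holding multiple points in time matters because the newest snapshot may itself be encrypted; the recovery point is the newest one that predates the compromise.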
We also provide orchestration for the whole data center, for all the VMs in the data center, but when customers are hit with ransomware, they want to be able to quickly get back into production with the files they critically need. So that's critical functionality. >>So is this whole entire solution in the cloud, or is there anything the customer needs to have on premises? >>All the data goes to the cloud, in an efficient way. This is another value we bring: it's easy to just store data in the cloud, but we store the data efficiently, so that you can manage the cost of your storage in the cloud. With VCDR, VMware Cloud Disaster Recovery, the data repository is in the cloud, and you can either recover data back to where you need it, or we can automatically orchestrate failover of workloads into VMware Cloud on AWS. It's operationally consistent, because it's VMware software running on-prem and VMware software running in the cloud, so you can fail over and bring the data onto VMware Cloud on AWS. And it's all delivered as SaaS, so the customer doesn't have to manage anything on-prem at all. >>Which must have been a huge advantage in the last year and a half, when it was so hard to get to the on-prem locations, right? >>That's exactly right.
And this is one of the clear differentiators compared to the legacy systems, because with legacy backup and disaster recovery systems you need to manage not just your target storage but also the agents and all the software that goes along with the data protection and disaster recovery solution, and then there are upgrades and patches and so on. With a SaaS-based approach, we take that burden away from the customer. We deliver this entire service as a SaaS-first, cloud-service-first delivery mechanism, so customers don't have to worry about any of those things. >>And that's critical, especially as we've seen in the last 18 months, with the challenge of getting to locations, but also what's been happening, as we talked about, in the cybersecurity space: the massive increase in ransomware. Before we go, I want to dig into some of the ways that you've simplified and integrated the way to back up VMware Cloud on AWS. Talk to me a little bit more about some of those enhancements specifically. >>Yeah. A lot of customers, as you know, have a dual-pronged approach, where they have some workloads running on-prem and some workloads running in VMware Cloud on AWS. For VMs that are running on VMware Cloud on AWS, they now have a choice of protecting the data and the VMs very simply using VMware Cloud Disaster Recovery. And what that means is that they don't need the full-blown DR solution; they can simply protect the data and automatically restore and recover it.
If there's a corruption or something goes wrong with their VMs, they can simply restore the data without going through an entire failover process. So we provide a simplified way for customers to automatically protect data and VMs that are running on VMware Cloud on AWS, and it's fully integrated with the VMware Cloud on AWS workflows. That's a great win for anyone who's migrating workloads into VMC. >>Is the primary objective of that to deliver business resiliency, or DR? >>Both, actually; that's the great part about the solution. Customers don't have to choose between DR and business resiliency: they get both with a single solution. They can start off with business resiliency and protecting the data, and if they choose to, they can add DR as well to those workflows. So it's not either/or; it's both. >>Excellent, got it. Any other enhancements that you guys are announcing at VMworld this year? >>I'll just reiterate the key enhancements we're making to VMware Cloud DR. The first one, as I said, is the 30-minute RPO, so customers with business-critical workloads can now protect their data and be guaranteed that the delta, the data they lag behind, is in the 30-minute range and not in the hours range, as with legacy backup solutions. That's one. The second is the integration and all the enhancements I just talked about for ransomware recovery, single file restore; we've always had any number of snapshots, failover and so on, but single file restore is a key enhancement we've been making for ransomware recovery.
And the third one is the integration with VMware Cloud on AWS, so the fully integrated solution provides a simple, plug-and-play solution for any workload that's running in VMC on AWS. Those are the three key announcements; there's a lot more at VMworld, so you'll see that in the coming weeks and months, but these are the three I wanted to highlight. >>A lot of enhancements to a solution that was launched just about a year ago. Vijay, thank you for sharing with us what's new with VMware Cloud DR: the enhancements, what you're doing, and also how it's enabling customers to recover from that ever-pressing, increasing threat of ransomware. We appreciate your thoughts. For Vijay Ramachandran, I'm Lisa Martin, and you're watching theCUBE's coverage of VMworld 2021.

Published Date : Sep 27 2021


Talor Holloway, Advent One | IBM Think 2021


 

>>From around the globe, it's theCUBE, with digital coverage of IBM Think 2021, brought to you by IBM. Welcome back, everyone, to theCUBE's coverage of IBM Think 2021 virtual. I'm John Furrier, your host of theCUBE. Our next guest is Talor Holloway, chief technology officer at Advent One. Talor, welcome to theCUBE from down under in Australia; we're in Palo Alto, California. How are you? >>Well, thanks John, thanks very much. Glad to be on here. >>I love the virtual CUBE and virtual events; we get to talk to people really quickly with a click. Great conversation here around hybrid cloud, multi-cloud, and all things enterprise software. Before we get started, I want to take a minute for you to explain what you guys do at Advent One. What's the main focus? >>Yeah, so we have a lot of customers in different verticals, so generally what we provide depends on the particular industry the customer is in. But generally speaking, we see a lot of demand for operational efficiency, helping our clients tackle cybersecurity risks, adopt cloud, and set themselves up to modernize their applications. >>And this has been a big wave coming in, for sure, with cloud and scale. So I've got to ask you: what are the main challenges you're solving for your customers, and how are you helping them overcome those in a transformative, innovative way? >>Yeah, I think helping our clients improve their security posture is a big one. We're finding as well that our customers are gaining a lot of operational efficiency by adopting open source technology; Red Hat is an important partner of ours, as is IBM, and we're seeing them move away from some more proprietary solutions. Automation is a big focus for us as well.
We've had some great outcomes with our clients, helping them automate so they can stand up environments and run day-two operations a lot more quickly and easily, and apply standards across multiple areas of their IT estate. >>What are some of the solutions you're doing with IBM's portfolio? On the infrastructure side you've got Red Hat, you've got a lot of open source to meet the needs of clients. What's the mix? >>Yeah, I think on the storage side we help our clients tackle the expanding structured and, particularly, unstructured data they're trying to take control of. So looking at Spectrum Scale and those types of products for unstructured data is a good example, and FlashSystem for block storage in more run-of-the-mill environments. We have helped our clients consolidate and modernize on IBM Power Systems; having Red Hat, both as a Linux operating system and with OpenShift as a container platform, really helps there, and Red Hat also provides a management overlay, which has been great with what we do on IBM Power Systems. We've been working on a few different use cases on Power. More recently SAP HANA is a big one, where we've had some success with our clients migrating HANA onto IBM Power Systems. And we've also helped our customers improve environments on the other end of the scale, such as IBM i. We still have a large number of customers with IBM i, and how do we help them? Some are moving to cloud in one way or another, others are consuming some kind of IaaS, and we can wrap a managed service around it to help them through. >>So I've got to ask you the question, as a CTO.
You've played with a lot of technologies. Kubernetes has become this lingua franca, a kind of middleware orchestration layer; containers also. But I've got to ask: when you walk into a client's environment, and you don't have to name names, usually you see one of two pictures: either they need some serious help, or they've got their act together. Either way, both are opportunities for hybrid cloud. How do you evaluate the environment when you walk into those two scenarios? What goes through your mind, and what are some of the conversations you have with those clients? Can you take me through a day in the life of both scenarios: the ones that can't get the job done but are close and on the right track, and the other ones who are grooving and kicking butt? >>Yeah, so to start off with, you try to take a somewhat technology-agnostic view and just sit down and listen to what they're trying to achieve. For customers who, as you say, have it all nailed down and things are going really well, it's just really understanding what we can do to help; is there an opportunity for us to help at all? Generally speaking, there's always going to be something; if someone is going really well, they might just want help with a bespoke use case, or something very specific. On the other end of the scale, where a customer is pretty early on and starting to struggle, we generally try to help them not boil the ocean all at once: just get some wins, pick some key use cases, deliver some value back, and then grow from there. Going into a customer and trying to do everything at once tends to be a challenge.
Just understand what the priorities are and help them get going. >>What's the impact been for Red Hat in your customer base? A lot of overlap, some overlap, no overlap, coming together? What's the general trend you're seeing, and what's the reaction been? >>Yeah, I think it's been really good. Obviously IBM have a lot of focus on Cloud Paks, where they're bringing their software onto Red Hat OpenShift so it will run on multiple clouds, and I think that's one we'll see a lot more of over time. Also, helping customers automate their IT operations with Ansible is one we do quite a lot of; there are some really bespoke use cases we've done with that, as well as standardized ones. So helping with day-two operations and all that sort of thing, but there have also been some really out-there things customers have needed to automate that had been a challenge for them, and being able to use open source tools to do it has worked really well. We've had some good wins there. >>I want to ask you about the architecture, and I'll simplify it for the sake of devops: you've got hybrid cloud as programmable infrastructure, and then you've got modern applications that need to have AI. Some have said, on theCUBE and other broadcasts, that if you don't have AI you're going to be at a handicap: some machine learning, some data has to be in there. You can probably see AI in mostly everything. As you go in and try to architect that out for customers, and help them get to a hybrid cloud infrastructure with a real modern application front end using data, what's the playbook? Do you have any best practices or examples you can share, or scenarios or visions that you see playing out? >>I think the first one is obviously making sure the customer's data is in the right place.
They might want to use some machine learning in one particular cloud provider while they've got a lot of their applications and data in another; how do we help them make it mobile, able to move data from one cloud to another, or back into their own data center? So there's a lot of that. We spend a lot of time with customers to get the right architecture, and also to make sure it's secure from end to end. If they're moving things into one or more public clouds, as well as maybe their own data center, we make sure connectivity is set up properly and all the security requirements are met. So we look at it from a high-level design point of view: what the target state is going to be versus the current state, really taking into account security, performance, connectivity, and all those sorts of things, to make sure they're going to have a good result. >>You know, one of the things you mentioned comes up a lot in my interviews with partners of IBM: beyond their general credibility, what comes out pretty consistently is their experience in verticals. They have such a track record in verticals, and this is where AI and machine learning data has to be very much scoped in on the vertical; you can't generalize and have a general-purpose data plane inside a vertically specialized focus. How do you see that evolving? How does IBM play there, with this horizontally scalable mindset of a hybrid model, both on premises and in the cloud, while still providing that intimacy with the data to fuel the machine learning, or NLP, or power that AI, which seems to be critical? >>Yeah, I think public cloud providers are bringing out new services all the time, and some of it is pre-canned and easy to consume.
What IBM, from what I've observed, are really good at is handling the really bespoke use cases. If you have a particular vertical with a challenge, there are going to be pre-canned things you can go and consume, but if you need to do something custom, that can be quite challenging: how do you build something quite specific for a particular industry, and then be able to repeat it afterwards? For us, that's obviously something we're very interested in. >>Talor, I love chatting with you, love getting the lowdown. Also, people might not know you're a co-author of a book on performance with IBM Power Systems, so I've got to ask you, since I've got you here, and I don't mean to put you on the spot: can you share your vision, or any anecdotal observation, as people start to put together their architecture? Again, beauty is in the eye of the beholder and every environment is different, but still, hybrid is a distributed concept; it is distributed computing. Is there a KPI, a best practice, for a manager or systems architect to keep an eye on what good is, and how good becomes better? Because day-two operations becomes a super important concept. We're seeing some call it AIOps: OK, I'm provisioning stuff out on a hybrid cloud operational environment, but then day two hits, and things happen as more stuff enters the equation. What's your vision on KPIs and management? What should people keep track of? >>Yeah, I think obviously attention to detail is really important, to be able to build things properly. A good KPI, particularly in the managed services area, that I'm curious about understanding, is how often you actually have to log into the systems you're managing. If you're logging in and remoting into servers and all that sort of stuff all the time, your automation and configuration management is not set up properly.
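Talor's "how often do you log in" KPI can be quantified in many ways; as a hypothetical sketch over an assumed audit-log format (the event shape here is invented for illustration, not Advent One tooling):

```python
from collections import Counter

def interactive_logins(audit_events):
    """Count interactive logins per host from (host, event_type)
    pairs; a well-automated fleet should trend toward zero."""
    return Counter(host for host, kind in audit_events
                   if kind == "interactive_login")

events = [("web-1", "interactive_login"),
          ("web-1", "automated_run"),
          ("db-1", "interactive_login"),
          ("web-1", "interactive_login")]
print(interactive_logins(events))  # Counter({'web-1': 2, 'db-1': 1})
```

Any host that keeps accumulating interactive logins is a candidate for more automation: the work being done by hand there is exactly what configuration management should be doing.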
So really, a good and interesting KPI is: how often do you log into things? And if something went wrong, would you sooner go and build another one and shoot the one that failed, or go and restore from backup? So thinking about how well things are automated, whether things are immutable, using infrastructure as code: those are the things I think are really important when you look at how something is going to be scalable and easy to manage going forward. What I hate to see is where someone builds something and automates it all in the first place, and then they're too scared to run it again afterwards in case it breaks something. >>It's funny, the next generation of leaders probably won't even know: "hey, Talor and John had to log into systems back in the day." It could be a story they tell their kids. But no, that's a good metric. So let's go to the next level of automation. What's the low-hanging fruit for automation? Because you're getting at really the killer app there, which is self-healing systems and good networks that are programmable; automation will define more value. What's your take? >>I think the main thing is where you move from a model of starting small and automating individual things, which could be patching or system provisioning or anything like that, to what you really want to get to: being able to drive everything through Git. So instead of having a written-up paper change request, "I'm going to change your system" and all the rest of it, it really should be driven through a pull request, with build pipelines that go and make the change in development, make sure it's successful, and then push it into production.
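The pull-request-driven flow just described — apply the change in development, verify it, and only then push it to production — can be sketched like this (hypothetical function names, not a real pipeline tool):

```python
def promote_change(change, apply, verify):
    """Apply a change to dev first; promote to prod only when
    verification passes, mirroring a Git-driven pipeline with
    guard rails instead of paper change requests."""
    apply("dev", change)
    if not verify("dev"):
        return "rejected"      # the change never reaches prod
    apply("prod", change)
    return "promoted"

applied = []
result = promote_change({"patch": "kernel-update"},
                        lambda env, c: applied.append(env),
                        lambda env: True)
print(result, applied)  # promoted ['dev', 'prod']
```

The design point is that production is only ever touched by the pipeline after a successful development run, so the governance lives in the pipeline rather than in manual approvals.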
That's really where I think you want to get to, and you can start to have a lot of people collaborating really well on a particular project or customer, while also having some guard rails and some level of governance around what happens, rather than it being a free-for-all. >>Okay, final question. Where do you see Advent One headed? What are your future plans to continue to be an IT services leader for IBM's infrastructure portfolio? >>I think it comes down to people in the end: really making sure that we partner with our clients, are well positioned to understand what they want to achieve, and have the expertise in our team to bring to the table to help them do it. I think open source is a key enabler to help our clients adopt a hybrid cloud model, as I touched on earlier, as well as to make use of multiple clouds where it makes sense. From a managed service perspective, I think everyone is considering themselves a next-generation managed service provider, but what that means for us is to provide a differentiated managed service and also have the strong technical expertise to back it up. >>Talor Holloway, chief technology officer at Advent One, remote videoing in from down under in Australia. I'm John Furrier in Palo Alto with theCUBE's coverage of IBM Think. Talor, thanks for joining me today on theCUBE. >>Thank you very much. >>Okay, theCUBE's coverage continues. Thanks for watching.

Published Date : May 12 2021

SUMMARY :

Taylor Holloway, CTO of Advent One, joins John Furrier for Cube coverage of IBM Think 2021. He describes how Advent One helps clients automate and manage infrastructure built on IBM's portfolio, including Power Systems and storage products such as Spectrum Scale, the move toward hybrid cloud and making applications and data mobile across environments, and why immutable, Git-driven infrastructure with pipelines and guardrails beats hand-managed systems. He closes with Advent One's plan to be a differentiated managed service provider backed by strong technical expertise.

SENTIMENT ANALYSIS :

ENTITIES

Entity                 Category          Confidence
IBM                    ORGANIZATION      0.99+
Australia              LOCATION          0.99+
Taylor Holloway        PERSON            0.99+
today                  DATE              0.99+
taylor                 PERSON            0.99+
Talor Holloway         PERSON            0.99+
Tyler                  PERSON            0.99+
one                    QUANTITY          0.99+
Taylor                 PERSON            0.99+
two scenarios          QUANTITY          0.99+
taylor Holloway        PERSON            0.99+
Think 2021             COMMERCIAL_ITEM   0.99+
john                   PERSON            0.99+
next year              DATE              0.99+
both scenarios         QUANTITY          0.99+
IBM Power Systems      ORGANIZATION      0.98+
two pictures           QUANTITY          0.98+
first                  QUANTITY          0.98+
Palo alto California   LOCATION          0.97+
Red Hat                TITLE             0.96+
first one              QUANTITY          0.96+
both                   QUANTITY          0.96+
Palo alto              ORGANIZATION      0.92+
both opportunities     QUANTITY          0.92+
two hits               QUANTITY          0.9+
red hat                TITLE             0.88+
Think                  COMMERCIAL_ITEM   0.83+
john ferrier           PERSON            0.82+
advent one             ORGANIZATION      0.82+
one cloud              QUANTITY          0.79+
one way                QUANTITY          0.78+
Lynx                   TITLE             0.75+
two operations         QUANTITY          0.69+
BMS                    ORGANIZATION      0.68+
Chief                  PERSON            0.67+
2021                   DATE              0.63+
SAP Hana               TITLE             0.63+
Muhanna                TITLE             0.58+
cloud                  QUANTITY          0.54+
Advent One             ORGANIZATION      0.53+

Steve Touw, Immuta | AWS re:Invent 2020


 

>>From around the globe, it's the Cube, with digital coverage of AWS re:Invent 2020, sponsored by Intel, AWS and our community partners. All right, we're continuing around the clock and around the world coverage of the AWS re:Invent 2020 virtual conference. This year, I'm guessing hundreds of thousands of folks are tuning in for coverage. And we have, on the other end of the country, a Cube alum: Steve Touw, co-founder and CTO of Immuta. Steve, welcome back to the show. >>Great. Great to be here. Thanks for having me again. I hope to match your enthusiasm. >>You know what, as a co-founder, I'm sure you can match the enthusiasm. Plus, we're talking about data governance. You've been on the Cube before, and you kind of laid the foundation for us last year, talking about challenges around data access and data access control. I want to extend this conversation. I had a conversation with a chief data officer a couple of years ago. He shared how his data analysts, the people that actually take the data and create outcomes to make business decisions, spent 80% of their time wrangling the data, just doing transformations. How's Immuta helping solve that problem? >>Yeah, great question. So it's actually interesting. We're seeing a division of roles in these organizations, where we have data engineering teams that are actually managing a lot of the prep work that goes into exposing data and releasing data to analysts. And part of their day to day job is to ensure that the data they're releasing to the analysts is what those analysts are allowed to see. And so we see this problem of compliance getting in the way of analysts doing their own transformations. It would be great if we didn't have to limit that work to a small data engineering team. We believe one of the real issues behind that is that they are the ones that are trusted.
They're the only ones that can see all the data in the clear. So it needs to be a very small subset of humans, so to speak, that can do this transformation work and release it. And that means that the data analysts downstream are hamstrung to a certain extent, bottlenecked by requesting that these data engineers do some of this transformation work for them. So I think because, as you said, transformation is so critical to being able to analyze data, that bottleneck can be a back-breaker for an organization. We really think you need to tie transformation together with compliance in order to streamline analytics in your organization. >>So that has me curious, what does that actually look like? Because when I think of a data analyst, they're not always thinking about, well, who should have this data? They're trying to get the answer to the question to provide to the data engineer. What does that functionally look like when you want to see that relationship of collaboration? >>Yeah. So I think the beauty of Immuta, and the beauty of governance solutions done right, is that they should be invisible to the downstream analysts to a certain extent. The data engineering team will take on some requirements from their legal and compliance teams, such as: you need to mask PII, or you need to hide these kinds of rows from these kinds of analysts, depending on what the user is doing. And we've just seen an explosion of different slices, different ways you should dice up your data and who's allowed to see what, based not just on who they are but on what they're doing. So you can bake all these policies up front on your data in a tool like Immuta, and it will dynamically react based on who the analyst is and what they're doing, to ensure that the right policies are being enforced. And we can do that in a way that, when the analysts... I mean, what we also see is teams just setting policies on their data up front.
But that's not the end of the story. A lot of people will pat themselves on the back and say, look, we've got all our data protected appropriately, job done. But that's not really the case, because the analysts will start creating their own data products, and they want to share those with other analysts. And when you think about this, it becomes a very complex problem: before someone can share their data with anyone else, we need to understand what they were allowed to see. So being able to control this downstream flow of transformations and feature engineering, to ensure that only the right people are seeing the things they're allowed to see while still enabling analytics, is really the challenge. We built Immuta to help the data teams create those initial policies at scale, but also to help the analytical teams build derived data products in a way that doesn't introduce data leaks. >>So as I think about the traditional ways in which we do this, we kind of take a data set, let's say in a database, and we set security rules, et cetera, on those data sets. What you're painting is more of a dynamic picture. How is Immuta approaching this problem from an architectural direction? >>Yeah, great question. So I'm sure you've probably heard the term role-based access control. It's been around forever: you basically aggregate your users into roles, and then you build rules around those roles, and pretty much every legacy RDBMS manages data access this way. What we're seeing now, in what I call the private data era that we've been embarking on for the past three years or so, where consumers are more aware of their data privacy, and data regulations are coming fast and furious with no end in sight, is that this role-based access control paradigm is just broken.
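The dynamic policies described above, mask PII or hide rows depending on who the analyst is and what they're doing, can be sketched in a few lines. This is an illustrative toy, not Immuta's actual API; the attribute names (`purpose`, `privileged`) and the two rules are assumptions:

```python
# Illustrative sketch (not Immuta's API): the same dataset is filtered and
# masked at read time based on who the user is and their declared purpose.

def mask(value):
    """Redact a sensitive value."""
    return "***"

def read_rows(rows, user):
    """Apply row- and column-level policies dynamically at query time."""
    out = []
    for row in rows:
        # Row-level policy (assumed rule): only compliance work may see EU records.
        if user["purpose"] != "compliance" and row["region"] == "EU":
            continue
        # Column-level policy: only privileged users see email in the clear.
        email = row["email"] if user["privileged"] else mask(row["email"])
        out.append({"region": row["region"], "email": email})
    return out

rows = [{"region": "US", "email": "a@x.com"},
        {"region": "EU", "email": "b@x.com"}]
analyst = {"purpose": "marketing", "privileged": False}
print(read_rows(rows, analyst))  # EU row dropped, email masked
```

Note that the analyst issues the same query either way; the policy "dynamically reacts" to who is asking and why, which is the invisibility Touw describes.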
We've got customers with thousands of roles that they're trying to manage, to slice up the data all the different ways they need to. So instead, we offer an attribute-based access control solution, and also a policy-based access control solution, where it's really about how you dynamically enforce policy by separating who the user is from the policy that needs to be enforced, and having that execute at runtime. A good analogy is this: role-based access control is like writing code without being able to use variables. You're writing the same block of code over and over again with slight changes based on the role. With attribute-based access control, you're able to use variables, and the policy gets decided at runtime based on who the user is and what they're doing. >>So that dynamic nature kind of lends itself to the public cloud. Where are you seeing this applied in the world of AWS? We're here at re:Invent, so how are customers using this with AWS? >>So it all comes down to scalability, for the same reasons that you separate storage from compute. You get your storage in one place, you can ephemerally spin up compute like EMR if you want, or use Athena against your storage in a serverless way. That freedom to choose whatever compute you want, the same concepts apply to policy enforcement. You want to separate your policy from your platform, and this private data era has created that need, just like you had to separate compute from storage in the big data era. This allows you to have a single pane of glass to enforce policy consistently, no matter what compute or AWS resource you're using. And so this gives our customers the power to not only build the rules they need to build without having to do it uniquely per service in AWS,
but also prove to their legal and compliance teams that they're doing it correctly. Because when you do it this way, it really simplifies everything, and you have one place to go to understand how policy is being enforced. And this really gives you the auditing and reporting around the enforcement you've been doing, to prove that everything is being done correctly, and so that your data consumers can understand how your data, their data, is being protected. And you can actually answer those questions when they come at you. >>So let's put this idea to the test a little bit. So I have the data engineer who designs the security policy around the data, or implements that policy using Immuta as dictated by the security and chief data officers of the organization. Then I have the analyst, and the analyst is just using the tools at their disposal. Let's say one analyst wants to use AWS Lambda, and another analyst wants to use a database or other analysis tools. You're telling me that Immuta allows the flexibility for those analysts to use either tool within AWS? >>That's right, because we enforce policy at the data layer. So if you think about Immuta, it's really three layers: policy authoring, which you touched on, where those requirements get turned into real policies; policy decisioning, where at query time we see who the user is, what they're doing, and what policy has been defined, to dynamically build that policy at runtime; and then enforcement, which is what you're getting at. The enforcement happens at the data layer. For example, we can enforce policies natively in Spark, so no matter how you're connecting to Spark, that policy is going to get enforced appropriately. We don't really care what the client is, because the enforcement is happening at the data layer, or the compute layer is a more accurate way to say it. >>So
a practical reality of collaboration, especially around large data sets, is the ability to share data across organizations. How is Immuta helping to make that barrier a little lower while ensuring security, so that when I'm sharing data with analysts within another firm, they're only seeing the data that they need to see, but we can effectively collaborate on those pieces of content? >>Yeah, I'm glad you asked this. I mean, this is the big finale, right? This is what you get when you have this granularity on your own data ecosystem: it enables you to have that granularity when you want to share outside of your internal ecosystem. And I think an important part of this is that when you think about governance, you can't necessarily have one god user, so to speak, that has control over all tables and all policies. You really need segmentation of duty, where different parts of the org own their own data and build their own policies in a way where people can't step on each other. And then you can expand this out to third-party data sharing, where you can set different anonymization levels on your data when you're sharing external to the organization versus with internal users, and then someone else in your org can share their data with you, and can also do that third party. So it really enables and frees these organizations to share with each other in ways that weren't possible before. Because it happens at the data layer, these organizations can choose their own compute and still have the same policies enforced, again going back to that consistency piece. Think of it as almost an authoritative way to share data in your organization. It doesn't have to be ad hoc: oh, I have to share with this group over here, how should I do it, what policies should I enforce? There's a single authoritative way to set policy and share your data.
>>So the first thing that comes to my mind, especially when we give more power to the users, is when the auditors come and they say, you know what, Keith? I understand this is the policy, but prove it. How do we provide auditors with the evidence that, one, we're implementing the policy that we designed, and two, we're able to audit that policy? >>Yeah, good question. So I briefly spoke about this a little bit, but when you author and define the policies in Immuta, they're immediately being enforced. When you write something in our platform, it's not a glorified Wikipedia, right? It's actually turning those policies on and enforcing them at the data layer. And because of that, any query that's coming through Immuta is going to be audited. But I think even more importantly, to be honest, we keep a history of how policy changes happen over time, too. So you can understand, you know, so-and-so changed the policy on this table, this other table got newly added, these people got dropped from it. So you get this rich history of not only who's touching what data and what data is important, but also of how we have been treating this data from a policy perspective over time. What were my risk levels over the past year with these tables? You can answer those kinds of questions as well. >>And then we're in the era of cloud. We expect to be able to consume these services via API, via pay-as-you-go type of thing. How is your relationship with AWS, and how, ultimately, does the customer consume Immuta? >>Yeah, so Immuta can pretty much be deployed anywhere. So obviously we're talking AWS here. We have a SaaS offering where you can spin up an Immuta trial and just be off and running, building policies and hooking our policy enforcement engine into your compute. That runs in our, you know, infrastructure.
There's also a deployment model where you deploy Immuta into your VPCs, so it can run on your infrastructure, behind your firewalls, and we do not require any public internet access at all for that to run. We don't do any kind of phoning home, because, obviously, as a privacy company, we take this very seriously internally as well. We also have on-premise deployments, again with zero connectivity, air-gapped environments. So we offer that kind of flexibility to our customers, wherever they want Immuta to be deployed. An important thing to remember there, too, is that Immuta does not actually store any data. We just store metadata and policy information. So that also provides the customers some flexibility: if they want to use our SaaS, they can simply set policy in there, and the data still lives in their account. We're just kind of pushing policy down into that, dynamically. >>So, Steve Touw, co-founder and CTO of Immuta. I don't think you have to worry about matching my energy level. I threw some pretty tough questions at you, and you were ready there with all the answers. If you want to see more interesting conversations from around the world with founders and builders, AWS re:Invent is all about builders, and we're talking to the builders throughout this show. Visit us on the web, the Cube. You can engage with us on Twitter. Talk to you next episode of the Cube from AWS re:Invent 2020.
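Touw's auditing point above, every query audited plus a history of policy changes, could be sketched as an append-only log that both enforcement and policy authoring write into. This is a toy illustration under assumed names, not Immuta's internals:

```python
# Illustrative sketch: an append-only audit log recording both policy
# changes and queries, so "who changed what, when" can be answered later.
import datetime

AUDIT_LOG = []

def record(event_type, actor, detail, when=None):
    """Append an immutable audit event (policy_change or query)."""
    AUDIT_LOG.append({
        "when": when or datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "type": event_type,
        "actor": actor,
        "detail": detail,
    })

def policy_history(table):
    """All policy changes that touched a given table, oldest first."""
    return [e for e in AUDIT_LOG
            if e["type"] == "policy_change" and e["detail"].get("table") == table]

record("policy_change", "admin", {"table": "claims", "rule": "mask ssn"})
record("query", "analyst1", {"table": "claims"})
print(len(policy_history("claims")))  # → 1
```

The key property for auditors is that enforcement and logging happen in the same layer, so the evidence is produced as a side effect of normal operation rather than reconstructed after the fact.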

Published Date : Dec 8 2020

SUMMARY :

Steve Touw, co-founder and CTO of Immuta, joins the Cube for AWS re:Invent 2020. He explains how compliance requirements bottleneck data engineering teams, how Immuta separates policy authoring, decisioning, and enforcement so policies are applied dynamically at the data layer, and why attribute-based access control avoids the role explosion of legacy RBAC. He also covers enforcing policy consistently across AWS compute services, third-party data sharing with different anonymization levels, audit trails for queries and policy changes, and Immuta's SaaS, VPC, and air-gapped on-premise deployment options.

SENTIMENT ANALYSIS :

ENTITIES

Entity                           Category          Confidence
Stephen                          PERSON            0.99+
Keith                            PERSON            0.99+
AWS                              ORGANIZATION      0.99+
80%                              QUANTITY          0.99+
Stephen Towel                    PERSON            0.99+
Steve Touw                       PERSON            0.99+
Munich                           LOCATION          0.99+
two                              QUANTITY          0.99+
last year                        DATE              0.99+
U. S.                            LOCATION          0.99+
thousands                        QUANTITY          0.99+
Intel                            ORGANIZATION      0.98+
This year                        DATE              0.98+
Thio                             PERSON            0.98+
single                           QUANTITY          0.98+
SAS                              ORGANIZATION      0.97+
first thing                      QUANTITY          0.96+
three layers                     QUANTITY          0.96+
Wikipedia                        ORGANIZATION      0.95+
Immuta                           PERSON            0.94+
one                              QUANTITY          0.94+
roles                            QUANTITY          0.94+
W s reinvent 2020                EVENT             0.93+
couple of years ago              DATE              0.92+
Muto                             PERSON            0.92+
one place                        QUANTITY          0.91+
one analyst                      QUANTITY          0.91+
single plane                     QUANTITY          0.91+
Kamuda Aziz                      PERSON            0.91+
hundreds of thousands of folks   QUANTITY          0.89+
Cube                             COMMERCIAL_ITEM   0.88+
zero                             QUANTITY          0.87+
Lambda                           TITLE             0.85+
past three years                 DATE              0.85+
Athena                           ORGANIZATION      0.83+
Twitter                          ORGANIZATION      0.82+
Kamuda                           TITLE             0.82+
ISMM                             ORGANIZATION      0.81+
God                              PERSON            0.78+
AWS reinvent 2020                EVENT             0.74+
past year                        DATE              0.73+
Invent                           EVENT             0.72+
CTO                              PERSON            0.72+
Liz                              PERSON            0.67+
Muda                             TITLE             0.67+
BMS                              ORGANIZATION      0.58+
2020                             DATE              0.57+
EMR                              TITLE             0.54+
six                              QUANTITY          0.51+
Dynamic                          ORGANIZATION      0.49+
reinvent                         TITLE             0.49+
DWI                              ORGANIZATION      0.45+
Onley                            ORGANIZATION      0.45+
Thio                             LOCATION          0.44+
re                               EVENT             0.4+
2020                             TITLE             0.39+

Krishna Cheriath, Bristol Myers Squibb | MITCDOIQ 2020


 

>> From the Cube Studios in Palo Alto and in Boston, connecting with thought leaders all around the world, this is a Cube Conversation. >> Hi everyone, this is Dave Vellante and welcome back to the Cube's coverage of the MIT CDOIQ. God, we've been covering this show since probably 2013, really trying to understand the intersection of data and organizations, and data quality, and how that's evolved over time. And with me to discuss these issues is Krishna Cheriath, who's the Vice President and Chief Data Officer of Bristol-Myers Squibb. Krishna, great to see you, thanks so much for coming on. >> Thank you so much, Dave, for the invite, I'm looking forward to it. >> Yeah, first of all, how are things in your part of the world? You're in New Jersey, I'm also on the East Coast, how are you guys making out? >> Yeah, I think these are unprecedented times all around the globe, and whether it is from a company perspective or a personal standpoint, how do you manage your life, how do you manage your work in these unprecedented COVID-19 times, it has been a very interesting challenge. And to me, what has been most amazing is, I've seen humanity rise up, and so too our company has sort of snapped to, to be able to manage our work so that the important medicines that have to be delivered to our patients are delivered on time. So I'm really proud about how we have done as a company, and of course, personally, it has been an interesting journey with my kids home from college, remote learning, my wife working from home. So I'm very lucky and blessed to be safe and healthy at this time, and hopefully the people listening to this conversation are finding that they are able to manage through their lives as well. >> Obviously Bristol-Myers Squibb, very, very strong business. You guys just recently announced your quarter. There's a biologics facility near me in Devens, Massachusetts, I drive by it all the time, it's a beautiful facility actually.
But an extremely broad portfolio, obviously some COVID impact, but you're managing through that very, very well. If I understand it correctly, you're taking a collaborative approach to a COVID vaccine, you're now bringing people physically back to work, and you've been very planful about that. My question is, from your standpoint, what role did you play in that whole COVID response, and what role did data play? >>Yeah, I think it's in two parts, as you rightly pointed out. Bristol-Myers Squibb has been an active partner in the overall scientific ecosystem, supporting many different targets from many different companies. Across biopharmaceuticals, there's been a healthy convergence of scientific innovation to see how we can solve this together, and Bristol-Myers Squibb has been an active participant, as our CEO, as well as our Chief Medical Officer and Head of Research, have articulated publicly. Within the company itself, from a data and technology standpoint, data and digital is core to the response from a company standpoint to COVID-19: how do we ensure that our work continues when the entire global workforce pivots to a remote setting? That really calls on the digital infrastructure to rise to the challenge, to enable a complete global workforce, and when I say workforce, it is not just employees of the company but all of the third-party partners and others that we work with; the whole ecosystem needs to work. And I think our digital infrastructure has proven to be extremely resilient in that. From a data perspective, I think it is twofold. One is, how does the core book of business of data continue to drive forward to make sure that our company's key priorities are being advanced. Secondarily, we've been partnering with our research and development organization as well as our medical organization to look at what kind of real-world data insights can really help in answering the many questions around COVID-19.
So I think it is twofold. In summary, one is: how do we ensure that the data and digital infrastructure of the company continues to operate in a way that allows us to progress the company's mission, even during a time when we have globally switched to a remote workforce, except for some essential staff from a lab and manufacturing standpoint. And secondarily: how do we look at real-world evidence as well as the scientific data to be a good partner with other companies in progressing the societal innovations needed for this. >>I think it's a really prudent approach, because let's face it, betting it all on one vaccine can be like playing roulette. So you guys are both managing your risk and, as I say, financially a very, very successful company with a sound approach. I want to ask you about your organization. We've interviewed many, many chief data officers over the years, and there seems to be some fuzziness as to the organizational structure. It's very clear with you: you report in to the CIO, you came out of a technical bag, you have a technical degree, but you also of course have a business degree. So you're dangerous from that standpoint, you've got both sides, which is critical, I would think, in your role. But let's start with the organizational reporting structure. How did that come about, and what are the benefits of reporting into the CIO? >>I think the genesis for that: Bristol-Myers Squibb, and when I say Bristol-Myers Squibb, the new Bristol-Myers Squibb, is a combination of Heritage Bristol-Myers Squibb and Heritage Celgene after the Celgene acquisition last November. In the Heritage Bristol-Myers Squibb organization, we came to the conclusion that in order for BMS to fully capitalize on our scientific innovation potential, as well as to drive data-driven decisions across the company, having a robust data agenda is key. Now the question is, how do you progress that?
Historically, we had approached it through a very decentralized mechanism across different data constituencies. We didn't have a formal role of a Chief Data Officer up until 2018 or so. So, coming from the realization that we need an effective data agenda to drive forward the necessary data-driven innovations from an analytics standpoint and, equally importantly, to optimize our execution, we came to the conclusion that we need an enterprise-level data organization, a first among equals if you will, mandated by the CEO and his leadership team to be the orchestrator of a data agenda for the company. Because a data agenda cannot be delivered by a singular CDO individually; it has to be done in partnership with many stakeholders: business, technology, analytics, et cetera. So from that came this notion that we need an enterprise-wide data organization, and we started there. For a while, I would joke that I had all of the accountabilities of the CDO without the lofty title. This journey started around 2016, when we created an enterprise-wide data organization, and we made a very conscious choice of separating the data organization from analytics. The reason we did that is, when we look at the whole of Bristol-Myers Squibb, analytics, for example, is core to and part of our scientific discovery process: research and clinical development all have deep data science and analytics embedded in them. But we also have other analytics, whether as part of our sales and marketing, or as part of our finance and enabling functions, the catch-all across global procurement, et cetera. So the world of analytics is very broad. BMS made a separation between the world of analytics and the world of data. Analytics at BMS is in two modes. There is a central analytics organization called Business Insights and Analytics that drives most of the enterprise-level analytics.
But then we have embedded analytics in our business areas, which is research and development, manufacturing and supply chain, et cetera, to drive what needs to be closer to the business. And the reason for separating that out and having a separate data organization is that none of these analytic aspirations, or the business aspirations for data, will be met if you don't have the right level of data availability, if the velocity of data is not appropriate for the use cases, if the quality of data is not great, or if the control of the data, so that we are using the data for the right intent and meeting the compliance and regulatory expectations around it, is not in place. So that's why we separated out the data world from the analytics world, which is a little bit of a unique construct for us compared to what we see generally in the world of CDOs. And from that standpoint, the decision was taken to have that report to the global CIO. At Bristol-Myers Squibb, we have a very strong CIO and IT organization, and when I say strong, it is from this lens: it is centralized, we have centralized the budget as well as centralized execution across the enterprise. And the CDO reporting to the CIO with that data-specific agenda has a lot of value in being able to connect the world of data with the world of technology. So at BMS, the Chief Data Officer organization is a combination of traditional CDO-type accountabilities, like data risk management, data governance, and data stewardship, but also all of the related technologies around master data management, data lake, data and analytics engineering, and a nascent AI data and technology lab.
So that construct allows us to be a true enterprise horizontal, supporting analytics whether it is done in the central analytics organization or in the embedded analytics teams in the business areas, but also, equally importantly, to focus on the world of data from an operational execution standpoint: how do we optimize data to drive operational effectiveness? So that's the construct we have: the CDO reports to the CIO, and the data organization is separated from analytics to really focus on the availability, but also the quality and control, of data. And the last nuance is that at BMS, the Chief Data Officer organization is also accountable to be the Data Protection Office. We orchestrate and facilitate all privacy-related actions across the company, because that allows us to make sure that all personal data that is collected, managed and consumed meets all of the various privacy standards across the world, as well as our own commitments as a company from a compliance principles standpoint. >>So that makes a lot of sense to me, and thank you for that description. You're not getting in the way of R&D and the scientists; they know data science, they don't really need your help. I mean, they need to innovate at their own pace, but the balance of the business really does need your innovation, and that's where it seems like you're focused. You mentioned master data management, data lakes, data engineering, et cetera. So your responsibility is for that enterprise data lifecycle to support the business side of things, and I wonder if you could talk a little bit about that and how that's evolved.
I mean, a lot has changed from the old days of the data warehouse and cumbersome ETL, and, as you say, data lakes; many of those have been challenging, expensive, slow. But now we're entering this era of cloud, real-time, a lot of machine intelligence, and I wonder if you could talk about the changes there and how you're looking at and thinking about the data lifecycle and accelerating the time to insights. >> Yeah, I think the way we think about it, we as an organization, in our strategy and tactics, think of this as a data supply chain: the supply chain of data to drive business value, whether it is through insights and analytics or through operational execution. When you think about it from that standpoint, then we need to get many elements of that into an effective state. This could be the technologies that are part of that data supply chain, and you referenced some of them: the master data management platforms, data lake platforms, the analytics and reporting capabilities and business intelligence capabilities that plug into a data backbone. That is, I would say, the technology swim lane that we need to get right. Along with that, what we also need to get right for an effective data supply chain is the data layer. That is, how do you make sure that there is the right data navigation capability, how do you make sure that we have the right ontology mapping and the understanding around the data? Data navigation is something that we have invested very heavily in. So imagine a new employee joining BMS; any organization our size has a pretty wide technology ecosystem and data ecosystem. How do you navigate that, how do we find the data? Data discovery has been a key focus for us. So for an effective data supply chain, we knew that, and we have instituted our roadmap to make sure that we have a robust technology orchestration of it, but equally important is an effective data operations orchestration. 
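The data navigation and discovery capability described here can be pictured as a catalog that registers datasets with ontology tags and answers discovery queries by tag. This is an illustrative toy in Python, not BMS's actual tooling; every dataset name, domain, and tag below is invented.

```python
from dataclasses import dataclass, field

@dataclass
class Dataset:
    name: str
    domain: str                              # e.g. "R&D", "supply chain"
    tags: set = field(default_factory=set)   # ontology terms mapped to the data

class Catalog:
    """Toy data catalog: register datasets, then discover them by ontology tag."""
    def __init__(self):
        self._datasets = []

    def register(self, ds: Dataset):
        self._datasets.append(ds)

    def discover(self, tag: str):
        """Data discovery: find every dataset annotated with a given term."""
        return [ds.name for ds in self._datasets if tag in ds.tags]

catalog = Catalog()
catalog.register(Dataset("clinical_trials", "R&D", {"patient", "trial"}))
catalog.register(Dataset("shipment_events", "supply chain", {"logistics"}))
print(catalog.discover("patient"))  # -> ['clinical_trials']
```

The point of the sketch is the shape of the problem: a new employee does not browse systems, they query shared ontology terms, and the catalog resolves those terms to physical datasets.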
Both, the technology orchestration and the data operations orchestration, need to go hand in hand for us to be able to make sure that the supply chain is effective from a business use case and analytic use case standpoint. So that has led us on a journey, from a cloud perspective, since you referred to that in your question: we have invested very heavily to move from a very disparate set of data ecosystems to a more converged, cloud-based data backbone. That has been a big focus at BMS since 2016, whether it is from a research and development standpoint, or from commercialization, which is our word for sales and marketing, or manufacturing and supply chain and HR, et cetera. How do we create a converged data backbone that allows us to use that data as a resource to drive many different consumption patterns? Because when you imagine an enterprise of our size, we have many different consumers of the data, and those consumers have different consumption needs. You have a deep data science population who just need access to the data, and they have data science platforms and are programmers as well, all the way to the other end of the spectrum, where executives need pre-packaged KPIs. So the effective orchestration of the data ecosystem at BMS, through a data supply chain and the data backbone, does a couple of things for us. One, it drives productivity of our data consumers: the scientific researchers, the analytic community and other operational staff. And second, in a world where we need to make sure that the data consumption upholds ethical standards as well as privacy and other regulatory expectations, we are able to build into our systems and processes the necessary controls to make sure that the consumption and the use of data meets our highest trust standards. >> That makes a lot of sense. I mean, converging your data like that; people always talk about stovepipes, and I know it's kind of a bromide, but it's true, and it allows you to inject consistent policies. What about automation? 
How has that affected your data pipeline recently, and on your journey with things like data classification and the like? >> I think in pursuing a broad data automation journey, one of the things that we did was to operate at two different speed points. Historically, data organizations have been burdened with long-running data infrastructure programs. By the time you complete them, the business context has moved on, and the organization's leaders are also exhausted from having to wait for these massive programs to reach their full potential. So what we did very intentionally in our data automation journey was to organize ourselves in two speed dimensions. First, a concept called the Rapid Data Lab. The idea is that, recognizing the reality that the data is not well automated and orchestrated today, we need a SWAT team of data engineers and data SMEs to partner with consumers of data, to make sure that we can make effective data supply chain decisions here and now, and enable the business to answer the questions of today. Simultaneously, on a longer time horizon, we need to do the necessary work of moving the data automation to a better footprint. So, enterprise data lake investments, where we built services based on AWS, which we had chosen as the cloud backbone for data: how do we use the AWS services, and how do we wrap around them the necessary capabilities so that we have a consistent reference and technical architecture to drive the many different functional journeys? So we organized ourselves into two speed dimensions: the Rapid Data Lab teams focus on partnering with the consumers of data to help them with data automation needs here and now, and then a second team is focused on the convergence of data into a better cloud-based data backbone. So that allowed us to, one, make an impact and deliver value from data to the business here and now. 
Secondly, we also learned a lot from actually partnering with consumers of data on what needs to get adjusted over a period of time in our automation journey. >> That makes sense. I mean, again, that whole notion of converged data, putting data at the core of your business. You brought up AWS, and I wonder if I could ask you a question; you don't have to comment on specific vendors, but there's a conversation we have in our community. You have AWS, a huge platform, tons of partners, a lot of innovation going on, and you see innovation in areas like the cloud data warehouse or data science tooling, et cetera, all components of that data pipeline. As well, you have AWS with its own tooling around there. So a question we often have in the community is, will technologists and technology buyers go for kind of best-of-breed and cobble together different services, or would they prefer sort of the convenience of a bundled service from an AWS or a Microsoft or Google, or maybe they even go best-of-breed across clouds? Can you comment on that, what's your thinking? >> I think, especially for organizations of our size and breadth, having a convenient, all-of-the-above bundle from a single provider does not seem practical or feasible, for a couple of reasons. One, given the heterogeneity of the data and the heterogeneity of consumption of the data, we are yet to find a single-stack provider who can meet all of the different needs. So I am more in the best-of-breed camp, with a few caveats; a hybrid best-of-breed, if you will. It is important to have a converged data backbone for the enterprise. And so whether you invest in a single cloud or a private cloud or a combination, you need to have a clear, intentional strategy around where you are going to host the data and how the data is going to be organized. But you can have a lot more flexibility in the consumption of the data. So we have the data converged, in our case, onto an AWS-based backbone. 
We allow many different consumptions of the data, because at the analytic and insights layer, the data science community within R&D is different from the data science community in the supply chain context; we have business intelligence needs, we have curated data needs, and then there are other data needs that need to be funneled into software-as-a-service platforms, like the Salesforces of the world, to be able to drive operational execution as well. So when you look at it from that context, a hybrid model of best-of-breed, where you have a lot more convergence from a data backbone standpoint but then allow for best-of-breed in the analytics and consumption of the data, is more where my heart and my brain is. >> I know a lot of companies would be excited to hear that answer, but I love it because it fosters competition and innovation. I wish I could talk to you forever, but you made me think of another question, which is around self-serve. On your journey, are you at the point where you can deliver self-serve to the lines of business? Is that something that you're trying to get to? >> Yeah, I think so. Self-serve is an absolutely important point, because I think the traditional boundary between what you consider classical IT and the classical business is gray. There is an important gray area in the middle, where you have a deep citizen data scientist in the business community who really needs to be able to have access to the data, and who has advanced data science and programming skills. So self-serve is important, but in that, companies need to be very intentional and very conscious of making sure that you're allowing that self-serve in a safe containment zone. Because at the end of the day, whether it is a cyber risk or data risk or technology risk, it's all real. 
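The self-serve-within-safe-containment idea can be pictured as a small policy check: each consumer community is granted a zone with a defined depth of data access, and every request is evaluated against it. This is a toy sketch with invented roles and data layers, not an actual BMS control.

```python
# Map consumer communities to the "safe zone" of data layers they are granted.
SAFE_ZONES = {
    "citizen_data_scientist": {"raw", "curated", "aggregated"},  # deep self-serve
    "business_analyst": {"curated", "aggregated"},               # governed self-serve
    "executive": {"aggregated"},                                 # pre-packaged KPIs only
}

def can_access(role: str, data_layer: str) -> bool:
    """Risk mitigation: allow self-serve only inside the role's safe zone."""
    return data_layer in SAFE_ZONES.get(role, set())

print(can_access("citizen_data_scientist", "raw"))  # -> True
print(can_access("executive", "raw"))               # -> False: outside the safe zone
```

The design point is that self-serve and risk mitigation are not opposites: the zone grants depth of access per community, rather than a single yes/no on the whole data estate.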
So we need to have a balanced approach between promoting, whether you call it data democratization or whether you call it self-serve, and making sure that you're meeting the right risk mitigation standards. So then our focus is to say: how do we promote self-serve for the communities that need self-serve, where they have deeper levels of access, and how do we set up the right safe zones for those, with the appropriate mitigation from a cyber risk or data risk or technology risk standpoint? >> The security piece; again, you keep bringing up topics that I could talk to you about forever. I heard on TV the other night somebody talking about how COVID, because of remote access, has affected security. And it's like, hey, give everybody access; that was sort of the initial knee-jerk response. But the example they gave was, well, if your parents go out of town and the kid has a party, you may have some people show up that you don't want to show up. And so, same issue with remote working, work from home. Clearly you guys have had to pivot to support that, but where does the security organization fit? Does it report separately, alongside the CIO? Does it report into the CIO? Are they sort of peers of yours? How does that all work? >> Yeah, I think at Bristol-Myers Squibb, we have a Chief Information Security Officer who is a peer of mine, who also reports to the global CIO. The CDO and the CISO are effective partners and are two sides of the coin in trying to advance a total risk mitigation strategy, whether it is from a cyber risk standpoint, which is the focus of the Chief Information Security Officer, or the general data consumption risk, which is the focus of the Chief Data Officer in the capacities that I have. And together, those are two sides of a coin that the CIO needs to be accountable for. 
So I think that's how we have orchestrated it, because I think it is important, in these worlds where you want to be able to drive data-driven innovation, to do that in a way that doesn't open the company to unwanted risk exposures as well. And that is always a delicate balancing act, because if you index too much on risk, with high levels of security and control, then you could lose productivity. But if you index too much on productivity, collaboration and open access to data, it opens up the company to risks. So it is a delicate balance between the two. >> Increasingly, we're seeing that reporting structure evolve and coalesce, and I think it makes a lot of sense. I felt like at some point you had too many seats at the executive leadership table, too many kind of competing agendas. And now, in your structure, the CIO is obviously a very important position, I'm sure with a seat at the leadership table, but one that also has the responsibility for managing data as an asset versus a liability, which, in my view, has always been sort of the role of the head of information. I want to hit the Escape key a little bit and ask you about data as a resource. You hear a lot of people talk about data as the new oil. We often say data is more valuable than oil, because you can reuse it; it doesn't follow the laws of scarcity. You can use data in an infinite number of places; you can only put oil in your car or your house. How do you think about data as a resource today and going forward? >> Yeah, I think the data-as-the-new-oil paradigm, in my opinion, was an unhealthy one, and it prompts different types of conversations. I think for certain companies, data is indeed an asset. If you're a company that is focused on information products and data products, and that is the core of your business, then of course there's monetization of data, and then data is an asset, just like any other asset on the company's balance sheet. 
But for many enterprises, to further their mission, I think considering data as a resource is a better focus. So, as a vital resource for the company, you need to make sure that there is appropriate care and feeding for it, appropriate management of the resource, and an appropriate evolution of the resource. So that's how I would like to consider it; it is a personal, n-of-1 perspective, that data is a resource that can power the mission of the company, the new products and services. I think that's a good, healthy way to look at it. At the center of it, though, with a lot of strategies, whether people talk about a digital strategy or a data strategy, what is important is for a company to have a clear north star around what the core mission of the company is and what the core strategy of the company is. For Bristol-Myers Squibb, we are about transforming patients' lives through science, and we think about digital and data as key value levers and drivers of that strategy. So digital for the sake of digital, or data strategy for the sake of data strategy, is meaningless, in my opinion. We are focused on making sure that data and digital are an accelerant and a value lever for the company's mission and strategy. So thinking about data as a resource, a key resource for our scientific researchers, a key resource for our manufacturing team, or a key resource for our sales and marketing, allows us to think about the actions and the strategies and tactics we need to deploy to make that effective. >> Yeah, that makes a lot of sense; you're constantly using that north star as your guideline and how data contributes to that mission. Krishna Cheriath, thanks so much for coming on the Cube and supporting the MIT Chief Data Officer community; it was a real pleasure having you. >> Thank you so much, Dave; hopefully you and the audience are safe and healthy during these times. 
>> Thank you for that, and thank you for watching, everybody. This is Dave Vellante for the Cube's coverage of the MIT CDOIQ Conference 2020, gone virtual. Keep it right there; we'll be right back after this short break. (lively upbeat music)

Published Date : Sep 3 2020

Doug Davis, IBM | KubeCon + CloudNativeCon EU 2019


 

>> Live from Barcelona, Spain, it's theCUBE, covering KubeCon + CloudNativeCon Europe 2019. Brought to you by Red Hat, the Cloud Native Computing Foundation, and Ecosystem Partners. >> Welcome back to theCUBE's live coverage of KubeCon + CloudNativeCon 2019. I'm Stu Miniman, my co-host is Corey Quinn, and happy to welcome back to the program Doug Davis, who's a senior technical staff member and PM for Knative, and he happens to be employed by IBM. Thanks so much for joining us. >> Thanks for inviting me. >> All right, so Corey got really excited when he saw this, because serverless is something that, you know, he's been doing for a while. I've been poking at it, trying to understand all the pieces, have done ServerlessConf a couple of times. And, you know, I guess lay out for our audience a little bit, you know, Knative. You know, I look at it as kind of a bridging solution, but, you know, we're talking, it's not, you know, containers versus serverless. And, you know, we understand that world; they're spectrums and there's overlap. So maybe, as that is a set-up, you know, what is the Serverless Working Group's, you know, charter? Right. So >>
And then we also produced a landscape document basically laying out what's out there from a proprietors perspective as well is open source perspective. And then the third piece was at the tail end of the white paper set of recommendations for the TOC or seen stuff in general. What do they do next? And basic came down to three different things. One was education. We want to be educate the community on what services when it's appropriate stuff like that. Two. What should wait? I'm sorry I'm getting somebody Thinks my head recommendations. What other projects we pull into the CNC f others other service projects, you know, getting encouraged in the joint to grow the community. And third, what should we do around improbability? Because obviously, when it comes to open source standards of stuff like that, we want in our ability, portability stuff like that and one of the low hang your food should be identified was, well, service seems to be all about events. So there's something inventing space we could do, and we recognize well, if we could help the processing of events as it moves from Point A to point B, that might help people in terms of middleware in terms of routing, of events, filtering events, stuff like that. And so that's how these convents project that started. Right? And so that's where most of service working group members are nowadays. Is cod events working or project, and they're basically divine, Eva said specification around cloud events, and you kind of think of it as defining metadata to add to your current events because we're not going to tell you. Oh, here's yet another one size fits all cloud of in format, right? It's Take your current events. Sprinkle a little extra metadata in there just to help routing. And that's really what it's all about. >> One of the first things people say about server list is quoted directly from the cover of Missing the Point magazine Server list Runs on servers. Wonderful. Thank you for your valuable contribution. 
Go away. Slightly less naive is, I think, an approach I've seen a couple of times so far at this conference when talking to people: they think of it in terms of functions as a service, of being able to take arbitrary code and run it. I have a wristwatch I can run arbitrary code on; that's not really the point. It's, I think you're right, it's talking more about the event model and what that unlocks as your application, more or less, starts to become more self-aware. Are you finding that acceptance of that viewpoint is taking time to take root? >> Yeah, I think what's interesting is, when we were first looking at serverless, I think a lot of people did think serverless equals functions as a service, and that's all it was. I think what we're finding now is people are more open to the idea of, as I think you're alluding to, merging these worlds. Because look at the functionality serverless offers, things like event-based, which really only means, is there a message coming in? It just happens to look like an event. Okay, fine. A message comes in, you auto-scale based upon, you know, load and stuff like that, scale down to zero, all these other things, all these features. Why should you limit those to serverless? Why not a PaaS platform? Why not containers as a service? Why would you want those just for one little as-a-service column? And so my goal with things like Knative, and I'm glad you mentioned it, is, I think it does try to span those, and I'm hoping it kind of merges them all together and says, look, I don't care what you call it, use this piece of technology because it does what you need to do. If you want to think of it as a PaaS, go for it, I don't care. This guy over here, he wants to think of it as a FaaS, great. It's the same piece of technology. Does the feature do what you need, yes or no? Ignore the terminology around it more than anything else. >> So, I agree. 
We had a good, great discussion with a user earlier, and he said, from a developer standpoint, I actually don't want to think too much about which one of these paths I go down. I want to reduce the friction for them and make it easy. So, you know, how does Knative help us move towards that, you know, ideal world? >> Right. And I think, in line with what I said earlier, one of the things I think Knative does, aside from trying to bridge all the various as-a-service columns, is, I also look at Knative as a simplification of Kubernetes. Because as much as everybody here loves Kubernetes, it is kind of complicated, right? It is not the easiest thing in the world to use, and it kind of forces you to be a Kubernetes expert, which almost goes against the direction we were headed when you think of Cloud Foundry and stuff like that, where it's like, hey, you don't worry about this stuff, just give us your code, right? Kubernetes says no, you've got to know about networking, Ingress, and all these other things. It's like, I'm sorry, isn't this going the wrong way? Well, Knative tries to back up a little and say, we'll give you all the features of Kubernetes, but in a simplified platform or API experience, similar to what you can get with Cloud Foundry, Docker and stuff, but it gives you all the benefits of Kubernetes. But the important thing is, if for some reason you need to go around Knative, because it's a little too simplified or opinionated, you can still go around it to get to the complicated stuff. And it's not like you're leaving for a different world or entering a different world, because it's the same infrastructure, and the stuff that you deploy on Knative can integrate very nicely with the stuff you deploy through vanilla Kubernetes if you have to. So it is really nice, merging these two worlds, and I'm really excited by that. 
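The "simplified platform or API experience" Doug describes is visible in how little a Knative Service manifest contains compared with the Deployment, Service, Ingress, and autoscaler objects you would wire up by hand in vanilla Kubernetes. A hedged sketch: the `serving.knative.dev/v1` schema shown is the one Knative converged on, and the app name and image are made up.

```python
def knative_service(name: str, image: str) -> dict:
    """Build the one object Knative asks for: a name and a container image.
    Knative derives the Deployment, Route, and autoscaling from this."""
    return {
        "apiVersion": "serving.knative.dev/v1",
        "kind": "Service",
        "metadata": {"name": name},
        "spec": {
            "template": {
                "spec": {"containers": [{"image": image}]}
            }
        },
    }

# Roughly the objects the same app needs when written by hand in vanilla Kubernetes:
vanilla_objects = ["Deployment", "Service", "Ingress", "HorizontalPodAutoscaler"]

manifest = knative_service("hello", "example.registry/hello:latest")
print(manifest["kind"], "replaces", len(vanilla_objects), "hand-written objects")
```

And, as Doug says, nothing stops you from deploying extra vanilla objects next to this one; it is the same cluster underneath.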
>> One thing that I've always found strange about serverless is, at first it was defined by what it's not, and then it quickly came to be defined almost by its constraints. If you take a look at public cloud offerings around this, most notably AWS Lambda, among many others, it comes down to, well, you can only run it for a fixed amount of time, or it only runs in certain runtimes, or cold starts become a problem. I think that taking a viewpoint from that perspective artificially hobbles what this might wind up unlocking down the road, just because these constraints move. And right now it might be a bit of a toy; I don't think it will stay that way, because it needs to become more capable. The big value proposition that I keep hearing around serverless, and I've mostly bought into, has been that it's about business logic and solving the things that are core to your business, and not even having to think about infrastructure. Where do you stand on that viewpoint? >> I completely agree. I think a lot of the limitations you see today are completely artificial. I kind of understand why they're there, because of the way things have progressed. But again, that's one reason I'm excited about Knative, because a lot of those limitations aren't there. Now, Knative does have its own set of limitations, and personally, I do want to try to remove those. Like I said, I would love it if Knative, aside from the serverless features it offers up, became this simplified Kubernetes experience. So if you think about what you can do with Kubernetes, right, you can deploy a pod and it can run forever, until the system decides to crash for some reason, right? Why not do that with Knative? And you can, today, with Knative. Technically, I have demos that I've been running here where I set the min scale to one, it lives forever, and Knative doesn't care, right? And so, deploying an application through Knative or Kubernetes, I don't care, it's the same thing to me. 
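The min-scale demo Doug mentions corresponds to a single annotation on the revision template, `autoscaling.knative.dev/minScale`: setting it to "1" keeps one replica alive forever, while omitting it (or setting "0") allows scale-to-zero. A sketch in Python over a plain manifest dict; the service contents are invented.

```python
def with_scale_bounds(service: dict, min_scale: int, max_scale=None) -> dict:
    """Pin Knative Pod Autoscaler bounds on a Service manifest (a plain dict).
    minScale=1 means the revision never scales to zero, as in the demo."""
    annotations = {"autoscaling.knative.dev/minScale": str(min_scale)}
    if max_scale is not None:
        annotations["autoscaling.knative.dev/maxScale"] = str(max_scale)
    # Annotations go on the revision template's metadata, not the Service itself.
    template = service.setdefault("spec", {}).setdefault("template", {})
    template.setdefault("metadata", {}).setdefault("annotations", {}).update(annotations)
    return service

svc = {"apiVersion": "serving.knative.dev/v1", "kind": "Service",
       "metadata": {"name": "hello"},
       "spec": {"template": {"spec": {"containers": [{"image": "example/app"}]}}}}
pinned = with_scale_bounds(svc, min_scale=1)
print(pinned["spec"]["template"]["metadata"]["annotations"])
# -> {'autoscaling.knative.dev/minScale': '1'}
```

This is the sense in which the scale-to-zero "constraint" is a knob rather than a property of serverless itself.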
And so, yes, I do want to merge those two worlds. I want to lower those constraints, as long as you keep it a simplified model and support the eighty to ninety percent of those use cases that it's actually meant to address. Leave the hard stuff for going around it a little. >> All right. So, Doug, you know, oftentimes, you know, we get caught in this bubble of arguing over, you know, what we call it, how the different pieces fit. Yesterday you had a Practitioner Summit for serverless. So what I want to hear is, you know, what's the practitioners' point of view? What are they excited about? What are they using today, and what are the things that they're asking for to help it become, you know, more usable and useful for them in the future? >> So, in full disclosure, we actually had kind of a quiet audience, so they weren't very vocal. But what little I did hear is that they seem very excited by Knative, and I think a lot of it was because of what we were just talking about, that sort of merging of the worlds. Because I do think there is still some confusion around, as you said, when you use one versus the other, and I think Knative is helping to bring those together. And I did hear some excitement around that. In terms of what people actually expect from us going forward, I don't know, to be honest; they didn't actually say a whole lot there. I have my own personal opinion, and a lot of it is what I've already stated in terms of merging: stop making me pick a technology or pick a terminology, right? Let me just pick the technology that gets my job done, and hopefully that one will solve a lot of my needs. But for the most part, I think it was really more about Knative than anything else yesterday.
And now we're starting to Syria to see with server lists where some of its most vocal proponents are also the most obnoxious in that they're looking at this from a perspective of what's your problem? I'm not even going to listen to the answer. The absolution is filling favorite technology here. So to that end today, what workloads air not appropriate for surveillance in your mind? >> Um, >> so this is hardly an answer because I have the IBM Army running through my head because what's interesting is I do hear people talk about service is good for this and not this or you can date. It is good for this and not this. And I hear those things, and I'm not sure I actually buy it right. I actually think that the only limitations that I've seen in terms of what you should not run on time like he needed or any of the platform is whatever that platform actually finds you, too. So, for example, on eight of us, they may have time limited in terms of how long you can run. If that's a problem for you, don't use it to me. That's not an artifact of service. That's artifact of that particular choice of how the implement service with K native they don't have that problem. You could let it run forever if you want. So in terms of what workloads or good or bad, I honestly I don't have a good answer for that because I don't necessary by some of the the stories I'm hearing, I personally think, try to run everything you can through something like Cain native, and then when it fails, go someplace else is the same story had when containers first came around. They would say, You know when to use BMS vs Containers. My go to answer was, always try containers first. Your life will be a whole lot easier when it doesn't work, then look at the other things because I don't want to. I don't want to try to pigeonhole something like surly or K native and say, Oh, don't even think about it for these things because it may actually worked just fine for you, right? 
I don't want people to believe negative hype, if that makes sense. >> And that's very fair. I tend to see most of the constraints around this as being implementation details of specific providers, and that will dictate the answers to that question. I don't want to sound like I'm coming after you; that was very thoughtful and measured. >> Well, thank you. That's the usual response back. >> So, I'll give you the tough one I had in Seattle. Okay, when I looked at Knative, there's a lot of serverless options out there, but when I talk to users, the number one out there is AWS Lambda, and number two is probably Azure Functions. And as of Seattle, neither of those was fully integrated. Since then, I talked to a little startup called TriggerMesh that has made some connections between Lambda and Knative, and there was an announcement a couple of weeks ago, KEDA, that's Azure and some kind of future to get to Knative. So it feels like it's a maturity thing. And, you know, what can you tell us about, you know, the big cloud guys? Obviously Google's involved, IBM, Red Hat, and, you know, Oracle are involved in Knative. So where do those big cloud players fit? >> Right. So, from my perspective, what I think Knative has going for it over the others is, one, a lot of the other guys do run on Kubernetes, but they sort of treat Kubernetes like everything else; some of them can run on Kubernetes, Docker, anything else, and so they're not necessarily tightly integrated with and leveraging the Kubernetes features the way Knative is doing. And I think that's a little bit unique right there. But the other thing that I think Knative has going for it is the community around it. I think people are noticing, as you said, there's a lot of other players out there, and it's hard for people to choose. And I think Google did a great job of sort of bringing the community together and saying, look, can we stop bickering and develop a sort of common infrastructure, like Kubernetes is, that we can all then base our serverless platforms on? And I think that rallying cry to bring the community together across a common base is something a little bit unique for Knative when you compare it with the others. I think that's a big draw for people, at least from my perspective. I know it is from IBM's as well, because community is a big thing for us, 
I think Google did a great job of sort of bringing the community together and saying, look, can we stop bickering and develop a sort of common infrastructure, like Kubernetes is, that we can all then base our serverless platforms on. And I think that rallying cry to bring the community together across a common base is something a little bit unique for Knative. When you compare it with the others, I think that's a big draw for people, at least from my perspective. I know it is from IBM's as well, because community is a big thing for us, obviously. >> Okay, so will there be a bridge to those other cloud players soon? Is that on the roadmap for Knative itself? >> Yeah, I am not sure I can answer that one, because I'm not sure I've heard a lot of talk about bridging per se. I know that when you talk about things like getting events from other platforms and stuff, obviously, through the eventing side of Knative, we do. From a serving perspective, I'm not sure that holds water, to be honest. >> All right, well, Doug Davis, we're done for this one. Really appreciate all the updates there, and I definitely look forward to seeing the progress that the serverless working group continues to make, so thank you so much. >> Thank you for having me. >> All right, for Corey Quinn, I'm Stu, and we'll be back with more coverage here on the Cube. Thanks for watching.
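To make the runtime-limit point above concrete, here is a hedged sketch of a Knative Serving manifest: the workload is just a container behind an autoscaled Service, so any execution-time cap comes from the platform's configuration rather than from the serverless model itself. The service name and sample image below are placeholders, and the exact apiVersion depends on the Knative release in use.

```yaml
# Illustrative Knative Serving Service; apiVersion varies by Knative release.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                # placeholder service name
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go   # sample image
          env:
            - name: TARGET
              value: "world"
```

Applied with kubectl, Knative creates a revision and a route and scales the container with demand; a provider could still impose a timeout on top, which is exactly the kind of implementation detail discussed in the interview.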

Published Date : May 21 2019


Shalu Chadha, Accenture & Kathleen Natriello, Bristol-Myers Squibb | AWS Executive Summit 2018


 

>> Live from Las Vegas, it's theCube, covering the AWS Accenture Executive Summit. Brought to you by Accenture. >> Welcome back everyone to theCube's live coverage of the AWS Executive Summit. I'm your host, Rebecca Knight. And I'm joined by Kathleen Natriello. She is the vice president and the head of IT, digital design at Bristol Myers Squibb. And Shalu Chadha, senior technology services lead at Accenture. Thank you so much for coming on theCube. >> Sure. >> Thank you for having us. >> So we're going to talk about Bristol Myers Squibb's journey to the cloud today. Bristol Myers Squibb is a household name, but I would love you to just start out, Kathleen, by telling our viewers a little bit about Bristol Myers Squibb. Just how big a global pharma company you are. >> Sure. We're a global company, as you said. We have about 23,000 employees all over the world. And we're very focused on our immuno-oncology therapies. And the way that they work is that they boost the immune system to fight cancer. So it's a really exciting development that we've had over the years. >> And so what was it, sort of, in the trajectory of Bristol Myers Squibb, that made you realize, as an organization, we need to do things differently? What challenges were you facing? >> So, we're very science focused in terms of developing treatments for our patients. And so our highest priority was our scientists' productivity. And so we started our cloud journey about 10 years ago. And our initial focus was on leveraging burst computing in AWS, which enabled us to spin up enough capacity for our scientists to do research with very large volumes of data. That's one of the things about biopharma. We use very large volumes for genomics research. >> And also, with this partnership, using AWS, you also partner with Accenture. So, can you describe a little bit, Shalu, how the partnership evolved? >> Right.
And so that journey that Kathy mentioned, we've been part of that journey for the last two years now. And I think it's this nice partnership between AWS, BMS, and Accenture. And the teams have come along with a lot of quick successes and early successes. And I think, going forward, the focus is really that the business is going to look for a lot more demand and agility. Cloud adoption is going to be key in how we actually expand on that. And I know we're talking amongst us to say, how do we get there faster now? >> A little less conversation, a little more action please. >> Yes. (inaudible speech and laughter) >> Exactly. So, let's talk about this journey. So you're not only migrating existing applications, you're also building your own applications. >> Yes. >> What's, sort of, the wisdom behind that strategy? >> A couple of things. So I mentioned earlier that we started our journey with our scientists, and we've continued, because that's where AWS really delivers significant value for Bristol Myers Squibb. So, what we have done is implemented several AWS cloud services that enable our scientists to use machine learning, artificial intelligence, a lot of computational approaches and simulations that significantly reduce the amount of time it takes them to do an experiment, as well as the cost. Because they no longer have to use actual physical material, or patients, or investigators. They can do it all through simulation and modeling, which is exciting. >> So, I mean, we all know that the drug discovery process takes a long time, and it's tedious, cumbersome. So can you actually bring it back down to earth a little bit and say, what have you seen? What are your scientists seeing, in terms of how the drug discovery process is going? >> Yeah. Our scientists are our biggest advocates of the cloud and the capabilities it delivers.
And they will report back to us that they are doing things with machine learning and artificial intelligence, with these simulations, in a few hours that used to take them weeks and months. And so that's how it's really shortening that cycle. >> And are the patients feeling the benefits yet, too? >> The patients will feel the benefits with our focus on clinical trials. And so, being able to speed up a clinical trial is very helpful, both from the patient experience, as well as the investigators'. >> Shalu, can you talk about some of the other innovation and automation capabilities? >> Yeah. So, BMS is really on this exciting journey, and now that they've, like Kathy said, extended some of those capabilities, and are actually building and enabling for the scientists, the commercial side, the brand sites, it's now about, really, what do you do next and how you bring that next wave of innovation. And so, what's been nice at Bristol Myers Squibb, and the partnership we have with Accenture here, is really looking at taking some of the learnings we had in the back office, in finance and procurement, where we've actually brought a lot of process efficiency through our bots, and bringing those learnings across in many other different ways. And now we have bots across legal, compliance, and moving into the clinical area with adverse events. And we're looking at really that part, which is how do you actually get quicker with how the patients are going to see both responses to the adverse events, as well as how do you actually accelerate the clinical trial process. And all of those innovations are really possible with what Kathy has set up in her organization, actually having that digital acceleration competency and being able to take this across the enterprise. >> One of the things that's so interesting about these partnerships is how you work together. >> (in unison) Yes.
>> And is it that you're focusing on the science and Accenture is thinking about the technology? I mean, are you, sort of, two different groups? Or how are you coming together to collaborate and build a relationship? >> I really see it as three groups. So it's Bristol Myers Squibb that's focused on the science as well as the technology. And if I take an example of how that partnership works: when we were doing our migration to the cloud, the more aggressive plan that we have in place right now, Amazon partnered with us on a migration readiness program. And that enabled us to move as many as 400-plus workloads into the cloud and to other locations. And then Accenture partnered with us, as well, to actually move the applications and migrate them to the cloud and the two other locations. So, I really see it as a three-way partnership. And one of the reasons it's so successful is it's not just BMS partnering with Accenture, and BMS partnering with Amazon, but it's Amazon and Accenture partnering together. And they would come up with ideas on, here's what we think will make BMS even more successful. >> And how is that? Is it because you were really grasping their business challenges? Or, I mean, how are you able to come up with that? You're not a life science person. >> Right. >> So how are you doing that? >> It's a good question, and I think when I reflect on what I experience with other clients, I think what's making us so tremendously successful here is that everything is interest-based. And it's about how we start the conversation, with the patient at the center. And then it's about whose interests we are serving. Let's be clear, and let's try to dig into what's the solution that actually meets that need. So, whether it's the cloud cumulus work Kathy mentioned, or even the SAP S/4 journey right now, it's the combination of AWS, BMS, and Accenture in that journey of how we're going to solve this together.
Those critical and complex programs. >> Kathy, you said that scientists were some of your biggest advocates for going cloud native. I'm curious about the rest of the workforce. I mean, sometimes introducing new technologies and new ways of doing things can cause consternation among your employees. >> Yeah, but in my organization, we bring a lot of change to the rest of the company. And you're right, sometimes it's well received. But I think when it is well received is when, across the company, they can see the productivity gain with our robotic process automation. With a digital workforce, people are able to get a lot more done. And so there is acceptance of that. And very often, the business functions are the ones that introduce the new technologies, because they're really interested in it and curious. So it works out well. >> So they're getting more done. >> Yes. >> So then they're more satisfied with their work and life. >> Yes. >> Exactly. So tell our viewers a little bit more about what's next for this partnership, for this relationship, in terms of new technologies, in terms of what you hope to be able to accomplish in the years to come. >> So, I can start. I really think that what is next for us is to move a little faster. So, in our cloud journey, as I mentioned, we started 10 years ago, and then we've built on what we've learned. So, as an example, we put our commercial data warehouse into Amazon Redshift. And then that laid the foundation for us to do, for example, rapid data labs. We started by building some data lakes in HR and R&D. And then, by the time we got to doing that for manufacturing, we did it serverless. And so we've had a nice progression based on learning and going the next step. But I think we're to the point where the technology's evolving so quickly, we can move a lot faster and get the benefits faster. So for me, that's what I view as what's next. >> Shalu, anything? >> Yeah.
I would just add that I think analytics sit at the core. I think there is such a strong foundation set here that now it's about how we are going to extrapolate from there, and really look at bots and machine learning and what that could do for us. And we will take a lot from what we've learned here today about actually evolving that journey. And I think the best part is that the foundation is set strong. And now it's about accelerating into those specific business areas as well. So I would say analytics, and really extending our machine learning capabilities. >> So move faster, analytics, machine learning. Great. So we're going to be talking about it at next year's summit. Well, Kathy and Shalu, thank you so much for coming on theCube. This was a lot of fun. >> Yes, it was. >> (in unison) Thank you. >> I'm Rebecca Knight. We will have more of theCube's live coverage of the AWS Executive Summit coming up in just a little bit.

Published Date : Nov 30 2018


Stephan Fabel, Canonical | OpenStack Summit 2018


 

(upbeat music) >> Announcer: Live from Vancouver, Canada. It's The Cube covering OpenStack Summit, North America, 2018. Brought to you by Red Hat, The OpenStack Foundation, and its ecosystem partners. >> Welcome back to The Cube's coverage of OpenStack Summit 2018 in Vancouver. I'm Stu Miniman, with cohost of the week, John Troyer. Happy to welcome back to the program Stephan Fabel, who is the Director of Ubuntu product and development at Canonical. Great to see you. >> Yeah, great to be here, thank you for having me. >> Alright, so, boy, there's so much going on at this show. Doing more things and in more places is the theme that the OpenStack Foundation put into place, and we had a great conversation with Mark Shuttleworth, and we're going to dig in a little bit deeper in some of the areas with you. >> Stephan: Okay, absolutely. >> So here on the Cube, we're going to get into all of the Kubernetes, Kubeflow, and all those other things that we'll mispronounce as we go. >> Stephan: Yes, yes, absolutely. >> What's your impression of the show, first of all? >> Well, I think there's really a consolidation going on, right? I mean, we really have the people who are serious about open infrastructure here, serious about OpenStack. They're serious about Kubernetes. They want to implement, and they want to implement at a speed that fits the agility of their business. They want to really move quickly with the upstream release. I think the time for enterprise-hardening delays and inertia there is over. I think people are really looking at the core of OpenStack: it's mature, it's stable, it's time for us to kind of move, get going, get success early, get it soon, then grow. I think most of the enterprises, most of the customers we talk to, adopt that notion.
One of the things that sometimes helps is to lay out the stack a little bit here, because we actually commented that some of the base infrastructure pieces we're not talking as much about, because they're kind of mature. But OpenStack is very much at the infrastructure level: your compute, storage, and network. Then when we start doing things like Kubernetes as well, I can either run it alongside, or on top of, and things like that. So give us your view as to how you'd lay it out, what Canonical's seeing, and how customers lay out that stack. >> I think you're right, I think there's a little bit of path-finding here that needs to be done on the Kubernetes side, but ultimately, I think it's going to really converge around OpenStack being operator-centric and operator-friendly: working and operating the infrastructure, scaling that out in a meaningful manner, providing multitenancy to all the different departments. Kubernetes is developer-centric and really helps to on-board and accelerate the workload adoption of the next-gen initiatives, right? So, what we see is absolutely a use case for Kubernetes and OpenStack to work perfectly well together, be an extension of each other, possibly also sit next to each other, without being too encumbering there. But I think that ultimately, having something like Kubernetes' container-based developer APIs providing that orchestration layer is the next thing, and they run just perfectly fine on Canonical OpenStack. >> Yeah, there certainly has been a lot of talk about that here at the show. Let's see, let's go a level above that, to things we run on Kubernetes. I wanted to talk a little bit about ML and AI and Kubeflow. It seems like, I'd almost say, if this were a movie, we're in a sequel like AI 5: This Time, It's Real.
I really do see real enterprise applications incorporating these technologies into the workflow for what otherwise might be kind of boring, you know, line-of-business work. Can you talk a little bit about where we are in this evolution? >> You mean, John, only since we've been talking about it since the mid-1800s, so yeah. >> I was just about to point that out, I mean, AI's not new, right? We've seen it for about 60 years. It's been around for quite some time. I think that there is an unprecedented amount of sponsorship of new startups in this area, in this space, and there's a reason why this is heating up. I think the reason why is that we're talking about a scale that's unprecedented, right? We thought the biggest problem we had with devices was going to be the IP addresses running out, and it turns out that's not true at all, right? At a certain scale, and at a certain distributed nature of your rollout, you're going to have to deal with just such complexity and interaction between the underlying layers, the under-cloud, the over-cloud, the infrastructure, the developers. How do I roll this out? If I spin up 1000 VMs over here, why am I experiencing dropped calls over there? It's those types of things that need to be correlated. They need to be identified, they need to be worked out, so there's a whole operator angle just to be able to cope with that whole scenario. I think there are projects out there that are trying to ultimately address that, for example, Acumos. (mumbles) Then, there are, of course, the new applications, right? Smart cities, connected cars, all those car manufacturers who are, right now, faced with the problem: how do I deal with a mobile, distributed inference rollout on the edge while still capturing the data continually, train my model, update, then again distribute out to the edge to get a better experience? How do I catch up to some of the market leaders here that are out there?
As the established car manufacturers come and catch up, and put more and more autonomous miles on the asphalt, we're going to basically have to deal with a whole lot more productization of machine-learning applications that just have to be managed at scale. And so we believe, and we're in good company in that belief, that when you have to manage large applications at scale, containers and Kubernetes are a great way to do that, right? They did that for web apps. They did that for the next generation of applications. This is one example where, with the right operators in mind, the right CRDs, the right frameworks on top of Kubernetes managed correctly, you are actually in a great position to just go to market with that. >> I wonder if you might have a customer example that might walk us through kind of where they are in this discussion. You talk to many companies, and, you know, the whole IoT piece, we were early in this. So what's actually real today? How much is planning? Is it years we're talking before some of these really come to fruition? >> So yeah, I can't name a customer, but I can say that every single car manufacturer we're talking to is absolutely interested in solving the operational problem of running machine-learning frameworks as a service: making sure those are up and running and up to speed at any given point in time, spinning them up in a multitenant fashion, making sure that the GPU enablement is actually done properly at all layers of the virtualization. These are real operational challenges that they're facing today, and they're looking to solve them with us. Pick any large car manufacturer you want. >> John: Nice. We're getting down to something that I can type on my own keyboard then, and go to GitHub, right? That's one of the places to go. Where TensorFlow, the machine-learning framework, runs on Kubernetes is Kubeflow, and you talked about that a little bit yesterday on stage. You want to talk about that maybe? >> Oh, absolutely, yes.
That's the core of our current strategy right now. We're looking at Kubeflow as one of the key enablers of machine-learning frameworks as a service on top of Kubernetes, and I think it's a great example because it can really show how that as-a-service model can be implemented on top of a virtualization platform, whether that be pure KVM, bare metal, or OpenStack, and actually provide machine-learning frameworks such as TensorFlow, PyTorch, Seldon Core. You have all those frameworks being supported, and then you can basically start mixing and matching. I think ultimately it's so interesting to us because the data scientists are really not the ones that are expected to manage all this, right? Yet they are at the core of having to interact with it. In the next generation of workloads, we're talking to PhDs and data scientists that have no interest whatsoever in understanding how all of this works on the back end, right? They just want to know, this is where I'm going to submit the artifact that I'm creating, and this is how it works in general. Companies pay them a lot of money to do just that, to just do the model, because until the right model is found, that is exactly where the value is. >> So Stephan, does Canonical go talk to the data scientists, or is there a class of operators who are facilitating the data scientists? >> Yes, we talk to the data scientists to understand their problems, we talk to the operators to understand their problems, and then we work with partners such as Google to try and find solutions to that. >> Great, what kind of conversations are you having here at the show? I can't imagine there are too many of those; great to hear if there are. I think everybody here knows containers, fewer know Kubernetes, so how far up the stack of building new stuff are they?
>> You'd be surprised, I mean, we put this out there, and so far, I want to say the majority of the customer conversations we've had took an AI turn and said, this is what we're trying to do next year, this is what we're trying to do later in the year, this is what we're currently struggling with. So, glad you have an approach, because otherwise we would spend a ton of time thinking about this, a ton of time trying to solve this in our own way, and that then gets us stuck in some deep end that we don't want to be in. So, help us understand this, help us pave the way. >> John: Nice, nice. I don't want to leave without also talking about MicroK8s. That's a Kubernetes snap, a single quick download. Can we talk a little bit about that? >> Yeah, glad to. This was an idea that we conceived that came out of this notion of, alright, if I do have a data scientist, where does he start? >> Stu: Does Kubernetes have a learning curve today? >> It does, yeah, it does. So here's the thing: as a developer, what options do you have right when you get started? You can either go out and get Kubernetes stood up on one of the public clouds, but what if you're on a plane, right? You don't have a connection, and you want to work on your local laptop. Possibly, that laptop also has a GPU, and you're a data scientist and you want to try this out, because you know you're going to submit this training job to a (mumbles) that runs on-prem behind the firewall with a limited training set, right? This is the situation we're talking about. So ultimately, the motivation for creating MicroK8s was that we wanted to make this very, very equivalent. Now you can deploy Kubeflow on top of MicroK8s today, and it'll run just fine. You get your TensorBoard, you have your Jupyter notebook, and you can do your work, and you can do it in a fashion that will then be compatible with your on-prem and public machine-learning frameworks.
So that was the original motivation for why we went down this road, but then we noticed, you know what, this is actually a wider need. People are thinking about local Kubernetes in many different ways. There are a couple of solutions out there. They tend to be cumbersome, or more cumbersome than developers would like. So we actually said, you know, maybe we should turn this into a more general-purpose solution. Hence, MicroK8s. It works as a snap on your machine: you kick that off, you have the Kubernetes API, and in under 30 seconds, or a little longer if your download speed is a factor, you enable DNS and you're good to go. >> Stephan, I just want to give you the opportunity: is there anything in the Queens release that your customers have been specifically waiting for, or any other product announcements, before we wrap? >> Sure, we're very excited about the Queens release. We think the Queens release is one of the great examples of the maturity of the code base, and really the nod towards the operator. And that, I think, was the big challenge back in the olden days of OpenStack: it took a long time for the operators to be heard and to establish that conversation. We're glad to see that OpenStack Queens has matured in that respect, and we like things like Octavia. We're very excited about (mumbles) as a service taking on a life of its own and being treated as a first-class citizen. I think it was a great decision of the community to go down that road. We're supporting it as part of our distribution. >> Alright, well, appreciate the update. Really fascinating to hear how everybody's thinking about it and really starting to move on all the ML and AI stuff. Alright, for John Troyer, I'm Stu Miniman. Lots more coverage here from OpenStack Summit 2018 in Vancouver. Thanks for watching The Cube. (upbeat music)
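As a hedged illustration of the snap workflow described above (exact channels, add-on names, and flags vary by MicroK8s release, so treat this as a sketch rather than the 2018-era CLI verbatim):

```shell
# Install MicroK8s as a snap (illustrative; flags and channels vary by release)
sudo snap install microk8s --classic

# Wait until the local Kubernetes API is ready
sudo microk8s status --wait-ready

# Enable the DNS add-on, as mentioned in the interview
sudo microk8s enable dns

# Talk to the cluster through the bundled kubectl
sudo microk8s kubectl get nodes
```

From there, the same manifests can target MicroK8s locally and an on-prem or public cluster, which is the equivalence Stephan describes.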

Published Date : May 22 2018


Brian Biles, Datrium | VMworld 2015


 

>> It's The Cube, covering VMworld 2015, brought to you by VMware and its ecosystem sponsors. And now your host, Dave Vellante. >> Welcome back to Moscone Center, everybody. This is The Cube, SiliconANGLE's continuous production of VMworld 2015. Brian Biles is here; he's the CEO and co-founder of Datrium, Brian of course from Data Domain fame. David Floyer and I are really excited to see you. Thanks for coming on The Cube. >> Great to see you guys again. >> So, a while coming out of stealth. It's been a while, and you've been busy: you did the Data Domain work, were at EMC for a while, kind of disappeared, got really busy again, and here you are. >> Yeah, new hats, new books. >> So tell us about Datrium. >> Nice ties, by the way. >> Well, we're big on ties on the East Coast. >> He's even more east than I am, even though he's out in California. >> But yeah, tell us about Datrium: fundamentally different from other kinds of storage, different kind of founding team. >> So I was a founder of Data Domain, and Hugo Patterson, the CTO there and an EMC fellow, became CTO for us. When we left EMC we weren't sure what we were going to do. We ended up running into two VMware principal engineers who had been there 10 or 12 years working on all kinds of stuff, and they believed there was a market gap in scalable storage for VMs. So we got together: we knew something about storage, they knew something about VMs, and three years later Datrium is at its first trade show. >> So talk more about that. That happens all the time, right: alpha geeks (no offense to the term, it's a term of endearment; sorry, I'm a marketing guy) get together, identify these problems, and they're able to sniff them out at the root level. Can you describe that problem in detail? >> Sure. Broadly, there are two kinds of storage: there are arrays and, emerging, there's hyper-converged, and they approach things in a very
different way. In arrays, there tends to be a bottleneck in the controller, the electronics that do the data services: the RAID, the snapshotting and cloning, compression, dedupe, whatever. Increasingly that takes more and more compute, so Intel is helping every year, but it's still a bottleneck, and when you run out it's a cliff: you have to do a pretty expensive upgrade or migrate the data to a different place, which is sticky and takes a long time. In reaction, hyper-converged has emerged as an alternative. It has the benefit of killing the array completely, but it may have over-corrected, so it has some trade-offs a lot of people don't like. For example, if a host goes down, the host has assumed all the data-management problems an array used to have, so you have to migrate or rebuild the data just to service the host. And it doesn't fit very cleanly with, for example, a blade server, which has one or two drive bays, while in a hyper-converged model the average number of capacity drives across the floor is four or five, not to mention the cache drives; for a blade server it's just not a fit. So there are a lot of parts of the industry where that model is just not the right model. And if everybody is writing to everybody, there's a lot of noisy-neighbor effect, and it gets kind of weird to troubleshoot and tune. Arrays are better in some respects; things change with hyper-converged, a little different. We're trying to create a third path. In our model there's a box that we sell, a 2U rackmount with a bunch of drives for capacity, but the capacity is just for at-rest data: it's where all the writes go, it's where persistence goes. We move all the data-service processing, the CPU for RAID, for compression, for dedupe, whatever, to host cycles. We upload software to an ESX host, it uses anybody's x86 server, and you bring your own flash for caching. So,
Gartner did a thing at the end of the year where they looked at discounted street price for flash: the difference between what you pay for flash on a server, just a commodity SSD, and what you pay in an array was something like an 8x difference. And since we don't put RAID on the host (all the RAID is in the back end), that frees up another twenty percent or so, and you end up with an order-of-magnitude difference in pricing. So the flash you get from us on a host isn't aimed at caching ten percent of your active data: after dedupe and compression, it gets close to a hundred dollars a terabyte on server flash. It's cheap and plentiful, so you put all your data up there and everything runs out of flash locally; a read never takes a network hit. We do read caching locally, and unlike hyper-converged we don't spread data in a pool across the hosts, so we're not interrupting every host with reads and writes for somebody else: everything is local. When you do a write, it goes to our box at the end of the wire, 10-gig attached, but all the compute operations are local, so you're not interrupting everybody, and any resourcing you'd do for an I/O problem is local resourcing, either cores or flash. So it's a different model, and it's really well suited to blade servers; no one else was doing that in such a clean way. And unlike a cache-only product, it's completely organically designed for manageability: you don't have a separate tier to manage on the host, separate from an array, where you're probably duplicating provisioning and worrying about how to do an array snapshot when you have to flush the cache on the host. It's all designed from the ground up. That means the storage we write to is minimal cost: you don't have the compute overhead of a controller, and you don't have the flash there, which is really expensive; that's just cycles on the host.
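The cost argument here can be made concrete with back-of-the-envelope arithmetic. The ~8x array markup and the later-quoted 2-6x data reduction come from the interview; the $400/TB commodity-SSD street price below is an assumed placeholder, not a quoted figure.

```python
def effective_cost_per_tb(raw_cost_per_tb, data_reduction):
    """$/TB of logical capacity after dedupe + compression."""
    return raw_cost_per_tb / data_reduction

commodity_ssd = 400.0            # assumed $/TB street price for a commodity server SSD
array_flash = commodity_ssd * 8  # the ~8x array/server price gap cited from Gartner

# With ~4x data reduction, server flash approaches the "hundred dollars a
# terabyte" of effective capacity mentioned above; array flash stays far higher.
print(effective_cost_per_tb(commodity_ssd, 4.0))
print(effective_cost_per_tb(array_flash, 4.0))
```

The exact dollar figures don't matter; the point is that dividing a cheap raw price by the data-reduction ratio is what makes "put all your data in host flash" economical.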
Everything is done along the most efficient path for both data and hardware. >> So if you look at designs in general, flash is either a cache, or it's 100% flash, or it's a tier of storage. If I understand correctly, there isn't any tiering here, because you've got a hundred percent of the data in flash? >> Yeah. We use flash on the host as a cache, but only in, and I use that word guardedly, the initial degenerate case: it's all of the data. It's a cache in the spirit that if the host dies you haven't lost any data; the data is always safe somewhere else. But it's all the data. >> So on the disks at the back end, I presume you're writing sequentially all the time, with log files, using the disk in the most effective way? >> That's right, on both sides: the flash is log-structured and the disk is log-structured. And we had the advantage of Data Domain, which was the most popular log-structured file system ever; we learned all the tricks about dedupe and garbage collection a long time ago, so that CTO team is uniquely qualified to get this right. >> So what about when it does go down? Are you clustering it? What happens when it goes down and you have to recover from those disk drives? That could take a bit of time. >> There are two sides to that. If a host fails, you use VM HA to restart the VM somewhere else, and life goes on. If the back end fails, it fails the way a traditional mid-range array might fail: we have dual controllers, so it fails over; all the disks are dual-attached; there are dual networks on each controller; so services fail over. It's RAID-6, so there's a rebuild if a disk fails, but you could lose two of those and keep going. >> But the point I was getting at is that if you fail on the host, you've lost all your active data. >> To be precise, we've lost the cache copy in that
local flash, but you haven't lost any data. >> You've lost it only from a standpoint of speed. >> Yeah. At that point, if the host is down, you have to restart the VM somewhere else. That's not instant, it takes a number of minutes, and that gives us some time to upload data to that host. The data is all laid out in our system not for interactive use on the disk drives but for very fast upload to a cache: it's all laid out sequentially, unblended per VM, for blasting out. >> So what do you see as the key applications this is particularly suited for? >> Our back-end system has about 30 terabytes usable after RAID and everything, and with dedupe and compression, figure 2x, 4x, 6x data reduction, call it 100 terabytes-ish; it depends on mileage. A 100-terabyte box is kind of a mid-range-class array, and it will sell mostly to those markets. Our software supports only VM storage, virtual disks, so as long as it meets those criteria it's pretty flexible. Each host can have up to eight terabytes of raw flash; post-dedupe and compression that could be 50 terabytes of effective flash capacity per host. And reads never leave the host, so you get no network overhead for reads, which is usually two-thirds of most people's I/O. So it's enormously price- and cost-effective, and very performant as well. >> Right, the latency side. And your IP is the way you lay out the data on the media, is that part of it? >> Well, it's two custom file systems from scratch, one in the host, not to mention all the management to make it look like there's one thing. There's a lot going on; it's a much more complex project than Data Domain was. >> So you mentioned you learned from your log-structured-file garbage-collection days at Data Domain, but the problem that you're
solving here is much closer to the host, with much more active data. So was that, obviously a challenge, part of the new invention required, or did the earlier work carry over directly? >> It's at all levels; we had to make it fit. We're very VM-centric. The software looks to ESX as though it's an NFS share, but NFS terminates in each host, and then we use our own protocol to get across 10 gig to the back end. That gives us some special effects we'll be able to talk about over time; it's an intentional design in some ways. So you get to see every VM's storage discretely; before VVols, there was NFS, which is what's supported in 5.5, so this was a logical choice. Everything's VM-centric: the management just looks like a big pool of storage, and everything else is per VM, from diagnostics to capacity planning to whatever; clones are per VM. You don't have to spend a lot of analytics to back out what the block LUNs look like with respect to the VMs and try to look it up and figure it out; that's all there is. >> I've been talking to a lot of flash people, and this is almost flash-only in the sense that all of the I/O is going to that flash, once flash is sufficiently cheap and abundant. >> Yes, and we write to NVRAM, which is the same as an all-flash array. >> One of the things we've noticed is that people find they have to organize things completely differently, particularly as they're trying to share data. For example, instead of having the production system, then a separate copy for each application developer, and another separate copy for the data warehouse, they're trying to combine those and share the data with snapshots of one sort or another, to amortize their very high costs, just because it's much faster and
quicker. The customers are doing this, and I think some vendors don't even know it's going on. Because they can share it, you don't have to move the data, and it lets the developers have a more current copy of the data, so they can work on near-production data. I was wondering whether that's an area you're looking at, to again apply a different way of doing storage, the test/dev use case, say. >> Testing, or data warehousing, or whatever: we're certainly sensitive to the overhead of having a lot of copies; that's why we built in dedupe and so on. So it's very efficient, and it allows you to do that. For example, if you're doing a clone, it's a dedupe clone: it gives you a new namespace entry and keeps the writes separate, but it lets the common data, the data with commonality across other versions, stay consistent and shared. >> We've got to wrap, but in the time we have remaining, a quick update on the company: headcount, funding, investors? >> Sure. We've raised Series A and B, about 55 million so far, from NEA and Lightspeed plus some angels: Frank Slootman, Kai Li, Diane Greene, the original founder of VMware, and Ed Bugnion, who was the original CTO. We're a little over 70 people, and this is our first trade show. >> Great. Well, congratulations, Brian. It's really awesome to see you back, not just having been in action but now in visible action. >> It's great to be here. Thanks very much for coming on. >> Congrats. Everybody, we'll be back right after this. This is The Cube, live from VMworld 2015; right back.
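The dedupe-clone behavior Biles describes, a clone is a new namespace entry sharing its parent's data, with writes kept separate per clone, can be sketched in a few lines. This is a toy model for illustration only, not Datrium's implementation; all the class and method names are made up.

```python
class Store:
    """Toy volume store: a clone shares blocks with its parent until written."""

    def __init__(self):
        self.volumes = {}  # volume name -> {offset: block}

    def write(self, name, offset, block):
        self.volumes.setdefault(name, {})[offset] = block

    def clone(self, src, dst):
        # New namespace entry: copy only the offset->block mapping.
        # Block contents are shared with the parent, not duplicated.
        self.volumes[dst] = dict(self.volumes[src])

    def read(self, name, offset):
        return self.volumes[name].get(offset)

s = Store()
s.write("prod", 0, b"base")
s.clone("prod", "dev")       # "dev" sees prod's existing data at zero copy cost
s.write("dev", 1, b"patch")  # private to the clone
print(s.read("dev", 0))      # shared block, visible in the clone
print(s.read("prod", 1))     # clone's write is not visible to the parent
```

This is why a developer copy of near-production data is cheap: the clone only pays for the blocks it changes.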

Published Date : Sep 1 2015

**Summary and Sentiment Analysis are not shown because of an improper transcript**

ENTITIES

Entity | Category | Confidence
Hugo Patterson | PERSON | 0.99+
100% | QUANTITY | 0.99+
Brian | PERSON | 0.99+
California | LOCATION | 0.99+
100 terabytes | QUANTITY | 0.99+
10 gig | QUANTITY | 0.99+
Brian Biles | PERSON | 0.99+
10 | QUANTITY | 0.99+
BMC | ORGANIZATION | 0.99+
50 terabytes | QUANTITY | 0.99+
one | QUANTITY | 0.99+
100 terabyte | QUANTITY | 0.99+
Kylie Diane Greene | PERSON | 0.99+
twenty percent | QUANTITY | 0.99+
Ed Boon | PERSON | 0.99+
12 years | QUANTITY | 0.99+
VMware | ORGANIZATION | 0.99+
ten percent | QUANTITY | 0.99+
two kinds | QUANTITY | 0.99+
two | QUANTITY | 0.99+
two-thirds | QUANTITY | 0.99+
Brian biles | PERSON | 0.99+
five | QUANTITY | 0.99+
two sides | QUANTITY | 0.98+
Gartner | ORGANIZATION | 0.98+
ESX | TITLE | 0.98+
three years later | DATE | 0.98+
both sides | QUANTITY | 0.98+
each controller | QUANTITY | 0.98+
DMC | ORGANIZATION | 0.98+
about 55 million | QUANTITY | 0.98+
dave vellante | PERSON | 0.97+
Series A | OTHER | 0.97+
each application | QUANTITY | 0.97+
third path | QUANTITY | 0.97+
each host | QUANTITY | 0.96+
Datrium | ORGANIZATION | 0.96+
8x | QUANTITY | 0.96+
four | QUANTITY | 0.96+
first trade show | QUANTITY | 0.95+
10 gig | QUANTITY | 0.95+
CTO | ORGANIZATION | 0.94+
one thing | QUANTITY | 0.94+
hundred percent | QUANTITY | 0.93+
about 30 terabytes | QUANTITY | 0.91+
up to eight terabytes | QUANTITY | 0.88+
first trade show | QUANTITY | 0.88+
over 70 people | QUANTITY | 0.86+
lot of copies | QUANTITY | 0.85+
x86 | OTHER | 0.83+
BMS | TITLE | 0.83+
both data | QUANTITY | 0.82+
year | DATE | 0.81+
things | QUANTITY | 0.79+
vmworld | EVENT | 0.79+
East Coast | LOCATION | 0.75+
a hundred dollars a terabyte | QUANTITY | 0.74+
two drive | QUANTITY | 0.71+
one of the | QUANTITY | 0.71+
lot of parts | QUANTITY | 0.7+
Gavin | TITLE | 0.69+
end of | DATE | 0.69+
David floor | PERSON | 0.69+
Dean | PERSON | 0.69+
vmworld | ORGANIZATION | 0.68+
once | QUANTITY | 0.68+
lot | QUANTITY | 0.68+
VMworld 2015 | EVENT | 0.68+
Intel | ORGANIZATION | 0.65+
2015 | DATE | 0.64+
every year | QUANTITY | 0.61+
CTO | PERSON | 0.58+
angels | TITLE | 0.56+
VMS | TITLE | 0.56+
2 4 6 | QUANTITY | 0.56+
reham | ORGANIZATION | 0.46+
moscone | LOCATION | 0.44+
Frank | TITLE | 0.42+
daydream | ORGANIZATION | 0.4+
center | ORGANIZATION | 0.35+