Search Results for GCB:

Danny Allan & Brian Schwartz | VeeamON 2021


 

>>Hi, Lisa Martin here with theCUBE's coverage of VeeamON 2021. I've got two alumni joining me. Please welcome back to theCUBE Danny Allan, Veeam's CTO. Danny, it's great to see you.

>>I am delighted to be here, Lisa.

>>Excellent. Brian Schwartz is here as well, Google director of outbound product management. Brian, welcome back to the program.

>>Thanks for having me again. Excited to be here.

>>Excited to be here. Yes, definitely. We're going to be talking all about what Veeam and Google are doing today. But let's go ahead and start with you, Danny. Veeam's vision is to be the number one trusted provider of backup and recovery solutions for modern data protection. Unpack that for me. Trust is absolutely critical, but when you're talking about modern data protection to your customers, what does that mean?

>>Yeah. So I always tell our customers there are three things in there that are really important. Trust is obviously number one, and Google knows this — you've been the most trusted search provider forever. We have 400,000 customers, and we need to make sure that our products work. We need to make sure they do data protection, but we need to do it in a modern way. So it's not just backup and recovery — that's clearly important — it's also all of the automation and orchestration to move workloads across infrastructures, to move them from on-premises to Google Cloud, for example. It also includes things like governance and compliance, because we're faced with ransomware, malware and security threats. So modern data protection is far more than just backup. It's the automation, it's the monitoring, it's the governance and compliance, it's the ability to move workloads. Everything that we look at within our platform, we focus on all of those different characteristics to make sure that it works for our customers.

>>One of the things that we've seen in the last year, Danny, is a big uptick in ransomware — obviously the one that everyone is most familiar with right now is Colonial Pipeline. Talk to me about some of the things that the team has seen, what your 400,000 customers have seen, in the last 12 months of such a dynamic market: a massive shift to work from home, to supporting SaaS workloads, and things like that. What have you seen?

>>Well, certainly with employees working from home there's a massive increase in the attack surface for organizations, because now, instead of having three offices, they have hundreds of locations for their end users. So it's all about protecting their data. At the same time, there's been this explosion in malware and ransomware attacks. So we really see customers focusing on three different areas. The first is making sure that when they take a copy of their data, it is actually secure — and we can get into immutability and keeping things offline — but really taking the data and making sure it's secure. The second thing we see customers doing is monitoring their environment. This is both inspection of the compute environment and of the data itself, because when ransomware hits, for example, you'll see change rates on data explode. So secure your data, monitor the environment. And then lastly, make sure that you can recover intelligently, let's say, because the last thing that you want to do if you're hit by ransomware is to bring the ransomware back online from a backup. So we call it secure, monitor, restore — we really see customers focusing on those three areas.

>>And that restoration is critical there, because as we know these days, it's not if we get hit with ransomware, it's really a matter of when. Let's go ahead now and go into the Google partnership. Danny, talk to me about it from your perspective — the history, the strength of the partnership, all that good stuff.

>>Yeah. So we have a very deep and long relationship with Google on a number of different areas. For example, we have 400,000 customers — where do they send their backups? Most customers don't want to continue to invest in storage solutions on their premises, so they'll send their data from on-premises and tier it into Google Cloud Storage. That's one integration point. The second is when they're running workloads within the cloud. This is now cloud native: if you're running on top of Google Cloud Platform, we are inside the Google Cloud Marketplace and we can protect those workloads. A third area is around Google Cloud VMware Engine — there are customers that have a hybrid model where they have some capacity on-premises and some in Google using the VMware infrastructure, and we support that as well. And then a fourth, and perhaps the longest running: Google is synonymous with containers and especially Kubernetes — they were very instrumental in the foundations of Kubernetes — and our K10 product, which does data protection for Kubernetes, is also in the Google Cloud Marketplace. So a very long and deep relationship with them, and it's to the benefit of our customers.

>>Absolutely. And I think I just saw the other day that Google celebrated the search engine's 15th birthday. I thought, what did we do 16 years ago when we couldn't just find anything we wanted? Brian, talk to me about it from Google's perspective — the Veeam partnership.

>>Yeah, so as Danny mentioned, it's really multifaceted. It really starts with the hybrid scenario — there are still a lot of customers that are on their journey into the cloud, and protecting those on-premises workloads, and in some senses even using Veeam's capabilities to move data and help migrate into the cloud, is a great part of the relationship. But as Danny mentioned, increasingly more and more primary applications are running in the cloud, and the ability to protect those and have the great features and capabilities that Veeam provides — whether it be for GCVE, our VMware capability in Google Cloud, or things like GKE, our Kubernetes offering, which Danny mentioned. We've been deep and wide in Kubernetes — we really birthed it many years ago and have a huge, successful business in managing and hosting containers — so having those capabilities added on top really adds to our ecosystem. We're super excited about the partnership, and we're happy to have this great foundation to build on together into the future.

>>And Danny, Veeam launched just back in February, a couple of months ago, Veeam Backup for Google Cloud Platform. Talk to us about that technology and what you're announcing at VeeamON this year.

>>Yeah, sure. So back in February we released the first version of the Veeam Backup for GCP product in the marketplace, and that's really intended to protect, of course, IaaS — infrastructure-as-a-service — workloads running on top of GCP, and it's been very, very successful. It has integration with the core platform, and what I mean by that is if you do a backup in GCP, you can copy that backup on-premises and vice versa. So it has a light integration at the data level. What we're about to release later this summer is version two of that product, which has a deep integration with the Veeam platform via what we call the Veeam Service Platform APIs themselves. That allows a rich, bidirectional interaction between the two products, so you can do not just day-1 operations but also day-2 operations: you can update the software, you can harmonize schedules between on-premises and the cloud. It really allows customers to be more successful in a hybrid model where they're moving from on-premises to the cloud.
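To make the hybrid workflow Danny describes a bit more concrete, here is a minimal, hypothetical sketch of the kind of GCP-native call a backup service drives under the hood when it protects a Compute Engine disk on a schedule. It uses the public Compute Engine API rather than Veeam's own Service Platform APIs (which are not documented here), and the project, zone, disk and snapshot names are placeholders.

```python
# Hypothetical sketch: triggering a native Compute Engine disk snapshot,
# the kind of GCP-side operation a backup product orchestrates on a policy.
# Assumes Application Default Credentials and the google-api-python-client package.
from googleapiclient import discovery

PROJECT = "my-project"        # placeholder
ZONE = "us-central1-a"        # placeholder
DISK = "prod-vm-boot-disk"    # placeholder

compute = discovery.build("compute", "v1")

# Kick off the snapshot; Compute Engine returns a zonal operation we can poll.
op = compute.disks().createSnapshot(
    project=PROJECT,
    zone=ZONE,
    disk=DISK,
    body={"name": f"{DISK}-daily-snap"},
).execute()

# Wait for the operation to finish before recording the restore point.
compute.zoneOperations().wait(
    project=PROJECT, zone=ZONE, operation=op["name"]
).execute()
print("Snapshot created:", f"{DISK}-daily-snap")
```

In practice, a product like the one described above layers policy, scheduling, retention and the portable backup format on top of primitive calls like this one.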
>>And that seems to be really critically important. As we talk about hybrid cloud all the time, customers are in hybrid. They're living in the hybrid cloud for many reasons, whether it's acquisition or just the nature of lines of business leveraging their cloud vendor of choice. So being able to support the hybrid cloud environment for customers and ensure that that data is recoverable is table stakes these days. Does that give them an advantage over your competition, Danny?

>>It does, absolutely. Customers want the hybrid cloud experience. What we find over time is that they do trend towards the cloud, there's no question. So if you have the hybrid experience — if they're sending their data there, for example, as step one — step two, of course, is to move the workload into the cloud, and then in step three they really start to be able to unleash their data. If you think about what Google is known for, they have incredible capabilities around machine learning and artificial intelligence, and they've been doing that for a very long time. So you can imagine customers, after they start putting their data there and putting their workloads there, wanting to unlock it and leverage the insights from the data that they're storing, and that's what's really exciting about where we're going. It's early days for most customers — they're still kind of moving and transitioning into the cloud — but if you think of the capabilities that are unlocked with that massive platform in Google, it just opens up the ability to address big challenges of today, like climate change and sustainability and all the healthcare challenges that we're faced with. It really is an exciting time to be partnered with Google.

>>Brian, let's dig into the infrastructure and the architecture from your perspective. Help us unpack that and what customers are coming to you for help with.

>>Yeah. So Danny mentioned the prowess that Google has with data and analytics and AI — I think we're pretty well known for that, and there's a tremendous opportunity for people in the future. The thing that people get right out of the box is access to the technology that we built to build Google Cloud itself. Just the scale and technology — it's incredible. It's a fact that we have eight products here at Google that have a billion users — most people know Search and Maps and Gmail and all these things. When you have that kind of infrastructure, you build a platform like Google Cloud Platform, and the network is a perfect example: the network endpoints are actually close to your house. There's a reason our technology is so fast — you get onto the Google private network someplace really close to where you actually live. We have thousands and thousands of points of presence spread around the world, and from that point forward you're riding on our internal network, so you get better quality of service. The other thing I like to mention is Google Cloud Storage, which Veeam writes to — it's built on our object storage, the same technology that underpins YouTube and other things that most people are familiar with. Just think about that for a minute: you can find the most obscure YouTube video and it's going to load really fast. You're not going to sit there waiting for two minutes for something to load, and that same underlying technology underpins GCS. So when you go back to an old restore point to do a restore, it's going to load fast even if you're on one of the more inexpensive storage classes. It's a really nice experience for data protection. It has these global network properties — you can restore to a different region if there were ever a disaster — there's just the scale of our foundation of infrastructure. And also, Danny mentioned it: we're super proud of the investments that Google has made in sustainability. Our cloud runs on 100% renewable energy, and at our scale that's a lot of green energy. We're happy to be one of the largest consumers of green energy out there and to make continued investments in sustainability. We think we have some of the greenest data centers in the world, and it's just one more benefit that people have when they come to run on Google Cloud.

>>I don't know what any of us would do without Google, Google Cloud Platform or Google Cloud Storage. You just mentioned all of the enterprise things as well as the at-home ones — I've got to find this really crazy, obscure YouTube video — but as demanding customers as we are, we want things ASAP, and it's the same thing if an employee can't find a file or a calendar has been deleted or whatnot. Let's finish our time here with some joint customer use case examples. Let's talk about backing up on-prem workloads to Google Cloud Storage using existing Veeam licensing. Danny, tell us about that.

>>Yeah. So one of the things we've introduced at Veeam is Veeam Universal Licensing, and it's a completely portable license. You can be running your workloads on-premises now on a physical system, then you can make that portable to go to a virtual system, and then if you want to go to the cloud, you can send that workload up to the cloud. One of the neat things about this transition for customers, from a storage perspective, is that we don't charge for it. If you're backing up a physical system and sending your backup on-premises, we don't charge for that; if you want to move to the cloud, we don't charge for that. So as they go through this there's a predictability — and customers want that predictability so much that it's a big differentiating factor for us. They don't want to be surprised by a bill. We just make it simple and seamless: they have a single licensing model, and it's future-proof as they move forward on their cloud journey. They don't have to change anything.

>>Tell me what you mean by future-proof. As a marketer I know that term very well, but it means different things to different people. So for Veeam's customers, in the context of the expansion of the partnership with Google, the opportunities and the choices that you're giving your customers — what does future-proof actually deliver to them?

>>It means that they're not locked into where they are today. Think about a customer right now that's running a workload on-premises, maybe because they have to — they need to be close to the data that's being generated or feeding into that application system — and maybe they're locked into that on-premises model. Now they have one of two choices when their hardware gets to end of life. They can either buy more hardware, which locks them into where they are today for the next three or four years, or they can say, you know what, I don't want to lock into that — I want a model, a license, that is portable, so that maybe 12 months from now, 18 months from now, I can move to the cloud. So it future-proofs them: it doesn't give them another reason to stay on-premises. It gives them flexibility — licensing is taken off the table because it moves with you — so there's zero thought or consideration that locks them into where they are today. And that's exciting, because it unlocks the capabilities of the cloud without being handicapped, if you will, by what you have on-premises.

>>Excellent. Let's go to the second use case: lift and shift, and that portability. Brian, talk to us about it from your perspective.

>>Yeah, so we're obviously constantly in discussions with our customers about moving more applications to the cloud, and there are really two different kinds of approaches: lift and shift, and modernization. Do you want to change and run on Kubernetes when you come to the cloud, as you move it in? In some cases people want to do that, or they're obviously going to build a new application in the cloud. But increasingly we see a lot of customers wanting to do lift and shift — they want to move into the cloud relatively quickly. As Danny said, there are compelling events like hardware refreshes, and in many cases we've had a number of customers come to us and say, look, we're going to exit our data centers. We did a big announcement with Nokia — they're going to exit 50 data centers in the coming years around the world and just move that into the cloud. In many cases you want to lift and shift that application and do the migration with as little change as possible. That's one of the reasons we've really invested in a lot of classic enterprise support technologies, and also why we're super excited to have a really wide set of partners and an ecosystem like the folks here at Veeam. Customers can preserve those technologies, preserve the operational experience that they're already familiar with on-prem, and use that in the cloud. It just makes it easier for them to move to the cloud faster without having to rebuild as much stuff on the way in.

>>And that's critical. Let's talk about one more use case, and that is native protection of workloads that run on GCP. Danny, what are you enabling customers to do there?

>>Well, we actually merged the capabilities of two different things. One is we leverage the native APIs of GCP to take a snapshot, and we merge that with our ability to put it in a portable data format. Now, why is that important? Because you want to use the native capabilities of GCP — you want to leverage those native snapshots. The fastest way to recover a file, or the fastest way to recover a VM, is from the GCP snapshot. However, if you want to take a copy of that and move it into another locale, or pull it back on-premises for compliance reasons, or put it in a long-term storage format, you probably want to put it in GCS or in our portable storage format. So we merge those two capabilities, the snapshot and the backup, into a single product. In addition to that — again, I talked about predictability — we tell customers what that policy is going to cost them. If, for example, a customer said, well, I like the idea of doing my backups in the cloud but I want to store them on-premises, we'll tell them: if you're copying that data continually, here's what the network charges look like, what the CPU and compute charges look like, what the storage costs look like. So we give them a forecast of what the cost model looks like even before they do a single backup.
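Danny's point about forecasting a policy's cost before the first backup runs comes down to simple arithmetic over a few unit prices. Below is a minimal, illustrative sketch of that kind of estimate; the rates are made-up placeholders, not Google's or Veeam's actual pricing, and a real forecast would pull region- and storage-class-specific prices.

```python
# Illustrative cost forecast for a policy that snapshots in GCP and copies
# changed data to an on-premises repository. All rates are placeholders.
def forecast_monthly_cost(protected_gb, daily_change_rate, retention_days,
                          snapshot_rate_gb=0.03,   # $/GB-month, assumed
                          egress_rate_gb=0.08,     # $/GB, assumed
                          compute_hours=10, compute_rate_hr=0.05):
    changed_gb_per_day = protected_gb * daily_change_rate
    # Snapshot storage: a full copy plus incremental changes kept for retention.
    snapshot_gb = protected_gb + changed_gb_per_day * retention_days
    snapshot_cost = snapshot_gb * snapshot_rate_gb
    # Network egress: daily changed data copied out of the cloud each month.
    egress_cost = changed_gb_per_day * 30 * egress_rate_gb
    # Worker/compute time used to run the backup jobs.
    compute_cost = compute_hours * compute_rate_hr
    return round(snapshot_cost + egress_cost + compute_cost, 2)

# Example: 2 TB protected, 2% daily change, 14-day retention.
print(forecast_monthly_cost(2048, 0.02, 14))
```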
>>That forecasting has got to be key. As you said, with so many things going on in this world that we can't predict — the last year has taught us that, with the massive shift and the acceleration of digital business and digital transformation — it's really critical that customers have an idea of what their costs are going to be, so that they can make adjustments and be as agile as they need the technology to be. Last question, Brian, is for you. Give us a view, for me and all the VeeamON attendees, of what we can expect from the partnership in the next 12 months.

>>You know, we're excited about the foundation of the partnership across hybrid and in cloud, for both VMs and containers. I think this is the real beginning of a long-standing relationship, and it's really about a marriage of technology. Think about all the great data protection and orchestration, all the things that Danny mentioned, married with the cloud foundation that we have at scale — this tremendous network. We just signed a deal with SpaceX in the last couple of days to hook their satellite network up to the Google Cloud network, chosen again because we have this foundational capability to push large amounts of data around the world. And for YouTube — we signed a deal with Univision, same type of thing, just massive media being pushed around the world. If you think about it, that same foundation is used for data protection. Data protection involves a lot of data, and moving large sets of data is hard. We have just this incredible prowess, and we're excited about the future of how our technology and Veeam's technology are going to evolve over time.

>>Veeam and Google — a marriage of technology. Guys, thank you so much for joining me and sharing what's new, the opportunities that Veeam and Google are jointly delivering to your customers. Lots of great stuff. We appreciate your time.

>>Thanks, Lisa.

>>For Danny Allan and Brian Schwartz, I'm Lisa Martin. You're watching theCUBE's coverage of VeeamON 2021.

Published Date : May 25 2021

Clayton Coleman, Red Hat | Red Hat Summit 2021 Virtual Experience


 

>>Welcome back to theCUBE's coverage of Red Hat Summit 2021 Virtual. We wish we were in person this year, but we're still remote — we've still got COVID coming around the corner, soon to be post-COVID. I've got a great guest here: Clayton Coleman, architect at Red Hat, a CUBE alum who's been on many times, with an expanded role again this year. More cloud, more cloud action. Great to see you — thanks for coming on.

>>It's a pleasure to be here.

>>So great to see you. We were just riffing before we came on camera about distributed computing and the future of the internet — how it's all evolving, how much fun it is, how it's all still changing while the game stays the same — all that good stuff. But here at Red Hat Summit, and we're going to get into that, I want to get into the hard news and the real big opportunities. You're announcing a new Red Hat managed cloud services portfolio — take us through that.

>>Sure. We're continuing to evolve our OpenShift managed offerings, which have grown now to include Red Hat OpenShift Service on AWS, to complement our Azure Red Hat OpenShift service. That means that, along with our partnership on IBM Cloud and OpenShift Dedicated on both AWS and GCP, we now have managed OpenShift on all of the major clouds. And along with that we are introducing what we see as the first step in a growing and evolving hybrid cloud ecosystem on top of OpenShift. There are many different ways to slice that, but it's about bringing capabilities on top of OpenShift, in multiple environments and multiple clouds, in ways that make developers and operations teams more productive — because at the heart of it, that's our goal for OpenShift and the broader open source ecosystem: do what makes all of us safer, more productive, and able to deliver business value.

>>Yeah, and that's a great stake you guys put in the ground — great messaging, great marketing, a great value proposition. I want to dig into it a little bit with you. You guys have, I think, the only native offering on all the clouds out there that I know of — is that true? It's not just that you support AWS, Azure, IBM and GCP, but native offerings.

>>We do not have a native offering on GCP — we offer the same service. And this is actually interesting as we've evolved our approach. When we talk about hybrid, hybrid is dealing with the realities of the computing world we live in: working with each of the major clouds, trying to deliver the best integration possible in a way that drives consistency across those environments. Our OpenShift Dedicated on AWS service actually gave us the inspiration and a lot of the basic foundations for what became the integrated native service, and we've worked with Amazon very closely to make sure that it does the right thing for customers who have chosen Amazon. Likewise, we're trying to continue to deliver the best experience and the best operational reliability that we can, so that the choice of where you run your cloud, where you run your applications, matches the decisions you've already made and where your future investments are going to be. We want to be where customers are, but we also want to give you the consistency that has been a hallmark of OpenShift since the beginning.

>>Yeah, and thanks for clarifying — I appreciate that, because the managed service on GCP is Dedicated rather than native. Let me ask about the application services, because Jeff Barr from AWS posted a few weeks ago that Amazon celebrated their 15th birthday — they're still teenagers, relatively speaking. But one comment he made that was interesting to me, and this applies to this cloud-native megatrend happening, is that he says the APIs are basically the same. And this brings up the hybrid environment. You guys have always been into the API side of management with the cloud services and supporting all that. As you look at this ecosystem in open source, what is the role of APIs and these integrations? Because without solid integration, all these services could break down, and certainly in open source more and more people are coding. So take me through how you look at these application services, because many people are predicting that services are going to be onboarding faster than ever before.

>>It's interesting. For us, working across multiple cloud environments, there are many similarities in those APIs, but for every similarity there is a difference, and those differences are actually what drive cost and complexity when you're integrating. And it's hard to talk about the role of an individual company in the computing ecosystem moving to cloud native, because as many of these capabilities are unlocked by large cloud providers and by transformations in the kinds of software that we run at scale, everybody is a participant in that. Then you look at the broad swath of the developer and operator ecosystem, and it's the communities of people who paper over those differences, who write runbooks and build the policies, the experience and the automation — not just in individual products or individual clouds, but across the open source ecosystem. Whether it's technologies like Ansible or Terraform, or best-practice sites around running Kubernetes, every part of the community is involved in driving up consistency, predictability and reliability, and what we try to do is work within those constraints to take the ecosystem and push it a little bit further. So the APIs may be similar, but over time those differences can trip you up. A lot of what we talk about — where the industry is going, where we want to be — is that everyone ultimately is going to own some responsibility for keeping their services running and making sure that their applications and their businesses are successful. The best outcome would be that the APIs are the same and they're open, and that the cloud providers, the open source ecosystem, and the vendors and partners who drive many of these open source communities are all working together to have the most consistent environment, to make portability a true strength. But when someone does differentiate and has a true best-of-breed service, we don't want to build artificial walls between those — that's hybrid cloud: you're going to make choices that make sense for you. If we tell people their choices don't work, or they can't integrate, or an open source project doesn't support this vendor or that vendor, we're actually leaving a lot of that complexity buried in those organizations. So I think this is a great time, as we turn the corner for cloud native, to look at how we drive those APIs closer together as much as possible — and the consistency underneath them is both a community and a vendor concern. For Red Hat, part of our core mission is making sure that that consistency is actually real, so you don't have to worry about those details.

>>That's a great point. Before I get into some architectural impact, I want to get your thoughts on a trend going on where everyone jumps on the bandwagon. You say, oh yeah, I want a data cloud — everything is the new thing. They saw Snowflake's IPO: I've got to have some of that data. You've got streaming data services, you've got data services native to these platforms. But a lot of these companies think you just get a data cloud — it's so easy. They might try something and then get stuck with it, or have to refactor. How do you look at that as an architect? When you have these new hot trends, like say a data cloud, how should customers be thinking about kicking the tires on services like that, and how should they think holistically about architecting for it?

>>There's a really interesting mindset here — we deal with this a lot. I've been with Red Hat for ten years now, on OpenShift for all ten of those years, and we've gone through a bunch of transformations. I've talked to the same companies and organizations over the last ten years, and at each point in their evolution they're making decisions that are the right decision at the time — they're choosing a new capability. Platform-as-a-service is a great example of a capability that allowed a lot of really large organizations to standardize; that ties into digital transformation. CI/CD is another big trend where it's an obvious win, but depending on when you jumped on the bandwagon, when you adopted, you're going to make a bunch of different trade-offs. And the process is: how do we improve the ability to keep all of the old stuff moving forward as well? Open APIs and open standards are a big part of that, but equally it's understanding the trade-offs you're going to make and clearly communicating them. So with data lakes, there were the first and second iterations. In the early days these capabilities were new; they were based around open source software — a lot of the Hadoop and big data ecosystem started from key papers from Amazon and Google and others, taking infrastructure ideas and bringing them to scale. We went through a whole evolution of that, and the input and output of that basically led us into the next phase, which I think is the second phase of the data lake: we have this data, and our tools are so much better because of that first phase, but the investments we made the first time around mean we're going to have to pay for another investment to make the transformation. So I never want to caution someone not to jump early, but it has to be the right jump, and it has to be something that really gives you a competitive advantage. For a lot of infrastructure technology, you should make the choices where you place one or two big bets — some people call this spending their innovation tokens — and you need to make those bets on big technologies that help you operate more effectively at scale. It is somewhat hard to predict. I'll certainly say that I've missed quite a few of the exciting transformations in the field, just because it wasn't always obvious they were going to pay off to the degree that customers would need.

>>So I've got to ask you about the real-time application side of it — that's been a big trend, certainly in cloud. But as you look at hybrid cloud environments, for instance, streaming data has been a big issue. Any updates there from you on your managed services?

>>That's right. We have three managed services that are closely aligned with data, in three different ways. One of them is Red Hat OpenShift Streams for Apache Kafka, which is a managed cloud service that focuses on bringing in that streaming data and letting you run it across multiple environments. And I think that gets to the heart of what the purpose of managed services is: to reduce operational overhead and take on responsibilities that allow users to focus on the things that actually matter for them. For us, managed OpenShift Streams is really about the flow of data between applications in different environments, whether that's from the edge to an on-premise data center, or from an on-premise data center to the cloud. Increasingly these services, which we run in the public cloud, have elements that run in the public cloud but also key elements that run close to where your applications are, and I think that bridge is really important for us — a key component of hybrid is connecting the different locations and different footprints. So for us the focus is really how we get data moving to the right place. That complements our API management service, which is an add-on for OpenShift Dedicated: once you've brought the data in and you need to expose it back out to other applications in the environment, you can build those applications on OpenShift and leverage the capabilities of OpenShift API Management to expose them more easily, both to end customers and to other applications. And then our third service is Red Hat OpenShift Data Science, an integration that makes it easy for data scientists, in a Kubernetes environment on OpenShift, to bring together the data, analyze it, and help route it as appropriate. So those three facets are pretty important for us. They can be used in many different ways, but that focus on the flow of data across these different environments is really a key part of our longer-term strategy.
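To make the streaming piece concrete, here is a minimal, hypothetical sketch of an application publishing events to a managed Kafka endpoint such as the OpenShift Streams service described above. The bootstrap server, credentials and topic name are placeholders, and the SASL settings are an assumption about how a managed broker is typically exposed — check the service's actual connection details.

```python
# Hypothetical producer for a managed Kafka endpoint. Endpoint, credentials
# and topic are placeholders; requires the kafka-python package.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="my-kafka-instance.example.com:443",  # placeholder
    security_protocol="SASL_SSL",         # assumed: managed brokers use TLS + SASL
    sasl_mechanism="PLAIN",               # assumed mechanism; a service may use OAUTHBEARER
    sasl_plain_username="client-id",      # placeholder credential
    sasl_plain_password="client-secret",  # placeholder credential
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# An edge application streaming readings toward the cloud.
producer.send("sensor-readings", {"device": "edge-42", "temp_c": 21.5})
producer.flush()
```

The same producer code runs unchanged whether the broker sits next to the workload on-premises or in a managed service in the public cloud, which is the consistency argument being made in the interview.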
>>You know, all the customer checkboxes are there — you mentioned them earlier. I'll just summarize what you said: obviously value, faster application velocity, time to value — those are the checkboxes Gartner and the analysts talk about, check those — plus lower complexity, you do the heavy lifting, all the cloud benefits. That's all cool; everyone kind of gets that, everyone who's been around cloud knows DevOps, and all those things come into play. Right now the innovation focus is on operations and day-2 operations, which is becoming much more specific. When people say, hey, I've done some lift and shift, I've done some greenfield born-in-the-cloud, now it's like, whoa, I haven't seen this stuff before as I start scaling. So this brings up that concept, and then you add in multi-cloud and hybrid cloud — you've got to have a unified experience. So these are the hot areas right now, this year. I would say day-2 operations has been around for a while, but this idea of unification around environments, fully distributed for developers, is huge. How do you architect for that? This is the number one question I get, and I tease it out when people are talking about their environments, their challenges and their opportunities. They're really trying to architect the foundation they're building to be future-proof. They don't want to get screwed over when they realize they made a decision without thinking about day-2 operations, or without thinking about the unified experience across clouds, environments and services. This is huge. What's your take on this?

>>This is probably one of the hardest questions I could get asked, which is: looking into the crystal ball, which aspects of today's environments are accidental complexity — really just a result of the slow accretion of technologies, because we all need to make bets when the time is right within the business — and which parts are essential? What are the fundamental hard problems? On the accidental-complexity side, for Red Hat it's really about that consistent environment through OpenShift: bringing capabilities, our connection to open source, and making sure there's an open ecosystem where community members, users and vendors can all work together to find solutions that work for them — because there's no way to solve for all of computing, it's just impossible. That is our development process, and that's what helps melt the accidental complexity away over time. On the essential-complexity side, data is tied to location; data has gravity. Data lakes are a great example: because data has gravity, the more data you bring together, the bigger the scale, the more specialized the tools you can invest in — I almost see that as specialization through centralization. There's a ton of centralization going on right now, at the same time as new technologies are making things easier and easier — whether that's large-scale automation with configuration management technologies, whether that's Kubernetes deployed in multiple sites and multiple locations, with OpenShift bringing consistency so that you can run the apps the same way. But even further than that, we're concentrating more of what would typically have been a specialist problem — something you'd build a one-off around in your organization. We're really getting to the point where pretty soon there is a technology or a service for everyone, so how do you get the data into that service and out? How do you secure it? How do you glue it together? Some people might call this the ultimate integration problem: we're going to have all of this stuff in all of these places — so what are the core concepts? Location, security, placement, topology, latency, where data resides, who's accessing that data. We think of these as the building blocks of where we're going next. So for us it's about making investments in how we make Kubernetes work better across lots of environments. I have a KubeCon talk coming up this KubeCon — it's really exciting for me to talk about where we're going with the evolution of Kubernetes, bringing the different pieces more closely together across multiple environments. Likewise, when we talk about our managed services, we've approached the strategy as: it's not just the service in isolation, it's how it connects to the other pieces. What can we learn in the community, in our services, working with users, that benefits that connectivity? I mentioned OpenShift Streams connecting up environments — we'd really like to improve how applications connect across disparate environments. That's a fundamental property: if you're going to have data in one geographic region and you need to move services closer to it, those services need to know, encode, and have that behavior to get closer to where the data is, whether it's one data lake or ten. We've got to have that flexibility in place, and so those abstractions are really —

>>— and to your point about the building blocks, you've got to factor in those building blocks, because you're going to need to understand the latency impact, which is going to impact how you handle the compute piece; all these things come into play. So again, if you're mindful of the building blocks, just as a cloud concept, then you're okay.

>>We hear this a lot. There are real challenges in the ecosystem. We see a lot of the problem of: I want to help someone automate and improve, but the more balkanized, the more spread out, the more individual the solutions in play, the harder it is for someone to bring their technology to bear to help solve the problem. So we're looking for ways that we can grease the skids, build the glue. I think open source works best when it's defining de facto solutions that everybody agrees on — that openness and easy access is a key property that makes de facto standards emerge from open source. What can we do to grow de facto standards around multi-cloud, application movement and application interconnect? It's already happening, and what can we do to accelerate it? That's it.

>>Well, I think you bring up a really good point. This is probably a follow-up, maybe a Clubhouse talk, or you guys will do a separate session on this. But I've been riffing on this idea of today's silos being tomorrow's components, or modules. Most people don't realize that these silos can be problematic if not thought through. So you have to kill the silos to bring in kind of an openness — if you're open, not closed, you can leverage a monolith. Today's monolithic app or full stack could be tomorrow's building block, unless you don't open up. So this is where an interesting design question comes in: it's okay to have pre-existing stuff if you're open about it, but if you stay siloed, you're going to get really stuck.

>>And there's going to be more and more pre-existing stuff. Even with the data lake — for every data lake there is a huge problem of how to get data into it, or how to take existing applications that came from the previous data lake. So there's a natural evolutionary process: let's focus on the mechanisms that actually move that data, that get the data flowing. I think we're still in the early phases of thinking about huge numbers of applications. Microservices are ten years old in the sense of being a fairly common industry talking point; before that we had service-oriented architecture. But the difference now is that we're encouraging and building systems where one developer, one team, might run several services. They might use three or four different SaaS vendors; they might depend on five or ten or fifteen cloud services. Those integration points make them easier, but it's a new opportunity for us to ask, well, what are the differences? To go back to the point: you can keep your silos, we just want to have great integration in and out of them.

>>Exactly — you don't have to break down the silos. So again, it's a tried and true formula: integration, interoperability, and abstracting away the complexity with some sort of new software abstraction layer. You bring that to play, and as long as you can work with that, you apply the new building blocks and you're good.

>>It sounds so simple, doesn't it? It does. And of course it'll take us ten years to get there, and after cloud native it'll be galactic native or something like that — there's always going to be a new concept we need to work in. I think the key concepts we're really going after are that everyone is trying to run resilient and reliable services, and the clouds give and the clouds take away: they give us the opportunity to have some of those building blocks, like the location of geographic hardware resources, but there will always be data that's spread, and you still have to apply those principles to the cloud to get the service guarantees that you need. I think there's a completely untapped area in helping software developers and software teams understand the actual availability and guarantees of the underlying environment. It's a property of the services you run with: if you're using a disk in a particular availability zone, that's a property of your application. There's a rich area that hasn't been mined yet of helping you understand your effective service-level goals — which of those can be met and which cannot. It doesn't make a lot of sense in a single-cluster, single-machine, single-location world, but the moment you start to talk about, well, I have my data lake — what are the ways my data lake can fail? How do we look at your complex web of interdependencies and say, clearly, if you lose this cloud provider you're going to lose not just the things you have running there, but these other dependencies too? There are a lot of next steps, and we're just learning what happens when a major cloud, or a region of a cloud, goes down for a day. You still have to design and work around those cases.

>>It's distributed computing. And again, I love the space reference — galactic cloud. You've got SpaceX — where's Cloud X? Space is the next frontier; you've got all kinds of action happening in space. Great space reference there. Clayton, great insight. Thanks for coming on — Clayton Coleman, architect at Red Hat. Clayton, thanks for coming on.

>>My pleasure.

>>Always a great chat, talking under the hood about what's going on in Red Hat's new managed cloud services portfolio. Again, the world's getting complex: abstract away the complexities with software, interoperate, integrate — that's the key formula with the cloud building blocks. I'm John Furrier with theCUBE. Thanks for watching.
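As a companion to the multi-cluster consistency thread above, here is a small, hypothetical sketch of what "run the same thing across environments" can look like from a developer's seat, using the Kubernetes Python client to apply one namespace definition to two clusters. The context names are placeholders for whatever kubeconfig contexts point at, say, an on-premises OpenShift cluster and a managed one in a public cloud.

```python
# Hypothetical sketch: apply the same namespace to two clusters selected by
# kubeconfig context. Context names are placeholders; assumes the `kubernetes`
# Python client and a kubeconfig that contains both contexts.
from kubernetes import client, config
from kubernetes.client.rest import ApiException

CONTEXTS = ["onprem-openshift", "gcp-openshift-dedicated"]  # placeholders

namespace = client.V1Namespace(
    metadata=client.V1ObjectMeta(name="payments", labels={"team": "payments"})
)

for ctx in CONTEXTS:
    config.load_kube_config(context=ctx)   # switch clusters by context
    core = client.CoreV1Api()
    try:
        core.create_namespace(namespace)
        print(f"{ctx}: namespace created")
    except ApiException as exc:
        # 409 means it already exists, which is fine for an idempotent apply.
        if exc.status != 409:
            raise
        print(f"{ctx}: namespace already present")
```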

Published Date : Apr 28 2021

Enterprise Data Automation | Crowdchat


 

>>from around the globe. It's the Cube with digital coverage of enterprise data automation, an event Siri's brought to you by Iot. Tahoe Welcome everybody to Enterprise Data Automation. Ah co created digital program on the Cube with support from my hotel. So my name is Dave Volante. And today we're using the hashtag data automated. You know, organizations. They really struggle to get more value out of their data, time to data driven insights that drive cost savings or new revenue opportunities. They simply take too long. So today we're gonna talk about how organizations can streamline their data operations through automation, machine intelligence and really simplifying data migrations to the cloud. We'll be talking to technologists, visionaries, hands on practitioners and experts that are not just talking about streamlining their data pipelines. They're actually doing it. So keep it right there. We'll be back shortly with a J ahora who's the CEO of Iot Tahoe to kick off the program. You're watching the Cube, the leader in digital global coverage. We're right back right after this short break. Innovation impact influence. Welcome to the Cube disruptors. Developers and practitioners learn from the voices of leaders who share their personal insights from the hottest digital events around the globe. Enjoy the best this community has to offer on the Cube, your global leader. High tech digital coverage from around the globe. It's the Cube with digital coverage of enterprise, data, automation and event. Siri's brought to you by Iot. Tahoe. Okay, we're back. Welcome back to Data Automated. A J ahora is CEO of I O ta ho, JJ. Good to see how things in London >>Thanks doing well. Things in, well, customers that I speak to on day in, day out that we partner with, um, they're busy adapting their businesses to serve their customers. It's very much a game of ensuring the week and serve our customers to help their customers. Um, you know, the adaptation that's happening here is, um, trying to be more agile. Got to be more flexible. Um, a lot of pressure on data, a lot of demand on data and to deliver more value to the business, too. So that customers, >>as I said, we've been talking about data ops a lot. The idea being Dev Ops applied to the data pipeline, But talk about enterprise data automation. What is it to you. And how is it different from data off >>Dev Ops, you know, has been great for breaking down those silos between different roles functions and bring people together to collaborate. Andi, you know, we definitely see that those tools, those methodologies, those processes, that kind of thinking, um, lending itself to data with data is exciting. We look to do is build on top of that when data automation, it's the it's the nuts and bolts of the the algorithms, the models behind machine learning that the functions. That's where we investors, our r and d on bringing that in to build on top of the the methods, the ways of thinking that break down those silos on injecting that automation into the business processes that are going to drive a business to serve its customers. It's, um, a layer beyond Dev ops data ops. They can get to that point where well, I think about it is is the automation behind new dimension. We've come a long way in the last few years. Boy is, we started out with automating some of those simple, um, to codify, um, I have a high impact on organization across the data a cost effective way house. 
There's data related tasks that classify data on and a lot of our original pattern certain people value that were built up is is very much around that >>love to get into the tech a little bit in terms of how it works. And I think we have a graphic here that gets into that a little bit. So, guys, if you bring that up, >>sure. I mean right there in the middle that the heart of what we do it is, you know, the intellectual property now that we've built up over time that takes from Hacha genius data sources. Your Oracle Relational database. Short your mainframe. It's a lay and increasingly AP eyes and devices that produce data and that creates the ability to automatically discover that data. Classify that data after it's classified. Them have the ability to form relationships across those different source systems, silos, different lines of business. And once we've automated that that we can start to do some cool things that just puts of contact and meaning around that data. So it's moving it now from bringing data driven on increasingly where we have really smile, right people in our customer organizations you want I do some of those advanced knowledge tasks data scientists and ah, yeah, quants in some of the banks that we work with, the the onus is on, then, putting everything we've done there with automation, pacifying it, relationship, understanding that equality, the policies that you can apply to that data. I'm putting it in context once you've got the ability to power. Okay, a professional is using data, um, to be able to put that data and contacts and search across the entire enterprise estate. Then then they can start to do some exciting things and piece together the the tapestry that fabric across that different system could be crm air P system such as s AP and some of the newer brown databases that we work with. Snowflake is a great well, if I look back maybe five years ago, we had prevalence of daily technologies at the cutting edge. Those are converging to some of the cloud platforms that we work with Google and AWS and I think very much is, as you said it, those manual attempts to try and grasp. But it is such a complex challenges scale quickly runs out of steam because once, once you've got your hat, once you've got your fingers on the details Oh, um, what's what's in your data state? It's changed, You know, you've onboard a new customer. You signed up a new partner. Um, customer has, you know, adopted a new product that you just Lawrence and there that that slew of data keeps coming. So it's keeping pace with that. The only answer really is is some form of automation >>you're working with AWS. You're working with Google, You got red hat. IBM is as partners. What is attracting those folks to your ecosystem and give us your thoughts on the importance of ecosystem? >>That's fundamental. So, I mean, when I caimans where you tell here is the CEO of one of the, um, trends that I wanted us CIO to be part of was being open, having an open architecture allowed one thing that was close to my heart, which is as a CEO, um, a c i o where you go, a budget vision on and you've already made investments into your organization, and some of those are pretty long term bets. They should be going out 5 10 years, sometimes with the CRM system training up your people, getting everybody working together around a common business platform. 
What I wanted to ensure is that we could openly, using the APIs that were available, leverage the investment and the cost that has already gone into managing an organization's IT, and serve the business users too. So part of the reason why we've been able to be successful with partners like Google and AWS, and increasingly a number of technology players, Red Hat, MongoDB is another one where we're doing a lot of good work, and Snowflake as well, is that those investments have been made by the organizations that are our customers, and we want to make sure we're adding to that, so they're leveraging the value they've already committed to. >> Yeah, and maybe you could give us some examples of the ROI and the business impact. >> Yeah, the ROI, David, is built upon the three things that I mentioned; it's a combination of them. You're leveraging the existing investment and the existing estate, whether that's on Microsoft Azure or AWS or Google or IBM, and putting that to work, because the customers that we work with have made those choices. On top of that, it's ensuring that we've got the automation working right down to the level of the data, at a column level or a file level. We don't just deal with metadata; we're very specific, down to the most granular level. So as we run our processes and the automation, the classification, tagging, and applying policies across the different compliance and regulatory needs an organization has for its data, everything that happens downstream from that is ready to serve a business outcome. We can run those processes within hours of getting started, build that picture, visualize it, and bring it to life. The ROI right off the bat is finding data that should have been deleted, data that was a copy, and being able to allow the architect, whether we're working on GCP or a migration to any other cloud such as AWS or a multi-cloud landscape, to map that out right away. >> Ajay, thanks so much for coming on theCUBE and sharing your insights and your experience. It's great to have you. >> Thank you, David. Look forward to speaking again. >> Now we want to bring in the customer perspective. We have a great conversation with Paul Damico, senior vice president of data architecture at Webster Bank. So keep it right there. >> Io-Tahoe: data automated. Improve efficiency, drive down costs, and make your enterprise data work for you. We're on a mission to enable our customers to automate the management of data to realize maximum strategic and operational benefits. We envisage a world where data users consume accurate, up-to-date, unified data distilled from many silos to deliver transformational outcomes. Activate your data and avoid manual processing. Accelerate data projects by enabling non-IT resources and data experts to consolidate, categorize, and master data. Automate your data operations. Power digital transformations by automating a significant portion of data management through human-guided machine learning. Get value from the start. Increase the velocity of business outcomes with complete, accurate data curated automatically for data visualization tools and analytic insights. Improve the security and quality of your data. Data automation improves security by reducing the number of individuals who have access to sensitive data, and it can improve quality; many companies report double-digit error reduction in data entry and other repetitive tasks.
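To make the discovery and classification idea above concrete, here is a minimal, purely illustrative sketch of rule-based column tagging. It assumes nothing about Io-Tahoe's actual product; the pattern names, sample table, and 80 percent threshold are hypothetical choices for demonstration only.

```python
# Illustrative sketch only: a toy, rule-based column classifier in the spirit of the
# automated discovery and tagging described above. The rules, names, and thresholds
# are hypothetical examples, not any vendor's actual product logic.
import re

# Hypothetical patterns for a few sensitive data types
PATTERNS = {
    "email":        re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
    "us_ssn":       re.compile(r"^\d{3}-\d{2}-\d{4}$"),
    "phone_number": re.compile(r"^\+?[\d\s\-()]{7,15}$"),
}

def classify_column(values, threshold=0.8):
    """Tag a column with the first sensitive type that most of its sampled values match."""
    sample = [v for v in values if v not in (None, "")]
    if not sample:
        return "empty"
    for tag, pattern in PATTERNS.items():
        hits = sum(1 for v in sample if pattern.match(str(v).strip()))
        if hits / len(sample) >= threshold:
            return tag
    return "unclassified"

# Toy "table": column name -> sampled values (hypothetical data)
table = {
    "contact": ["ann@example.com", "bob@example.com", "carol@example.com"],
    "notes":   ["called 2020-01-02", "left voicemail", ""],
    "ssn":     ["123-45-6789", "987-65-4321", "555-12-3456"],
}

for column, values in table.items():
    print(column, "->", classify_column(values))
# Prints: contact -> email, notes -> unclassified, ssn -> us_ssn
```

In practice a profiler would sample columns straight from the source systems and combine many more signals (column names, types, statistics, trained models) than the handful of regexes shown here.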
Trust the way data works for you. Data automation by our Tahoe learns as it works and can ornament business user behavior. It learns from exception handling and scales up or down is needed to prevent system or application overloads or crashes. It also allows for innate knowledge to be socialized rather than individualized. No longer will your companies struggle when the employee who knows how this report is done, retires or takes another job, the work continues on without the need for detailed information transfer. Continue supporting the digital shift. Perhaps most importantly, data automation allows companies to begin making moves towards a broader, more aspirational transformation, but on a small scale but is easy to implement and manage and delivers quick wins. Digital is the buzzword of the day, but many companies recognized that it is a complex strategy requires time and investment. Once you get started with data automation, the digital transformation initiated and leaders and employees alike become more eager to invest time and effort in a broader digital transformational agenda. Yeah, >>everybody, we're back. And this is Dave Volante, and we're covering the whole notion of automating data in the Enterprise. And I'm really excited to have Paul Damico here. She's a senior vice president of enterprise Data Architecture at Webster Bank. Good to see you. Thanks for coming on. >>Nice to see you too. Yes. >>So let's let's start with Let's start with Webster Bank. You guys are kind of a regional. I think New York, New England, uh, leave headquartered out of Connecticut, but tell us a little bit about the >>bank. Yeah, Webster Bank is regional, Boston. And that again in New York, Um, very focused on in Westchester and Fairfield County. Um, they're a really highly rated bank regional bank for this area. They, um, hold, um, quite a few awards for the area for being supportive for the community. And, um, are really moving forward. Technology lives. Currently, today we have, ah, a small group that is just working toward moving into a more futuristic, more data driven data warehouse. That's our first item. And then the other item is to drive new revenue by anticipating what customers do when they go to the bank or when they log into there to be able to give them the best offer. The only way to do that is you have timely, accurate, complete data on the customer and what's really a great value on off something to offer that >>at the top level, what were some of what are some of the key business drivers there catalyzing your desire for change >>the ability to give the customer what they need at the time when they need it? And what I mean by that is that we have, um, customer interactions and multiple weights, right? And I want to be able for the customer, too. Walk into a bank, um, or online and see the same the same format and being able to have the same feel, the same look and also to be able to offer them the next best offer for them. >>Part of it is really the cycle time, the end end cycle, time that you're pressing. And then there's if I understand it, residual benefits that are pretty substantial from a revenue opportunity >>exactly. It's drive new customers, Teoh new opportunities. It's enhanced the risk, and it's to optimize the banking process and then obviously, to create new business. Um, and the only way we're going to be able to do that is that we have the ability to look at the data right when the customer walks in the door or right when they open up their app. 
>>Do you see the potential to increase the data sources and hence the quality of the data? Or is that sort of premature? >>Oh, no. Um, exactly. Right. So right now we ingest a lot of flat files and from our mainframe type of runnin system that we've had for quite a few years. But now that we're moving to the cloud and off Prem and on France, you know, moving off Prem into, like, an s three bucket Where that data king, we can process that data and get that data faster by using real time tools to move that data into a place where, like, snowflake Good, um, utilize that data or we can give it out to our market. The data scientists are out in the lines of business right now, which is great, cause I think that's where data science belongs. We should give them on, and that's what we're working towards now is giving them more self service, giving them the ability to access the data in a more robust way. And it's a single source of truth. So they're not pulling the data down into their own like tableau dashboards and then pushing the data back out. I have eight engineers, data architects, they database administrators, right, um, and then data traditional data forwarding people, Um, and because some customers that I have that our business customers lines of business, they want to just subscribe to a report. They don't want to go out and do any data science work. Um, and we still have to provide that. So we still want to provide them some kind of read regiment that they wake up in the morning and they open up their email. And there's the report that they just drive, um, which is great. And it works out really well. And one of the things. This is why we purchase I o waas. I would have the ability to give the lines of business the ability to do search within the data, and we read the data flows and data redundancy and things like that and help me cleanup the data and also, um, to give it to the data. Analysts who say All right, they just asked me. They want this certain report and it used to take Okay, well, we're gonna four weeks, we're going to go. We're gonna look at the data, and then we'll come back and tell you what we dio. But now with Iot Tahoe, they're able to look at the data and then, in one or two days of being able to go back and say, Yes, we have data. This is where it is. This is where we found that this is the data flows that we've found also, which is what I call it is the birth of a column. It's where the calm was created and where it went live as a teenager. And then it went to, you know, die very archive. >>In researching Iot Tahoe, it seems like one of the strengths of their platform is the ability to visualize data the data structure, and actually dig into it. But also see it, um, and that speeds things up and gives everybody additional confidence. And then the other pieces essentially infusing ai or machine intelligence into the data pipeline is really how you're attacking automation, right? >>Exactly. So you're able to let's say that I have I have seven cause lines of business that are asking me questions. And one of the questions I'll ask me is, um, we want to know if this customer is okay to contact, right? And you know, there's different avenues so you can go online to go. Do not contact me. You can go to the bank And you could say, I don't want, um, email, but I'll take tests and I want, you know, phone calls. Um, all that information. 
So seven different lines of business asked me that question in different ways once said Okay to contact the other one says, You know, just for one to pray all these, you know, um, and each project before I got there used to be siloed. So one customer would be 100 hours for them to do that and analytical work, and then another cut. Another of analysts would do another 100 hours on the other project. Well, now I can do that all at once, and I can do those type of searches and say yes we already have that documentation. Here it is. And this is where you can find where the customer has said, You know, you don't want I don't want to get access from you by email, or I've subscribed to get emails from you. I'm using Iot typos eight automation right now to bring in the data and to start analyzing the data close to make sure that I'm not missing anything and that I'm not bringing over redundant data. Um, the data warehouse that I'm working off is not, um a It's an on prem. It's an oracle database. Um, and it's 15 years old, so it has extra data in it. It has, um, things that we don't need anymore. And Iot. Tahoe's helping me shake out that, um, extra data that does not need to be moved into my S three. So it's saving me money when I'm moving from offering on Prem. >>What's your vision or your your data driven organization? >>Um, I want for the bankers to be able to walk around with on iPad in their hands and be able to access data for that customer really fast and be able to give them the best deal that they can get. I want Webster to be right there on top, with being able to add new customers and to be able to serve our existing customers who had bank accounts. Since you were 12 years old there and now our, you know, multi. Whatever. Um, I want them to be able to have the best experience with our our bankers. >>That's really what I want is a banking customer. I want my bank to know who I am, anticipate my needs and create a great experience for me. And then let me go on with my life. And so that's a great story. Love your experience, your background and your knowledge. Can't thank you enough for coming on the Cube. >>No, thank you very much. And you guys have a great day. >>Next, we'll talk with Lester Waters, who's the CTO of Iot Toe cluster takes us through the key considerations of moving to the cloud. >>Yeah, right. The entire platform Automated data Discovery data Discovery is the first step to knowing your data auto discover data across any application on any infrastructure and identify all unknown data relationships across the entire siloed data landscape. smart data catalog. Know how everything is connected? Understand everything in context, regained ownership and trust in your data and maintain a single source of truth across cloud platforms, SAS applications, reference data and legacy systems and power business users to quickly discover and understand the data that matters to them with a smart data catalog continuously updated ensuring business teams always have access to the most trusted data available. Automated data mapping and linking automate the identification of unknown relationships within and across data silos throughout the organization. Build your business glossary automatically using in house common business terms, vocabulary and definitions. Discovered relationships appears connections or dependencies between data entities such as customer account, address invoice and these data entities have many discovery properties. At a granular level, data signals dashboards. 
Get up to date feeds on the health of your data for faster improved data management. See trends, view for history. Compare versions and get accurate and timely visual insights from across the organization. Automated data flows automatically captured every data flow to locate all the dependencies across systems. Visualize how they work together collectively and know who within your organization has access to data. Understand the source and destination for all your business data with comprehensive data lineage constructed automatically during with data discovery phase and continuously load results into the smart Data catalog. Active, geeky automated data quality assessments Powered by active geek You ensure data is fit for consumption that meets the needs of enterprise data users. Keep information about the current data quality state readily available faster Improved decision making Data policy. Governor Automate data governance End to end over the entire data lifecycle with automation, instant transparency and control Automate data policy assessments with glossaries, metadata and policies for sensitive data discovery that automatically tag link and annotate with metadata to provide enterprise wide search for all lines of business self service knowledge graph Digitize and search your enterprise knowledge. Turn multiple siloed data sources into machine Understandable knowledge from a single data canvas searching Explore data content across systems including GRP CRM billing systems, social media to fuel data pipelines >>Yeah, yeah, focusing on enterprise data automation. We're gonna talk about the journey to the cloud Remember, the hashtag is data automate and we're here with Leicester Waters. Who's the CTO of Iot Tahoe? Give us a little background CTO, You've got a deep, deep expertise in a lot of different areas. But what do we need to know? >>Well, David, I started my career basically at Microsoft, uh, where I started the information Security Cryptography group. They're the very 1st 1 that the company had, and that led to a career in information, security. And and, of course, as easy as you go along with information security data is the key element to be protected. Eso I always had my hands and data not naturally progressed into a roll out Iot talk was their CTO. >>What's the prescription for that automation journey and simplifying that migration to the cloud? >>Well, I think the first thing is understanding what you've got. So discover and cataloging your data and your applications. You know, I don't know what I have. I can't move it. I can't. I can't improve it. I can't build upon it. And I have to understand there's dependence. And so building that data catalog is the very first step What I got. Okay, >>so So we've done the audit. We know we've got what's what's next? Where do we go >>next? So the next thing is remediating that data you know, where do I have duplicate data? I may have often times in an organization. Uh, data will get duplicated. So somebody will take a snapshot of the data, you know, and then end up building a new application, which suddenly becomes dependent on that data. So it's not uncommon for an organization of 20 master instances of a customer, and you can see where that will go. And trying to keep all that stuff in sync becomes a nightmare all by itself. So you want to sort of understand where all your redundant data is? So when you go to the cloud, maybe you have an opportunity here to do you consolidate that that data, >>then what? 
You figure out what to get rid of our actually get rid of it. What's what's next? >>Yes, yes, that would be the next step. So figure out what you need. What, you don't need you Often times I've found that there's obsolete columns of data in your databases that you just don't need. Or maybe it's been superseded by another. You've got tables have been superseded by other tables in your database, so you got to kind of understand what's being used and what's not. And then from that, you can decide. I'm gonna leave this stuff behind or I'm gonna I'm gonna archive this stuff because I might need it for data retention where I'm just gonna delete it. You don't need it. All were >>plowing through your steps here. What's next on the >>journey? The next one is is in a nutshell. Preserve your data format. Don't. Don't, Don't. Don't boil the ocean here at music Cliche. You know, you you want to do a certain degree of lift and shift because you've got application dependencies on that data and the data format, the tables in which they sent the columns and the way they're named. So some degree, you are gonna be doing a lift and ship, but it's an intelligent lift and ship. The >>data lives in silos. So how do you kind of deal with that? Problem? Is that is that part of the journey? >>That's that's great pointed because you're right that the data silos happen because, you know, this business unit is start chartered with this task. Another business unit has this task and that's how you get those in stance creations of the same data occurring in multiple places. So you really want to is part of your cloud migration. You really want a plan where there's an opportunity to consolidate your data because that means it will be less to manage. Would be less data to secure, and it will be. It will have a smaller footprint, which means reduce costs. >>But maybe you could address data quality. Where does that fit in on the >>journey? That's that's a very important point, you know. First of all, you don't want to bring your legacy issues with U. S. As the point I made earlier. If you've got data quality issues, this is a good time to find those and and identify and remediate them. But that could be a laborious task, and you could probably accomplish. It will take a lot of work. So the opportunity used tools you and automate that process is really will help you find those outliers that >>what's next? I think we're through. I think I've counted six. What's the What's the lucky seven >>Lucky seven involved your business users. Really, When you think about it, you're your data is in silos, part of part of this migration to cloud as an opportunity to break down the silos. These silence that naturally occurs are the business. You, uh, you've got to break these cultural barriers that sometimes exists between business and say so. For example, I always advise there's an opportunity year to consolidate your sensitive data. Your P I. I personally identifiable information and and three different business units have the same source of truth From that, there's an opportunity to consolidate that into one. >>Well, great advice, Lester. Thanks so much. I mean, it's clear that the Cap Ex investments on data centers they're generally not a good investment for most companies. Lester really appreciate Lester Water CTO of Iot Tahoe. Let's watch this short video and we'll come right back. >>Use cases. Data migration. 
Accelerate digitization of business by providing automated data migration work flows that save time in achieving project milestones. Eradicate operational risk and minimize labor intensive manual processes that demand costly overhead data quality. You know the data swamp and re establish trust in the data to enable data signs and Data analytics data governance. Ensure that business and technology understand critical data elements and have control over the enterprise data landscape Data Analytics ENABLEMENT Data Discovery to enable data scientists and Data Analytics teams to identify the right data set through self service for business demands or analytical reporting that advanced too complex regulatory compliance. Government mandated data privacy requirements. GDP Our CCP, A, e, p, R HIPPA and Data Lake Management. Identify late contents cleanup manage ongoing activity. Data mapping and knowledge graph Creates BKG models on business enterprise data with automated mapping to a specific ontology enabling semantic search across all sources in the data estate data ops scale as a foundation to automate data management presences. >>Are you interested in test driving the i o ta ho platform Kickstart the benefits of data automation for your business through the Iot Labs program? Ah, flexible, scalable sandbox environment on the cloud of your choice with set up service and support provided by Iot. Top Click on the link and connect with the data engineer to learn more and see Iot Tahoe in action. Everybody, we're back. We're talking about enterprise data automation. The hashtag is data automated and we're going to really dig into data migrations, data migrations. They're risky, they're time consuming and they're expensive. Yousef con is here. He's the head of partnerships and alliances at I o ta ho coming again from London. Hey, good to see you, Seth. Thanks very much. >>Thank you. >>So let's set up the problem a little bit. And then I want to get into some of the data said that migration is a risky, time consuming, expensive. They're they're often times a blocker for organizations to really get value out of data. Why is that? >>I think I mean, all migrations have to start with knowing the facts about your data. Uh, and you can try and do this manually. But when you have an organization that may have been going for decades or longer, they will probably have a pretty large legacy data estate so that I have everything from on premise mainframes. They may have stuff which is probably in the cloud, but they probably have hundreds, if not thousands of applications and potentially hundreds of different data stores. >>So I want to dig into this migration and let's let's pull up graphic. It will talk about We'll talk about what a typical migration project looks like. So what you see, here it is. It's very detailed. I know it's a bit of an eye test, but let me call your attention to some of the key aspects of this, uh and then use if I want you to chime in. So at the top here, you see that area graph that's operational risk for a typical migration project, and you can see the timeline and the the milestones That Blue Bar is the time to test so you can see the second step. Data analysis. It's 24 weeks so very time consuming, and then let's not get dig into the stuff in the middle of the fine print. But there's some real good detail there, but go down the bottom. 
That's labor intensity in the in the bottom, and you can see hi is that sort of brown and and you could see a number of data analysis data staging data prep, the trial, the implementation post implementation fixtures, the transition to be a Blu, which I think is business as usual. >>The key thing is, when you don't understand your data upfront, it's very difficult to scope to set up a project because you go to business stakeholders and decision makers, and you say Okay, we want to migrate these data stores. We want to put them in the cloud most often, but actually, you probably don't know how much data is there. You don't necessarily know how many applications that relates to, you know, the relationships between the data. You don't know the flow of the basis of the direction in which the data is going between different data stores and tables. So you start from a position where you have pretty high risk and probably the area that risk you could be. Stack your project team of lots and lots of people to do the next phase, which is analysis. And so you set up a project which has got a pretty high cost. The big projects, more people, the heavy of governance, obviously on then there, then in the phase where they're trying to do lots and lots of manual analysis, um, manual processes, as we all know, on the layer of trying to relate data that's in different grocery stores relating individual tables and columns, very time consuming, expensive. If you're hiring in resource from consultants or systems integrators externally, you might need to buy or to use party tools. Aziz said earlier the people who understand some of those systems may have left a while ago. CEO even higher risks quite cost situation from the off on the same things that have developed through the project. Um, what are you doing with Ayatollah? Who is that? We're able to automate a lot of this process from the very beginning because we can do the initial data. Discovery run, for example, automatically you very quickly have an automated validator. A data met on the data flow has been generated automatically, much less time and effort and much less cars stopped. >>Yeah. And now let's bring up the the the same chart. But with a set of an automation injection in here and now. So you now see the sort of Cisco said accelerated by Iot, Tom. Okay, great. And we're gonna talk about this, but look, what happens to the operational risk. A dramatic reduction in that, That that graph and then look at the bars, the bars, those blue bars. You know, data analysis went from 24 weeks down to four weeks and then look at the labor intensity. The it was all these were high data analysis, data staging data prep trialling post implementation fixtures in transition to be a you all those went from high labor intensity. So we've now attacked that and gone to low labor intensity. Explain how that magic happened. >>I think that the example off a data catalog. So every large enterprise wants to have some kind of repository where they put all their understanding about their data in its price States catalog. If you like, imagine trying to do that manually, you need to go into every individual data store. You need a DB, a business analyst, reach data store. They need to do an extract of the data. But it on the table was individually they need to cross reference that with other data school, it stores and schemers and tables you probably with the mother of all Lock Excel spreadsheets. It would be a very, very difficult exercise to do. 
I mean, in fact, one of our reflections as we automate lots of data lots of these things is, um it accelerates the ability to water may, But in some cases, it also makes it possible for enterprise customers with legacy systems take banks, for example. There quite often end up staying on mainframe systems that they've had in place for decades. I'm not migrating away from them because they're not able to actually do the work of understanding the data, duplicating the data, deleting data isn't relevant and then confidently going forward to migrate. So they stay where they are with all the attendant problems assistance systems that are out of support. You know, you know, the biggest frustration for lots of them and the thing that they spend far too much time doing is trying to work out what the right data is on cleaning data, which really you don't want a highly paid thanks to scientists doing with their time. But if you sort out your data in the first place, get rid of duplication that sounds migrate to cloud store where things are really accessible. It's easy to build connections and to use native machine learning tools. You well, on the way up to the maturity card, you can start to use some of the more advanced applications >>massive opportunities not only for technology companies, but for those organizations that can apply technology for business. Advantage yourself, count. Thanks so much for coming on the Cube. Much appreciated. Yeah, yeah, yeah, yeah
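As a rough illustration of the duplicate-data remediation step described in this segment, the sketch below fingerprints small table extracts and flags the ones whose contents match. The table names and rows are hypothetical; a real data estate would be profiled from database metadata rather than in-memory literals.

```python
# Illustrative sketch only: flagging candidate duplicate datasets before a migration.
# Table names and rows are hypothetical examples.
import hashlib

def fingerprint(rows):
    """Order-insensitive fingerprint of a table's rows."""
    digests = sorted(hashlib.sha256(repr(r).encode()).hexdigest() for r in rows)
    return hashlib.sha256("".join(digests).encode()).hexdigest()

# Three hypothetical extracts of customer data living in different silos
tables = {
    "crm.customers":       [("C001", "Ann"), ("C002", "Bob")],
    "warehouse.cust_copy": [("C002", "Bob"), ("C001", "Ann")],  # same rows, different order
    "billing.accounts":    [("A9", "Ann"), ("B7", "Bob")],
}

seen = {}
for name, rows in tables.items():
    fp = fingerprint(rows)
    if fp in seen:
        print(f"{name} looks like a duplicate of {seen[fp]} - candidate for consolidation")
    else:
        seen[fp] = name
# Prints: warehouse.cust_copy looks like a duplicate of crm.customers - candidate for consolidation
```

The same fingerprinting idea extends to columns or files, which is one way redundant copies can be surfaced before deciding what to consolidate, archive, or delete ahead of a cloud migration.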

Published Date : Jun 23 2020



David Aronchick, Microsoft | KubeCon 2018



I'm from Seattle Washington it's the cube covering Gube Khan and cloud native Khan North America 2018 brought to you by Red Hat the cloud native computing foundation and its ecosystem partners ok welcome back everyone we are here live with cube covers three days with wall-to-wall coverage here at coop con cloud native con 2018 in Seattle I'm John fer with the cubes to Minutemen here breaking it down we're at day two we've got a lot of action David Ronn chick who's the head of open source ml strategy at Azure at Microsoft Microsoft Azure formerly of Google now at Microsoft welcome back to the cube we had a great chat at Copenhagen good to see you great to see you too thank you so much for having me you've been there from day one it's still kind of day one in Korea is still growing you got a new gig here at Microsoft formerly at Google you had a great talk at Google next by the way which we watched and and caught on online you just you're still doing the same thing think of me to explain kind of what the new job is what your focus is absolutely so in many ways I'm doing a very similar job to the one I was doing at Google except now across all of Asher you know when you look at machine learning today the truth of the matter is is it is about open source it's about pulling in the best from academia and open source contributors developers across the spectrum and while I was at Google I was able to launch the cube flow project which solves the very specific but very important problem now that you look at Azure a company that is growing excuse me a division that is growing extremely quickly and looking to expand their overall open source offerings make investments work with partners and projects and make sure that that researchers and customers are able to get to machine learning solutions very quickly I'm coming in to help them think about how to make those investments and accelerate customers overall time to solutions so both on the commercial side Asscher which is got a business objective to make money but also open source how is it still open source for you is it all open sores or is it crossing a little bit of bulk just quickly clarify that yeah there's no question um you know obviously as you as a business they pay me a salary and and we're gonna have a great first party solution for all of these very things but the reality is much like kubernetes has both a commercial offering and an open-source offering I think that all the major cloud providers will have that kind of duality they'll work in open source and and you can measure you know how many contributions and what they're doing in the open source projects but then they'll also have hosted and other versions that make it easier for customers to migrate their data and adopt some of these new so you know one of the things that's interesting on that point is this a super important point is that open source community that's here with kubernetes around kubernetes it's all kind of upstream kind of concept but the downstream impacts our IT and your classic developer so you have your open source yeah and a thing going on that's the core of this community an event the IT investments are shifting in 2019 we are seeing the trend of somewhat radical but certainly a reimagining of the IT I mean certainly you guys have gone cloud at Azure has seen that that result absolutely good pick up by customers office 365 that's now a SAS that's now now you've got cloud you have cloud scale this is what machine learning is really shining so I the question to 
you is what do you think is gonna be the big impact of 2019 to IT investment strategies in terms of what they how they procure and consume technology how they build their apps with the new goodness coming in from kubernetes etc absolutely um you know I remember back in the day you know I was an IT admin myself and and I carried a pager for literally when you know a machine went down or a power supply went out or this Ram was bad or something like that today if you went to even the most sophisticated IT shop they would be like what are you crazy you you should never carry a pager for that you should have a system that understands it's ok if something that low-level goes out that's exactly what kubernetes provided it provided this abstraction layer on top of this so if you went down kubernetes knew had a reschedule a pod and move things back and forth taking that one step further now into machine learning unfortunately today people are carrying pagers for the equivalent of if a power supply goes out or something goes wrong it's still way too low-level we're asking data scientists ml engineers to think about how to provision pods how'd it work on drivers how to do all these very very low-level things with things like kubernetes with things like hume flow you're now able to give higher level abstraction so a data scientist can in and you know open up their Jupiter notebook work on the model see how it works and when they're done they hit a button and it will provision out all the machines necessary all the drivers all the everything spin it up run that training job and bring it back and shut everything down so they won't wonder if you can help expand on that a little bit more so you know what one of the things that that's great about kubernetes is it can live in a diverse amount of infrastructure one of the biggest challenges with machine learning is you know where's my data how do I get to the right place where do I do the training you know we've spending a lot a couple of years looking at you know edge and you know what's the connectivity and how we're gonna do this you help just kind of pan us picture the landscape and what do we have solved and what are we working at trying to get put together yeah you know I think that's a really excellent question today there's so much focus on well are you gonna choose pi torch or tensorflow CNT k MX net you know numpy scikit-learn there are a bunch of really great frameworks out there done in the open source and we're really excited but the reality is when you look at the overall landscape that's just 5% of the work that the average data scientist goes through exactly your point how do I get my data in how do I transform it how do I visualize it generate statistics on it make sure that it's not biased towards certain populations and then once I'm done training how do I roll it out to production and monitor it and log and all these things and that's really what we're talking about that's what we tried to get work on when it comes to cute flow is is to think about this in a much broader sense and so you take things like data the reality is you can't beat the speed of light if I have a petabyte of data here it's gonna take a long time to move it over there and so you're gonna be really thoughtful about those kind of things i I'm very hopeful that academic research and and industry will figure out ways to reduce the amount of data and make it much much more sane in overall addressing this problem and make it easier to train in various locations but the 
reality is is I think you're ultimately gonna have models and training and inference move to many many different locations and so you'll do inference at the edge on my phone or on a you know little Bluetooth device in the corner of my house saying whether or not it's too hot or too cold we're gonna need that kind of intelligence and we're gonna do that kind of training and data collection at the edge do you see a landscape evolving where you have specialty ml for instance like the big caution in IOT is move you know compute to the data yeah reads that latency you see machine learning models moving around at code so I can throw a machine learning at a problem and there's that and that is that what kubernetes fits and I'm trying to put together a mental model of how to think about how ml scales yeah what's your vision on that how do you see that evolving yeah absolutely I think that you know going back to what we talked about at the beginning we're really moving to much more of a solution driven architecture today ml you know is great and the academic research is phenomenal but it is academic research it didn't really start to take off until people invented things are you know creating things like image Nets and mobile net and things like that that did very important things like object detection but then people that you know commercial researchers were able to take that and move that into locations where people actually need it in I think you will continue to see that that migration I don't think you're gonna have single ml models that do a hundred different things you're gonna have a single ml model that does a vertical specific thing anomaly detection in whatever factories and you're gonna use that in a whole variety of locations rather than trying to you know develop 1 ml model to solve them all so it's application specific or vertical alright so that means the data is super important quality data clean data is clean results dirty date bad result absolutely right people have been in this kind of virtuous circle of cleaning data you know you guys know at Google certainly Microsoft as well you know datum data quality is critical but you got the horizontally scalable cloud but you need specialism around the data and for them ml how do you see that is that I mean obviously sounds like the right architecture this is where the finesse is and the nuance I don't see that so you know you you bring up a really interesting point today the the biggest problem is is how much data there is right it's not a matter of whether or not you're able to process it you are but but it's so easy to get lost caught and little anomalies you know if you have a petabyte of data and whatever a megabyte of it is the thing that's causing your model to go sideways that's really hard to detect I think what you're seeing right now is a lot of academic research which I'm very optimistic about that will ultimately reduce that that will both call out hey this particular data is smells kind of weird maybe take a closer look at this or you will see a smaller need for training you know where it was once a petabyte you're able to train on just 10 gigabytes I'm very optimistic that both of those things happen and as you start to get to that you get better signal-to-noise and you start saying oh in fact this is questionable data let's move that off to the side or spend more time on it rather than what happens today which is oh I got this model and it works pretty well I'm just going to throw everything at it and trying you know get 
some answer out and then we'll go from there and that's with a lot of false positives come in all absolutely all right so take the next level here at Kubb con cloud native con in this community where kubernetes is the center of all these sets of services and building blocks where's the ML action what if I Michelle wanna jump in this community I'm watching this with hey you know what I got Amazon Web Services reinvent just pumping up a lot of MLA I you know stage maker and a bunch of other things what's going on in this community where are the projects what are the notable things where can I jump in and engage what's the what's that what's that map look like I don't know yeah absolutely so obviously I'm pretty biased you know I helped start cube flow we're very very excited about that the cube flows one yeah absolutely but let me speak a little bit more broadly kubernetes gives you this wonderful platform highly scalable incredibly portable and and I can't overstate how valuable that portability is the reality is is that customers have we talked about data a bunch already they have data on Prem they've data in cloud hey cloud B it's everywhere they want to bring it together they want to bring the the training and the inference to where the data is kubernetes solves that for you it gives you portability and lets you abstract away the underlying stuff it gives you great scalability and reliability and it lets you compose these highly complex pipelines together that let you do real training anywhere rather than having to take all your data and move it through cloud and train on a single VM that you're not sure whether or not it's been updated or not this is the way to go versus the old way which was what cuz that's an easier way orchestrating and managing that what was the alternative the alternative was you built it yourself you you piece together a whole bunch of solutions you wired it together you made sure that this service over here had the right user account to access the data that that service over there was outputting it was just a crazy time now you use kubernetes constructs use first-class objects you extend the native kubernetes api and it works on your laptop and it works on Cloud a and B and on pram and wherever you need it that's the magic basically absolutely so multi cloud has come up a lot hybrid clouds the buzzword of the year I call that the 2000 18 maybe 19 buzzword but I think the real end game and all this is what from a customer standpoint that we are reporting a silk'n angle on the cube is choice yeah multi vendor is the new multi cloud is the multi clouds the modern version of the old multi vendor comes yes which basically is choice absolutely so how does kubernetes fit into the multi cloud why is that good for the industry and what's your take on that can you share your perspective absolutely so when you go and look at the recent right scale reports 81 percent of enterprises today are multi cloud . 
81 percent and not just one cloud there they're on five different clouds that could be on pram could be multi zone could be Google or Amazon or a Salesforce you name how you define cloud they're spreading they're doing it because that kind of portability is right for their business kubernetes gives you the opportunity to operate in an abstraction layer that works across all of these clouds so whether or not you're on your laptop and you're using docker or mini cube you're on your private training rig whether that you go to Google cloud or as you're on Google clouds you can eat user you have a KS these you're able to build C I'd CD systems continuous delivery systems that that use common kubernetes constructs I want to roll this application out I want there to be seven pods I wanted to have an endpoint that looks like this and that works anywhere you have a kubernetes conformant cluster and when it gets to really complex apps like machine learning you're able to do that it even a higher level using constructs like cube flow and all the many many packages that go into coop load we have Nvidia contributing and we have you know Intel and I mean just countless Cisco I you know I hesitate to keep naming names because I'll be here all day but you know we have literally over Cisco's rays tailwind Francisco they're gonna have Network forever everybody wins at the the CI CD sides for developers one common construct the network guys get more programming because if you decompose an application absolutely the network ties it together yes everybody wins in the stack absolutely I think I breed is really interesting you know hybrid kind of gets a dirty word people like oh my god you know why would you ever deploy to multiple clouds why would you ever spread across multiple clouds and that I agree with a true hybrid deployment today isn't well I'm gonna take my app and I'm gonna spread it across six different locations in fact what you really want to do is have isolated deployments to each place that it enables you in a single button deploy to all three of these locations but to isolate them to have this particular application go and if you know AWS hasn't added GCP is there or if GCB does manage asher is there and you can do that very readily or you can bring it closed for geographic reasons or legal reasons or whatever it might be those kind of flexibility that ability to take a single construct of your application and deploy it to each one of these locations not spreading them but in fact just giving you that flexibility gives you pricing power gives you flexibility and lets you take advantage of the operating model if the if the if the ICD is common and that's the key value right there absolutely right David thanks so much coming on cue as usual great commentary great insight there there from the beginning just final question predictions for 2019 I think kubernetes what's gonna happen in 2019 with kubernetes what's your prediction well III think I think you've heard this message over and over again you're seeing kubernetes become boring and and that is incredibly powerful the the stability the flexibility people are building enormous businesses on top of it but not just that they're also continuing to build things like the the custom resource definition which lets you extend kubernetes in a safe and secure way and that's incredibly important that means you don't have to go and check in code into the main tree in order to make extension you're able to build on top of it and you're seeing more and more 
businesses build these solutions, customer-focused solutions. Well, next time we get together I want to do a drill-down on what the word stack means. I heard myself say Kubernetes stack, and I'm like, yeah. You love the stack word. It's not really a stack anymore, it's sets of services. David, thanks so much for coming on, I appreciate it. This is theCUBE's coverage, live here in Seattle for KubeCon CloudNativeCon. I'm John Furrier with Stu Miniman. We're back with more after this short break.
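A small, hedged sketch of the portability point made above: the same deployment definition pushed to several clusters simply by switching kubeconfig contexts. It assumes the official kubernetes Python client is installed and that contexts with these hypothetical names exist in your kubeconfig; it illustrates the Kubernetes abstraction being discussed, not Kubeflow itself or any vendor's implementation.

```python
# Illustrative sketch only: one Deployment spec applied to multiple clusters.
# Context names, image, and namespace are hypothetical and assumed to already exist.
from kubernetes import client, config

DEPLOYMENT = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "inference-svc"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "inference-svc"}},
        "template": {
            "metadata": {"labels": {"app": "inference-svc"}},
            "spec": {"containers": [{"name": "model",
                                     "image": "registry.example.com/model:1.0"}]},
        },
    },
}

# Hypothetical kubeconfig contexts for on-prem, GKE, and AKS clusters
for context in ["onprem-lab", "gke-prod", "aks-prod"]:
    config.load_kube_config(context=context)   # switch clusters, same API from here on
    apps = client.AppsV1Api()
    apps.create_namespaced_deployment(namespace="ml", body=DEPLOYMENT)
    print(f"deployed inference-svc to {context}")
```

Because the spec is identical everywhere, a CI/CD pipeline can treat on-prem, GKE, and AKS clusters as interchangeable targets, which is the portability argument made in the interview.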

Published Date : Dec 12 2018



Traci Gusher, KPMG | Google Cloud Next 2018



>> Live from San Francisco, it's theCube, covering Google Cloud Next 2018. Brought to you by Google Cloud and its ecosystem partners. >> Hello everyone, welcome back, this is theCUBE's live coverage, we're here in San Francisco, Moscone West for Google Cloud's big conference called Next 2018. The hashtag is GoogleNext18. I'm John Furrier, Dave Vellante, our next guest is Traci Gusher, Principal, Data and Analytics at KPMG. Great to have you on, thanks for joining us today. >> Yeah, thanks for having me. >> We love bringing on the big system, global, some integrators, you guys have great domain expertise. You also work with customers, you have all the best stories. You work with the best tech. Google Cloud is like a kid in the candy store >> It sure is. when it comes to tech, so my first question is obviously AI in super important to Google. Huge scale, they bring out all the goodies to the party. Spanner, Bigtable, BigQuery, I mean they got a lot of good stuff. TensorFlow, all this open source goodness, pretty impressive, right, >> Yeah, absolutely. the past couple years what they've done. How are you guys partnering with Google, because now that's out there, they need help, they've been acknowledging it for a couple years, they're building an ecosystem, and they want to help end user customers. >> Yeah, we've been working with Google for quite some time, but we actually just formalized our partnership with Google in May of this year. From our perspective, all of the good work that we have done, we're ready to hit the accelerator on and really move forward fast. Some of the things that were announced this week, I think, are prime examples of areas where we see opportunity for us to hit the accelerator on. Something like what was announced this week with their new contact center, API suite, launched by the Advanced Solutions Lab. We had early access to test some of that and really were able to witness just how accelerated some of these things can help us be when we're building end-to-end solutions for clients. >> There's a shortcut to the solutions because with Cloud, the time to value is so much faster, so it's almost an innovator's dilemma. The longer deployments probably meant more billings, ( laughs) right, for a lot of integrators. We've heard people saying hey we've gone, the old days were eight months to eight weeks to eight minutes on some of these techs, so the engagements have changed. At the end of the day, there's still a huge demand for architectural shift. How has the delivery piece of tech helped you guys serve your customers, because I think that's now a conversation that we're hearing is that look, I can move faster, but I don't want to break anything. The old Facebook move fast, break stuff, that doesn't fly in enterprise. >> No, it doesn't (laughs). >> I want to move fast, but I need to have some support there. What are some of the things that you're seeing that are impacting the delivery from integrators? >> Well, some of the technology that's come, that's reduced the length of time to deliver, we see and a lot of our customers see as opportunity to do the next thing, right? If you can implement a solution to a problem quicker, better, faster, than you can move on to the next problem and implement that one quicker, better, faster. I think the first impact is just being able to solve more problems, just being able to really apply some benefits in a lot more areas. 
The second thing is that we're looking at problems differently, the way that problems used to be solved is changing, and that's most powerfully noted, as we see, at this conference by what's happening with artificial intelligence and with all the accelerators that are being released in machine learning and the like. There's a big difference in just how we're solving the problems that impacts it. >> What are some of the problems that you guys are attacking now, obviously AI's got a lot of goodness to it. What are some of the challenges that you're attacking for customers, what are some examples? >> Our customers have varying problems as they're looking to capitalize on artificial intelligence. One of the big problems is where do I start, right? Often you'll have a big hype cycle where people are really interested, executives are really interested, and I want to use AI, I want to be an AI-enabled company. But they're not really sure where to start. One of the areas that we're really helping a lot of our customers do is identify where the low-hanging fruit is to get immediate value. And at the same time, plan for longer strategic types of opportunities. The second area is that one of the faults that we're seeing, or failure points that we're seeing in using artificial intelligence is failure to launch. What I mean by that is there's a lot of great modeling, a lot of great prototyping and experimentation happening in the lab as it relates to applying AI to different problems and opportunities, but they're staying in the lab, they're not making it into production, they're not making it into BAU, business as usual processes inside organizations. So a big area that we're helping our clients in is actually bridging that gap, and that's actually how I refer to it, I refer to it as mind the gap. >> That is a great example, I hear this all the time, classic. Is it, what's the reasons, just groupthink, I'm nervous, there's no process, what's holding that back from the failure to launch? >> There's a few things. The first is that a lot of traditional IT organizations embedded in enterprises don't necessarily have all of the skills and capabilities or the depth of skills and capabilities that they need to deploy these models into production. There's even just basic programming types of gaps, where a lot of models are being constructed using things like Python, and a lot of traditional IT organizations are Java shops and they're saying what do I do now? Do I convert, do I learn, do I use different talent? There's technology areas that prove to be challenging. The other area is in the people, and I actually spoke with an analyst this morning about this very topic. There's a lot of organizations that have started productionalizing some of these systems and some of these applications, and they're a little bit discouraged that they're not seeing the kind of lift and the kind of benefits that they thought they would. In most cases-- >> Who, the customers or the analysts? >> The customers. >> OK, alright. >> Yeah, I was having a conversation with an analyst about it. But in most cases, it's not that the technology is falling short, it's not that the model isn't as accurate as you need it to be, it's that the workforce hasn't been transitioned to utilize it, the processes haven't been changed. >> Operationalizing it, yeah.
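The lab-to-production gap described here often comes down to model portability. As a minimal sketch, assuming scikit-learn and skl2onnx are available, a model prototyped in Python can be exported to ONNX and then served from a Java stack through ONNX Runtime's Java bindings; the dataset, feature count, and file name below are illustrative only, not anything KPMG or Google prescribes.

    # Minimal sketch: export a Python-prototyped model to a portable ONNX file
    # so a non-Python (e.g., Java) production stack can serve it.
    # Assumes scikit-learn and skl2onnx; data and names are illustrative only.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from skl2onnx import convert_sklearn
    from skl2onnx.common.data_types import FloatTensorType

    X, y = load_iris(return_X_y=True)  # stand-in for the lab dataset
    model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

    # Declare the input signature (four float features) and convert the model.
    onnx_model = convert_sklearn(
        model, initial_types=[("input", FloatTensorType([None, 4]))]
    )

    with open("model.onnx", "wb") as f:
        f.write(onnx_model.SerializeToString())
    # model.onnx can now be loaded by ONNX Runtime's Java API,
    # sidestepping the Python-versus-Java handoff described above.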
>> The user interfaces aren't transitioning the workforce to a new type of model, they're not being retrained on how to utilize the new technology or the new insights coming from these models. >> That's a huge issue, I agree. >> Isn't there also, Traci, some complacency in certain industries? I mean you think about businesses that haven't yet totally transformed, I think of healthcare, I think of financial services, as examples that are ripe for transformation but really haven't yet. You hear a lot of people say well, it's not really urgent for us, we're doing pretty well, I'll be retired by then, there seems to be a sense of complacency in certain segments of enterprises. Do you see that? >> I do. And I'll say that we've seen a lot more movement in some of those complacent industries in the last six to 18 months than we have previously. I'll also say going back to that where do I start element, there's a lot of organizations that have pressing business challenges, those burning platforms, and that's where they're starting and I'm not advocating against it, I'm actually advocating very much for that, because that's how you can prove some real immediate value. Some organizations, particularly in life sciences or financial services, they're starting to use these technologies to solve their regulatory challenges. How do I comply faster, how do I comply better, how do I avoid any type of compliance issues in the future, how do I avoid other challenges that could come in those areas? The answer to a lot of those questions is if I use AI, I can do it quicker, more accurately, etc. >> Are you able to help them get ancillary value out of that or is it just sort of, compliance a lot of times is like insurance, if I don't do it I get in trouble or I get fined. But are you able to, this is like the holy grail of compliance and governance, are you able to get additional value out of that when you sort of apply machine intelligence to solve those problems? >> That's always the goal. Solving the regulatory problem is certainly what I would say are the table stakes, right? The must-have. But the ability to gain insight that can actually drive value in the organization, that's where your aim really is. In fact, we've worked with a lot of organizations, take life sciences, we've worked with some life sciences organizations that are trying to solve some compliance issues and what we've found is that many times in helping them solve these compliance issues, we're actually gathering insights that significantly increase the capability of their sales organization, because the insights are giving them real information about their customers, their customers' buying patterns, how they're buying, where they might be buying improperly. And it's not the table stake of what we're trying to do, the table stake was maybe contract compliance, but the value that they're actually getting out of it is not only the compliance over their distributors or their pharmacies, but it's also over the impact that they're going to have on their sales organization. For something like an internal audit department to have value to sales, that's like holy grail stuff. >> Yeah, right, yeah. >> What about the data challenges? Even in a bank, which is essentially a data company, the data tends to be very siloed, maybe tucked away in different business units. How are you seeing organizations, how are you helping organizations deal with that data silo problem, specifically as it relates to AI?
>> It used to be that the devil was in the details, but now the devil's in the data, right? >> I love that. >> There was a great Harvard Business Review article that came out, and I think Diane Greene actually quoted this in one of her presentations, that companies that can't do analytics well can't do AI yet. A lot of companies that can't do analytics well yet, it isn't because they don't have the analytical talent, it's not because they don't know the insights they want to drive, it's because the data isn't in the right format, isn't usable to be able to gain value from it. There's a few different ways that we're helping our clients deal with those things. Just at the very basic level is good data governance. Do you have data stewards that are owning data, that are making sure that data is being created and governed the right way? >> That's a huge deal, I imagine-- >> And quality. >> It's huge. >> Quality-- >> quality, metadata. >> Garbage in, garbage out. >> Lineage of data, how it's transformed. Being able to govern those things is just imperative. >> It could be just a database thing, could be a database thing, too, it's one of those things where there's so many areas that could be mistakes on the data side. Want to get your thoughts on the point you said earlier which I thought was about technology not coming out and getting commercialized or operationalized. For a variety of reasons, one of them being processes in place, and we hear this a lot. This is a big opportunity, because the human side of these new jobs, whether you're operating the network, really they need help, customers need help. I think you guys should do a great job there given the history. The other trend that came out of the keynote today I want to get your reaction to is there's a tweet here, I'll read it, it says "GCP will start serving managed services, enterprise workloads, including Oracle RAC and Oracle Exadata, and SAP HANA through partners." Interesting mind shift again, talk about a mind shift, OK. Partners aren't used to dealing with multi-vendors, but now as a managed service will change the mechanism a bit on delivery because now it's like OK, hey, you want to sling some APIs around, no problem. You want to manage it, we got Kubernetes and Istio. You want a little Oracle with a little bit of HANA? It brings up a much more diverse landscape of solutions. >> It does. Which makes the partners like sous chefs. You can cut the solutions up any way you want. To your point about going faster, to the next challenge. Normal, is that going to be the new normal, this kind of managed service dashboarding? You see that as the... >> I think it is, and I'll take it a step, I'll take it a step further beyond managed service and actually get a little more discrete. One of the things that we're doing increasingly more of is insights as a service, right? If you think about managed service in the traditional sense of I've got a process and you're going to manage that process end to end for me, that technology end to end for me, I do think that that's going to slowly become more and more prevalent. That has to happen with our movement to putting our applications in the cloud, and our ERPs in the cloud. I think it is going to become more of the norm than not, but I also think that it's opening the door for a lot of other things as a service, including insights as a service. Organizations can't find the data science talent that they need to do the really complex types of analysis.
>> Your insights as a service comment just gave me an insightful, original idea, thank you very much. >> You're welcome. >> I'll put this in the wrap-up, Dave, when we talk about it. Think about insight as a service, to make that happen with all the underpinning tech, whether it's Oracle or whatever, the insights are an abstraction layer on top of that so if the job is to create great experiences or insights, it should be independent of that. Google Cloud is bringing out a lot more of the concept of abstractions. Kubernetes, Istio, so this notion of an abstraction layer is not just technical, there's also business logic involved. >> Yeah, absolutely. >> This is going to be a dream scenario for KPMG, >> We think so. for your customers, for other partners. 'Cause now you can add value in those abstraction layers. >> Absolutely. >> By reducing the complexity. Well Oracle, that's not my department, that's HANA's, that's SAP, who does that? He or she's the product lead over it, gone. Insights as a service completely horizontally flattens that. >> Yeah, and to that point, there's magic that happens when you bring different data together. Having data silos because their data's in different systems just, that's the analytics of 1990. Organizations can't operate on that anymore, and real analytics comes when you are working at a layer above the systems and working with the data that's coming from those systems and in fact even creating signals from the data. Not even using the data anymore, creating a signal from the data as an input to a model. I couldn't agree with you more. >> Whole new way of doing business. This is digital transformation, this is the magic of Cloud. Traci, great to have you on. >> Yeah, thanks for having me. >> It's going to be a whole new landscape changeover, new way to do business. You guys are doing a great job, KPMG, Traci Gusher. Here inside theCUBE talking about analytics and AI. If you can't do analytics well, why even go to AI? Love that line. theCUBE bringing you all the data here, stick with us for more after this short break. (bubbly electronic tones)
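Traci's closing point about "creating a signal from the data as an input to a model" is, in practice, feature engineering above the source systems. A minimal sketch, assuming pandas and purely hypothetical column names: raw transaction rows from siloed systems are aggregated into per-customer signals, and those derived columns, not the raw records, feed the model.

    # Minimal sketch of deriving "signals" from raw data before modeling.
    # Column names and the toy rows are hypothetical, for illustration only.
    import pandas as pd

    transactions = pd.DataFrame({
        "customer_id": ["a", "a", "b", "b", "b"],
        "amount":      [120.0, 80.0, 15.0, 22.0, 19.0],
        "returned":    [0, 1, 0, 0, 1],
    })

    # Aggregate raw rows into per-customer signals: spend, order count, return rate.
    signals = transactions.groupby("customer_id").agg(
        total_spend=("amount", "sum"),
        order_count=("amount", "count"),
        return_rate=("returned", "mean"),
    ).reset_index()

    print(signals)
    # These derived columns, not the raw transactions, become the model inputs --
    # a layer above the individual source systems.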

Published Date : Jul 25 2018

SUMMARY :

Traci Gusher, Principal, Data and Analytics at KPMG, discusses KPMG's partnership with Google Cloud, formalized in May 2018, and how faster cloud delivery lets clients solve more problems and approach them differently with AI and machine learning. She outlines the two biggest hurdles to enterprise AI: knowing where to start, and "failure to launch," where models prototyped in Python stay in the lab because production IT shops, workforces, and processes are not ready for them. Regulated industries such as life sciences and financial services are adopting AI for compliance first and often gaining ancillary sales insights, while data governance, quality, metadata, and lineage remain prerequisites for analytics and AI. The conversation closes on Google Cloud's partner-delivered managed services for Oracle and SAP HANA workloads and the emerging model of insights as a service built on abstraction layers like Kubernetes and Istio.
