Search Results for Christopher:

Savitha Raghunathan, Red Hat & Christopher Nuland, Konveyor | KubeCon + CloudNativeCon NA 2022


 

(upbeat music)
>> Good afternoon and welcome back to KubeCon. John Furrier and I are live here from theCUBE Studios in Detroit, Michigan. And very excited for an afternoon chock-full of content. John, how are you holding up, day two?
>> I'm doing great, and we've got great content. This episode should be really good. We're going to be talking about modern applications, Red Hat and Konveyor, all the great stuff going on.
>> Yes, and it's got a little bit of a community spin, very excited. You know I've been calling out the great Twitter handles of our guests all week and I'm not going to stop now. We have with us Coffee Art Lover, Savitha, and she's joined by Christopher here from Konveyor and Red Hat. Welcome to the show.
>> Thank you.
>> How are you doing, and what's the vibe?
>> The vibe is good.
>> Yeah, pretty good.
>> Has anything caught your attention? You guys are KubeCon veterans, we were talking about Valencia and shows prior. Anything sticking out to you this year?
>> Yeah, just the amount of people here in this like post-COVID, it's just so nice to see this many people get together. 'Cause the last couple of KubeCons that we've had, they've been good, but they've been much smaller and we haven't seen the same presence that we've had. And I feel like we're just starting to get back to normal, what we had going like pre-COVID with KubeCon.
>> Go ahead.
>> Oh, sorry. And for me it's how everyone's like still respectful of everyone else, and that's what's sticking out to me. Like you go out of the conference center and you see like almost no one respecting anyone's space. But here it's still there, it keeps you safe. So I'm super happy to be here.
>> Yeah, I love that. I think that plays to the community. I mean, the CNCF community is really special. All these open source projects are layered. You run community at Red Hat, so tell us a little bit more about that.
>> So I have been focusing on the Konveyor community site for a while now, since Konveyor got accepted into the CNCF Sandbox. Yeah, it's so exciting, and it's like, I'm so thrilled and I'm so excited for the project. So it's something that I believe in, and I do a lot of (indistinct) stuff and I learned a lot from the community. The community is what keeps me coming back to every KubeCon and keeps me contributing. So I'm taking all the good stuff from there and then like trying to incorporate that into the Konveyor community world. But not at a scale of like 20,000 or like 30,000 people, but at a scale of like hundreds. We are in hundreds, and hoping to like expand it to like thousands by next year. Hopefully, yeah.
>> Talk about the project, give a quick overview of what it is, where it's at now. Obviously it's got traction, you got some momentum, I want to hear the customer. But give a quick overview of the project. Why are people excited about it?
>> Sure. It is one of the open source modernization tool sets that's available right now. So that's super exciting. So many people want to contribute to it. And what we basically do is, like, you see a lot of large companies, and they want to like do the migration and the journey, and we just want to help them, make their life easier. So we are in this environment which is like surrounded by cars; think of it like a lane assist system, or like think of it as an additional smart system, but that's not taking control, like full control. But then it's there to like guide you through your journey, safe and in a predictable way, and you'll reach your destination point much happier, safer and like sooner. So that's what we are doing. I know that's a lot of talk, but if you want the technical thing, then I'll just say like we are here to help everyone who wants to modernize. Help them by refactoring and replatforming their applications in a safer and predictable way at scale. I think I got everything. What do you think, Christopher?
>> Yeah. I mean, we've seen a real need in the market to solve this problem as more and more companies are looking to go cloud native. And I feel like in the last 10 years we had this period where a lot of companies were kind of dabbling in the cloud, and they were identifying the low hanging fruit for their migrations, or they were starting out with new applications in the cloud. We're just starting to move into a period where now they're trying to bring over legacy applications. Now they're trying to bring over the applications that have been running their business for 10, 20, even 30 years. And we're trying to help them solve the problem of, how do we start with that? How do we take a holistic look at our applications and come up with a game plan of how we're going to bring those into being cloud native?
>> Oh, yeah, go.
>> One other thing I want to get to: you mentioned replatforming and refactoring. A lot of discussion on what that means now. Refactoring with the cloud, we see a lot of great examples, people really getting a competitive advantage by refactoring in that case. But re-platforming also has meaning, it seems to be evolving. So guys, can you share your thoughts on what's re-platforming versus refactoring?
>> I'll let you go.
>> So for re-platforming, there's a few different stages that we can do this in. So we have this term in migration called lift and shift. It's basically taking something as-is and just plopping it in, and then having certain technologies around it that make it act in a similar way as it was before, but in more of a cloud type of way. And this is a good way for people to get their feet wet, to get their applications into the cloud. But a lot of times they're not optimized around it, they're not able to scale, they're not able to have a lot of the cost effective things that go with it as well. So that's like the next step: that's the refactoring. Where we're actually taking apart this idea, these domains as we would call them for the business, and then breaking them down into their parts, which then leads to things like microservices and things like being able to scale horizontally, and proving that out.
>> So the benefits of the cloud, higher level services.
>> Absolutely.
>> So you shift to the platform which is cloud, lift and shift or get it over there, and then set it up so it can take advantage and increase the functionality. Is that kind of the difference?
>> And one thing that we're seeing too is that these companies are operating this hybrid model. So they've brought some containers over, and then they have legacy, like virtual machines, that they want to bring over into the cloud, but they're not in a position right now where they can refactor or even-
>> Not in position, it's not even on the table yet.
>> So that's where we're also seeing opportunities where we can identify ways that we can actually lift and shift that VM closer, at least, to the containers. And that's where a lot of my conversations as a cloud success architect are: of how do we refactor, but also re-platform, the most strategic candidates?
>> So is Konveyor a good fit for these kinds of opportunities?
>> Yes, 100%. It actually asks you, like, it starts with certain phases, like an assessment phase; then it asks you a bunch of questions about your infrastructure, applications and everything to gauge, and then provides you with the right strategy. It's not like one strategy. So it will provide you with the right strategy, either re-platform, refactor, or like what is best: retire, rehost, whatever. But replatform and refactor are the ones we are most focused on right now. Hopefully we might expand, but I'm not sure.
>> I think you just brought up a really good point, and I was curious about this too, 'cause Christopher, you mentioned you're working with largely Fortune 50 companies, so some of the largest companies on earth. We're not just talking about scale, we are talking about extraordinarily large scale.
>> Thousands of applications, sometimes.
>> And I'm thinking a lot, I'm just sitting here listening to you, thinking about the complexity. The complexity of each one of these situations. And I'm sure you've seen some of it before, you've been doing this for a while, and you're mentioning that Konveyor has different sorts of strategies. What's the flow like for that? I mean, just even thinking about it feels complex for me sitting here right now.
>> Yeah, so typically when we're doing a large scale migration, that lasts anywhere from like a year to two sometimes with these Fortune 50 companies.
>> Some of this legacy stuff has got to be.
>> This is usually when they're already at the point where they're ready to move, and we're just there to tell them how to move it at that point. So you're right, there's years that have been going on to get to the point that even I'm involved. But from an assessment standpoint, we spend months just looking at applications and assessing them, using tools like Konveyor, to just figure out: okay, are you ready to go? Do you have the green light, or do we have to pull the brakes? And you're right, so much goes into that, and it's all strategic.
>> Oh my gosh.
>> So I guess a quarter or a third of our time we're not even actually moving applications; we're assessing the applications and crafting the strategy.
>> That's right, there's many pieces to this puzzle.
>> Absolutely.
>> And I bet there's some even hidden in the corners, under the couch, that people forgot were even there.
>> We learn new things every time too. Every migration we learn new patterns and new difficulties, which is what's great about the community aspect. Because we take those and then we add them into the community, into Konveyor, and then we can build off of that. So when we're doing those migrations, or companies are using Konveyor and sharing that knowledge, we're building off what other people have done, we're expanding that. So there's a real advantage to using a tool like Konveyor when it comes to previous experiences.
>> So tell me about some of the trends that you're seeing across the board with the folks that you're helping.
>> Yeah, so trends wise, like I said, I feel like the low hanging fruit has already been done in the last 10 years. We're seeing very critical, like mission critical, applications that are typically 10, 20 years old that need to get into the cloud, because that term data gravity is what's preventing them from moving. And it's usually a large, older, what we would call monolithic, application that's preventing them from moving, and trying to identify the ways that we can take that apart and strategically move it into the cloud. And we had a customer survey that went out to a few hundred different people that were using Konveyor. And the feedback we got was about 50% of them are currently migrating, like have large migrations going on like this. And then another 30, 40% have that targeted in the next two years.
>> So it's happening.
>> It's happening now. This isn't a problem that we're trying to future proof; it is happening now for most corporations. They are focused on finding ways to be cost optimized, and especially in the way our market is working in this post-COVID world, it's more critical than ever. And even though they're cutting back expenses, a lot of people are still putting focus on their IT for these types of migrations.
>> What's the persona of people that you're trying to talk to about Konveyor? Who is out there?
>> What's the community like?
>> What's the community makeup, and why should someone join the team? Why should someone come in and work on the project?
>> So, someone who is interested or trying to start their journey, or someone who's already like going through a journey, and someone who has went through the journey, right? They have the most experience of like what went wrong and where it could be improved. So we cater to like everyone out there pretty much, right? Because at some point of time, right now it's cloud native, right now this is the ecosystem. In five years it would be like a totally different thing. So the mission of the project is going to be like similar, or like probably the same: help someone replatform and rehost things into the next generation of whatever that's going to come. So we need everyone. So that is the focus area, or like the targeted audience. Right now we have interest from people who are actually actively ongoing the migration, and the challenges that they are facing right now.
>> So legacy enterprises that are up and running, full workloads, multiple productions, hundreds and hundreds of apps, whose boss has said, "We're going to the cloud." And they go, oh boy, how do we do this? Lift and shift, get re-platformed? There's a playbook, there's a method. You lift and shift, you get it in there, get the core competency, use some managed services, restitch it together, go cloud native. So this is the cloud native roadmap.
>> And the beauty of Konveyor is that it also gives you like plans. So like once it assesses and analyzes it, it comes up with plans and reports, so that you can actually take it to your management and say, like, well, let's just target these, these and these many applications, X number of applications, in like two weeks. Now let's just do it in waves. So that is some feature that we are looking forward to in Konveyor 3, which is going to be released in the first quarter of 2023. So it's exciting, right?
>> It is exciting, and it makes a lot of sense.
>> It makes everyone happy. It makes the engineers happy; they don't have to be overworked. It also like makes the architects like Chris happy, and it also makes-
>> Pretty much so.
>> As exemplified right here, love that.
>> It makes the management happy, because they see that there is like progress going on, and they can like ramp it up or ramp it down. Holiday season? Do not touch production, right? Do not touch production.
>> You hear that manager, do not touch production.
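To make the assessment-and-waves flow described above a bit more concrete, here is a purely hypothetical sketch of the kind of per-application summary an assessment phase might hand to management. The application names, columns, and wave assignments are invented for illustration; this is not Konveyor's actual report format.

    Application      Assessment highlights         Suggested strategy   Wave
    statements-gen   container-ready, few deps     replatform           1
    batch-reporting  stable VM workload            rehost as-is         1
    billing-core     JEE monolith, shared schema   refactor             2
    green-screen-ui  superseded by new portal      retire               -

A plan in this shape is what lets the engineers, the architects, and the management argue from the same page about which applications move in which wave, which is the "makes everyone happy" point made above.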
>> It's also friendships too, 'cause people want to be in a tribe that's experiencing the same things over and over again. I think that is really the camaraderie and the community data sharing.
>> Yeah, that's the beauty of community, right? You can be in any number of teams, but you are on the same team. Like any number of companies, but on the same team. It was also like reflected in the keynotes; I think yesterday someone mentioned it. Sorry, I cannot recall the name of who mentioned it, but it's like: different companies, same team, similar goal. We all go through the journey together.
>> The water level rises together too. We learn from each other, and that's what community is really all about. You can tell, folks at home might not be able to feel it, but I can, you can tell how community-first you both are. Last question for you before we wrap up: is there anything that you wish the world knew about Konveyor that they don't know right now, or more people knew? And if not, your marketing team is nailing it and we'll just give them a high five.
>> I think it goes with just what we were talking about. It's not just a tool for individual applications and how to move them; it's how do we see things from a bigger picture? And this is what this tool ultimately is also trying to solve: how do we work together to move hundreds, if not thousands, of applications? Because it takes a village.
>> Quite literally, with that volume size.
>> My biggest advice to people who are considering this, who are in a large enterprise or even a smaller enterprise: make sure that you understand this is a team effort. Make sure you're communicating, and lessons learned on one team are going to be lessons learned for another team. So share that information. When you're doing migrations, make sure that all that knowledge is spread, because otherwise you're just going to end up repeating the same mistakes over and over again.
>> That is a beautiful way to close the show. Savitha, Christopher, thank you so much for being with us. John, always a pleasure. And thank you for tuning into theCUBE, live from Detroit. We'll be back with our next interview in just a few. (upbeat music)

Published Date : Oct 27 2022


Christopher Voss, Microsoft | KubeCon + CloudNativeCon Europe 2022


 

>> theCUBE presents KubeCon and CloudNativeCon, Europe, 2022. Brought to you by Red Hat, the Cloud Native Computing Foundation and its ecosystem partners.
>> Welcome to Valencia, Spain and KubeCon, CloudNativeCon, Europe, 2022. I'm Keith Townsend with my cohost, Enrico Signoretti, Senior IT Analyst at GigaOm.
>> Exactly.
>> 7,500 people I'm told, Enrico. What's the flavor of the show so far?
>> It's a fantastic mood. I mean, I found a lot of people wanting to talk about what they're doing with Kubernetes, sharing their, you know, stories, some war stories that were a bit tough. And you know, this is where you learn actually. Because we had a lot of Zoom calls, webinars and stuff. But it is when you talk in person, "Oh, I did it this way, and it didn't work out very well," that you start a conversation that is really different from learning over Zoom, when, you know, everybody talks about things that worked well, they did it right. No, it's here that you learn from other experiences.
>> So we're talking to amazing people the whole week, talking about those experiences here on theCUBE. Fresh on theCUBE for the first time, Chris Voss, senior software engineer at Microsoft Xbox. Chris, welcome to theCUBE.
>> Thank you so much for having me.
>> So first off, give us a high level picture of the environment that you're running at Microsoft.
>> Yeah. So, you know, we've got 20, well, probably close to 30 clusters at this point around the globe, you know, 700 to 1,000 pods per cluster, roughly. So about 22,000 pods total. So yeah, it's a pretty sizable footprint, and yeah, we've been running on Kubernetes since 2018, and well, actually it might be 2017, but anyways, so yeah, that's kind of our footprint.
>> So all of that, let's talk about the basics, which is security across multiple, I'm assuming, containers, microservices, etcetera. Why did you and the team settle on Linkerd?
>> Yeah, so previously we had our own kind of solution for managing TLS certs and things like that. And we found it to be pretty painful, pretty quickly. And so we knew, you know, we wanted something that was a little bit more abstracted away from the developers and things like that, that allowed us to move quickly. And so we began investigating, you know, solutions to that. And a few of our colleagues went to KubeCon in San Diego in 2019, CloudNativeCon as well. And basically they just, you know, sponged it all up. And actually, funny enough, my old manager was one of the people who was there, and he went to the Linkerd booth, and they had a thing going that was like, "Hey, get set up with mTLS in five minutes." And he was like, "This is something we want to do, why not check this out?" And he was able to do it. And so that put it on our radar. And so yeah, we investigated several others, and Linkerd just perfectly fit exactly what we needed.
>> So, in general we are talking about, you know, security at scale. So how do you manage security at scale and also flexibility, right? So, but you know, what is the... You told us about the five minutes to start using it, but you know, again, we are talking about war stories. We're talking about, you know, all these. So what kind of challenges did you find at the beginning when you started adopting this technology?
>> So the biggest ones were around getting up and running with like a new service, especially in the beginning, right? We were, you know, adding a new service almost every day, it felt like. And so, you know, basically it took someone going through a whole bunch of different repos, getting approvals from everyone to get the certs minted, all that fun stuff, getting them put into the right environments and in the right clusters, to make sure that, you know, everybody is talking appropriately. And just the amount of work that that took alone was just a huge headache and a huge barrier to entry for us to quickly move up the number of services we have.
>> So, I'm trying to wrap my head around the scale of the challenge. When I think about certification or certificate management, I have to do it on a small scale, and every now and again, when a certificate expires, it is just a troubleshooting pain.
>> Yes.
>> So as I think about that, it's not just certificates across 22,000 pods, it's certificates across 22,000 pods in multiple applications. How were you doing that before Linkerd? Like, what were the pain points? Like what happens when a certificate either fails, or expired, or is not updated?
>> So, I mean, to be completely honest, the biggest thing is we're just unable to make the calls, you know, out or in, based on, yeah, what is failing basically. But, you know, we saw essentially an uptick in failures around a certain service, and pretty quickly we got used to the fact that it was like, oh, it's probably a cert expiration issue. And so we tried, you know, a few things in order to make that a little bit more automated and things like that. But we never came to a solution that didn't require every engineer on the team to know essentially quite a bit about this, just to get into it, which was a huge issue.
>> So talk about day two, after you've deployed Linkerd. How did this alleviate software engineers, and what were the benefits of now having this automated way of managing certs?
>> So the biggest thing is like, there is no touch from developers. Everyone on our team... well, I mean, there are a lot of people who are familiar with security and certs and all of that stuff, but no one has to know it. Like it's not a requirement. Like for instance, I knew nothing about it when I joined the team. And even when I was setting up our newer clusters, I knew very little about it, and I was still able to really quickly set up Linkerd, which was really nice. And it's been, you know, essentially we've been able to just kind of set it and not think about it too much. Obviously, you know, there are parts of it that you have to think about; we monitor it and all that fun stuff. But yeah, it's been pretty painless almost from day one. It took a long time to trust it, for developers. You know, anytime there was a failure, it's like, "Oh, could this be Linkerd?" you know. But after a while, like now we don't have that immediate assumption, because people have built up that trust.
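For readers curious what that "set up with mTLS in five minutes" flow looks like in practice, here is a minimal sketch along the lines of Linkerd's documented getting-started steps. It assumes a recent Linkerd 2.x CLI and an existing Kubernetes cluster reachable via kubectl; the myapp namespace is an invented name, and on older releases the separate CRDs step is folded into linkerd install.

    # Install the linkerd CLI locally
    curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install | sh

    # Verify the cluster is ready, then install the control plane
    linkerd check --pre
    linkerd install --crds | kubectl apply -f -
    linkerd install | kubectl apply -f -
    linkerd check

    # Mesh an existing workload; the injected sidecar proxies get workload
    # identities issued automatically and speak mTLS to each other by default
    kubectl get deploy -n myapp -o yaml | linkerd inject - | kubectl apply -f -

The "no touch from developers" experience described above falls out of that last step: once a workload is injected, certificate issuance and rotation happen inside the mesh, so nobody has to mint or distribute certs per service.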
>> Also, you have this massive infrastructure, I mean, 30 clusters. So, I guess, it's quite different to manage a single cluster and 30. So what are the, you know, considerations that you have to make to install this software on, you know, 30 different clusters, manage different, you know, versions probably, et cetera, et cetera, et cetera?
>> So, I mean, you know, as far as like... I guess, just to clarify, are you asking specifically with Linkerd? Or are you just asking more in general?
>> Well, I mean, you can take the question in two ways.
>> Okay.
>> Sure, yeah, so Linkerd in particular, but the 30 clusters is also quite interesting.
>> Yeah. So, I mean, you know, more generally, you know, how we manage our clusters and things like that: we have, you know, a CLI tool that we use in order to like change context very quickly, and switch and communicate with whatever cluster we're trying to connect to, and, you know, are we debugging or getting logs, whatever. And then, you know, with Linkerd it's nice because, again, you know, we aren't having to worry about like, oh, how is this cert being inserted in the right node? Or not the right node, but in the right cluster, or things like that. Whereas with Linkerd, we don't really have that concern. When we spin up our clusters, essentially we get the root certificate and everything like that packaged up, passed along to Linkerd on installation. And then essentially, there's not much we have to do after that.
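The "root certificate packaged up and passed along on installation" detail can be sketched too. One pattern from Linkerd's documentation for running many clusters under a shared identity domain is to mint a single trust anchor and give each cluster an issuer certificate chained to it. The step CLI and the one-year validity below are illustrative choices, not requirements.

    # One shared trust anchor, generated once and reused for every cluster
    step certificate create root.linkerd.cluster.local ca.crt ca.key \
      --profile root-ca --no-password --insecure

    # A per-cluster intermediate that Linkerd uses to issue proxy certificates
    step certificate create identity.linkerd.cluster.local issuer.crt issuer.key \
      --profile intermediate-ca --not-after 8760h --no-password --insecure \
      --ca ca.crt --ca-key ca.key

    # Install each cluster's control plane with the shared trust root
    linkerd install \
      --identity-trust-anchors-file ca.crt \
      --identity-issuer-certificate-file issuer.crt \
      --identity-issuer-key-file issuer.key | kubectl apply -f -

Because every cluster trusts the same root, workloads in different clusters can authenticate one another, which is what keeps the multi-cluster setup as low-touch as described above.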
>> So talk to me about your upcoming session here at KubeCon. What are the high level talking points? Like, what will attendees learn?
>> Yeah. So it's a journey. Those are the sorts of talks that I find useful. Having not been, you know... I'm not a deep Kubernetes expert from, you know, decades or whatever of experience, but--
>> I think nobody is.
>> (indistinct)
>> True, yes.
>> That's also true.
>> That's another story.
>> That's a job posting, decades of requirements for--
>> Of course, yeah. But so, you know, it's a journey. It's really just like, hey, what made us decide on a service mesh in the first place? What made us choose Linkerd? And then, what are the ways in which, you know, we use Linkerd? So what are those, you know, we use some of the extra plugins and things like that. And then finally, a little bit more about what we're going to do in the future.
>> Let's talk about not just necessarily the future as in two or three days from now, or two or three years from now, but the future after you immediately solve the low level problems with Linkerd. What were some of the surprises? Because Linkerd, and service mesh in general, have side benefits. Did you experience any of those side benefits as well?
>> Yeah, it's funny, you know, writing the blog post, you know, I hadn't really looked at a lot of the data in years, on, you know, when we did our investigations and things like that. And we had seen that we had very low latency and low CPU utilization and things like that. And looking at some of that, I found that we were actually saving time off of requests. And I couldn't really think of why that was, and I was talking with someone else, and... unfortunately all that data's gone now, like the source data, so I can't go back and verify this, but it makes sense, you know: there's the availability zone routing that Linkerd supports. And so I think that's actually doing it, where, you know, essentially, if a node is closer to another node, it's essentially, you know, routing to those ones. So when one service is talking to another service and maybe they're on the same node, you know, it short circuits that, and allows us to gain some time there. It's not huge, but it adds up after, you know, 10, 20 calls down the line.
>> Right. In general, so you are saying that it's smooth operations, it's very, you know, simplifying your life.
>> And again, we didn't have to really do anything for that. It handled that for us.
>> It was there?
>> Yep. Yeah, exactly.
>> So we know one thing: when I do it on my laptop, it works fine. When I do it across 22,000 pods, that's a different experience. What were some of the lessons learned coming out of KubeCon 2018 in San Diego? I was there. I wish I would've run into the Microsoft folks. But what were some of the hard lessons learned scaling Linkerd across the 22,000 nodes?
>> So, you know, the first one, and this seems pretty obvious, but was just not something I knew about, was the high availability mode of Linkerd. So, obviously it makes sense; you would want that in, you know, a large scale environment. So like, that's one of the big lessons that, like, we didn't know right away. Like, one of the mistakes we made in one of our pre-production clusters was not turning that on. And we were kind of surprised. We were like, whoa, like all of these pods are spinning up, but they're having issues, like, actually getting injected and things like that. And we found, oh, okay, yeah, you need to actually give it some more resources. But it's still very lightweight considering, you know, they have high availability mode, but it's just a few instances still.
>> So even from, you know, a binary perspective, in running Linkerd, how much overhead is it?
>> That is a great question. So I don't remember the numbers off the top of my head, but it's very lightweight. We evaluated a few different service meshes, and it was the lightest weight that we encountered at that point.
>> And then from a resource perspective, is it a team of Linkerd people? Is it a couple of people? Like, how?
>> To be completely honest, for a long time it was one person, Abraham, who actually is the person who proposed this talk. He couldn't make it to Valencia, but he essentially did probably 95% of the work to get it into production. And then, this was before we even had a team dedicated to our infrastructure. And so now we have a team dedicated; we're all kind of Linkerd folks, if not Linkerd experts. We at least can troubleshoot, basically, and things like that. So it's, I think, a group of six people on our team, and then, you know, various people who've had experience with it on other teams.
>> But no one dedicated just to that.
>> No one is dedicated just to it. No, it's pretty light touch once it's up and running. It took a very long time for us to really understand it and to, you know, get... like, not getting started, but like getting to where we really felt comfortable letting it go in production. But once it was there, like, it is very, very light touch.
>> Well, I really appreciate you stopping by, Chris. It's been an amazing conversation to hear how Microsoft is using an open source project.
>> Exactly.
>> At scale. It's just a few years ago when you would've heard the concept of Microsoft and open source together and been like, oh, that's just, you know--
>> They have changed a lot in the last few years. Now they are huge contributors. And, you know, if you go to Azure, it's full of open source stuff everywhere, so.
>> Yeah.
>> Wow. KubeCon 2022, how the world has changed in so many ways. From Valencia, Spain, I'm Keith Townsend, along with Enrico Signoretti. You're watching theCUBE, the leader in high tech coverage. (upbeat music)

Published Date : May 19 2022



Amy Chandler, Jean Younger & Elena Christopher | UiPath FORWARD III 2019


 

>> Live, from Las Vegas, it's theCUBE covering UiPath Forward Americas 2019. Brought to you by UiPath. >> Welcome back to the Bellagio in Las Vegas, everybody. You're watching theCUBE, the leader in live tech coverage. My name is Dave Vellante. Day one of UiPath Forward III, hashtag UiPathForward. Elena Christopher is here. She's the senior vice president at HFS Research, and Elena, I'm going to recruit you to be my co-host here. >> Co-host! >> On this power panel. Jean Youngers here, CUBE alum, VP, a Six Sigma Leader at Security Benefit. Great to see you again. >> Thank you. >> Dave: And Amy Chandler, who is the Assistant Vice President and Director of Internal Controls, also from Security Benefit. >> Hello. >> Dave: Thanks for coming on theCUBE. >> Thank you. >> Alright Elena, let's start off with you. You follow this market, you have for some time, you know HFS is sort of anointed as formulating this market place, right? >> Elena: We like to think of ourselves as the voice-- >> You guys were early on. >> The voice of the automation industry. >> So, what are you seeing? I mean, process automation has been around forever, RPA is a hot recent trend, but what are you seeing the last year or two? What are the big trends and rip currents that you see in the market place? >> I mean, I think one of the big trends that's out there, I mean, RPA's come on to the scene. I like how you phrase it Dave, because you refer to it as, rightly so, automation is not new, and so we sort of say the big question out there is, "Is RPA just flavor of the month?" RPA is definitely not, and I come from a firm, we put out a blog earlier this year called "RPA is dead. Long live automation." And that's because, when we look at RPA, and when we think about what it's impact is in the market place, to us the whole point of automation in any form, regardless of whether it's RPA, whether it be good old old school BPM, whatever it may be, it's mission is to drive transformation, and so the HFS perspective, and what all of our research shows and sort of justifies that the goal is, what everyone is striving towards, is to get to that transformation. And so, the reason we put out that piece, the "RPA is dead. Long live integrated automation platforms" is to make the point that if you're not- 'cause what does RPA allow? It affords an opportunity for change to drive transformation so, if you're not actually looking at your processes within your company and taking this opportunity to say, "What can I change, what processes are just bad, "and we've been doing them, I'm not even sure why, "for so long. What can we transform, "what can we optimize, what can we invent?" If you're not taking that opportunity as an enterprise to truly embrace the change and move towards transformation, that's a missed opportunity. So I always say, RPA, you can kind of couch it as one of many technologies, but what RPA has really done for the market place today, it's given business users and business leaders the realization that they can have a role in their own transformation. And that's one of the reasons why it's actually become very important, but a single tool in it's own right will never be the holistic answer. >> So Jean, Elena's bringing up a point about transformation. We, Stew Bennett and I interviewed you last year and we've played those clips a number of times, where you sort of were explaining to us that it didn't make sense before RPA to try to drive Six Sigma into business processes; you couldn't get the return. >> Jean: Right. 
>> Now you can do it very cheaply. And Six Sigma or better is what you use for airplane engines, right? >> Right. >> So, now you're bringing up the business process. So, you're a year in, how's it going? What kind of results are you seeing? Is it meeting your expectations? >> It's been wonderful. It has been the best, it's been probably the most fun I've had in the last fifteen years of work. I have enjoyed it, partly because I get to work with this great person here, and she's my COE, and helps stand up the whole RPA solution, but you know, we have gone from finance into investment operations, into operations. You know, we've got one sitting right now, looking at statements, that's going to be fourteen thousand hours saved, in both turnaround time and staff hours, and it's going to touch our customer directly, in that they're not going to get a bad statement anymore. And so, you know, it has just been an incredible journey for us over the past year, it really has. >> And so okay Amy, your role is, you're the hardcore practitioner here, right? >> Amy: That's right. >> You run the COE. Tell us more about your role, and I'm really interested in how you're bringing it out, RPA, to the organization. Is that led by your team, or is it kind of this top-down approach? >> Yeah, this last year, we spent a lot of time trying to educate the lower levels and go from a bottom-up perspective. Pretty much, we implemented our infrastructure, we had a nice solid change management process, we built in logical access, we built in good processes around that so that we'd be able to scale easily over this last year, which kind of sets us up for next year, and everything that we want to accomplish then. >> So Elena, we were talking earlier on theCUBE about, you know, RPA, in many ways, I called it cleaning up the crime scene, where stuff is kind of really sort of a mess, and there are huge opportunities to improve. So, my question to you is, it seems like RPA is, in some regards, successful because you can drop it into existing processes, you're not changing things, but in a way, this concerns me that, oh well, I'm just kind of paving the cow path. So how much process reinvention has to occur in order to take advantage of RPA? >> I love that you use that phrase, "paving the cow path." As a New Englander, as you know, the roads in Boston are in fact paved cow paths, so we know that can lead to some dodgy roads, and I say it because that's part of what the answer is, because the reinvention, and honestly the optimization, has to be part of what the answer is. I said it just a little bit earlier in my comments: you're missing an opportunity with RPA and broader automation if you don't take that step to actually look at your processes and figure out if there's just essentially deadwood that you need to get rid of, things that need to be improved. One of the sort of guidelines, because not all processes are created equal, and you guys should chime in on this, is that you don't want to spend the time and effort to optimize a process if it's not critical to your business, if you're not going to get lift from it, or some ROI. It's a bit of a continuum, so one of the things that I always encourage enterprises to think about is this idea of, obviously, what business problem are you trying to solve? But as you're going through the process optimization, what kind of user experience do you want out of this?
And your users, by the way, you tend to think of your user as, it could be your end customer, it could be your employee, it could even be your partner, but trying to figure out what the experience is that you actually want to have, and then you can actually look at the process and figure out, do we need to do something different? Do we need to do something completely new to actually optimize that? And then again, align it with what you're trying to solve and what kind of lift you want to get from it. But I'd love to, I mean, hopping over to you guys, you live and breathe this, right? And so I think you have a slightly different opinion than me, but-- >> We do live and breathe it, and every process we look at, we take into consideration. But you've also got to, you have a continuum, right? If it's a simple process and we can put it up very quickly, we do, but we've also got ones where one process'll come into us, and a perfect example is our rate changes. >> Amy: Rate changes. >> It came in and there was one process at the very end, and we did a wing-to-wing of the whole thing, followed the data all the way back through the process, and I think it hit, what, seven or eight-- >> Yeah. >> Different areas-- >> Areas. >> Of the business, and once we got done with that whole wing-to-wing to see what we could optimize, it turned into what, sixty? >> Amy: Yeah, sixty plus. Yeah. >> Dave: Sixty plus what? >> Bot processes from one entry. >> Yeah. >> And so, right now, we've got 189 to 200 processes in the backlog. And so if you take that and exponentially increase it, we know that there's probably actually 1,000 to 2,000 more processes, at minimum, that we can hit for the company, and we need to look at those. >> Yeah, and I will say, the wing-to-wing approach is very important because you're following the data as it's moving along. So if you don't do that, if you only focus on a small little piece of it, you don't know what's happening to the data before it gets to you, and you don't know what's going to happen to it when it leaves you, so you really do have to take that wing-to-wing approach. >> So, internal controls is in your title, so talking about scale, it's a big theme here at UiPath, and these days, things scale really fast, and boo-boos can happen really fast. So how are you ensuring, you know, that the edicts of the organization are met, whether it's security, compliance, governance? Is that part of your role? >> Yeah, we've actually kept internal audit and internal controls, and in fact, our external auditors, EY, we've kept them all at the table when we've gone through processes, when we've built out our change management process, our logical access. When we built our whole process from beginning to end, they sat at the table with us and went over everything to make sure that we were hitting all the controls that we needed to. >> And actually, I'd like to piggyback on that comment, because just that inclusion of the various roles, that's what we found as an emerging best practice in all of our research and all of the qualitative conversations that we have with enterprises and service providers, because, I mean, it applies on multiple levels: if you do things in a silo, you'll have siloed impact. If you bring the appropriate constituents to the table, you're going to understand their perspective, and it's going to have broader reach.
So it helps alleviate the silos, but it also supports the point that you just made, Amy, about looking at the processes end to end, because you've got the necessary constituents involved so you know the context. And then, I believe, I mean I think you guys shared this with me, that particularly when audit's involved, you're perhaps helping cultivate an understanding of how even their processes can improve as well. >> Right. >> That is true, and from an overall standpoint with controls, I think a lot of people don't realize that a huge benefit is your controls, 'cause if you're automating your controls, from an internal standpoint, you're not going to have to test as much. From an associate process owner paying attention to their process, to the internal auditors, they're not going to have to test as much either, and then your external auditors, well, that's revenue. I mean, that's savings. >> You lower your auditing bill? >> Yeah. Yeah. >> Well, we'll see, right? >> Yeah. (laughter) >> That's always the hope. >> Don't tell EY. (laughter) So I've got to ask you, you're in a little over a year. I don't know if you golf, but you know what a mulligan is in golf. If you had a mulligan, a do-over, what would you do over? >> The first process we put in place. At least for me, it breaks a lot, and we did it because at the time, we were going through decoupling and trying to just get something up to make sure that what we stood up was going to work and everything, and so we kind of slammed it in, and we pay for that every quarter, and so actually it's on our list to redo. >> Yeah, we automated a bad process. >> Yeah, we automated a bad process. >> That's a really good point. >> So we pay for it in maintenance every quarter, we pay for it, 'cause it breaks inevitably. >> Yes. >> Okay, so what has to happen? You have to reinvent the process, to Elena's point? >> Yes, you know, we relied on a process that somebody else had put in place, and in looking at it, it was kind of up and down and through the hoop and around this way to get what they needed, and you know, there's much easier ways to get the data now. And that's what we're doing. In fact, we've built our own, we call it a bot mart. That's where all our data goes. They won't let us touch the other data marts and so forth, so they created a bot mart for us, and anything that we need data for, they dump in there for us, and then that's where our bot can hit, and our bot can hit it at any time of the day or night when we need the data, and so it's worked out really well for us. The bot mart kind of came out of that project of, there's got to be a better way. How can we do this better, instead of relying on these systems that change and upgrade, and then we run the bot and it's working one day, and the next day somebody has gone in and tweaked something, when all I really need out of that system is data, that's all I need. I don't need, you know, a report. I don't need anything like that, 'cause the reports change and they get messed up. I just want the raw data, and so that's what we're starting to do.
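Jean's bot mart is a concrete pattern: a dedicated store that the data team refreshes with raw data, which the bots can query at any hour without depending on reporting systems that change underneath them. Below is a minimal sketch of that pattern in Python, assuming SQLite as a stand-in for the mart and a hypothetical rate_changes table with a load_date column; none of this reflects Security Benefit's actual implementation.

```python
import sqlite3
from datetime import datetime

# The bot mart: a dedicated store the data team refreshes for the bots.
# SQLite stands in here for whatever database the mart actually runs on.
MART_PATH = "bot_mart.db"  # hypothetical location

def fetch_raw_rows(table: str, as_of: str) -> list[tuple]:
    """Pull raw rows only: no formatted reports, just the data the bot needs."""
    conn = sqlite3.connect(MART_PATH)
    try:
        cur = conn.execute(
            # load_date is an assumed column marking when the mart was refreshed
            f"SELECT * FROM {table} WHERE load_date = ?", (as_of,)
        )
        return cur.fetchall()
    finally:
        conn.close()

# An unattended bot can run this at any hour, independent of source-system upgrades.
if __name__ == "__main__":
    rows = fetch_raw_rows("rate_changes", datetime.now().strftime("%Y-%m-%d"))
    print(f"{len(rows)} raw rows available for the bot to process")
```

The design choice worth noting is the raw-data contract: reports get reshaped whenever someone tweaks the source system, but a plain table of raw rows is a far more stable interface for a bot to consume.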
>> How do you ensure that the data is synchronized with your other marts and warehouses? Is that a problem? >> Not yet. >> No, not yet! (laughter) >> I'm wondering, 'cause I was thinking the exact same question, Dave, because on one hand it's a nice step, I think, from a governance standpoint. You have what you need; perhaps IT or whomever your data curators are, they're not going to have a heart attack that you're touching stuff that they don't want you to. But then there is that potential for synchronization issues, 'cause that whole concept of golden source implies one copy, if you will. >> Well, and it is. It's all coming through, we have a central data repository that the data's going to come through, and it's all sitting there, and then it'll move over. And to me, what I most worry about, like I mentioned on the statement once, okay, I get my data in, is it the same data that got used to create those statements? And as we're doing the testing and as we're looking at going live, that's one of our huge test cases. We need to understand what time that data comes in, when it will be in our bot mart, so when can I run those bots? You know, 'cause they're all going to be unattended on those, so you know, the timing is critical, and so that's why I said not yet. >> Dave: (chuckle) >> But you know what, we can build a bot to do that compare of the data for us. >> Haha, all right. I love that. >> I saw a stat the other day. I don't know where it was, on Twitter, or maybe it was your data, that by 2023, more money is going to be spent on chat bots than on mobile development. >> Jean: I can imagine, yes. >> What are you doing with chat bots? And how are you using them? >> Do you want to answer that one, or do you want me to? >> Go ahead. >> Okay, so part of the reason I'm so enthralled by the chat bot or personal assistant or anything is because of the attended robots that we have; we have problems making sure that people are doing what they're supposed to be doing in prep. We have some in finance, and you know, in finance you have a very fine line of what you can automate and what you need the user to still understand what they're doing, right? And so we felt like we had a really good, you know, combination of that, but in some instances, they forget to do things, so things aren't there, and we get the phone call, the bot broke, right? So part of the thing I'd like to do is I'd like to move that back to an unattended bot, and I'm going to put a chat bot in front of it, and then all they have to do is type in "run my bot" and it'll come up; if they have more than one bot, it'll say "which one do you want to run?" They'll click it and it'll go, instead of having to go out on their machine, figure out where to go, figure out which button to click. And in the chat I can also send them a little message, "Did you run your other reports? Did you do this?" You know, so I can use it for the end user, to make that experience better for them. And plus, we've got a lot of IT, we've got a lot of HR stuff that can fold into that, and then RPA all in behind it, kind of the engine on a lot of it. >> I mean, you've child-proofed the bot. >> Exactly! There you go. There you go. >> Exactly. Exactly. And it also provides a means to be able to answer those commonly asked questions for HR, for example. You know, how much vacation time do I have? When can I change my benefits? Examples of questions that they answer frequently every day. So that provides another avenue for utilization of the chat bot.
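Jean's "run my bot" assistant maps cleanly to a small dispatcher: look up which bots a user owns, ask which one to run if there is more than one, start the unattended job, and append the prep reminder. Here is a rough sketch of that flow in Python; the orchestrator URL, the user-to-bot registry, and the reminder text are all hypothetical placeholders, and a real deployment would call the RPA vendor's actual job-start API rather than this invented endpoint.

```python
import requests  # used to call a job-start API; the endpoint below is hypothetical

ORCHESTRATOR_URL = "https://orchestrator.example.com/api/jobs/start"  # placeholder

# Hypothetical registry mapping chat users to the unattended bots they own.
USER_BOTS = {
    "jean": ["monthly_statements", "rate_change_compare"],
    "amy": ["controls_testing"],
}

REMINDER = "Did you run your other reports? Did you do your prep steps?"

def handle_message(user: str, text: str) -> str:
    """Respond to a 'run my bot' request typed into the chat window."""
    if text.strip().lower() != "run my bot":
        return "Type 'run my bot' to kick off one of your bots."
    bots = USER_BOTS.get(user, [])
    if not bots:
        return "No bots are registered to you."
    if len(bots) > 1:
        # More than one bot: ask which one, as Jean describes.
        return "Which one do you want to run? " + ", ".join(bots)
    return start_bot(user, bots[0])

def start_bot(user: str, bot_name: str) -> str:
    """Kick off the unattended job and send the prep reminder."""
    resp = requests.post(ORCHESTRATOR_URL, json={"bot": bot_name, "requested_by": user})
    resp.raise_for_status()
    return f"Started {bot_name}. {REMINDER}"
```

Handling the user's follow-up click on a specific bot name is omitted for brevity; the point of the sketch is that the chat layer hides the machine, the buttons, and the prep checklist behind one command.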
>> And if I may, Dave, it supports a concept that I know we were talking about yesterday. At HFS it's our "Triple-A Trifecta": it's taking the baseline of automation, it intersects with components of AI, and then potentially with analytics. This is starting to touch on some of the opportunities to look at other technologies. You say chat bots; at HFS we don't use the term chat bot, just because we like to focus on and emphasize the cognitive capability, if you will. But in any case, you guys essentially are saying, well, RPA is doing great for what we're using RPA for, but we need a little bit of extension of functionality, so we're layering in the chat bot or cognitive assistant. So it's a nice example of some of that extension, of really seeing how it's, I always call it the power of "and," if you will. Are you going to layer these things in to get what you need out of it? What best solves your business problems? Just a very practical approach, I think. >> So Elena, Guy has a session tomorrow on predictions, so we're going to end with some predictions. So our "RPA is dead," (chuckle) will it be resuscitated? What's the future of RPA look like? Will it live up to the hype? I mean, so many initiatives in our industry haven't. I always criticize enterprise data warehousing, and ETL, and big data is not living up to the hype. Will RPA? >> It's got a hell of a lot of hype to live up to, I'll tell you that. So, back to some of our causality about why we even said it's dead: as a discrete software category, RPA is clearly not dead at all. But unless it's helping to drive forward with transformation, and even some of the strategies that these fine ladies from Security Benefit are utilizing, which is layering in additional technology, that's part of the path there. But honestly, the biggest challenge that you have to go through to get there, and it cannot be underestimated, is the change that your organization has to go through. 'Cause think about it: if we look at the grand big vision of where RPA and broader intelligent automation takes us, the concept of creating a hybrid workforce, right? So what's a hybrid workforce? It's literally our humans complemented by digital workers. It still sounds like science fiction. To think that any enterprise could try and achieve some version of that, and that it would be, A, fast, or B, not take a lot of change management, is absolutely ludicrous. So it's just a very practical approach to be eyes wide open, recognize that you're solving problems, but you have to want to drive change. So to me, and sort of the HFS perspective, continues to be that if RPA is not going to die a terrible death, it needs to really support that vision of transformation. And I mean, honestly, we're here at a UiPath event; they had many announcements today, and they're doing a couple of things: supporting core functionality of RPA, literally adding in process discovery and mining capabilities, adding in analytics to help enterprises actually track what their benefit is. >> Jean: Yes. >> These are very practical cases that help RPA live another day. But they're also extending functionality, adding in their whole announcement around AI Fabric, adding in some of the cognitive capability to extend the functionality. And so prediction-wise, RPA as we know it three years from now is not going to look like RPA at all. I'm not going to call it AI, but it's going to become a hybrid, and it's honestly going to look a lot like that Triple-A Trifecta I mentioned. >> Well, and UiPath, and I presume other suppliers as well, are expanding their markets. They're reaching, you hear about citizen developers and 100% of the workforce. Obviously you guys are excited, and you see a long-run way for RPA. >> Jean: Yeah, we do. >> I'll give you the last word.
>> It's been a wonderful journey thus far. After this morning's event where they showed us everything, I saw a sneak peek yesterday during the CAB, and I had a list of things I wanted to talk to her about already when I came out of there. And then she saw more of 'em today, and I've got a pocketful of notes of stuff that we're going to take back and do. I really, truly believe this is the future and we can do so much. Six Sigma has kind of gotten a rebirth. You go in and look at your processes and we can get those to perfect. I mean, that's what's so cool. It is so cool that you can actually tell somebody, I can do something perfect for you. And how many people get to do that? >> It's back to the user experience, right? We can make this wildly functional to meet the need. >> Right, right. And I don't think RPA is the end all solution, I think it's just a great tool to add to your toolkit and utilize moving forward. >> Right. All right we'll have to leave it there. Thanks ladies for coming on, it was a great segment. Really appreciate your time. >> Thanks. >> Thank you. >> Thank you for watching, everybody. This is Dave Vellante with theCUBE. We'll be right back from UiPath Forward III from Las Vegas, right after this short break. (technical music)

Published Date : Oct 16 2019


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Amy Chandler | PERSON | 0.99+
Elena | PERSON | 0.99+
Dave Vellante | PERSON | 0.99+
Jean | PERSON | 0.99+
Dave | PERSON | 0.99+
Jean Youngers | PERSON | 0.99+
Stew Bennett | PERSON | 0.99+
Boston | LOCATION | 0.99+
Amy | PERSON | 0.99+
Elena Christopher | PERSON | 0.99+
189 | QUANTITY | 0.99+
1,000 | QUANTITY | 0.99+
Jean Younger | PERSON | 0.99+
fourteen thousand hours | QUANTITY | 0.99+
Las Vegas | LOCATION | 0.99+
100% | QUANTITY | 0.99+
yesterday | DATE | 0.99+
UiPath | ORGANIZATION | 0.99+
next year | DATE | 0.99+
last year | DATE | 0.99+
HFS | ORGANIZATION | 0.99+
one process | QUANTITY | 0.99+
HFS Research | ORGANIZATION | 0.99+
200 processes | QUANTITY | 0.99+
one copy | QUANTITY | 0.99+
eight | QUANTITY | 0.98+
tomorrow | DATE | 0.98+
seven | QUANTITY | 0.98+
one entry | QUANTITY | 0.98+
Six Sigma | ORGANIZATION | 0.98+
one | QUANTITY | 0.98+
more than one bot | QUANTITY | 0.97+
today | DATE | 0.97+
Sixty | QUANTITY | 0.97+
CUBE | ORGANIZATION | 0.97+
2019 | DATE | 0.97+
earlier this year | DATE | 0.96+
sixty | QUANTITY | 0.96+
single tool | QUANTITY | 0.96+
past year | DATE | 0.95+
marts | DATE | 0.95+
both time | QUANTITY | 0.95+
Security Benefit | ORGANIZATION | 0.94+
bot mart | ORGANIZATION | 0.94+
Twitter | ORGANIZATION | 0.94+
next day | DATE | 0.93+
first process | QUANTITY | 0.93+
Day one | QUANTITY | 0.93+
2,000 more processes | QUANTITY | 0.9+
One | QUANTITY | 0.9+
over a year | QUANTITY | 0.88+
Triple-A Trifecta | ORGANIZATION | 0.88+
marts | ORGANIZATION | 0.87+
UiPath Forward III | TITLE | 0.84+
FORWARD III | TITLE | 0.84+


George Gagne & Christopher McDermott, Defense POW/MIA Accounting Agency | AWS Public Sector Summit 2019


 

>> Live from Washington, DC, it's theCUBE, covering AWS Public Sector Summit. Brought to you by Amazon Web Services. >> Welcome back everyone to theCUBE's live coverage of the AWS Public Sector Summit, here in our nation's capital. I'm your host, Rebecca Knight, co-hosting with John Furrier. We have two guests for this segment: we have George Gagne, he is the Chief Information Officer at the Defense POW/MIA Accounting Agency. Welcome, George. And we have Christopher McDermott, who is the CDO of the POW/MIA Accounting Agency. Welcome, Chris. >> Thank you. >> Thank you both so much for coming on the show. >> Thank you. >> So, I want to start with you, George. Why don't you tell our viewers a little bit about the POW/MIA Accounting Agency. >> Sure, so the mission has been around for decades, actually. In 2015, Secretary of Defense Hagel looked at the accounting community as a whole, and for efficiency gains made the decision to consolidate some of the accounting community into a single organization. They took the former JPAC, which was a direct reporting unit to PACOM out of Hawaii, which was the operational arm of the accounting community, responsible for research, investigation, recovery, and identification. They took that organization, they looked at the policy portion of the organization, which is here in Crystal City, DPMO, and then they took another part of the organization, our Life Sciences Support Equipment laboratory in Dayton, Ohio, and consolidated that to make the Defense POW/MIA Accounting Agency, under the Office of the Secretary of Defense for Policy. So that was step one. Our mission is the fullest possible accounting of missing U.S. personnel to their families and to our nation. That's our mission. We have approximately 82,000 Americans missing from our past conflicts, our service members from World War II, the Korean War, Vietnam, and the Cold War. When you look at the demographics of that, we have approximately 1,600 still missing from the Vietnam conflict. We have just over 100 still missing from the Cold War conflict. We have approximately 7,700 still missing from the Korean War, and the remainder are from World War II. So, you know, one of the challenges when our organization was first formed was we had three different organizations that all had different reporting chains; they had their own cultures, disparate cultures, disparate systems, disparate processes, and step one was to get everybody on the same backbone and the same network. Step two was to look at all those on-prem legacy systems that we had across our environment and look at the consolidation of that. And because our organization is so geographically dispersed, I just mentioned three locations, we also have a laboratory in Offutt, Nebraska. We have detachments in Southeast Asia, in Thailand, Vietnam, and Laos, and we have a detachment in Germany. And we're highly mobile. This year we plan to do 84 missions around the world, in 34 countries, and those missions run in 30- to 45-day increments. So, highly mobile, very globally diverse organization.
>> So when we looked at that environment, obviously we knew the first step, after we got everybody on one network, was to look to cloud architectures and models in order to be able to communicate, coordinate, and collaborate. So we developed what we call our case management system: business intelligence software, along with enterprise content software, coupled with forensics software for our laboratory staff, all cloud hosted. >> So business challenges, the consolidation, the reset or set-up for the mission, but then the data types; it's a different kind of data problem to work through to achieve the outcomes you're looking for. Christopher, talk about that dynamic, because, >> Sure. >> You know, there are historically different types of data. >> That's right. And a lot of our data started as IBM punch cards, or it started from, you know, paper files. When I started the work, we were still looking things up on microfiche and microfilm, so we've been working on an aggressive program to get all that kind of data digitized, but then we have to make it accessible. And we had, you know, as George was saying, multiple different organizations doing similar work. So you had a lot of duplication of the same information, but kept in different structures, searchable in different pathways. So we have to bring all of that together and make it accessible, so that the government can all be on the same page. Because again, as George said, there's a large number of cases that we potentially can work on, but we have to be able to triage that down to the ones that have the best opportunity for us to use our current methods to solve. So rather than look for all 82,000 at once, we want to be able to navigate through that data and find the cases that have the most likelihood of success.
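Christopher's triage idea, ranking tens of thousands of cases so analysts work the ones with the best chance of resolution first, is essentially a scoring-and-sorting exercise. The sketch below illustrates the shape of that approach in Python; the factors and weights are invented for illustration and are not the agency's actual methodology.

```python
from dataclasses import dataclass

@dataclass
class Case:
    case_id: str
    conflict: str          # e.g. "Vietnam", "Korea", "WWII"
    has_field_leads: bool  # prior investigations pointing somewhere actionable
    site_accessible: bool  # host-nation access, terrain, permissions
    records_digitized: bool

def likelihood_score(c: Case) -> float:
    """Invented weights for illustration; a real model would be derived from case history."""
    score = 0.0
    score += 0.4 if c.has_field_leads else 0.0
    score += 0.3 if c.site_accessible else 0.0
    score += 0.2 if c.records_digitized else 0.0
    score += 0.1 if c.conflict == "Vietnam" else 0.0  # strongest continuity of leads
    return score

def triage(cases: list[Case], top_n: int) -> list[Case]:
    """Rank the full case list and return the most promising candidates."""
    return sorted(cases, key=likelihood_score, reverse=True)[:top_n]
```

The point of the sketch is the workflow, not the weights: score everything cheaply, then spend expensive investigation and recovery effort only on the top of the ranked list.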
>> So where do you even begin? What's the data that you're looking at? What have you seen that has had the best indicators for success in finding those people who are prisoners of war or missing in action? >> Well, you know, to some degree, as George was saying, our mission has been going on for decades. So, you know, a lot of the files that we're working from today were created at the time of the incidents. For the Vietnam cases, we have a lot of continuity, so we're still working the leads that are the strongest out of that set. And we still send multiple teams a year into Vietnam and Laos, Cambodia. And that's where, you know, you try to build upon the previous investigations, but that's also where, if those investigations were done in the '70s or the '80s, we have to then surface what's actionable out of that information, which pathways have we trod that didn't pay off. So a lot of it is: what can we reanalyze today? What new techniques can we bring? Can we bring in, you know, remote sensing data? Can we bring GIS applications to analyze where's the best scenario for resolving these cases after all this time? >> I mean, it's interesting. One of the things we hear from Amazon, we've done so many interviews with Amazon executives, we kind of know their messaging. So here's one of them: "Eliminate the undifferentiated heavy lifting." You hear that a lot, right? So there might be a lot of that here, and then Teresa had a slide up today talking about COBOL and mainframes, talk about punch cards. >> Absolutely. >> So you have a lot of data of different types, older data. So it's a true digitization project that you've got to enable, as well as other complexity. >> Absolutely. When the agency was formed in 2015, we really began the process of an information modernization effort across the organization. Because like I said, these were legacy on-prem systems that were their systems of record, that had specific ways of doing things and didn't really have the ability to share the data, collaborate, coordinate, and communicate. So it was a heavy lift across the board getting everyone on one backbone, but then also going through an agency information modernization evolution, if you will, that we're still working our way through, because we're so mobile and geographically diversified as well: our field communications capability and reach-back into the cloud, being able to access that data from locations around the world, whether it's in the Himalayas, whether it's in Vietnam, whether it's in Papua New Guinea, wherever we may be, not just our fixed locations. >> George and Christopher, if you each could comment for our audience, I would love to get this on record, as you guys are really doing a great modernization project. If you each could talk about key learnings, and it could be from scar tissue, it could be from pain and suffering to an epiphany or some breakthrough. What were some of the key learnings as you went through the modernization? Could you share some from a CIO perspective and from a CDO perspective? >> Well, I'll give you a couple takeaways of what I think we did well and some areas where I thought we could have done better. For us, as we looked at building our case management system, I think step one was defining our problem statement. It was years in planning before we actually took steps to start building out our infrastructure in the Amazon cloud, or our applications, but in building and defining that problem statement, we took some time to really take a look at that, because of the difference in cultures from the disparate organizations and our processes and so on and so forth. Defining that problem statement was critical to our success moving forward. I'd say one of the areas where we could have done better is probably associated with communication and stakeholder buy-in. Because we are so geographically dispersed and highly mobile, getting the word out to everybody in all those geographic locations and all those time zones, with our workforce out in the field a lot, at 30 to 45 days at a time, three or four missions a year, sometimes more, certainly made it difficult to get that messaging out and get that stakeholder buy-in. And a challenge we still deal with moving forward is data hygiene. Something else we did really well was establishing this CDO role within our organization, because it's no longer about the systems that are used to process and store the data; it's really about the data. And who better to know the data than our data owners, not custodians, and our chief data officer and the data governance council that was established. >> Christopher, your learnings, takeaways? >> What we're trying to build upon is, you define your problem statement, but the pathway there is you have to get results in front of the end users. You have to get them to the people who are doing the work, so you can keep guiding it toward the solution that actually meets all the needs, as well as build something that can innovate continuously over time.
Because the technology space is changing so quickly and dynamically that the more we can surface our problem set, the more help we can get to find ways to navigate through that. >> So one of the things you said is that you're using data to look at the past. Whereas so many of the guests we're talking to today, and so many of the people here at this summit, are talking about using data to predict the future. Are you able to look at your data sets from the past and then also sort of say, this is how we can prevent more POWs? Are you using, are you thinking at all, are you looking at the future at all with your data? >> I mean, certainly, especially from our laboratory science perspective, we have probably the most advanced human identification capability in the world. >> Right. >> And recovery. And so all of those lessons really go a long way toward what information needs to be accessible and actionable for us to be able to recover individuals in those circumstances and make those identifications as quickly as possible. At the same time, the cases that we're working on are the hardest ones. >> Right. >> The ones that are still left. But each success that we have teaches us something that can then be applied going forward. >> What is the human side of your job? Because here you are, these two wonky data number crunchers, and yet these are people who died fighting for their country. How do you manage those two really important parts of your job, and how do you think about that? >> Yeah, I will say that it does amp up the emotional quotient of our agency, and everybody really feels passionately about all the work that they do. About 10 times a year our agency meets with family members of the missing at different locations around the country. And those are really powerful reminders of why we're doing this. And you do get a lot of gratitude, but at the same time, each case that's still waiting, that's the one that matters to them. And you see that in the passion our agency brings to the data questions and how quickly they want us to progress. It's never fast enough. There's always another case to pursue. So that definitely adds a lot to it, but it is very meaningful when we can help tell that story. And even for a case where we may never have the answers, being able to say, "This is what the government knows about your case and these are the efforts that have been undertaken to this point." >> The fact that there's an effort going on is really a wonderful thing for everybody involved. Good outcomes coming out of that. But it's an interesting angle as a techy, IT, former IT techy back in the day in the '80s, '90s. I can't help but marvel at your perspective on your project, because you're historians in a way too. You've got punch cards, you know, you've got... I never used punch cards. >> Put them in a museum. >> I was the first generation post punch cards, but you have a historical view of IT state of the art at the time of the data you're working with. You have to make that data actionable in an outcome scenario, workload, work-stream for today. >> Yeah, another example we have is we're reclaiming chest X-rays that they did for induction, which would screen for tuberculosis when guys came into service. We're able to use those X-rays now for comparison with the remains that are recovered from the field. >> So you guys are really digging into the history of IT. >> Yeah. >> So I'd love to get your perspective.
To me, I marvel, and I've always been critical of Washington's slowness with respect to cloud, but I'm seeing you catch up now with the tailwinds here with cloud and Amazon, and now Microsoft coming in with AI. You kind of see the visibility that leads to value. As you look back at the industry of federal, state, and local governments in the public sector over the years, what's your view of the current state of the union of modernization? Because it seems to be a renaissance. >> Yeah, I would say the analogy I would give you, it's the same as the industrial revolution we went through in the early 20th century, but it's more about the technology revolution that we're going through now. That's how I'd probably characterize it, if I were to look back and tell my children's children about, hey, the advent of technology and that progression of where we're at. Cloud architectures certainly take down geographical barriers that before were problems for us. Now we're able to overcome those. We can't overcome the timezone barriers, but certainly the geographical separation of an organization, with cloud computing, that has certainly changed. >> Do you see your peers within the government sector, other agencies, kind of catching wind of this, going, wow, I could really change the game? And will it be a step function? In your mind, as you kind of have to project forward where we are, is it going to be a small improvement, a step function? What do you guys see? What's the sentiment around town? >> I'm from Hawaii, so Chris probably has a better perspective of that with some of our sister organizations here in town. But I would say there's more and more organizations that are adopting cloud architectures. It's my understanding very few organizations now are co-located in one facility and one location, right. Take a look at telework today, the cost of doing business, remote accessibility regardless of where you're at. So, I'd say it's a force multiplier by far for any line of business, whether it's public sector, federal government or whatever. It's certainly enhanced our capabilities and it's a force multiplier for us. >> And I think that's where the expectation increasingly is that the data should be available and I should be able to act on it wherever I am, whenever the opportunity arises. And that's where the more we can democratize our ability to get that data out to our partners, to our teams in the field, the faster those answers can come through. And the faster we can make decisions based upon the information we have, not just the process that we follow. >> And it feeds the creativity and the work product of the actors involved. Getting the data out there versus hoarding it, walling it off, siloing it. >> Right, yeah. You know, becoming the lone expert on this stack of paper in the filing cabinet doesn't have as much power as getting that data accessible to a much broader squad, where everyone can contribute. >> We're doing our part. >> That's right, it's open sourcing it right here. >> To your point, death by PowerPoint. I'm sure you've heard that before. Well, business intelligence software now, by the click of a button, reduces the level of effort, the manpower and resources, to put together slide decks. Business intelligence software can reach out to those structured data platforms, pull out the data that you want at the click of a button, and build those presentations for you on the fly. Think about it, I mean, that's our force multiplier in advances in technology.
I think the biggest thing is whether we as humans understand how to exploit and leverage the technologies and the capabilities. Because I still don't think we fully grasp the potential of technology and how it can be leveraged to empower us. >> That's great insight, and we really respect what you guys do. Love your mission. Thanks for sharing. >> Yeah, thanks so much for coming on the show. >> Thank you for having us. >> I'm Rebecca Knight for John Furrier. We will have much more coming up tomorrow on the AWS Public Sector Summit here in Washington, DC. (upbeat music)
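For technically minded readers, the consolidation problem McDermott describes, the same case held in different structures by different legacy organizations, is essentially a record-linkage exercise. The sketch below is a minimal, hypothetical Python illustration of that idea; the field names, sources, and matching rule are assumptions for the example, not the agency's actual schema or code.

```python
# Minimal record-linkage sketch (hypothetical schema, not the agency's actual data model).
# Two legacy sources hold overlapping case records in different structures; we normalize
# both into a common shape and flag likely duplicates for a human analyst to review.
from difflib import SequenceMatcher

source_a = [  # e.g., digitized index cards (hypothetical fields)
    {"name": "DOE, JOHN A.", "loss_year": "1944", "theater": "Pacific"},
]
source_b = [  # e.g., a later case-tracking system (hypothetical fields)
    {"full_name": "John A. Doe", "year_of_loss": 1944, "region": "Pacific"},
]

def normalize(name: str) -> str:
    """Reduce a name to lowercase tokens in a canonical order."""
    tokens = name.replace(",", " ").replace(".", " ").lower().split()
    return " ".join(sorted(tokens))

def likely_same(rec_a: dict, rec_b: dict, threshold: float = 0.85) -> bool:
    """Flag two records as probable duplicates on fuzzy name match plus year agreement."""
    score = SequenceMatcher(None, normalize(rec_a["name"]),
                            normalize(rec_b["full_name"])).ratio()
    return score >= threshold and int(rec_a["loss_year"]) == int(rec_b["year_of_loss"])

duplicates = [(a, b) for a in source_a for b in source_b if likely_same(a, b)]
print(f"{len(duplicates)} probable duplicate pair(s) flagged for review")
```

In practice every flagged pair would still go to a human analyst, which matches the triage workflow the interview describes.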

Published Date : Jun 11 2019


Christopher Forte, ThreeBx | HoshoCon 2018


 

(upbeat techno music) >> From the Hard Rock Hotel in Las Vegas, It's theCUBE, covering HoshoCon 2018. Brought to you by Hosho. >> Hello everyone, welcome to this special Cube coverage. We are here, live in Las Vegas, for HoshoCon. I'm John Furrier, the host of theCUBE, and this is part of our continuing coverage and our initiating coverage of the blockchain crypto world, been doing it since January, covering it on our journal site siliconangle.com since 2011, covering Bitcoin and all blockchain stuff, but this is the first security conference dedicated around blockchain and crypto, put on by Hosho, and it's called HoshoCon. It's an industry conference, and we are here covering it. And this is an open, small kernel of smart people, really trying to have a top level conversation around security. And our next guest is Christopher Forte. He's the CTO of 3BX, welcome to theCUBE. Thanks for joining us. >> Yeah, pleasure to be here. >> So, before we get into some questions about security, what do you guys do? What's the company do? You guys have a unique approach. Take a minute to explain what you guys do. >> 3BX is essentially a marketplace. It's a digital asset marketplace. We're trying to build a community around trading digital assets. We're really trying to focus on pulling away from the term 'cryptocurrency,' because we think it'll expand into a much broader term. So we're structuring our platform on the support of any kind of digital asset, whether it be a cryptokitty or an e-book, a concert ticket, you know, something that has a digital form that can be traded person to person. >> So, basically, you're expanding the definition, or actually, depositioning crypto, because it's kind of narrow, relative to how you guys see it. >> Yes, it's pretty narrow. >> Digital assets, I mean, look at gaming. >> Yep, absolutely. >> Gaming culture is not new. >> Yeah. >> I mean, they trade stuff all the time. >> Yeah, sure, even like in-game tokens, they don't exist on a blockchain yet. They're not cryptographically secured, so those are the types of things that I expect to see hitting a lot of these marketplaces soon. >> Well, that's smart, I mean, I think if you look at it, certainly we at (mumbles) blockchain, our entire media company's been moving to blockchain, and crypto, and token economics, but really the blockchain piece has been very limited. It's got very poor functionality, and all the top blockchain implementations are either private blockchains, low latency, and fast, and developer friendly. >> Sure, so Ethereum's great for smart contracts, but it just doesn't scale relative to what most people need. >> Yeah. >> If you're running, you need a million IOPS, you've got a marketplace. >> Yeah. Some of these large scale, hyperscale networks, they're massive marketplaces. >> Yeah, they're huge. >> How do you guys fit in there? What problem are you trying to solve? Let me just start with that. >> You know, we're trying to pull away from the complexities of an exchange. We're trying to give the community a good tool to trade without a lot of knowledge of tokenomics. One of our unique features is that you can trade with no market impact. You don't have to worry about price slippage, or the complexities behind order books, so we give a familiar interface to trading. Something you'd see on a traditional e-commerce platform. So we're trying to kind of introduce it to a wider range of people.
We've talked to a lot of people who have a lot of difficulties, especially with the decentralized exchanges. >> Yeah. What are their problems? Just, like, reliability? >> Reliability. >> Black box- >> Liquidity, there's a lot of issues with liquidity around them, which causes problems when you try to trade any significant amount of coin. So, we're trying to give traders and the coin companies another outlet to trade without having to worry about liquidity. Or the risks of liquidity associated with it. >> So what's the status of the company? How many people have you guys got? What's the size? Do you have any deployments? Are you guys engaging certain communities? >> We are live. We released a kind of invite-only beta about two months ago. So we've been out there having traders for about two months. We're a very small team, we're based out of Las Vegas. There's a development team of three people. We're just now broadening into more partnerships, more marketing- >> So you guys are hardening the platform, basically. >> Yep. >> By jamming and coding- >> Yeah, we went kind of product first, and then took a step back and are now approaching the market. So yeah, we're really excited. >> That's smart, you didn't hype it up first. >> Yeah, we didn't hype it up. >> But you could have definitely hyped it up, I mean, a lot of people who are winning right now are quality deals that had opportunities to do an ICO. >> Yeah. >> Just, people are throwing money around. Just go back to February, the numbers are just off the charts. The, kind of, bubble burst in February, and certainly the SEC announced today, I'm covering the news, a major crackdown on all those ICOs, on violations right here in the United States. It just causes a distraction. I brought this up with Hartej last time I interviewed him in Toronto at The Futurist, and it's exactly what you guys are doing, and this is the core trend, and I want to get your thoughts on it. A lot of the alpha entrepreneurs, the ones that are building companies, don't want to get distracted by stuff that's not optimized for building a company. For instance, if you do an ICO, or you get involved in domicile issues outside the United States, you're spending all your energy on being on an airplane, or on market dynamics that aren't building a company. Yeah. >> This is kind of, almost a distinction at this point, you can almost look at opportunities, startups, entrepreneurs, ventures, and say, "Okay, we can almost see who's doing what." >> Yep. >> You do agree. >> Yeah, I think it's important to have something before you go and spend a lot of energy raising money, building hype around the company. I think we're going to see a huge trend towards product first, having something, having a development team, a concept, a patent. Not just based on a theoretical white paper, so it'll be very interesting to see how it goes. We decided to go product first, so no one had heard of us until we went live with our product. >> Good approach, I like it, I think it's solid. Good, we'll see how it turns out. I got to ask you, and I want to dig into the product a little later in this interview, but I want to ask you specifically around some core trends I'm seeing, and patterns. >> Sure. >> It's pretty clear that when these emerging markets develop, there's total activity on the entrepreneurial side, a lot of people building and developing, attacking the market, but it's a trend, everyone's throwing out a common thing: I need to have community, and I need a two-sided marketplace.
So the common thread's- and people don't have those- you can't just buy a community. >> Yeah. Communities aren't bought. >> Sure. You can't just say, "Hey, I need a community." Put up a Telegram channel, write some bots, >> Yeah. >> the next thing you know you've got 25,000 people in Telegram. >> Yep. >> That's not a community. >> That's not a community. >> That is AI bots looking like a community. >> Sure. >> And then a two-sided marketplace, you've got to have a value proposition. So these are things that people are putting into their plans. >> Yep. >> That they don't have answers for. >> Sure. >> What are your thoughts on that, around community and about marketplace? What are you seeing in market development right now? >> I mean, building a strong community is very difficult. They have to align with your product, they have to align with your vision, they have to understand what you're doing, and at least have a use case for it. So, we're really trying to kind of have the community drive our development road map. So, we've done a lot of outreach, trying to get what people are interested in, what's lacking in the industry currently, what they want to see, what they're unhappy with. And we're trying to build a community around allowing people to have input and influence into the product that we're building. So, we're really early into the process, so it's difficult for me to really say whether it's easy or difficult to build the community. >> So you're engaging the community to help. >> We are engaging the community. >> What are the number one things you guys are solving? Problems that you see are immediate, low hanging fruit that you're knocking out right away? What are the core things? >> I think some of the big things are simplicity, the usability of these interfaces. Kind of the knowledge around it, trying to do a knowledge transfer to our customer base. And trying to help people realize that there's a company behind these coins. I think that's a huge thing that we have to kind of push towards, is, it's not just a token. It's a token produced by a company with a cause. >> So how does your product work? >> It's like a basic marketplace that you would see in kind of an eBay or an Amazon, where someone posts an offer, posts a listing, and other people can buy from it. So, it's a buy and sell kind of- >> And you have your own native token? >> We have a native ERC-20 token that we use for fees. Because we're targeting digital assets generally, we've externalized fees from traded goods. So, we want to make sure we can handle something that may not be divisible the way Bitcoin is. So if you trade a book, for example, a lot of these exchanges would take a page out of it. If you use the current model of fees, they're kind of coin shaving off of your trades. So we're trying to eliminate that so we can expand into non-fungible, or non-breakable, assets. We're also developing a wallet that basically encapsulates cryptocurrency into smaller assets to be traded off chain. So, we plan on kind of revolving around our internal token to handle fees for those assets. >> So it's a blend of on/off chain dynamics. >> Yep. >> So you can do a lot of stuff, and not have to do a lot of writing to the chain, if you're going to be doing a lot of rewrites. >> Yeah. >> All right, so the question I want to ask you that I think is important and on everyone's mind is, okay, HoshoCon is the first, inaugural- we love going to inaugural events because, you don't know, it could be the last one. >> Sure.
>> Or, it's going to be big. I think this is a big trend, and one of the things we heard last night at dinner, when we were having a conversation about it, was that there's no real conference that puts security in the front. >> Yep. >> They really kind of have it as a side panel. It's always kind of an adjunct to something bigger, a pitch competition, you know, big sponsor-driven kind of programs. This is a security conference. What is the impact, in your opinion, of this HoshoCon, and of security in the blockchain, that's going to shape the industry? What is your opinion? What is your commentary on that? >> I mean, obviously it's important to focus on security. I think a lot of people had a lot of, kind of, assumptions that blockchain-specific, or blockchain-based, technologies were unhackable. You know, the decentralization of something makes it secure, and I think that's a myth that they're going to have to debunk, and we're seeing it with hacks. There's a lot of, I think, assumptions even around the hacks that are incorrect. So, bringing the idea to people that blockchain still needs to be managed, you still need to be careful. The smart contracts still have vulnerabilities and risks involved, it's not- >> Software is software. >> Software is software. It's unavoidable, when you start writing code, that there's going to be- >> You don't want a blue screen of death, certainly, you don't want to have to reboot, I mean, move fast and break stuff was great for webscale, but when you're talking about security and currency, you need rock solid, hundred percent reliability. >> Yeah. >> Otherwise, you lose your cash. Or your e-money. >> Yeah, it's something of value that you're going to lose. >> It's not a social media account, it's not something like that, you know, you're losing money. And it's very interesting, I think the more people know about the security around blockchain, cryptocurrency, the more they're going to realize that it's not an end-all solution to everything. It takes time to evolve. Standards will probably have to be put in place. >> There's a lot of people, I remember when I was your age, and the web was coming around, everyone was afraid to put their credit card down on basic e-commerce transactions. >> Sure. >> And that was natural, because like, oh my god, it's online, it almost felt like a black box, and then they got over that pretty quickly, you saw PayPal and those kinds of companies come out. You mentioned eBay, these online sites are now secure. Crypto, there's almost like an unknown, a lack of education in the mainstream. And so we've got to get to that point where, you know, wallets are wallets, and they actually do a good job, and you don't forget and leave your wallet at the restaurant. There's some hygiene, and practices, that are needed. Older generations, maybe, might not get it, but the younger generations, they're getting it, right? >> Yeah. >> What's your opinion of this? Because this is a generational shift. >> Yeah. >> This crypto, blockchain market, it's really generational. >> Sure. Anyone under the age of thirty pretty much loves it. >> Yeah. >> So, it's happening, right? >> Yep. >> So, what are the views around security, generally, in the mainstream? >> I mean, I don't think there are too many. Like I said, I think people kind of put a lot of assumptions in the inherent security of blockchain stuff. And I think they don't realize that we're trying to make it easier through mnemonic sequences, or passwords, so we're hosting wallets online now.
It's not necessarily a pure wallet in the sense that it sits on a piece of paper. So we're going towards usability, which we're sacrificing security for. So the more usability we get with a lot of these mainstream products, the more we're going to have to realize we're getting back to a place of existing security vulnerabilities, with passwords, or stuff you would see with your bank account. So it'll be interesting to see the balance between the raw security inherent with Bitcoin, or a traditional cryptographic wallet, and then usability, whether it be cloud-based stuff, or these exchanges. >> You know, Chris, one of the things you're doing, that I think's interesting, and kind of points to the- if you connect the dots- the trend of, really, levels of granularity getting down to the micro level. >> Yeah. >> It's microeconomics. >> The beautiful thing about this market is that you could take a page out of a book, you can track it and how that page gets used, like pay-per-use, all kinds of digital rights stuff, digital assets. So you look at the world as a digital asset. This brings up the question of, okay, there's going to be software that's going to have to be written to manage this level of microtransactions, or microassets. So, how do you view, in your opinion, this whole notion of token economics? Because we've used tokens for years on all the stuff we program, on authentication. >> Yep. Tokens are used in computer science- not a new concept. >> Yep. But if you think about tokens as a currency, and as a mechanism for computer science, software, >> Sure. >> do you see a multi-token world? Why wouldn't everyone have their own token? >> Sure. And then there's going to have to be software- >> Sure. to manage the tokens. >> Yes. If you have a token and I have a token called a Cube Coin- >> Yeah. >> and you have your token, there's probably going to have to be some interaction between coins. Do you see that day happening sooner rather than later, or do you even see it happening? >> It's going to really depend on the use cases that they find. Whether a single platform is going to come out and kind of take over the standardization of managing it, or, who knows, you see some of these transactional bridges, like between Dogecoin or Ethereum. So you can see that happening between tokens, or everything being built on the same chain, or having these bridges between chains, whether it be like an EOS to Ethereum token chain bridge. I don't know, I mean, we really have no idea. >> (mumbles) multichain, it's interesting, right? This is an interesting conversation. My vision is, I think multichain is a good trend. Why wouldn't you want to have multiple chains, if the use cases are not overlapping? I just don't feel comfortable about a monolithic approach to tokens. I'm just uncomfortable, generally, with that philosophy. >> I think it'll be important, and like you said, it'll be very important to have a good solution to manage them. People aren't going to want a hundred programs on their computer to manage their tokens. They're not going to want multiple apps on their phones. There's going to have to be some kind of standardization so that people can manage it easily. Otherwise, it's going to be impossible to keep up with. And kind of the interchangeability between tokens will be important. >> Chris, final question for you. What's this event like here? Describe for the folks who aren't here, what's the vibe, who are the people, what are some of the conversations in the hallways so far. What kind of person is here?
What is this event about? What's the relevance of HoshoCon? >> Well, it seems like it's a lot of technically minded people, kind of hoping to push forward security in the blockchain world. We've had conversations about everything from educating the masses, so kind of the average person who doesn't understand the complexities of Bitcoin, and how do you inform them of what we're doing, all the way up to, what's the next step in security auditing. Hosho is really pushing forward how you audit your code on the blockchain, or on a lot of these platforms, and I think it's really important to have these conversations, 'cause it's opening up new worlds of new thought habits for each of these companies. Everyone has their expertise. Hosho specializes in smart contract auditing, and we may not have that in-depth knowledge of how to audit the contracts, so it's nice to kind of share the knowledge, and see that there are other solutions out there besides everyone doing it on their own. >> What do you hope to be known for, for your company? If you could have that vision down the road, three years from now, when you look back, what do you want to be known for? >> I think it would be best if we were known as a platform to bring newcomers into the space. Informing, caring about the community, making sure that they understand what they're doing before they do it. As you know, Bitcoin is very unforgiving. A lot of these cryptos are very unforgiving. So I think it's very important for us to be known as someone who helps bridge that kind of intimidation. >> All right, Chris Forte, for 3BX, CTO, entrepreneur, building a company, doing it the right way, plans to use tokens. You guys, did you raise any money? >> No money raised. We're privately funded. >> Nice. >> So, we're going that route. >> Good. >> Bootstrapping, getting it done. Taking a different approach, which is the classic approach, of building a company the right way. TheCUBE, we are here in Las Vegas for HoshoCon. I'm John Furrier. Stay with us for more coverage after this short break. (upbeat techno music)
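One design point worth unpacking from this interview is the fee model: charging trading fees in a separate ERC-20-style utility token so the platform never has to shave a slice off a non-divisible asset, the "page out of a book" problem Forte mentions. The sketch below is a hypothetical Python illustration of that idea under assumed names and balances, not 3BX's actual implementation or token contract.

```python
# Hypothetical sketch of externalized fees: the traded asset moves whole, and the
# platform's fee is charged in a separate fee token. This illustrates the design
# Forte describes; it is not 3BX's actual code.
from dataclasses import dataclass, field

@dataclass
class Account:
    assets: set = field(default_factory=set)   # non-divisible digital assets (e-book, ticket, ...)
    fee_tokens: float = 0.0                    # balance of the platform's fee token

def trade(seller: Account, buyer: Account, asset: str, fee: float) -> None:
    """Transfer a whole asset and charge the fee out-of-band in fee tokens."""
    if asset not in seller.assets:
        raise ValueError("seller does not hold the asset")
    if buyer.fee_tokens < fee:
        raise ValueError("buyer cannot cover the trading fee")
    seller.assets.remove(asset)
    buyer.assets.add(asset)                    # the asset itself is never 'coin shaved'
    buyer.fee_tokens -= fee                    # the fee comes from the separate token

alice = Account(assets={"ebook:the-art-of-war"})
bob = Account(fee_tokens=5.0)
trade(alice, bob, "ebook:the-art-of-war", fee=0.5)
print(bob.assets, bob.fee_tokens)              # {'ebook:the-art-of-war'} 4.5
```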

Published Date : Oct 10 2018


Christopher Bergey, Western Digital | Autotech Council 2018


 

>> Announcer: From Milpitas, California, at the edge of Silicon Valley, it's The CUBE. Covering autonomous vehicles. Brought to you by Western Digital. >> Hey, welcome back everybody. Jeff Frick here with The Cube. We are at the Autotech Council Autonomous Vehicle event at Western Digital. Part of our Data Makes Possible Program with Western Digital, where we're looking at all these cool applications and a lot of cutting edge technology that, at the end of the day, is data dependent, and data's got to sit somewhere. But really what's interesting here is that more and more of the data is moving out to the edge and edge computing, and nowhere is that more apparent than in autonomous vehicles, so we're really excited to have maybe the best title at Western Digital, I don't know. Chris Bergey, VP of Product Marketing. That's not so special, but all the areas that he's involved with: mobile, compute, automotive, connected homes, smart cities, and if that wasn't enough, industrial IoT. Chris, you must be a busy guy. >> Hey, we're having a lot of fun here. This data world is an exciting place to be right now. >> So we're here at the Autonomous Vehicle event. We could talk about smart cities, which is pretty interesting, actually ties to it and internet of things and industrial internets, but what are some of the really unique challenges in autonomous vehicles that most people probably aren't thinking of? >> Well, I think that we all understand that really, autonomous vehicles are being made possible by just the immense amount of sensors that are being put into the car. Not much different from how our phones evolved from really not having a lot of sensors to today's smartphones, which have many, many sensors. Whether it's sensing your face, gyroscopes, GPS, all these kinds of things. The car is having the exact same thing happen, but with many, many more sensors. And, of course, those sensors just drive a tremendous amount of data, and then it's really about trying to pull the intelligence out of that data, and that's really what the whole artificial intelligence or autonomous effort is really trying to do: okay, we've got all this data, how do I understand what's happening in the autonomous vehicle in a very short period of time? >> Right, and there's two really big factors that you've talked about and some of the other things that you've done. I did some homework, and one of them is the metadata around the data, so there's the raw data itself that's coming off those sensors, but the metadata is a whole nother level, and a big level, and even more importantly is the context. What is the context of that data? Without context, it's just data. It's not really intelligence or smarts or things you can do anything about, so that baseline sensor data gets amplified significantly in terms of actually doing anything with that information. >> That's correct. I think one of the examples I give that's easier for people to understand is surveillance, right? We're very familiar with walking into a retail store where there's surveillance cameras and they're recording in the case that maybe there's a theft or something goes wrong, but there's so much data there that's not actively being processed, right? How many people walked into the store? What was the average time a person spent in the store? How many men? How many women? That's the context of the data, and that's what would be really valuable if you were, say, an owner of the store or a regional manager.
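To make the raw-data-versus-context distinction concrete, here is a minimal Python sketch of Bergey's store example: the raw data is a stream of per-visitor events, and the context is the handful of aggregates a store owner actually cares about. The event schema here is an assumption for illustration.

```python
# Illustrative only: raw surveillance output reduced to the contextual metadata
# a store owner would actually use. The event schema here is an assumption.
from statistics import mean

# Raw data: one record per detected visitor (timestamps in seconds since opening).
events = [
    {"entered": 60,  "left": 480,  "gender": "f"},
    {"entered": 120, "left": 300,  "gender": "m"},
    {"entered": 900, "left": 1500, "gender": "f"},
]

# Context: the questions Bergey lists, answered from the raw stream.
visits = len(events)
avg_dwell_s = mean(e["left"] - e["entered"] for e in events)
by_gender = {g: sum(1 for e in events if e["gender"] == g) for g in {"f", "m"}}

print(f"visitors: {visits}, average visit: {avg_dwell_s:.0f}s, split: {by_gender}")
```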
So that's really pulling the context out of the raw data. And in the car example, autonomous vehicles, hey, there's going to be something, my sensors are seeing something, and then, of course, you'd use multiple sensors. That's the sensor fusion between them of, "Hey, that's a person, that's a deer, oh, don't worry, that's a car moving alongside of us and he's staying in his lane." Those are the types of decisions we're making with this data, and that's the context. >> Right, and they even had in the earlier presentation today the reflection of the car off the side of a bus. I mean, these are the nuanced things that aren't necessarily obvious when you first start exploring. >> And we're dealing with human life, I mean, so obviously it needs to be right 99.999 plus percent. So that's the challenge, right? It's the corner cases, and I think that's what we see with autonomous vehicles. It's really exciting to see the developments going on and, of course, there have been a couple of challenges, but we just have so much learning to do to really get to that fifth nine or whatever it is from a probability point of view. And that's where we'll continue to work on those corner cases, but the technology is coming along so fast, it's just mind-boggling how quickly we are starting to attack these more difficult challenges. And we'll get there, but it's going to take time like anything. >> The other really important thing, especially now where we're in the rise of Cloud, if you will. Amazon is going bananas. Google Cloud Platform, Microsoft Azure, so we're seeing this huge move of Cloud and enterprise IT. But in a car, right, there's this little thing called latency and this other thing called physics, where you've got a real issue when you have to make a quick decision based on data and those sensors when something jumps out in front of the car. So really, the rise of edge computing is moving so much of that storage, compute, and intelligence into the vehicle, and then deciding what goes back to the cloud to retrain the algorithm. So it's really a shift back out to the edge, if you will, because of this latency issue. >> Yeah, I mean, they're very complementary, right? But there's a lot of decisions you can make locally and, obviously, there's a lot of advantages in doing that. Latency being one of them, but also just the cost of communications, and again, what people don't necessarily understand is how big this data is. You see statistics thrown out there, one gigabit per second, two gigabits per second. I mean, that is just massive data. At the end of the day, actually, in some of the development, it's pretty interesting that we have the car developers actually FedExing the terabyte drives of data they've captured, because it's the easiest way for them to actually transfer the data. I mean, people think, "Oh, internet connectivity, no problem." You try to ship 80 terabytes in a cost-effective manner, FedEx ends up being the best shot right now. So it's pretty interesting. >> The old sneakernet, that is pretty funny. But the quantities of this data are so big. I was teasing you on Twitter earlier today. I think we took it up to an exabyte, a zettabyte, a yottabyte, and then the crowd responded: no, a brontobyte is even bigger than a yottabyte. We were at Flink Forward earlier this week, and really this whole idea of stream processing, it's really taking new approaches to data processing.
You'll be able to take all that stuff in in real time, and the state of the market now is probably financial trading and advertising. But to do that now in a car, where if you make a mistake there are really significant consequences, it's a really different challenge. >> It is, and again, that's really this advent of the sensor data, right? The sensor data is going to swamp probably every other data set that's in the world, but a lot of it's not interesting, because you don't know when that interesting event is going to happen. So what you actually find is that you try to put the intelligence as close as you can to the data, and storage, and again, storage may be 30 seconds, so if you had an accident, you want to be able to go back 30 seconds. It may be lifetimes. So just thinking about these data flows and what's the half-life of the data relative to the value? But what we're actually finding with a lot of the machine learning work is that data we thought was not valuable, data we thought, "Oh, we have the right amount of granularity," now with machine learning we're going back and saying, "Oh, why didn't we record at an even higher granularity?" We could have pulled out more of these trends or more of these corner cases. So I think that's one of the challenges enterprises are going through right now, is that everyone's so scared of getting rid of any data, yet there's just tremendous data growth. And we're sitting right here in the middle of it at Western Digital. >> Well, thankfully for you guys, you're going to store all that data, and it is really important, though, because it used to be, it's funny to me. It used to be that a sample of things that happened in the past was how you would make your decisions. Now it's not a sample, it's all of what's happening now, and hopefully you can make a decision while you still have time to have an impact. So it's a very different world, but sampling is going away when, in theory, you don't know what you're going to need that data for and you have the ability to store it. >> Making real-time decisions, but then also learning how to use that decision to make better decisions in the future. That's really where Silicon Valley's focused right now. >> All right, Chris, well, you're a busy guy, so we're going to let you get back to it, because you also have to do IoT and industrial internet and mobile and compute. So thanks for taking ... >> And I try to eat in between there too. >> And you try to eat and hopefully see your kids Friday night, so hopefully you'll take >> Absolutely. your wife out to a movie tonight. >> All right, Chris, great to see you. Thanks for taking a few minutes. >> Chris: Thank you very much. >> All right, I'm Jeff Frick. You're watching The CUBE from Autotech Council Autonomous Vehicle event. Thanks for watching.
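A footnote on the edge-retention pattern Bergey describes, keeping a rolling window of sensor data and persisting it only around interesting events: in code it can be as simple as a ring buffer. The sketch below is an illustration only; the window length, frame rate, and trigger are assumptions.

```python
# Minimal edge-retention sketch: keep a rolling window of sensor frames and persist
# the window only when a trigger fires (e.g., hard braking). Window size and the
# trigger itself are assumptions for illustration.
from collections import deque

WINDOW_FRAMES = 30 * 30          # ~30 seconds at an assumed 30 frames/second
buffer = deque(maxlen=WINDOW_FRAMES)
saved_clips = []

def on_frame(frame: dict) -> None:
    """Ingest one sensor frame; snapshot the buffer when something interesting happens."""
    buffer.append(frame)
    if frame.get("hard_brake"):              # stand-in for real event detection
        saved_clips.append(list(buffer))     # this clip is a candidate to send to the cloud

for t in range(2000):
    on_frame({"t": t, "hard_brake": (t == 1500)})

print(f"{len(saved_clips)} clip(s) retained out of 2000 frames")
```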

Published Date : Apr 14 2018


Christopher Penn, SHIFT Communications | IBM CDO Strategy Summit 2017


 

>> Live from Boston, Massachusetts, it's theCUBE, covering IBM Chief Data Officer Summit. Brought to you by IBM. >> Welcome back to theCUBE's live coverage of the IBM Chief Data Strategy Summit. My name is Rebecca Knight, and I'm here with my co-host Dave Vellante. We are joined by Christopher Penn, the VP of Marketing Technology at SHIFT Communications, here in Boston. >> Yes. >> Thanks so much for joining us. >> Thank you for having me. >> So we're going to talk about cognitive marketing. Tell our viewers: what is cognitive marketing, and what is your approach to it? >> Sure, so cognitive marketing essentially is applying machine learning and artificial intelligence strategies, tactics and technologies to the discipline of marketing. For a really long time marketing has been kind of known as the arts and crafts department, which was fine, and certainly creativity is an essential part of the discipline, that's never going away. But we have been tasked with proving our value. What's the ROI of things, is a common question. Where does the data live? The chief data officer would be asking, like, who's responsible for this? And if we don't have good answers to those things, we kind of get shown the door. >> Well, it sort of gets back to that old adage in advertising, I know half my marketing budget is wasted, I just don't know which half. >> Exactly. >> So now we're really able to know which half is working. >> Yeah, so I mean, one of the more interesting things that I've been working on recently is using what's called Markov chains, which is a type of very primitive machine learning, to do attribution analysis, to say what actually caused someone to become a new viewer of theCUBE, for example. And you would take all this data that you have from your analytics. Most of it that we have, we don't really do anything with. You might pull up your Google Analytics console and go, "Okay, I got more visitors today than yesterday," but you don't really get a lot of insights from the stock software. But using a lot of tools, many of which are open source and free of financial cost, if you have technical skills you can get much deeper insights into your marketing. >> So I wonder, just if we can for our audience... When we talk about machine learning, and deep learning, and A.I., we're talking about math, right, largely? >> Well, so let's actually go through this, because this is important. A.I. is a bucket category. It means teaching a machine to behave as though it had human intelligence. So if your viewers can see me, and disambiguate me from the background, they're using vision, right? If you're hearing sounds coming out of my mouth and interpreting them into words, that's natural language processing. Humans do this naturally. The work now is trying to teach machines to do these things, and we've been trying to do this for centuries, in a lot of ways, right? You have the old Mechanical Turks and stuff like that. Machine learning is based on algorithms, and it is mostly math. And there's two broad categories, supervised and unsupervised. Supervised is you put a bunch of blocks on the table, kids' blocks, and you hold the red one, and you show the machine over and over again, this is red, this is red, and eventually you train it, that's red. Unsupervised is- >> Not a hot dog. (Laughter) >> This is an apple, not a banana. Sorry CNN. >> Silicon Valley fans.
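Penn's red-block example is, almost literally, what a few lines of supervised learning look like in code. Here is a minimal sketch, assuming scikit-learn is available and using toy RGB values as the "blocks":

```python
# A minimal version of the 'this is red' supervised example: show the machine
# labeled RGB values until it can label a new block itself. scikit-learn is
# assumed to be available; the colors are toy data.
from sklearn.neighbors import KNeighborsClassifier

blocks = [[220, 30, 40], [200, 10, 10], [30, 40, 220], [240, 230, 40]]  # RGB features
labels = ["red", "red", "blue", "yellow"]                               # the teacher's answers

model = KNeighborsClassifier(n_neighbors=1).fit(blocks, labels)
print(model.predict([[210, 25, 30]]))  # -> ['red']
```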
Unsupervised is there's a whole bunch of blocks on the table, "Machine, make as many different sequences as possible," some are big, some are small, some are red, some are blue, and so on, and so forth. You can sort, and then you figure out what's in there, and that's a lot of what we do. So if you were to take, for example, all of the comments on every episode of theCUBE, that's a lot, right? No human's going to be able to get through that, but you can take a machine and digest through, just say, what's in the bag? And then there's another category, beyond machine learning, called deep learning, and that's where you hear a lot of talk today. Deep learning, if you think of machine learning as a pancake, now deep learning's like a stack of pancakes, where the data gets passed from one layer to the next, until what you get at the bottom is a much better, more tuned answer than any human can deliver, because it's like having a hundred humans all at once coming up with the answer. >> So when you hear about, like, rich neural networks, and deep neural networks, that's what we're talking about. >> Exactly, generative adversarial networks. All those things are ... Any kind of, a lot of the neural network stuff is deep learning. It's tying all these pieces together, so that in concert, they're greater than the sum of any one. >> And the math, I presume, is not new math, right? >> No. >> SVM and, it's stuff that's been around forever, it's just the application of that math. And why now? 'Cause there's so much data? 'Cause there's so much processing power? What are the factors that enable this? >> The main factor's cloud. There's a great shirt that says: "There's no cloud, it's just somebody else's computer." Well, it's absolutely true, it's all somebody else's computer, but because of the scale of this, all these tech companies have massive server farms that are kind of just waiting for something to do. And so they offer this as a service, so now you have computational power that is significantly greater than we've ever had in human history. You have the internet, which is a major contributor, the ability to connect machines and people. And you have all these devices. I mean, this little laptop right here would have been a supercomputer twenty years ago, right? And the fact that you can go to a service like GitHub or Stack Exchange, and copy and paste some code that someone else has written that's open source, you can run machine learning stuff right on this machine, and get some incredible answers. So that's why now, because you've got this confluence of networks, and cloud, and technology, and processing power that we've never had before. >> Well, with this emphasis on math and science in marketing, how does this change the composition of the marketing department at companies around the world? >> So, that's a really interesting question because it means very different skill sets for people. And a lot of people like to say, well, there's the left brain and then there's a right brain. The right brain's the creative, the left brain's the quant, and you can't really do that anymore. You actually have to be both-brained. You have to be just as creative as you've always been, but now you have to at least have an understanding of this technology and what to do with it. You may not necessarily have to write code, but you'd better know how to think like a coder, and say, how can I approach this problem systematically? This is kind of a popular culture joke: Is there an app for that, right?
Well, think about that with every business problem you face. Is there an app for that? Is there an algorithm for that? Can I automate this? And once you go down that path of thinking, you're on the path towards being a true marketing technologist. >> Can you talk about earned, paid, and owned media? How those lines are blurring, or not, and the relationship between sort of those different forms of media, and results in PR or advertising. >> Yeah, there is no difference, media is media, because you can take a piece of content, like this interview that we're doing here on theCUBE, which is technically earned media. If I go and embed this on my website, is that owned media? Well, it's still the same thing, and if I run some ads to it, is it technically now paid media? It's the thing, it's content that has value, and then what we do with it, how we distribute it, is up to us, and who our audience is. One of the things that a lot of veteran marketing and PR practitioners have to overcome is this idea that the PR folks sit over there, and they just smile and dial and get hits, go get another hit. And then the ad folks are over here... No, it's all the same thing. And if we don't, as an industry, realize that those silos are artificially imposed, basically to keep people in certain jobs, we will eventually end up turning over all of it to the machines, because the machines will be able to cross those organizational barriers much faster. When you have the data, whatever the data says, that's what you do. So if the data says this channel's going to be more effective, yes, it's a CUBE interview, but actually it's better off as a paid YouTube video. So the machine will just go do that for us. >> I want to go back to something you were talking about at the very beginning of the conversation, which is really understanding, companies understanding, how their marketing campaigns and approaches are effectively working or not working. So without naming names of clients, can you talk about some specific examples of what you've seen, and how it's really changed the way companies are reaching customers? >> The number one thing that does not work is for any business executive to have a preconceived idea of the way things should be, right? "Well, we're the industry leader in this, we should have all the market share." Well, no, the world doesn't work like that anymore. This lovely device that we all carry around in our pockets is literally a slot machine for your attention. >> I like it, you've got to copyright that. A slot machine for your attention. >> And there's a million and a half different options, 'cause that's how many apps there are in the app store. There's a million and a half different options that are more exciting than your white paper. (Laughter) Right, so for companies that are successful, they realize this, they realize they can't boil the ocean, that you are competing every single day with the Pope, the president, with Netflix, you know, all these things. So it's understanding: When is my audience interested in something? Then, what are they interested in? And then, how do I reach those people? There was a story on the news relatively recently, Facebook is saying, "Oh, brand pages, we're not going to show your stuff in the regular news feed anymore; there will be a special feed over here that no one will ever look at, unless you pay up."
So understanding that if we don't understand our audiences, and recruit these influencers, these people who have the ability to reach these crowds, our ability to do so through the "free" social media continues to dwindle, and that's a major change. >> So the smart companies get this, where are we though, in terms of the journey? >> We're still in very early days. I was at a major Fortune 50 company, not too long ago, that had just installed Google Analytics on their website, and this is a company that, if I named the name, you would know it immediately. They make billions of dollars- >> It would embarrass them. >> They make billions of dollars, and it's like, "Yeah, we're just figuring out this whole internet thing." And I'm like, "Cool, we'd be happy to help you, but why, what took so long?" And it's a lot of organizational inertia. Like, "Well, this is the way we've always done it, and it's gotten us this far." But what they don't realize is the incredible amount of danger they're in, because their more agile competitors are going to eat them for lunch. >> Talking about organizational inertia, and this is a very big problem, we're here at a CDO summit to share best practices and to learn from each other, what's your advice for a viewer there who's part of an organization that isn't working fast enough on this topic? >> Update your LinkedIn profile. (Laughter) >> Move on, it's a lost cause. >> One of the things that you have to do an honest assessment of is whether the organization you're in is capable of pivoting quickly enough to outrun its competition. And in some cases, you may be that laboratory inside, but if you don't have that executive buy-in, you're going to be stymied, and if your nearest competitor does have that willingness to pivot, and bet big on a relatively proven change, like, hey, data is important, yeah, you may want to look for greener pastures. >> Great, well Chris, thanks so much for joining us. >> Thank you for having me. >> I'm Rebecca Knight, for Dave Vellante, we will have more of theCUBE's coverage of the IBM Chief Data Strategy Officer Summit, after this.
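For readers curious about the Markov chain attribution Penn mentions at the top of the interview, here is a minimal sketch of the removal-effect idea: credit each channel by how much conversion disappears when paths through that channel are broken. The journeys are toy data, and this path-based shortcut simplifies the real method, which builds a full transition matrix, so treat it as an illustration rather than Penn's actual model.

```python
# Simplified Markov-style attribution sketch (toy data). Each journey is the ordered
# list of channels a person touched, plus whether they converted. We approximate each
# channel's removal effect: how many conversions would be lost if every path through
# that channel were broken.
journeys = [
    (["twitter", "youtube"], True),
    (["search", "youtube"], True),
    (["search"], False),
    (["email", "twitter"], False),
    (["youtube"], True),
]

channels = {c for path, _ in journeys for c in path}
total_conversions = sum(1 for _, converted in journeys if converted)

removal_effect = {}
for channel in channels:
    # Conversions that survive if 'channel' is removed (converting paths not touching it).
    surviving = sum(1 for path, converted in journeys
                    if converted and channel not in path)
    removal_effect[channel] = (total_conversions - surviving) / total_conversions

# Normalize removal effects into fractional credit per channel.
total_effect = sum(removal_effect.values())
credit = {c: round(e / total_effect, 2) for c, e in removal_effect.items()}
print(credit)  # in this toy data, youtube earns the most credit
```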

Published Date : Oct 25 2017

SUMMARY :

Rebecca Knight and Dave Vellante sit down with Christopher Penn at the IBM Chief Data Strategy Officer Summit to talk marketing technology: why earned, paid, and owned media are converging into simply "media," how data should decide which channel a piece of content runs on, what brands are really competing against for audience attention, and why organizational inertia leaves even billion-dollar companies exposed to more agile competitors.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave Vellante | PERSON | 0.99+
Rebecca Knight | PERSON | 0.99+
Christopher Penn | PERSON | 0.99+
IBM | ORGANIZATION | 0.99+
Chris | PERSON | 0.99+
Boston | LOCATION | 0.99+
YouTube | ORGANIZATION | 0.99+
CNN | ORGANIZATION | 0.99+
Facebook | ORGANIZATION | 0.99+
Netflix | ORGANIZATION | 0.99+
yesterday | DATE | 0.99+
billions of dollars | QUANTITY | 0.99+
Boston, Massachusetts | LOCATION | 0.99+
a million and half | QUANTITY | 0.99+
GitHub | ORGANIZATION | 0.99+
today | DATE | 0.98+
both | QUANTITY | 0.98+
Pope | PERSON | 0.98+
a million and a half | QUANTITY | 0.98+
one layer | QUANTITY | 0.98+
LinkedIn | ORGANIZATION | 0.98+
Google Analytics | TITLE | 0.97+
twenty years ago | DATE | 0.97+
two broad categories | QUANTITY | 0.96+
Silicon Valley | LOCATION | 0.95+
SHIFT Communications | ORGANIZATION | 0.95+
one | QUANTITY | 0.94+
Google Analytics | TITLE | 0.94+
IBM Chief Data Strategy Summit | EVENT | 0.94+
One | QUANTITY | 0.93+
Stack Exchange | ORGANIZATION | 0.9+
IBM Chief Data Strategy Officer Summit | EVENT | 0.88+
IBM Chief Data Officer Summit | EVENT | 0.87+
Fortune 50 | ORGANIZATION | 0.86+
centuries | QUANTITY | 0.86+
IBM | EVENT | 0.82+
CDO Strategy Summit 2017 | EVENT | 0.79+
a hundred humans | QUANTITY | 0.79+
much | QUANTITY | 0.77+
single day | QUANTITY | 0.74+
theCUBE | ORGANIZATION | 0.72+
VP | PERSON | 0.72+
half | QUANTITY | 0.71+
CUBE | ORGANIZATION | 0.63+
Technology | PERSON | 0.6+
CDO | EVENT | 0.51+
Turks | ORGANIZATION | 0.39+

Martin Hood & Christopher VanAsselberg, Hologic - VeeamOn 2017 - #VeeamOn - #theCUBE


 

>> Announcer: Live from New Orleans, it's theCUBE covering VeeamON 2017, brought to you by Veeam. >> Welcome back to New Orleans everybody. This is Dave Vellante with Stu Miniman and this is theCUBE, the leader in live tech coverage. We go out to the events, we extract the signal from the noise. This is our first day of coverage of VeeamON 2017, the first year Stu we've ever done VeeamON, and we love the customer segments. We have a great one coming up now. Martin Hood is the IS Manager of Hologic, and Chris VanAsselberg is the manager of Server Ops, also at Hologic. Gents, welcome to theCUBE. >> [Martin And Chris] Thanks very much, thank you so much. >> Chris, give us the set-up on Hologic. What do you guys do, what's your shtick? >> Sure, Hologic is a developer, manufacturer and supplier of diagnostic, surgical and breast imaging equipment, all in the medical field. >> So what's happening in the business that affects IT? What's the conversation like from the business? The good stuff. >> The conversation the last couple of years has all been cloud, cloud, cloud. Very, very interesting topic, but this year it's all about digital transformation, IoT, and probably most importantly to Martin and me, availability. >> Well, when you think about IoT, it just changes everything. It scares the life out of you with security and-- >> Always being watched. >> And then availability obviously, they're like two sides of the same coin, so when you guys sit down, the business moves fast. I mean, generally speaking, don't hate me for saying this, but the business oftentimes moves faster than IT can move. Is that changing in your organization? How are you changing it and what are you doing to change it? >> I think we're using better tools. We haven't the staff that many IT departments have, so we have to adapt by using the best tools that are available. About 12 to 15 months ago we explored Veeam as an opportunity and it's clearly made a difference. Staff have a lot more time to dedicate to things that will make a positive difference to the business rather than fixing problems. Those problems were taking up an awful lot of time in the past, not so much so now. >> So, maybe paint a picture of what your environment looks like. Apps that you're servicing, what the infrastructure looks like, virtualization, maybe components of that, major vendors. >> Our core infrastructure is founded on Cisco UCS, EMC storage and back-ups using ExaGrid storage, and then Veeam is our availability platform. From an internal IT organization, we run everything from Oracle to Salesforce to Hadoop, Isilon storage with petabytes of image data, et cetera, so lots and lots of applications. Obviously, no downtime expected from anybody, but we have a pretty good infrastructure to run all that on. >> What is your sort of strategy and architecture around availability? Back-up and availability are sort of morphing together. >> Yeah, well we live in a world where everybody wants things instantly, and it's no different when it comes to restoring files, for example. Hologic has gone on a heavy recruitment drive for top talent, and obviously that top talent has high expectations, so we have to deliver on those expectations. No longer can we wait a week to restore a file. Even a few days is too long, so we need the right tools to get that job done quickly. >> Yeah, and to be honest, availability is not out of our grasp anymore with the technology available today, it's actually very easy to do it. 
We have data centers around the world; we're able to replicate real time over a gigabit-plus, you know, connection, five-gig connections, 10-gig connections if need be. Replicate data real time, failover between data centers and also even between on prem and in the cloud. That is all possible today to achieve superior uptime. >> And when you sit down with a business, do you, well first of all, do you do chargebacks? >> We do not do chargeback, we do showback. It's important for people to understand what something costs, but obviously chargeback is a different model that we don't use. >> So when you have a conversation with a business about back-up, I mean in the old days, and maybe not so old days, it was one-size-fits-all: here you go, you get the bronze level of service, everybody gets it. Are you able to tune the granularity of your service offering to the business? >> Chris: Absolutely, there are systems that we want to back up and we, for example, back up our east coast data center to an ExaGrid. We replicate that to San Diego, and for DR purposes the acceptance is that it's okay that it might take a day, a week, or even up to a month to be able to restore that data, to come back online. We also have the option to restore to Microsoft Azure if we want to, but we also have systems where it's not a back-up issue; it's yes, we need the back-ups, we need them every fifteen minutes, to disk, replicated off-site as soon as possible, but they also want us to replicate the data real time from data center to data center, provide real time monitoring and real time failover. >> Sorry Stu, I'm going to let you jump in. Is the enabler there Veeam? Is it stuff that you've architected yourself? Some kind of combination? >> Veeam's our primary system for our back-ups. It's obviously phenomenal, works great, goes to an ExaGrid, replicates real time ExaGrid to ExaGrid, east coast to west coast. Veeam Availability also has replication, which we've pursued on many core VMs that require it. System integration tools that are not really on prem, they're tools that exist on prem but their purpose is to pull data from the Salesforces of the world, interface with business systems that might also be off site, and we replicate them from the east coast to the west coast, real time. >> You mentioned that from on top you were hearing the cloud, cloud, cloud message. Is cloud a strategic initiative now? How do you put together the pieces, and where does Veeam fit in that discussion? >> I think it's being looked at; it's quite an expensive option for us to go down and I think we have the results-- >> You're saying public cloud would be expensive? >> Yeah, for us yes, I mean we have the resources ourselves. We have multiple data centers globally and we have the staff with the skill set to deliver, so it's not really been a financially viable option at the moment. >> Stu: Azure you're doing some things with. >> We actually do business with Azure and vCloud Air. We're actually one of VMware's first customers in vCloud Air, and we also do business in AWS. The important thing about a cloud strategy is to understand its strengths and its weaknesses. The idea of the cloud for Hologic is not to put a virtual machine up in the cloud. We can run those virtual machines on prem less expensively than we can run them in the cloud. Now on the flip side, if you look at some SaaS applications like email, Skype for Business, IoT, et cetera. 
Where the cost isn't the compute, memory, storage, et cetera, it's really in the whole package of maintaining these systems, patching these systems, the skill sets to maintain it, et cetera, it sometimes makes sense for the SaaS apps to host it in the public cloud, but for the virtual machines that exist as legacy systems, to host them on prem. >> How's that ride with vCloud Air been for you? They recently moved. I believe it's OVH that's taken over management of that. What's your experience been? >> It's been interesting. Lot of promises, strong VMware partnership; we have always been an EMC partner. Obviously that continued when they acquired VMware, and unfortunately we started in their Texas data center. They offered to move us to Japan seamlessly. It wasn't the most seamless thing, but it worked well overall. They then asked us to move out of their Japan data center because they closed it March 31st I believe, so we had to move out of that, so they're no longer one of our key public clouds. We have a Germany data center where we replicate Exchange real time using DAG replication and front-end it with load balancers. One of the data centers that we're utilizing is a vCloud instance in Germany that will also go away shortly. >> And what brings both of you to VeeamON? What were your expectations coming in and how's the experience been so far? >> A lot of the things we saw this morning, the new innovations, these are all things that have been on our wish list, if you want, for some time. Particularly things like continuous replication. That's a huge, huge thing for us. It's sort of phase two; we've rolled out Veeam. Now we're looking for the next step and that's the continuous replication of our VMs, so that was a real boon to hear such news coming soon. >> Some of the other priorities obviously, we really want to hear about the new technology. As Martin just said, the replication piece is working well today, but the continuous replication, the method where we're no longer snapshot-based and instead there's a driver within the VMware tools, some other methodology to allow that real-time OS replication, is a benefit to us. But we are looking at lots of SaaS apps. Obviously, SharePoint for Hologic is in Office 365. We don't want to go back to five years ago where it was five different back-up products depending on what system we're looking at. We want to use Veeam to back up our SharePoint environment. We want to use Veeam to back up our Exchange environment, whether it's on prem or Office 365, and long-term we want to back up AWS or Office 365 or Azure as well, to make sure that we have one system to back it all up. >> You want Veeam to be your single back-up platform, and it is today, or it's becoming today? >> Veeam is our only back-up product today that we have. When we sent SharePoint to the cloud, we put a halt on the second phase, which is to move our team sites, which is where our data is, and it is literally waiting for the Veeam SharePoint back-up technology to become available, and then the rest of it will move up there seamlessly to make sure that Hologic is protected. >> The business value and benefit of having that simple, single architecture is worth the wait, is what you're saying. >> Yeah, I mean if you look at VMware, the reason they've been successful isn't just that their technology is amazing. It's also their certification program. They brought a bunch of IT people in. 
Companies everywhere have VCPs or even higher nowadays, so you have talented people working on a stable platform. With Veeam we sent three of our guys off to get their VMCEs and that's been hugely successful. They're very confident with the system. They're able to do everything we need to quickly. They're not guessing, they're not Googling. They just know how to use the system. Going to other platforms would be a complete failure, because now when someone wants something, you're in the hot seat, something's down, you need to bring it back up, but you don't use it every day, so what do you do? >> Pull out the manual, Google. What's the coolest thing you guys have seen here? Anything that really excites you? >> Good question. It's been great hospitality outside of these four walls, of course. It's been superb. We've been well looked after, and looking forward to further experiences tomorrow as well. We're on stage tomorrow as well, so a little nervous about that. >> And the CDP's interesting to you. >> Particularly interesting. We were actually looking at other solutions to purchase in the next year to take it to the next level, to provide the more real-time replication for systems that really have to stay up rather than be restored. >> And the driver there is just to minimize, get as close to RPO zero as possible? >> Absolutely. If you look at an Exchange environment for example, the typical design is to build four servers in a DAG cluster so that you can do active-passive but instantaneous failover, right? But the problem with that comes in licensing. If you do Oracle it's the same thing. It doesn't cost a license if a system goes down to then restore that system someplace else, so do you want to pay twice as much licensing and build environments twice as big, or do you want to be able to just instantaneously fail over, which won't cost more money? Which one meets the business needs? They both meet the business needs, and one costs a lot less, which means more money to do other things for the business. >> We always love the practitioner perspective. Thanks guys for coming on theCUBE. Really, I appreciate it. >> Yeah, thanks. >> No problem. >> You're welcome. All right, keep it right there buddy. We'll be back with our next guest. This is theCUBE, live from VeeamON 2017. We'll be right back. (techno music)

Published Date : May 17 2017

SUMMARY :

Dave Vellante and Stu Miniman talk with Martin Hood and Chris VanAsselberg of Hologic at VeeamON 2017 about the company's availability strategy: Veeam backing up a Cisco UCS and EMC estate to ExaGrid storage with east coast to west coast replication, a pragmatic mix of on prem, Azure, vCloud Air and AWS, and why continuous replication and SaaS back-up for SharePoint and Office 365 are next on their list.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Martin | PERSON | 0.99+
Germany | LOCATION | 0.99+
Dave Vellante | PERSON | 0.99+
Martin Hood | PERSON | 0.99+
Japan | LOCATION | 0.99+
March 31st | DATE | 0.99+
Chris | PERSON | 0.99+
Chris VanAsselberg | PERSON | 0.99+
Texas | LOCATION | 0.99+
San Diego | LOCATION | 0.99+
10 gig | QUANTITY | 0.99+
AWS | ORGANIZATION | 0.99+
Hologic | ORGANIZATION | 0.99+
New Orleans | LOCATION | 0.99+
VMware | ORGANIZATION | 0.99+
two sides | QUANTITY | 0.99+
tomorrow | DATE | 0.99+
VeeamOn | ORGANIZATION | 0.99+
five gig | QUANTITY | 0.99+
Microsoft | ORGANIZATION | 0.99+
both | QUANTITY | 0.99+
a day | QUANTITY | 0.99+
second phase | QUANTITY | 0.99+
three | QUANTITY | 0.99+
One | QUANTITY | 0.99+
Stu Miniman | PERSON | 0.99+
a week | QUANTITY | 0.99+
Veeam | ORGANIZATION | 0.99+
twice | QUANTITY | 0.99+
Office 365 | TITLE | 0.99+
Oracle | ORGANIZATION | 0.99+
SAAS | ORGANIZATION | 0.99+
Google | ORGANIZATION | 0.99+
Christopher VanAsselberg | PERSON | 0.99+
next year | DATE | 0.99+
five years ago | DATE | 0.98+
SharePoint | TITLE | 0.98+
first day | QUANTITY | 0.98+
today | DATE | 0.98+
first customers | QUANTITY | 0.97+
this year | DATE | 0.97+
single | QUANTITY | 0.97+
one system | QUANTITY | 0.97+
Stu | PERSON | 0.97+
two disk | QUANTITY | 0.96+
VeeamON 2017 | EVENT | 0.95+
first year | QUANTITY | 0.94+
Veeam | TITLE | 0.94+
Salesforce | ORGANIZATION | 0.93+
up to a month | QUANTITY | 0.93+
Server Ops | ORGANIZATION | 0.93+
EMC | ORGANIZATION | 0.93+
five different | QUANTITY | 0.93+
fifteen minutes | QUANTITY | 0.92+
Office Veeam | TITLE | 0.92+
one | QUANTITY | 0.92+
four servers | QUANTITY | 0.91+
theCUBE | ORGANIZATION | 0.9+
SAAS | TITLE | 0.9+
vCloud Air | TITLE | 0.89+
last couple of years | DATE | 0.89+

Christoph Scholtheis, Emanuele Baldassarre, & Philip Schmokel | AWS Executive Summit 2022


 

>> Welcome to theCUBE's coverage of AWS re:Invent 2022. This is a part of our AWS Executive Summit at AWS re:Invent, sponsored by Accenture. I'm your host, Lisa Martin. I've got three guests here with me: Christoph Scholtheis, head of DevOps and infrastructure at Vodafone Germany, joins us, as well as Emanuele Baldassarre, the Accenture AWS Business Group Europe delivery lead at Accenture, and Philip Schmokel, senior manager at Accenture Technology. We're going to be talking about what Vodafone Germany is doing in terms of its agile transformation of the business and IT. Gentlemen, it's great to have you on theCUBE. Welcome to the program. >> Thank you. >> Thanks for having us. >> My pleasure. >> Christoph, let's go ahead and start with you. Talk to us about what Vodafone Germany is doing in its transformation project with Accenture and with AWS. >> Certainly, but let me first start with explaining what Vodafone does in general. So Vodafone is one of the leading telephone and technology service providers in Germany. Half of all German citizens are Vodafone customers, using Vodafone technology to access the internet, make calls and watch TV. In the economic sector we provide connectivity for offices, farms and factories. This is Vodafone's largest business and IT transformation, and we're happy to have several partners on this journey, with more than a thousand people working in a Scaled Agile Framework with eight Agile Release Trains, one of the largest SAFe implementations in Europe. Why are we doing this transformation? Well, not only since the recent uncertainties, the telco market is highly volatile, and there are a few challenges that Vodafone was facing in the last years, such as market changes caused by disruptions from technological advances and competitors, or changing customer expectations, who for example use more over-the-top services like Netflix or Amazon Prime Video. What is coming up in the next wave is unknown, so technologies evolve, continual disruption from non-telcos is to be expected, and being able to innovate fast is the key focus for everyone. In order to be able to react to that, we need to cope with it and do so in different aspects, to become the leading digital technology company. Therefore Vodafone Germany is highly simplifying its products as well as processes, for example introducing free product upgrades for customers. We're driving the change from a business perspective and modernizing the IT landscape, which we call the technology transformation. So simply: business-led, but IT-driven. For that, Accenture is our integration partner and AWS provides the services for our platforms. >> Got it, thank you for the background on Vodafone and the impact that it's making. You mentioned the volatility in the telecom market, and also set the context for what Vodafone Germany is doing with Accenture and AWS. Emanuele, I want to bring you into the conversation now. Talk to us about the partnership between Accenture, Vodafone and AWS, and how it's set up to provide maximum value for customers. >> Yeah, that's a great question, actually. Well, I mean, working in partnership allows us to bring in transparency and trust, and these are key starting points for a program of this magnitude. And a program like this comes out of a strong willingness to change the game, both internally and on the market, so as you can imagine, particular attention is required on top-level alignment. In general, when you implement a program like this, you also need to couple the long-term vision of how you want to manage your customers, and what are the new products that you want to bring to the market, with the long-term technology roadmap, because the thing that you don't want to happen is that you invest many years and a lot of effort, and then when it comes to the end of the journey, you figure out that you have to restart a new journey, and then you enter the never-ending loop. So obviously all these things must come together, and they come together in what we call the power of three. It consists of AWS, Vodafone and Accenture having strategic vision alignment and constant updates and, most importantly, the best of breed in terms of technology and also people. So what we do in practice is we bring together market understanding, business vision, technical expertise, energy, collaboration, and, what is even more important, we work as a unique team. Everybody succeeds here, and this is a true win-win partnership. More specifically, Vodafone leads the strategic direction; obviously they understand the market, they are close to their customers. AWS provides all the expertise around the cloud infrastructure, insights on the roadmap, and this is a key element: elasticity, both technical but also financial. And then Accenture comes with its ability to deliver, with strong industry expertise and flexibility. When you combine all these ingredients together, obviously you understand it's easy to succeed together. >> The power of three, it sounds quite compelling. It sounds like a partnership that has a lot of flexibility, elasticity as you mentioned, and obviously the customer at the end of the day benefits tremendously from that. Christoph, I'd like to bring you back into the conversation. Talk to us about the unified platform approach; walk us through how Vodafone is implementing it with AWS and with Accenture. >> So the applications that form the basis for the transformation program were originally pursuing all kinds of approaches for deployment and use of AWS services. In order to support faster adoption and optimize the usage that I mentioned before, we have provided the Vodafone Cloud Framework, which has been the trusted platform for several projects within the IT in Germany. As a side effect, the framework facilitates compliance with Vodafone security requirements, and the unified approach also has the benefit that someone who is moving from one team to another will find a structure that looks familiar. The best part of the framework, though, is the automated deployment process that helps us reduce the time for implementing, for example, a new stage from a few weeks to mere hours. That, together with improvements of the CI/CD pipeline, greatly helped us reduce the time to spin up something and deploy the software on it, in order to reach our target KPIs. The unified platform provides all kinds of setups like AWS EKS and the ecosystem that is commonly used with Kubernetes, like service mesh, monitoring, logging and tracing, but it can also be used for the non-containerized applications that we have, and provides the integration with security monitoring and other tools. At the moment we are in contact with other markets of Vodafone to globally share our experience and our code, which makes introducing a similar system into other markets straightforward. We are also continuously improving our approach, and a completely new version of the framework is currently being introduced into the program. >> What Vodafone Germany is doing is really kind of setting the stage, as you mentioned, Christoph, for other parts of the business that want to learn from it. So that's a great thing there, that what you're building is really going to spread throughout the organization and make a positive impact. Philip, let's bring you into the conversation now. Let's talk about how you're using AWS specifically to build the new Vodafone cloud integration platform. Talk to us about that as part of this overall transformation program. >> Sure, and let's make it even more specific: let's talk API management. So looking at the program from a technology point of view, what it really is, it is a bold step for Vodafone. It's rebuilding huge parts of their business IT infrastructure on AWS. It's greenfield, it's new, it's a bold step, I would say. And then if you put on the perspective of API management, or integration architecture as I call it, it's a unique opportunity at the same time. So what it gives you is the opportunity to build the API management layer, or an API platform, with standardized APIs right from the get-go. So from the beginning you can build the API platform on top, which is in contrast to what we see throughout the industry, where we see huge problems at our clients, at other engagements, that try to build these layers as well, but they're building them on legacy. So that really makes it unique here for Vodafone, and a unique opportunity to have this API-first platform built as part of the transformation program. So what we have built is exactly this platform, and as of today there are more than 50 standardized APIs throughout the application landscape already available. To give you a few examples, there is an API where I can change customer data; for instance, I can change the payment method of a customer straight from an API, or I can reboot a customer's equipment right from an API to fix a network issue. Other than that, of course, I can submit an order to order one of Vodafone's gigabit internet offerings. So on top of the platform there's a developer portal which gives me the option to explore all of the APIs in a convenient way, in a portal, and that's developer experience: meaning I can log into this portal, look through the APIs, understand what I need, and just try it out directly from the portal. I see the response of an API live in the portal, and this is really in contrast to what we've seen before, where you would have a long Word document, a cumbersome spreadsheet, a long-lasting process to get your hands on. This really gives you the opportunity to just go in, try out an API and see how it works. So it's really developer experience and a big step forward here. Then, how have we built this platform? Of course it's running on AWS, it's cloud native, it's using EKS, but what I want to point out here are three principles that we applied. The first one is of course the cloud native principle, meaning we're using EKS, we are using containers, we have infrastructure that scales. We aim for every component being cloud native, being meant to be run in the cloud. So our infrastructure will sleep at night to save Vodafone cost, and it will wake up for the Christmas business, where Vodafone intends to do the biggest business, and scale up its platform. Second, there is the aim for open API specifications. What we aim for is non-vendor-specific APIs, so it should not matter whether there's an Amdocs back end, a NetCracker back end, or SAP behind these APIs. It is really meant to decouple the different business systems of Vodafone by these APIs, which can then be used by a new custom front end or by a new business-to-business application to integrate these APIs. Last but not least, there's the "automate everything": there's infrastructure as code all around our platform, where I would say the biggest magic of cloud is that if we were to lose our production environment, lose all APIs today, it would take us just a few minutes to get everything back. And by everything I mean redeploy the platform, redeploy all APIs, all services, do the configuration again, and it will be back in a few minutes. >> That's impressive, as downtime is so costly for so many different reasons. I think, how are we going to know when the vision of this transformation project has been achieved? How are you going to know that? >> Okay, so it's kind of flipping the perspective a bit. Maybe, when I joined Vodafone in late 2019, I would say the vision for Vodafone was already set, and it was really well put out there, it was lived in the organization: it was for Vodafone to become a digital company, to become a digital service provider, to get the engineering culture into the company. And I would say this vision has not changed until today; maybe now call it a North Star. And maybe pointing out two big milestones that have been achieved with this transformation program. So we've talked about the SAFe framework already; with this program we rolled out one of the biggest SAFe implementations in the industry, which is a big step for Vodafone in its agile journey. As of today the SAFe framework is supporting more than 1,000 FTE, or 1,000 colleagues, working and providing value in the transformation program. The second example, or second big milestone, was the first go-live of the program: moving stuff to production, really proving it works, showcasing to the business that it is actually working, that there is actually value provided, or constant value provided, with the platform. And then of course you're asking for next steps, right? Talking next steps, there is a renewed focus on value, and a renewed focus on value between Accenture and Vodafone means focusing on what really provides the most value to Vodafone. I would like to point out two things here: the first being migrate more customers, scale the platform, really prove the cloud native platform by migrating more customers to it; and then second, it enables you to decommission the legacy stacks. Decommissioning legacy stacks is why we are doing it, right? It's migrating to the new platform. So, last but not least, maybe you can hear it: we will continue this journey together with Vodafone, to become a digital company, or, to say it in their own words, from telco to TechCo. >> I love that, from telco to TechCo. Gentlemen, thank you so much for joining us on theCUBE today, talking about the power of three: Accenture, AWS, Vodafone, and how you're really enabling Vodafone to transform into that digital technology company that demanding consumers want. We appreciate your insights and your time. >> Thank you so much. >> Thank you for having us. >> My pleasure. >> For my guests, I'm Lisa Martin. You're watching theCUBE's coverage of the AWS Executive Summit at AWS re:Invent, sponsored by Accenture. Thanks for watching.
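To make Philip's API-first point concrete, here is a minimal sketch of what calling one of those standardized APIs might look like from a client application. The host, path, payload fields, and token are hypothetical, invented purely for illustration; the interview does not document the platform's actual API contract.

```python
import requests

BASE_URL = "https://api.example-telco.com/v1"  # hypothetical platform endpoint
TOKEN = "..."  # token obtained via the (hypothetical) developer portal

# Hypothetical standardized API: change a customer's payment method.
# The caller neither knows nor cares which back-end system (billing,
# CRM, etc.) fulfils the request; that is the decoupling Philip describes.
resp = requests.patch(
    f"{BASE_URL}/customers/12345/payment-method",
    json={"type": "SEPA_DIRECT_DEBIT", "iban": "DE00 0000 0000 0000 0000 00"},
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # the live response a developer would also see in the portal
```

The same request shape would work for the other examples he mentions, such as rebooting customer equipment or submitting an order, since the point of a standardized, vendor-agnostic API layer is that every business capability is exposed through the same uniform contract.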

Published Date : Nov 30 2022

SUMMARY :

Lisa Martin talks with Christoph Scholtheis of Vodafone Germany, Emanuele Baldassarre of the Accenture AWS Business Group, and Philip Schmokel of Accenture about Vodafone's business-led, IT-driven transformation: one of Europe's largest SAFe implementations, the "power of three" partnership model, a unified Vodafone Cloud Framework on AWS, and an API-first integration platform with more than 50 standardized APIs.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Christoph Scholtheis | PERSON | 0.99+
Emanuele Baldassarre | PERSON | 0.99+
Philip Schmokel | PERSON | 0.99+
Lisa Martin | PERSON | 0.99+
Philip schmuckel | PERSON | 0.99+
Vodafone | ORGANIZATION | 0.99+
Germany | LOCATION | 0.99+
Christoph schulteis | PERSON | 0.99+
Europe | LOCATION | 0.99+
Accenture | ORGANIZATION | 0.99+
AWS | ORGANIZATION | 0.99+
Vodafone Germany | ORGANIZATION | 0.99+
Telco | ORGANIZATION | 0.99+
vodafone | ORGANIZATION | 0.99+
TECO | ORGANIZATION | 0.99+
more than a thousand people | QUANTITY | 0.99+
late 2019 | DATE | 0.99+
Christopher | PERSON | 0.99+
today | DATE | 0.99+
more than 1 000 FTE | QUANTITY | 0.99+
Kristoff | PERSON | 0.98+
first | QUANTITY | 0.98+
two things | QUANTITY | 0.98+
three | QUANTITY | 0.98+
Agile | TITLE | 0.98+
three guests | QUANTITY | 0.98+
first one | QUANTITY | 0.98+
three principles | QUANTITY | 0.98+
second | QUANTITY | 0.98+
1000 colleagues | QUANTITY | 0.97+
first platform | QUANTITY | 0.96+
one | QUANTITY | 0.96+
one team | QUANTITY | 0.93+
apis | ORGANIZATION | 0.92+
AWS executive Summit | EVENT | 0.92+
Netflix | ORGANIZATION | 0.92+

CB Bohn, Principal Data Engineer, Microfocus | The Convergence of File and Object


 

>> Announcer: From around the globe it's theCUBE. Presenting the Convergence of File and Object, brought to you by Pure Storage. >> Okay, now we're going to get the customer perspective on object, and we'll talk about the convergence of file and object, but really focusing on the object piece. This is a content program that's being made possible by Pure Storage, and it's co-created with theCUBE. Christopher "CB" Bohn is here. He's the lead architect for the MicroFocus enterprise data warehouse and principal data engineer at MicroFocus. CB, welcome, good to see you. >> Thanks Dave, good to be here. >> So tell us more about your role at MicroFocus. It's a pan-MicroFocus role, because we know the company is a multi-national software firm; it acquired the software assets of HP of course, including Vertica. Tell us where you fit. >> Yeah, so MicroFocus, as you say, is a wide, worldwide company that sells a lot of software products all over the place, to governments and so forth. And it also grows often by acquiring other companies. So there is the problem of integrating new companies and their data. And so what's happened over the years is that they've had a number of different discrete data systems, so you've got this data spread all over the place, and they've never been able to get a full, complete introspection on the entire business because of that. So my role was: come in, design a central data repository, an enterprise data warehouse, that all reporting could be generated against. And so that's what we're doing, and we selected Vertica as the EDW system and Pure Storage FlashBlade as the communal repository. >> Okay, so you obviously had experience with Vertica in your previous role, so it's not like you were starting from scratch, but paint a picture of what life was like before you embarked on this sort of consolidated approach to your data warehouse. Was it just disparate data all over the place? A lot of M and A going on, where did the data live? >> Right, so again, the data is all over the place, including under people's desks, in dedicated, you know, their own private SQL servers. A lot of data in MicroFocus is on SQL Server, which has pros and cons, 'cause that's a great transactional database, but it's not really good for analytics, in my opinion. So, but a lot of stuff was running on that. They had one Vertica instance that was doing some select reporting. Wasn't a very powerful system, and it was what they call Vertica Enterprise mode, where it had dedicated nodes which had the compute and storage in the same locus on each server, okay. So Vertica Eon mode is a whole new world, because it separates compute from storage. Okay, and at first it was implemented in AWS, so that you could spin up, you know, different numbers of compute nodes, and they all share the same communal storage. But there has been a demand for that kind of capability in an on-prem situation. Okay, so Pure Storage was the first vendor to come along and have an S3 emulation that was actually workable, and so Vertica worked with Pure Storage to make that all happen, and that's what we're using. >> Yeah, I know, back from when we used to do face-to-face we would be at, you know, Pure Accelerate, Vertica was always there, we'd stop by the booth, see what they're doing, so tight integration there. And you mentioned Eon mode and the ability to scale, storage and compute independently. 
And so, and I think Vertica is the only one, I know they were the first, I'm not sure anybody else does that both for cloud and on-prem, but so how are you using Eon mode? Are you both in AWS and on-prem, are you exclusively cloud? Maybe you could describe that a little bit. >> Right, so there's a number of internal rules at MicroFocus, you know, AWS is not approved for their business processes, at least not all of them. They really wanted to be on-prem, and all the transactional systems are on-prem. And so we wanted to have the analytics OLAP stuff close to the OLTP stuff, right? So that's why they're co-located very close to each other. And what's nice about this situation is that these S3 objects, it's an S3 object store on the Pure FlashBlade. We could copy those over if we needed to to AWS, and we could spin up a version of Vertica there, and keep going. It's like a tertiary DR strategy, 'cause we actually have a second FlashBlade Vertica system geo-located elsewhere for backup, and we can get into it if you want to talk about how the latest version of the Pure software for the FlashBlade allows synchronization across network boundaries of those FlashBlades, which is really nice, because if, you know, a giant sinkhole opens up under our colo facility and we lose that thing, then we just have to switch the DNS and we're back in business with the DR. And then the third one would be to, we could copy those objects over to AWS and be up and running there. So we're feeling pretty confident about being able to weather whatever comes along. >> Yeah, I'm actually very interested in that conversation, but before we go there: you mentioned you're going to have the OLAP close to the OLTP. Was that for latency reasons, data movement reasons, security, all of the above? >> Yeah, it's really all of the above, because, you know, we are operating under the same subnet. So to gain access to that data, you know, you'd have to be within that VPN environment. We didn't want it going out over the public internet. Okay, so and just for latency reasons also, you know, we have a lot of data and we're continually doing ETL processes into Vertica from our production transactional databases. >> Right, so they've got to be proximate. So I'm interested in, so you're using the Pure FlashBlade as an object store. Most people think, oh, object: simple but slow. Not the case for you, is that right? >> Not the case at all. >> Why is that? >> This thing is ripping. Well, you have to understand about Vertica and the way it stores data. It stores data in what they call storage containers, and those are immutable, okay, on disk, whether it's on AWS or if you had an Enterprise mode Vertica. If you do an update or delete, it actually has to go and retrieve that object container from disk, and it destroys it and rebuilds it, okay, which is why you want to avoid updates and deletes with Vertica, because the way it gets its speed is by sorting and ordering and encoding the data on disk, so it can read it really fast. But if you do an operation where you're deleting or updating a record in the middle of that, then you've got to rebuild that entire thing. So that actually matches up really well with S3 object storage, because it's kind of the same way: it gets destroyed and rebuilt too, okay. So that matches up very well with Vertica, and we were able to design the system so that it's append-only.
Now we have some reports that we were running in SQL Server, okay, which were taking seven days. So we moved those to Vertica from SQL Server, and we rewrote the queries, which had been written in T-SQL with a bunch of loops and so forth, and we were able to get, this is amazing, it went from seven days to two seconds to generate this report. Which has tremendous value to the company, because it used to have this long cycle of seven days to get a new introspection into what they call the knowledge base, and now all of a sudden it's almost on demand, two seconds to generate it. That's great, and that's because of the way the data is stored. And the S3 you asked about, oh, you know, it's slow. Well, not in that context. Because what happens really with Vertica Eon mode is that when you set up your compute nodes, they have local storage also, which is called the depot. It's kind of a cache, okay. So the data will be drawn from the FlashBlade and cached locally, and it was thought when they designed that, oh, you know, that'll cut down on the latency. Okay, but it turns out that if you have your compute nodes close, meaning minimal hops to the FlashBlade, you can actually tell Vertica, you know, don't even bother caching that stuff, just read it directly on the fly from the FlashBlade, and the performance is still really good. It depends on your situation, but I know for example a major telecom company that uses the same topology we're talking about here, they did the same thing. They just dropped the cache, 'cause the FlashBlade was able to deliver the data fast enough. >> So you're talking about speed-of-light issues, and just the overhead of switching infrastructure, that's eliminated, and so as a result you can go directly to the storage array? >> That's correct, yeah. It's fast enough that it's almost as if it's local to the compute node. But every situation is different depending on your needs. If you've got like a few tables that are heavily used, then yeah, put them in the cache, because that'll probably be a little bit faster. But if you have a lot of ad hoc queries going on, you know, you may exceed the storage of the local cache, and then you're better off having it just read directly from the FlashBlade. >> Got it, so it's... >> Okay. >> It's an append-only approach, so you're not... >> Right. >> ...overwriting a record. But then what, do you automatically re-index? And that's the intelligence of the system, how does that work? >> Oh, this is where we did a little bit of magic. There's not really anything like magic, but I'll tell you what it is, I mean. (Dave laughing) Vertica does not have indexes. They don't exist. Instead, I told you earlier that it gets its speed by sorting, ordering and encoding the data on disk, right. So when you've got an append-only situation, the natural question is, well, if I have a unique record with, let's say, ID one, two, three, what happens if I append a new version of that? Well, the way Vertica operates is that there's a thing called a projection, which is actually like a materialized columnar data store. And you can have what they call a top-K projection, which says: only put in this projection the records that meet a certain condition. So there's a field that we like to call a discriminator field, which is, okay, usually it's the latest update timestamp. So let's say we have record one, two, three, and it had yesterday's date, and that's the latest version. Now a new version comes in. At load time Vertica looks at that, and then it looks in the projection and says: does this exist already? If it doesn't, then it adds it. If it does, then that one now goes into that projection, okay. And so what you end up having is a projection that is the latest snapshot of the data, which would be like, oh, that's the reality of what the table is today, okay. But inherent in that is that you now have a table that has all the change history of those records, which is awesome. >> Yeah. >> Because you often want to go back and revisit, you know, what happened. >> But that materialized view is the most current, and the system knows that, at least it can (murmuring). >> Right, so we then create views that draw off from that projection, so that our users don't have to worry about any of that. They just go in and say select from this view, and they're getting the latest, greatest snapshot of what the reality of the data is right now. But if they want to go back and say, well, how did this data look two days ago? That's an easy query for them to do also. So they get the best of both worlds. >> So could you just plug any flash array into your system and achieve the same results, or is there anything really unique about Pure? >> Yeah, well, they're the only ones that have, I think, really dialed in the S3 object format, because I don't think AWS actually publishes every last detail of that S3 spec. Okay, so there's a certain amount of reverse engineering they had to do, I think. But they got it right. A year and a half ago or so they were like at 99%, but now they've worked with the Vertica people to make sure that that object format is true to what it should be. So it works just as if... Vertica doesn't care if it is on AWS or if it's on Pure FlashBlade, because Pure did a really good job of dialing in that format, and so Vertica doesn't care. It just knows S3, doesn't care where it's going, it just works. >> So essentially vendor R and D abstracted that complexity, so you didn't have to rewrite the application, is that right? >> Right, so you know, when Vertica ships its software, you don't get a specific version for Pure or AWS. It's all in one package, and then when you configure it, it knows, oh okay, well, I'm just pointed at, you know, this port on the Pure Storage FlashBlade, and it just works. >> CB, what's your data team look like? How is it evolving? You know, a lot of customers I talk to, they complain that they struggle to get value out of the data and they don't have the expertise. What does your team look like? How is it changing, or did the pandemic change things at all? I wonder if you could bring us up to date on that? >> Yeah, but in some ways MicroFocus has an advantage, in that it's such a widely dispersed, across-the-world company. You know, it's headquartered in the UK, but I deal with people, I'm in the Bay Area, we have people in Mexico, Romania, India. >> Okay, enough. >> All over the place, yeah, all over the place. So when this started, it was actually a bigger project. It got scaled back, it was almost to the point where it was going to be cut. Okay, but then we said, well, let's try to do almost a skunkworks type of thing with reduced staff. And so we're just like a handful. You could count the number of key people on this on one hand. 
But we got it all together, and it's been a dramatic transformation for the company. Now it's won approval and admiration from the highest echelons of this company, that, hey, this is really providing value. And the company is starting to get views into their business that they didn't have before. >> That's awesome. I mean, I've watched MicroFocus for years, so to me they've always had a... part of their DNA is private equity, I mean they're sharp investors, they do great M and A. >> CB: Yeah. >> They know how to drive value and they're doing modern M and A, you know, we've seen what they did with SUSE, obviously driving value out of Vertica; they've got some really sharp financial people there. So they must have loved the skunkworks: fast ROI, you know, small denominator, big numerator. (laughing) >> Well, I think that in this case smaller is better when you're doing development. You know, it's a too-many-cooks type of thing, and if you've got people who know what they're doing... You know, I've got a lot of experience with Vertica, I've been on the advisory board for Vertica for a long time. >> Right. >> And you know, I was able to learn from people who had already done it; we're like the second or third company to do a Pure FlashBlade Vertica installation, but some of the best companies that have already done it are members of the advisory board also. So I learned from the best, and we were able to get this thing up and running quickly, and we've got, you know, a handful of other key people who know how to write SQL and so forth to get this up and running quickly. >> Yeah, so I mean, look, Pure is a fit. I mean, I sound like a fanboy, but Pure is all about simplicity, so is object. So that means you don't have to, you know, worry about wrangling storage and worrying about LUNs and all that other nonsense and file names. >> I have been burned by hardware in the past, you know, where, oh okay, they built it to a price, and so they cheap out on stuff like fans or other things, and these components fail and the whole thing goes down. But this hardware is super good quality, and so I'm happy with the quality we're getting. >> So CB, last question. What's next for you? Where do you want to take this initiative? >> Well, we are in the process now of... so I designed a system to combine the best of the Kimball approach to data warehousing and the Inmon approach, okay. And what we do is we bring over all the data we've got and we put it into a pristine staging layer, okay. Like I said, because it's append-only, it's essentially a log of all the transactions that are happening in this company, just as they appear, okay. And then from the Kimball side of things we're designing the data marts now. So that's what the end users actually interact with. So we're examining the transactional systems to say, how are these business objects created? What's the logic there? And we're recreating those logical models in Vertica. So we've done a handful of them so far, and it's working out really well. So going forward we've got a lot of work to do, to create just about every object that the company needs. >> CB, you're an awesome guest, really always a pleasure talking to you. >> Thank you. >> Congratulations and good luck going forward, stay safe. >> Thank you, you too Dave. >> All right, thank you. And thank you for watching the Convergence of File and Object. This is Dave Vellante for theCUBE. (soft music)
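For readers who want to see the append-only pattern CB describes in concrete form, here is a minimal sketch of a Vertica Top-K projection keyed on a discriminator timestamp, plus the view users would query, executed through the vertica_python client. The table, column, and connection names are hypothetical placeholders invented for illustration, not MicroFocus's actual schema; the `LIMIT ... OVER (PARTITION BY ...)` clause is Vertica's Top-K syntax.

```python
import vertica_python  # Vertica's open source Python client

conn_info = {
    "host": "vertica.example.internal",  # placeholder connection details
    "port": 5433,
    "user": "dbadmin",
    "password": "...",
    "database": "edw",
}

statements = [
    # Hypothetical append-only table: every change arrives as a new row,
    # so the table doubles as a full change history.
    """CREATE TABLE customer_history (
           customer_id INT NOT NULL,
           updated_at  TIMESTAMP NOT NULL,  -- the 'discriminator' field
           status      VARCHAR(32)
       )""",
    # Top-K projection: keep only the newest row per customer_id,
    # i.e. the latest snapshot of each record.
    """CREATE PROJECTION customer_latest AS
           SELECT customer_id, updated_at, status
           FROM customer_history
           LIMIT 1 OVER (PARTITION BY customer_id ORDER BY updated_at DESC)""",
    # View with the same Top-K shape; users just select from it, and the
    # optimizer can satisfy the query from the projection.
    """CREATE VIEW customer_current AS
           SELECT customer_id, updated_at, status
           FROM customer_history
           LIMIT 1 OVER (PARTITION BY customer_id ORDER BY updated_at DESC)""",
]

conn = vertica_python.connect(**conn_info)
try:
    cur = conn.cursor()
    for stmt in statements:
        cur.execute(stmt)
    conn.commit()
finally:
    conn.close()
```

Because inserts only ever add rows, the base table retains the full change history while the projection and view always reflect the latest state, which is exactly the "best of both worlds" CB says his users get.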

Published Date : Apr 28 2021

SUMMARY :

Dave Vellante talks with Christopher "CB" Bohn, principal data engineer at MicroFocus, about consolidating the company's disparate data into a Vertica Eon mode enterprise data warehouse backed by Pure Storage FlashBlade: why Vertica's immutable storage containers pair well with S3 object storage, an append-only design built on top-K projections, a report that went from seven days to two seconds, and a DR strategy spanning a second FlashBlade and the option to fail over to AWS.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave | PERSON | 0.99+
Dave Vellante | PERSON | 0.99+
Mexico | LOCATION | 0.99+
AWS | ORGANIZATION | 0.99+
MicroFocus | ORGANIZATION | 0.99+
Vertica | ORGANIZATION | 0.99+
UK | LOCATION | 0.99+
seven days | QUANTITY | 0.99+
Romania | LOCATION | 0.99+
99% | QUANTITY | 0.99+
HP | ORGANIZATION | 0.99+
Microfocus | ORGANIZATION | 0.99+
two-minute | QUANTITY | 0.99+
second | QUANTITY | 0.99+
two seconds | QUANTITY | 0.99+
India | LOCATION | 0.99+
Kimball | ORGANIZATION | 0.99+
Pure Storage | ORGANIZATION | 0.99+
each server | QUANTITY | 0.99+
CB Bohn | PERSON | 0.99+
yesterday | DATE | 0.99+
two days ago | DATE | 0.99+
first | QUANTITY | 0.99+
Christopher CB Bohn | PERSON | 0.98+
SQL | TITLE | 0.98+
Vertica | TITLE | 0.98+
a year and a half ago | DATE | 0.98+
both worlds | QUANTITY | 0.98+
Pure Flash Blade | COMMERCIAL_ITEM | 0.98+
both | QUANTITY | 0.98+
vertica | TITLE | 0.98+
Bay Area | LOCATION | 0.97+
one | QUANTITY | 0.97+
Flash Blade | COMMERCIAL_ITEM | 0.97+
third one | QUANTITY | 0.96+
CB | PERSON | 0.96+
one package | QUANTITY | 0.96+
today | DATE | 0.95+
Pure storage Flash Blade | COMMERCIAL_ITEM | 0.95+
first vendor | QUANTITY | 0.95+
pandemic | EVENT | 0.94+
S3 | TITLE | 0.94+
marts | DATE | 0.92+
Skunkworks | ORGANIZATION | 0.91+
SUSE | ORGANIZATION | 0.89+
three | QUANTITY | 0.87+
S3 | COMMERCIAL_ITEM | 0.87+
third company | QUANTITY | 0.84+
Pure Flash Blade Vertica | COMMERCIAL_ITEM | 0.83+

Pure Storage Convergence of File and Object FULL SHOW V1


 

we're running what i would call a little mini series and we're exploring the convergence of file and object storage what are the key trends why would you want to converge file an object what are the use cases and architectural considerations and importantly what are the business drivers of uffo so-called unified fast file and object in this program you'll hear from matt burr who is the gm of pure's flashblade business and then we'll bring in the perspectives of a solutions architect garrett belsner who's from cdw and then the analyst angle with scott sinclair of the enterprise strategy group esg he'll share some cool data on our power panel and then we'll wrap with a really interesting technical conversation with chris bond cb bond who is a lead data architect at microfocus and he's got a really cool use case to share with us so sit back and enjoy the program from around the globe it's thecube presenting the convergence of file and object brought to you by pure storage we're back with the convergence of file and object a special program made possible by pure storage and co-created with the cube so in this series we're exploring that convergence between file and object storage we're digging into the trends the architectures and some of the use cases for unified fast file and object storage uffo with me is matt burr who's the vice president and general manager of flashblade at pure storage hello matt how you doing i'm doing great morning dave how are you good thank you hey let's start with a little 101 you know kind of the basics what is unified fast file and object yeah so look i mean i think you got to start with first principles talking about the rise of unstructured data so um when we think about unstructured data you sort of think about the projections 80 of data by 2025 is going to be unstructured data whether that's machine generated data or um you know ai and ml type workloads uh you start to sort of see this um i don't want to say it's a boom uh but it's sort of a renaissance for unstructured data if you will we move away from you know what we've traditionally thought of as general purpose nas and and file shares to you know really things that focus on uh fast object taking advantage of s3 cloud native applications that need to integrate with applications on site um you know ai workloads ml workloads tend to look to share data across you know multiple data sets and you really need to have a platform that can deliver both highly performant and scalable fast file and object from one system so talk a little bit more about some of the drivers that you know bring forth that need to unify file an object yeah i mean look you know there's a there's there's a real challenge um in managing you know bespoke uh bespoke infrastructure or architectures around general purpose nas and daz etc so um if you think about how a an architect sort of looks at an application they might say well okay i need to have um you know fast daz storage proximal to the application um but that's going to require a tremendous amount of dams which is a tremendous amount of drives right hard drives are you know historically pretty pretty pretty unwieldy to manage because you're replacing them relatively consistently at multi-petabyte scale um so you start to look at things like the complexity of daz you start to look at the complexity of general purpose nas and you start to just look at quite frankly something that a lot of people don't really want to talk about anymore but actual data center space right like 
Consolidation matters: the ability to take something that's the size of a microwave, like a modern FlashBlade or a modern UFFO device, and replace something that might be the size of three or four or five refrigerators.
>> So Matt, why is now the right time for this? For years nobody really paid much attention to object; S3 obviously changed that course. Most of the world's data is still stored in file formats, and you get there with NFS or SMB. Why is now the time to think about unifying object and file?
>> Because we're moving to things like a contactless society. The things that we're going to do are going to require a tremendous amount more compute power, network, and, quite frankly, storage throughput. I can give you two real primary examples here. Warehouses are being taken over by robots, if you will. It's not a war; it's a sort of friendly advancement in how do I store a box in a warehouse. We have a customer who focuses on large big-box distribution warehousing, and a box that carried an object two weeks ago might have a different box size two weeks later. That robot needs to know where the space is in the warehouse in order to put it, but it also needs to be able to process: hey, I don't want to put the thing that I'm going to access the most in the back of the warehouse; I'm going to put that thing in the front of the warehouse. You can think of the robot as almost an edge device, processing unstructured data, in real time, as objects. So that's the emergence of these new types of workloads. And I'll give you the opposite example; the other end of the spectrum is ransomware. Today we talk to customers, and they'll quite commonly say: hey, anybody can sell me a backup device; I need something that can restore quickly. If you have the ability to restore something at 270 terabytes an hour, or 250 terabytes an hour, that's much faster when you're dealing with a ransomware attack. You want to get your data back quickly.
>> I was going to ask you about that later, but since you brought it up: what is the right, I guess call it architecture, for ransomware? Explain how unified object and file helps. I get the fast recovery, but how would you recommend a customer go about architecting a ransomware-proof system?
>> With FlashBlade, and with FlashArray, there's an actual feature called SafeMode, and SafeMode protects the snapshots and the data from being part of the ransomware event. What happens in a ransomware attack is that you can't get access to your data; the perpetrator is basically saying, I'm not going to give you access to your data until you pay me X in Bitcoin, or whatever it might be. With SafeMode, those snapshots are protected outside of the ransomware blast zone, and you can bring those snapshots back. Because what's your alternative? If you're not doing something like that, your alternative is either to pay to unlock your data, or to start restoring from tape or slow disk.
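The throughput figures Matt quotes are worth a quick back-of-the-envelope. Here's the arithmetic as a small sketch; the 1 PB data set size and the tape rate are illustrative assumptions, not figures from the program.

```python
# Back-of-the-envelope restore times, using the throughput figures
# from the conversation (250-270 TB/hour). The 1 PB data set and the
# tape comparison rate are illustrative assumptions.
DATASET_TB = 1000             # assume a 1 PB restore target
FAST_RESTORE_TB_PER_HR = 250  # low end of the quoted rapid-restore rate
TAPE_TB_PER_HR = 10           # hypothetical legacy restore rate

fast_hours = DATASET_TB / FAST_RESTORE_TB_PER_HR
tape_hours = DATASET_TB / TAPE_TB_PER_HR

print(f"Rapid restore: {fast_hours:.0f} hours")      # ~4 hours
print(f"Tape restore:  {tape_hours / 24:.1f} days")  # ~4.2 days
```

At these rates the difference is hours of downtime versus days, which is the whole argument for fast restore in a ransomware scenario.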
That could take you days or weeks to get your data back. So leveraging SafeMode, in either the FlashArray or FlashBlade product, is a great way to go about architecting against ransomware.
>> I've got to put my customer hat on here. SafeMode is an immutable mode, right? Can't change the data. Can an administrator go in and change that mode? Can you turn it off? Do I still need an air gap, for example? What would you recommend there?
>> There are still RBAC, role-based access control, policies around who can access that SafeMode and who can't.
>> Okay, a subject for a different day. I want to bring up, if you don't object, a topic that I think used to be really front and center and is becoming front and center again. Wikibon just produced a research note forecasting the future of flash and hard drives, and those of you who follow us know we've done this for quite some time. If you could bring up the chart here: we see this happening again. Originally we forecast the death of quote-unquote high-spin-speed disk drives, which is kind of an oxymoron. You can see on this chart that the hard disk has had a magnificent journey, but it peaked in manufacturing volume in 2010, and the reason that's so important is that volumes are now steadily dropping. We use Wright's Law to explain why this is a problem. Wright's Law essentially says that as your cumulative manufacturing volume doubles, your cost to manufacture declines by a constant percentage. I won't go into too much detail on that, but suffice it to say that flash volumes are growing very rapidly and HDD volumes aren't. So flash, because of consumer volumes, can take advantage of Wright's Law and that constant cost reduction, and that's what's really important for the next generation, which is always more expensive to build. So this kind of marks the beginning of the end. Matt, what do you think? What does the future hold for spinning disk, in your view?
>> I can give you the answer on two levels. On a personal level, it's why I come to work every day: the eradication, or extinction, of an inefficient thing. I like to say that inefficiency is the bane of my existence, and I think hard drives are largely inefficient. I'm willing to accept the long-standing argument that we've seen this transition in block, and we're starting to see it repeat itself in unstructured data, and I'll accept the argument that cost is a vector here; it most certainly is. HDDs have been considerably cheaper than flash storage, even up to this point. But we're starting to approach the point where you reach about a 3x differentiator between the cost of an HDD and an SSD, and that really is the point in time when you begin to pick up a lot of volume and velocity. That maps directly to what you're seeing here: a slow decline, which I think is going to become even more rapid, probably starting around next year, where you start to see SSDs really replacing HDDs at a much more rapid clip, particularly on the unstructured data side. And it's largely around cost.
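For readers who want Wright's Law in runnable form, here's a minimal sketch. The relationship behind the chart discussion is that unit cost falls by a constant fraction r with every doubling of cumulative volume, so cost as a function of cumulative units x is C(x) = C(1) * x^(-b), with b = -log2(1 - r). The 20% learning rate below is an assumed example, not a figure from the research note.

```python
import math

def wrights_law_cost(first_unit_cost: float, cumulative_units: float,
                     learning_rate: float) -> float:
    """Unit cost after `cumulative_units` have been produced, if cost
    drops by `learning_rate` (e.g. 0.20 = 20%) per doubling of
    cumulative manufacturing volume."""
    b = -math.log2(1.0 - learning_rate)
    return first_unit_cost * cumulative_units ** (-b)

# Illustrative assumption: a 20% decline per cumulative doubling.
print(wrights_law_cost(100.0, 1, 0.20))     # 100.0 (first unit)
print(wrights_law_cost(100.0, 2, 0.20))     # 80.0  (one doubling)
print(wrights_law_cost(100.0, 1024, 0.20))  # ~10.7 (ten doublings)
```

The asymmetry follows directly: flash keeps doubling cumulative volume, helped by consumer demand, so it keeps riding this curve down; HDD volumes peaked in 2010, so its cost declines stall.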
The workloads that we talked about, robots in warehouses, or other types of advanced machine learning and artificial intelligence applications and workflows, require a degree of performance that a hard drive just can't deliver. We are seeing the creative, innovative disruption of an entire industry right before our eyes. It's a fun thing to live through.
>> And we would agree. The premise is that it doesn't even have to be less expensive. We think it will be by the second half, or early second half, of this decade, but even at around a 3x delta, the value of SSD relative to spinning disk is going to overwhelm. Just like with your laptop: it got to the point where you said, why would I ever have a spinning disk in my laptop? We see the same thing happening here. And we're talking about raw capacity; put in compression and dedupe and everything else that you really can't do with spinning disk because of the performance issues, and can do with flash. Okay, let's come back to UFFO. Can we dig into the challenges, specifically, that this solves for customers? Give us some examples.
>> The robotic one, I think, is the marker for the modern side of what we see here. But from a trend perspective, not everybody's deploying robots; there are many companies that aren't going to be in the robotic business or even thinking about future-oriented things. What they are doing, though, is building greenfield applications on object: generally not on file, and not on block. So there's the rise of object as, let's call it, the next great protocol for modern workloads. This is that modern application coming to the forefront, and that could be anything from financial institutions right down through oil and gas, and we're also seeing it across healthcare. As industries take this opportunity to modernize, they're not modernizing on archaic disk technology; they're really focusing on object. But they still have file workflows that they need to be able to support. So having the ability to deliver both from one device, in a capacity orientation or a performance orientation, while at the same time dramatically simplifying the overall administration of your environment, both physically and otherwise, is a key driver.
>> So the great thing about object is that it's simple, kind of a get/put metaphor, and it scales out, because it's got metadata associated with the data, and it's cheap. The drawback is that you don't necessarily associate it with high performance, and as well, most applications don't speak in that language; they speak in the language of file or, as you mentioned, block. So I see real opportunities here. If I have some data that's not necessarily frequently accessed every day, but at end of quarter, or whenever, I want to apply some AI or machine learning to that data, I want to bring it in and then apply a file format, for performance reasons. Is that right? Maybe you could unpack that a little bit.
>> You described it well, but I don't think object necessarily has to be slow. You brought up a good point with metadata: being able to scale to billions of objects is of value. People do traditionally associate object with slow, but it's not necessarily slow anymore. We did a sort of unofficial survey of our customers and our employee base, and when people described object, they thought of it as law firms storing Word docs. I think there's a lack of understanding, a misnomer, around what modern object has become, and performant object, particularly at scale, when we're talking about billions of objects, is the next frontier. Is it at pace, performance-wise, with the other protocols? No, but it's making leaps and bounds.
>> Talk a little bit more about some of the verticals that you see. When I think of financial services, I think transaction processing, but of course they have tons of unstructured data. Are there any patterns you're seeing by vertical market?
>> We're not, and that's the interesting thing. As a company with a block heritage, a block DNA, those patterns were pretty easy to spot. There were a certain number of databases that you really needed to support: Oracle, SQL Server, some Postgres work, et cetera, then the modern databases around Cassandra and things like that. You knew there were going to be VMware environments. You could sort of see the trends and where things were going. Unstructured data is a much broader, horizontal thing. Inside of oil and gas, for example, you have specific applications and bespoke infrastructures for those applications; inside of media and entertainment, the same thing. The commonality that we're seeing is the modernization of object as a starting point for all the net-new workloads within those industry verticals. That's the most common request we see: what's your object roadmap, what's your object strategy, where do you think object is going? There's no single path; it's really a wide-open field in front of us, with common requests across all industries.
>> The amazing thing about Pure, just as a kind of quasi armchair historian of the industry: Pure was really the only company in many, many years to be able to achieve escape velocity, to break through a billion dollars. 3PAR couldn't do it, Isilon couldn't do it, Compellent couldn't do it; I could go on. Pure was able to achieve that as an independent company, and so you become a leader; you look at the Gartner Magic Quadrant, and you're a leader in there. If you've made it this far, you've got to have some chops. And of course it's very competitive; there are a number of other storage suppliers that have announced products that
unify object and file. So I'm interested in how Pure differentiates. Why Pure?
>> It's a great question, and one that, having been a longtime Puritan, I take pride in answering. It's actually a really simple answer: business model innovation and technology. The technology that goes behind how we do what we do, and I don't just mean the product. Innovation is product, but it's also having a better support model, for example, or, on the business model side, Evergreen storage, where we look at your relationship to us as a subscription: we're going to take the thing that you've had and modernize it in place over time, such that you're not re-buying that same terabyte or petabyte of storage that you've already paid for. So, three legs of the stool that have made Pure clearly differentiated, and I think the market has recognized that. You're right, it's hard to break through to a billion dollars. I look forward to the day that we have two billion-dollar products, and with the rise in unstructured data, growing to 80% by 2025, and the massive transition you noted in your HDD slide, I think it's a huge opportunity for us on the unstructured data side of the house.
>> The other thing I'd add, Matt, and I've talked to Coz about this, is simplicity first. I've asked, why don't you do this, why don't you do that, and the answer is always the same: that adds complexity, and we put simplicity for the customer ahead of everything else. I think that's served you very, very well. What about the economics of unified file and object? If you bring in additional value, presumably there's a cost to that, but there's got to be a business case behind it. What kind of impact have you seen with customers?
>> I'll go back to something I mentioned earlier: the reclamation of floor space, power, and cooling. People want to search for the sexier element, if you will, when they look at how you derive value from something, but the reality is that if you're reducing your power consumption by a material percentage, power bills matter in big data centers. Customers are typically facing a paradigm of: I want to go to the cloud, but the cloud is turning out to be more expensive than I thought; or, I figured out what I can use in the cloud, I thought it was going to be everything, but it's not going to be everything, so hybrid is where we're landing; but I want to be out of the data center business, and I don't want a team of 20 storage people to administer my storage. So there's this very tangible value: hey, if I could manage multiple petabytes with one full-time engineer, because the system, to Coz's point, was radically simpler to administer and didn't require someone running around swapping drives all the time, would that be a value? The answer is yes, 100% of the time. And then you start to look at the UFFO side from a product perspective. If I have to manage a bespoke environment for this application, and a bespoke environment for that application, and another, and another, I'm managing four different things. And can I actually share data across those four things? There are ways to share data, but for most customers it just gets too complex. How do you even know what your gold.master copy of the data is if you have it in four different places, or you try to have it in four different places, and it's four siloed infrastructures? So when you get to how you measure value in UFFO, it's actually being able to have all of that data concentrated in one place, so that you can share it from application to application.
>> Got it. We've got a couple minutes left, and I'm interested in the update on FlashBlade generally, but I also have a specific question. Getting file right is hard enough, and you just announced SMB support for FlashBlade. I'm interested in how that fits in; I think it's kind of obvious with file and object converging, but give us the update on FlashBlade, and maybe you could address that specific question.
>> We're tremendously excited about the growth of FlashBlade. We found workloads we never expected to find. The rapid-restore workload was one that was actually brought to us by a customer, and it has become one of our top two, three, four workloads. So we're really happy with the trend we've seen, and mapping back to the discussion about HDDs and SSDs, we're well on a path to building a billion-dollar business here. But to your point, you don't just snap your fingers and get there. We've learned that doing file and object is harder than block, because there are more things you have to go do. For one, you're basically focused on three protocols: SMB, NFS, and S3, not necessarily in that order. To your point about SMB, we are on the path to releasing full native SMB support in the system. That will allow us to service customers where we have a limitation today: they'll have an SMB portion of their NFS workflow, and we do great on the NFS side, but we didn't have the ability to plug into the SMB component of their workflow. That's going to open up a lot of opportunity for us. And we continue to invest significantly across the board in areas like security, which has become more than just a hot button. Security has always been there, but it feels like it's blazing hot today, so over the next couple of years we'll be developing some pretty material security elements of the product as well. So, well on a path to a billion dollars is the net on that, and we're fortunate to have SMB coming; we're looking forward to introducing it to those customers that have NFS workloads today with an SMB component.
>> Nice tailwind, and a good TAM expansion strategy. Matt, thanks so much. Really appreciate you coming on the program.
>> We appreciate you having us. Thanks much, Dave. Good to see you.
(upbeat music)
>> Okay, we're back with the convergence of file and object in a power panel. This is a special content program made possible by Pure Storage and co-created with theCUBE. In this series we're exploring the coming together of file and object storage, trying to understand the trends that are driving this convergence, the architectural considerations users should be aware of, and which use cases make the most sense for so-called unified fast file and object storage. With me are three great guests to unpack these issues: Garrett Belsner, data center solutions architect with CDW; Scott Sinclair, senior analyst at Enterprise Strategy Group, who has deep experience in enterprise storage and brings that independent analyst perspective; and Matt Burr is back with us. Gentlemen, welcome to the program.
>> Thank you.
>> Scott, let me start with you and get your perspective on what's going on in the market with object, the cloud, and the huge amount of unstructured data out there that lives in files. Give us your independent view of the trends you're seeing.
>> Dave, where to start? Surprise, surprise: data is growing. We've been talking about data growth for, what, decades now, but what's really changed is that because of the digital economy, digital business, digital transformation, whatever you call it, people are not just storing data; they actually have to use it. We see this in trends like analytics and artificial intelligence, and that increases the demand not only for consolidation of massive amounts of storage, which we've seen for a while, but also for incredibly low-latency access to that storage. I think that's one of the things driving this need for convergence, as you put it: multiple protocols consolidated onto one platform, but also high-performance access to that data.
>> Thank you for that; a great setup. I wrote down three topics that we're going to unpack as a result. Garrett, let me go to you. Maybe you can give us the perspective of what you see with customers. Is this a push, where customers are saying, hey, listen, I need to converge my file and object? Or is it more a story where they're saying, Garrett, I have this problem, and then you see unified file and object as a solution?
>> I think for us it's taking that consultative approach with our customers and really hearing pain around some of the pipelines, the way they're going to market with data today, and the problems they're seeing. We're also seeing a lot of the change driven by the software vendors. Being able to support a disaggregated design, where you're not having to upgrade and maintain everything as a single block, has really been a place where we've seen a lot of customers pivot, because they have more flexibility as they need to maintain larger volumes of data and higher-performance data. Having the ability to do that separately from compute and cache and those other layers is really critical.
>> So Matt, I wonder if you could follow up on that. Garrett was talking about this disaggregated design, and I like it: distributed cloud, et cetera. But then we're talking about bringing things together in one place. So square the circle: how does this fit with the hyper-distributed cloud and edge that's getting built out?
>> I could give you the easy answer, but I could also pass it back to Garrett, in the sense that, Garrett, maybe it's important to talk about Elastic and Splunk and some of the things you're seeing in that world. I think you can give a pretty qualified answer to Dave's question relative to what your customers are seeing.
>> Absolutely, no problem at all. With Splunk moving from its traditional, classic design, whatever you want to call it, up to SmartStore, that was one of the first moves we saw toward separating object out, and a lot of that comes from their own move to the cloud, updating their code to take advantage of object in the cloud. But we're starting to see, with Vertica Eon for example, Elastic, and other folks, the same type of approach. In the past we were building out many 2U servers and jamming them full of SSDs and NVMe drives. That was great, but it doesn't really scale, and it gets into the same problem we see with hyperconvergence a little bit: you're always adding something that maybe you didn't want to add. So again, being driven by software is really where we're seeing the world open up. But that whole idea of having a hub, a central place from which you can leverage data out to other applications, whether that's out to the edge for machine learning or AI applications, is where that convergence really comes back in. And as Scott mentioned earlier, folks are now doing things with the data, where before they were really just storing it, trying to figure out what they were going to do with it when they needed to. This is making it possible.
>> And Dave, if I could tack on to the end of Garrett's answer: in particular, Vertica with Eon mode and the ability to leverage sharded subclusters give you an advantage in terms of isolating performance hot spots, and an advantage there is being able to do that on a FlashBlade, for example. Sharded subclusters allow you to say: I'm going to give prioritization to this particular element of my application and my data set, but I can still share that data across those subclusters. So as you see Vertica advance with Eon mode, or Splunk advance with SmartStore, these are all advancements that are a chicken-and-egg thing: they need faster storage, and they need a consolidated data set, and that's what allows these things to drive forward.
>> So Vertica Eon mode, for those who don't know, is the ability to separate compute and storage and scale them independently. I think Vertica, if not the only one, is one of the only ones, maybe even the only one, that does that both in the cloud and on-prem, and that plays into the distributed nature of this hyper-distributed cloud, as I sometimes call it.
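For those who haven't touched Eon mode, here's a hedged sketch of what that separation looks like from a client. The data lives in a communal object store (a FlashBlade in the setups discussed here), and the query runs on whichever compute subcluster the session lands on. It uses the open-source vertica-python driver; the host, credentials, schema, and the existence of a reporting subcluster are illustrative assumptions, and the routing of sessions to subclusters is server-side configuration not shown here.

```python
# Sketch: querying an Eon-mode Vertica database whose communal storage
# lives on an S3-compatible store. Hosts, credentials, and the
# existence of a reporting subcluster are illustrative assumptions.
import vertica_python

conn_info = {
    "host": "vertica-reporting.example.internal",  # hypothetical name
    "port": 5433,
    "user": "report_user",
    "password": "********",
    "database": "edw",
    # Let the server's load-balancing policy pick a node (and thus,
    # if so configured, a subcluster) for this session.
    "connection_load_balance": True,
}

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    # The query reads from communal object storage; the compute doing
    # the work is whatever subcluster this session landed on.
    cur.execute("SELECT COUNT(*) FROM sales.orders")
    print(cur.fetchone()[0])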
>> I'm interested in the data pipeline, and I wonder, Scott, if we could talk a little bit about that. With unified object and file, I'm envisioning this distributed mesh, and UFFO is a node on it that I can tap when I need it. Scott, what are you seeing as the state of infrastructure as it relates to the data pipeline, and the trends there?
>> Absolutely, Dave. When I think data pipeline, I immediately gravitate to analytics or machine learning initiatives. One of the big things we see, and it's an interesting trend, is continued increased investment in AI, increased interest. As companies get started, they think: okay, what does that mean? Well, I've got to go hire a data scientist. That data scientist probably needs some infrastructure, and what often happens in these environments is that it ends up being a bespoke, one-off environment, and over time organizations run into challenges. One of the big ones is that the data science team, people whose jobs are outside of IT, spend way too much time trying to get the infrastructure to keep up with their demands, predominantly around data performance. One of the ways that organizations, especially those with artificial intelligence workloads in production, have started mitigating that, and we found this in our research, is by deploying flash all across the data pipeline.
>> We have data on this. Sorry to interrupt, but if you could bring up that chart, that would be great. Take us through this, Scott, and share with us what we're looking at here.
>> Absolutely. We did this study, I want to say late last year, and one of the things we looked at was artificial intelligence environments. One thing you're not seeing on this slide is that we asked about the whole data pipeline, and we saw flash everywhere. But I thought this was really telling, because this is around data lakes. Many people think about a data lake as a repository, a place where you keep maybe cold data. What we see here, especially within production environments, is a pervasive use of flash storage: 69% of organizations say their data lake is mostly flash or all flash, and zero percent have no flash in that environment. So organizations are finding that flash is an essential technology for harnessing the value of their data.
>> So Garrett, and then Matt, I wonder if you could chime in as well. We talk about digital transformation, and I sometimes call it the COVID forced march to digital transformation, and I'm curious about your perspective on things like machine learning and its adoption. Scott, you may have a perspective on this as well. We had to pivot: we had to get laptops, we had to secure the endpoints, VDI; those became super-high priorities. What happened to injecting AI into my applications, and machine learning? Did that go on the back burner, or was it accelerated along with the need to digitally transform? Garrett, what did you see with customers last year?
>> I think we definitely saw an acceleration. Folks in my market are still figuring out how to inject that into a more widely distributed business use case, but again, this data hub is allowing folks to take advantage of data that they've had in these data lakes for a long time. I agree with Scott: many of the data lakes we had were somewhat flash-accelerated, but they were typically made up of large-capacity, slower near-line drives accelerated with some flash. I'm really starting to see folks look at some of those older Hadoop implementations and leverage new ways of looking at how they consume data, and many of those redesign customers are coming to us wanting all-flash solutions. So we're definitely seeing it, and we're seeing an acceleration of folks trying to figure out how to actually use it in a business sense, where before it was a little more skunkworks: people dealing with it in a much smaller situation, maybe in the executive offices, doing some testing.
>> Scott, you're nodding away. Anything you can add?
>> First off, it's great to get confirmation that the stuff we're seeing in our research is what Garrett's seeing out in the field, in the real world. As it relates to the past year, it's been really fascinating. One of the things we study at ESG is IT buying intentions: what initiatives do companies plan to invest in? At the beginning of 2020 we saw heavy interest in machine learning initiatives. Transition to the middle of 2020, in the midst of COVID: some organizations continued on that path, but a lot of them had to pivot. How do we get laptops to everyone? How do we continue business in this new world? Now, as we enter 2021, and hopefully we're coming out of the pandemic era, organizations are pivoting back toward these strategic investments around maximizing the usage of data, and actually accelerating them, because they've seen the importance of digital business initiatives over the past year.
>> Yeah, Matt, when we exited 2019 we saw a narrowing of experimentation, and our premise was that organizations were going to start operationalizing all their digital transformation experiments. And then we had a 10-month petri dish on digital. What are you seeing in this regard?
>> A 10-month petri dish is an interesting way to describe it. There was another candidate for a pivot in there as well, around ransomware: security entered the mix, which took people's attention away from some of this too. But look, I'd like to bring this up a level or two, because what we're actually talking about here is progress, and progress is an inevitability. Whether you believe it's by 2025, or you think it's 2035 or 2050, it doesn't matter: we're on a forced march to the eradication of disk, in many ways due to the things Garrett and Scott were referring to in terms of customers' demands for how they're going to actually leverage the data they have. And that brings me to my final point: we see customers in three phases. There's the first phase, where they say: I have this large data store, and I know there's value in there, but I don't know how to get to it. Then: I have this large data store, and I started a project to get value out of it, and we failed. Those could be customers that marched down the Hadoop path
early on: they got some value out of it, but they realized that HDFS wasn't going to be a modern protocol going forward, for any number of reasons. The first is: if I have gold.master, how do I know that gold.4 is consistent with my gold.master? Data consistency matters. And then you have the third group, which says: I have these large data sets, I know how to extract value from them, and I'm already on to the Verticas, the Elastics, the Splunks, et cetera. That latter group kept their projects going because they were already extracting value from them. For the first two groups, the second half of this year is when we're going to see them really picking these initiatives back up.
>> Well, thank you, Matt, for hitting the escape key, because I think value from data really is what this is all about, and there are some real blockers there that I want to talk about. You mentioned HDFS. We were very excited, of course, in the early days of Hadoop; many of the concepts were profound, but at the end of the day it was too complicated. We've got these hyper-specialized roles that are serving the business, but it still takes too long; it's too hard to get value from data. And one of the blockers is infrastructure: the complexity of that infrastructure really needs to be abstracted, taken up a level. We're starting to see this in cloud, where some of those abstraction layers are being built by the cloud vendors, but more importantly, a lot of vendors like Pure are saying: hey, we can do that heavy lifting for you, and we have the engineering expertise to do cloud-native. So I'm wondering what you see, and maybe, Garrett, you could start us off, as the blockers to getting value from data, and how we're going to address them in the coming decade.
>> I think part of it we're solving here, obviously, with Pure bringing flash to a market that traditionally used much slower media. The other thing that's very nice with FlashBlade, for example, is the ability, once you get it set up, to grow a blade at a time. A lot of these teams don't have big budgets, and being able to break purchases down into almost blade-sized chunks has really allowed folks to get more projects off the ground, because they don't have to buy a full, expensive system to run them. The wider use cases have helped a lot too. Matt mentioned ransomware: using SafeMode as a way to help with ransomware has been a really big growth spot for us, and we've got a lot of customers very interested and excited about that. The other thing I'd mention is bringing DevOps into data. That push toward DataOps, using automation and infrastructure-as-code to drive things through the system, the way we've seen with automation through DevOps, is an area where we're seeing a ton of growth from a services perspective.
>> Guys, any other thoughts on that? I'll tee it up: we are seeing some bleeding edge, which is somewhat counterintuitive, especially from a cost standpoint, around organizational changes at some companies. Think of some of the internet companies that do music, for instance, and are adding podcasts, et cetera: those are different data products. We're seeing them actually reorganize their data architectures to make them more distributed, and put the domain heads, the business heads, in charge of the data and the data pipeline. That's maybe less efficient, but again, it's bleeding edge. What else are you seeing out there that might be a harbinger of the next decade?
>> I'll go first. Specific to the construct you threw out, Dave, one of the things we're seeing is that the application owner, maybe it's the DevOps person, or the application owner through the DevOps person, is becoming more technical in their understanding of how infrastructure interfaces with their application. What we're seeing on the FlashBlade side is that we're having a lot more conversations with application people than just IT people. It doesn't mean the IT people aren't there; they're still there for sure, and they have to deliver the service. But the days of IT building up a catalog of services and a business owner subscribing to whichever service sort of fits their need: I think that's the construct that changes going forward. The application owner is becoming much more prescriptive about how they want the infrastructure to fit into their application, and that's a big change. For folks like Garrett and CDW, who do a good job of getting to the application owner and bringing those two sides together, there's a tremendous amount of value there. For us it's been a bit of a retooling: we've traditionally sold to the IT side of the house, and we've had to teach ourselves how to talk the language of applications. So I think you pointed out a good construct: the application owner playing a much bigger role in what they expect from the performance of IT infrastructure is a key change.
>> Interesting. That definitely is a trend that puts you closer to the business, where the infrastructure team is serving the business, as opposed to what I sometimes hear from data experts, especially data owners or data product builders, who are frustrated that they have to beg the data pipeline team for new data sources or to get data out. How about the edge? Maybe, Scott, you can kick us off. We're seeing the emergence of edge use cases, AI inferencing at the edge, a lot of data at the edge. What are you seeing there, and, bringing us back to the topic, how does unified object and file fit?
>> Wow, Dave, how much time do we have?
>> Two minutes.
>> First of all, Scott, why don't you just tell everybody what the edge is. You've got it all figured out.
>> How much time do you have, Matt? At the end of the day, and that's a great question, if you take a step back, I think it comes back to something you mentioned: it's about extracting value
from data. When you extract value from data, as Matt pointed out, the influencers, the users of data, the application owners, have more power, because they're driving revenue now. From an IT standpoint, that means it's not just: here are the services you get; use them or lose them. It's: no, I have to adapt, I have to follow what my application owners need. Now bring that back to the edge, and it means that data is not localized to the data center. We just went through a nearly 12-month period where the entire workforce of most of the companies in this country went distributed, and business continued. If business is distributed, data is distributed: in the data center, at the edge, in the cloud, in tons of places. And that means you have to be able to extract and utilize data anywhere it may be. I think we're going to continue to see that, and it comes back to key characteristics. We've talked about things like performance and scale for years, but we need to start rethinking them, because on one hand we need performance everywhere, and on the scale side, this ties back to getting value from data. It's something I call the massive-success problem. One of the things we see, especially with workloads like machine learning, is that businesses find success with them, and as soon as they do, they say: well, I need about 20 of these projects. All of a sudden that overburdens IT organizations, especially across core, edge, and cloud environments. So the ability to meet performance and scale demands, wherever the data needs to be, is really important.
>> You know, Dave, I'd like to tie together two things I heard from Scott and Garrett that I think are important, around this concept of scale. Some of us are old enough to remember the day when a 10-terabyte blast radius was too big a blast radius for people to take on, or when a terabyte of storage was considered an exemplary budget environment. Now we think of terabytes the way we used to think of gigabytes, in some ways. Petabyte: you don't have to explain to anybody what a petabyte is anymore. And what's on the horizon, and it's not far off, are exabyte-type data set workloads. Start to think about what could be in that exabyte of data. We've talked about how you extract that value, and we've talked about how you start, but if the scale is big, not everybody's going to start at a petabyte or an exabyte. To Garrett's point, the ability to start small and grow into these projects is a really fundamental concept here, because you're not just going to kick off a five-petabyte project; whether you do that on disk or flash, it's going to be expensive. But if you can start at a couple hundred terabytes, not just as a proof of concept but as something you know you can get predictable value out of, then you can say: this scales linearly, or non-linearly, in a way that lets me map my investments to how I dig deeper. That's how these successful projects are going to start, because the people starting with very large, expansive greenfield projects at multi-petabyte scale are going to find it hard to realize near-term value.
>> Excellent. We've got to wrap, but Garrett, I wonder if you could close. When you look forward and talk to customers, do you see this unification of file and object as an evolutionary trend? Is it going to be a lever that customers use? How do you see it evolving over the next two, three years and beyond?
>> From our perspective, just from the numbers within the market, the growth happening with unstructured data is really starting to hit that data deluge, or whatever you want to call it, that we've been talking about for so many years. It really does seem to be coming true as things scale out and folks settle into: okay, I'm going to use the cloud to start, and maybe train my models, but then I'm going to bring it back on-prem, because of latency, or security, or whatever the decision points are. This is not going to slow down, and folks like Pure having the tools that they give us to use and bring to market with our customers is really key and critical for us. So I see it as a huge growth area and a big focus for us moving forward.
>> Guys, great job unpacking a topic that gets covered a bit, but I think we covered some ground that is new. Thank you so much for those insights and that data; really appreciate your time.
>> Thanks, Dave.
>> Okay, and thank you for watching the convergence of file and object. Keep it right there; we're right back after this short break.
>> Innovation. Impact. Influence. Welcome to theCUBE: disruptors, developers, and practitioners learn from the voices of leaders who share their personal insights from the hottest digital events around the globe. Enjoy the best this community has to offer on theCUBE, your global leader in high-tech digital coverage.
(upbeat music)
>> Okay, now we're going to get the customer perspective on object. We'll talk about the convergence of file and object, but really focusing on the object piece. This is a content program made possible by Pure Storage and co-created with theCUBE. Christopher "CB" Bohn is here. He's a lead architect for the Micro Focus enterprise data warehouse and a principal data engineer at Micro Focus. CB, welcome. Good to see you.
>> Thanks, Dave. Good to be here.
>> So tell us more about your role at Micro Focus. It's a pan-Micro Focus role. Of course, we know the company: a multinational software firm that acquired the software assets of HP, including Vertica. Tell us where you fit.
>> Micro Focus, like you said, is a worldwide company that sells a lot of software products all over the place, to governments and so forth, and it often grows by acquiring other companies. So there's the problem of integrating new companies and their data, and what's happened over the years is that they've had a number of different, discrete data systems. You've got this data spread all over the place, and they've never been able to get a full, complete introspection on the entire business because of that. So my role was: come in and design a central data repository, an enterprise data warehouse that all reporting could be generated against. That's what we're doing, and we selected Vertica as the EDW system and Pure Storage FlashBlade as the communal repository.
>> Okay, so you obviously had experience with Vertica in your previous role, so it's not like you were starting from scratch. But paint a picture of what life was like before you embarked on this consolidated approach to your data warehouse. Was it just disparate data all over the place? A lot of M&A going on: where did the data live?
>> Right, the data was all over the place, including under people's desks, in dedicated, private SQL Servers. A lot of Micro Focus runs on SQL Server, which has pros and cons: it's a great transactional database, but it's not really good for analytics, in my opinion. They had one Vertica instance doing some select reporting; it wasn't a very powerful system, and it was what they call Vertica Enterprise mode, with dedicated nodes that had compute and storage in the same locus on each server.
>> So Vertica Eon mode is a whole new world, because it separates compute from storage. You mentioned Eon mode and the ability to scale storage and compute independently.
>> We wanted to have the analytics, the OLAP stuff, close to the OLTP stuff, so they're co-located very close to each other. And what's nice about this situation is that these S3 objects, on the S3 object store on the Pure FlashBlade, could be copied over to AWS if we needed, and we could spin up a version of Vertica there and keep going. It's like a tertiary DR strategy, because we're actually setting up a second FlashBlade-plus-Vertica system, geo-located elsewhere, for backup. And we can get into it if you want to talk about how the latest version of the Pure software for the FlashBlade allows synchronization of those FlashBlades across network boundaries, which is really nice. If a giant sinkhole opens up under our colo facility and we lose that thing, we just switch the DNS and we're back in business off the DR site. And if that one were to go, we could copy those objects over to AWS and be up and running there. So we're feeling pretty confident about being able to weather whatever comes along.
>> So you're using the Pure FlashBlade as an object store. Most people think object: simple, but slow. Not the case for you, is that right?
>> Not the case at all. It's ripping. But you have to understand how Vertica stores data: it stores data in what it calls storage containers, and those are immutable on disk, whether that's on AWS or in an Enterprise-mode Vertica. If you do an update or a delete, it actually has to retrieve that storage container from disk, destroy it, and rebuild it. That's why you want to avoid updates and deletes with Vertica: the way it gets its speed is by sorting, ordering, and encoding the data on disk so it can read it really fast, but if you delete or update a record in the middle of that, you've got to rebuild that entire container. So that actually matches up really well with S3 object storage, because it works much the same way: objects get destroyed and rebuilt too. It matches up very well with Vertica, and we were able to design this system so that it's append-only.
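The append-only point deserves a concrete sketch, because it's the key design consequence of immutable storage containers: writes are pure loads, and "changes" become new row versions that are resolved at read time. The schema and file path below are hypothetical, for illustration only; the statements would be run through any Vertica client, such as the vertica-python cursor shown earlier.

```python
# Sketch of an append-only staging pattern (hypothetical schema).
# Writes are pure COPY/INSERT, never UPDATE or DELETE, so the
# database's immutable storage containers are never rewritten.
APPEND_NEW_VERSIONS = """
    COPY staging.customer_events (customer_id, payload, loaded_at)
    FROM LOCAL '/loads/customer_events.csv' DELIMITER ','
"""

# Reads reconstruct current state: take the newest row per key.
CURRENT_STATE = """
    SELECT customer_id, payload
    FROM (
        SELECT customer_id, payload,
               ROW_NUMBER() OVER (
                   PARTITION BY customer_id
                   ORDER BY loaded_at DESC
               ) AS rn
        FROM staging.customer_events
    ) t
    WHERE rn = 1
"""
```

The staging table doubles as the transaction log CB describes below: nothing is ever lost, and downstream layers decide how to interpret history.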
running in sql server okay uh which were taking seven days so we moved that to uh to vertica from sql server and uh we rewrote the queries which were which had been written in t sql with a bunch of loops and so forth and we were to get this is amazing it went from seven days to two seconds to generate this report which has tremendous value uh to the company because it would have to have this long cycle of seven days to get a new introspection in what they call their knowledge base and now all of a sudden it's almost on demand two seconds to generate it that's great and that's because of the way the data is stored and uh the s3 you asked about oh you know is it slow well not in that context because what happens really with vertica eon mode is that it can they have um when you set up your compute nodes they have local storage also which is called the depot it's kind of a cache okay so the data will be drawn from the flash and cached locally uh and that was it was thought when they designed that oh you know it's that'll cut down on the latency okay but it turns out that if you have your compute nodes close meaning minimal hops to the flashblade that you can actually uh tell vertica you know don't even bother caching that stuff just read it directly on the fly from the from the flashblade and the performance is still really good it depends on your situation but i know for example a major telecom company that uh uses the same topology as we're talking about here they did the same thing they just they just dropped the cache because the flash player was able to to deliver the the data fast enough so that's you're talking about that that's speed of light issues and just the overhead of of of switching infrastructure is that that gets eliminated and so as a result you can go directly to the storage array that's correct yeah it's it's like it's fast enough that it's it's almost as if it's local to the compute node uh but every situation is different depending on your uh your knees if you've got like a few tables that are heavily used uh then yeah put them um put them in the cash because that'll be probably a little bit faster but if you have a lot of ad hoc queries that are going on you know you may exceed the storage of the local cache and then you're better off having it uh just read directly from the uh from the flash blade got it look it pure's a fit i mean i sound like a fanboy but pure is all about simplicity so is object so that means you don't have to you know worry about wrangling storage and worrying about luns and all that other you know nonsense and and file i've been burned by hardware in the past you know where oh okay they're building to a price and so they cheap out on stuff like fans or other things and these these components fail and the whole thing goes down but this hardware is super super good quality and uh so i'm i'm happy with the quality that we're getting so cb last question what's next for you where do you want to take this uh this this initiative well we are in the process now of we um when so i i designed this system to combine the best of the kimball approach to data warehousing and the inland approach okay and what we do is we bring over all the data we've got and we put it into a pristine staging layer okay like i said it's uh because it's append only it's essentially a log of all the transactions that are happening in this company just they appear okay and then from the the kimball side of things we're designing the data marts now so that that's what the end users 
So we're examining the transactional systems to ask: how are these business objects created, and what's the logic there? And we're recreating those logical models in Vertica. We've done a handful of them so far, and it's working out really well. Going forward, we've got a lot of work to do to create just about every object the company needs.
>> CB, you're an awesome guest. Always a pleasure talking to you. Thank you, congratulations, and good luck going forward. Stay safe.
>> Thank you.
(upbeat music)
>> Okay, let's summarize the convergence of file and object. First, I want to thank our guests: Matt Burr, Scott Sinclair, Garrett Belsner, and CB Bohn. I'm your host, Dave Vellante, and please allow me to briefly share some of the key takeaways from today's program. First, as Scott Sinclair of ESG stated: surprise, surprise, data's growing. And Matt Burr helped us understand the growth of unstructured data: estimates indicate that the vast majority of data, 80% or so, will be considered unstructured by mid-decade, and it's growing very rapidly. Now, definitions of unstructured data vary across a wide spectrum: there's video, audio, documents, spreadsheets, chat. These are generally considered unstructured data, but of course they all have some type of structure to them, perhaps not as strict as a relational database, but there's certainly metadata and a certain structure to the use cases I just mentioned. The key to what Pure is promoting is this idea of unified fast file and object, UFFO. Object is great: it's inexpensive and it's simple, but historically it's been less performant, so it's been good for archiving, or cheap-and-deep use cases. Organizations often use file for higher-performance workloads, and let's face it, most of the world's data lives in file formats. What Pure is doing is bringing together file and object by, for example, supporting multiple protocols: NFS, SMB, and S3. S3, of course, has really given new life to object over the past decade. The key is to enable customers to have the best of both worlds, without having to trade off performance for object simplicity. A key discussion point on the program has been the impact of flash on the long, slow death of spinning disk. Hard disk drives had a great run, but HDD manufacturing volumes peaked in 2010, while flash has seen tremendous volume growth thanks to the consumption of flash in mobile devices and, of course, its application in the enterprise, and that volume is just going to keep growing. The price declines of flash are coming faster than those of HDD, so the writing's on the wall; it's just a matter of time. Flash is riding down that cost curve very aggressively, and HDD has essentially become a managed-decline business. By bringing flash to object as part of the FlashBlade portfolio, and allowing for multiple protocols, Pure hopes to eliminate the dissonance between file and object and simplify the choice. In other words, let the workload decide. If you have data in a file format, no problem: Pure can still bring the benefits of object simplicity at scale to the table. So again, let the workload inform the right strategy, not the technical infrastructure. Pure, of course, is not alone; there are others supporting this multiprotocol strategy. So we asked Matt Burr: why Pure, what's so special about you? Not surprisingly, in addition to the product innovation, he went right to Pure's business model advantages, for example its Evergreen support model, which was very disruptive in the marketplace. Frankly, Pure's entire business disrupted the traditional disk array model, which was fundamentally flawed. Pure forced the industry to respond, and when it achieved escape velocity and went public, the entire industry had to react. A big part of the Pure value proposition, in addition to that business model innovation, is simplicity. Pure's keep-it-simple approach coincided perfectly with the ascendancy of cloud, where technology organizations needed cloud-like simplicity for certain workloads that were never going to move into the cloud; they're going to stay on-prem. Now allow me to bring in another concept that Garrett and CB really highlighted: the complexity of the data pipeline. What do I mean by that, and why is it important? Scott Sinclair implied that the big challenge is that organizations are data-full, but insights are scarce: a lot of data, not as many insights, and it takes too much time to get to those insights. We heard from our guests that the complexity of the data pipeline is a barrier to faster insights. CB Bohn shared how he streamlined his data architecture using Vertica's Eon mode, which allowed him to scale compute independently of storage; that brought critical flexibility and improved economics at scale, and FlashBlade was the back-end storage for his data warehouse effort. The reason I think this is so important is that organizations are struggling to get insights from data, and the complexity associated with the data pipeline and data life cycles is overwhelming them. The full answer to that problem is a much longer and different discussion than unifying object and file; I could spend all day on it. So let's focus narrowly on the part of the issue related to file and object. The situation is that technology has not been serving the business the way it should; rather, the formula is twisted. In the world of data, big data, and data architectures, the data team is mired in complex technical issues that impact the time to insights. Part of the answer is to abstract the underlying infrastructure complexity and create a layer with which the business can interact, one that accelerates rather than impedes innovation. Unifying file and object is a simple example of this, where the business team is not blocked by infrastructure nuance: does this data reside in a file or object format? Can I get to it quickly and inexpensively, in a logical way, or is the infrastructure a stovepipe that's blocking me? If you think about the prevailing sentiment of how the cloud is evolving, to incorporate on-premises workloads, hybrid configurations, working across clouds, and now out to the edge, this idea of an abstraction layer that essentially hides the underlying infrastructure is a trend we're going to see evolve this decade. Now, is UFFO the be-all end-all answer to all of our data pipeline challenges? No, of course not. But by bringing the simplicity and economics of object together with the ubiquity and performance of file, UFFO makes life a lot simpler for organizations that are evolving into digital businesses, which, by the way, is every business. So we see this as an evolutionary trend that further simplifies the underlying technology infrastructure and does a better job supporting the data flows of organizations, so they don't have to spend so much time worrying about technology details that add little value to the business. Okay, thanks for watching the convergence of file and object, and thanks to Pure Storage for making this program possible. This is Dave Vellante for theCUBE. We'll see you next time.
(upbeat music)

Published Date : Feb 24 2021


Drug Discovery and How AI Makes a Difference Panel | Exascale Day


 

>> Hello everyone. On today's panel, the theme is Drug Discovery and how Artificial Intelligence can make a difference. On the panel today, we are honored to have Dr. Ryan Yates, principal scientist at The National Center for Natural Products Research, with a focus on botanicals, specifically the pharmacokinetics, which is essentially how the drug changes over time in our body, and pharmacodynamics, which is essentially how drugs affect our body. And of particular interest to him is the use of AI in preclinical screening models to identify chemical combinations that can target chronic inflammatory processes such as fatty liver disease, cognitive impairment and aging. Welcome, Ryan. Thank you for coming. >> Good morning. Thank you for having me. >> The other distinguished panelist is Dr. Rangan Sukumar, our very own distinguished technologist at the CTO office for High Performance Computing and Artificial Intelligence, with a PhD in AI and 70 publications that can be applied in drug discovery, autonomous vehicles and social network analysis. Hey Rangan, welcome. Thank you for coming and sparing the time. We have also our distinguished Chris Davidson. He is leader of our HPC and AI Application and Performance Engineering team. His job is to tune and benchmark applications, particularly in the applications of weather, energy, financial services and life sciences. His particular interest is life sciences; he spent 10 years in biotech and medical diagnostics. Hi Chris, welcome. Thank you for coming. >> Nice to see you. >> Well let's start with you, Chris. Yes, you regularly interface with pharmaceutical companies and worked also on the COVID-19 White House Consortium. You know, tell us, let's kick this off, and tell us a little bit about your engagement in the drug discovery process. >> Right, and that's a good question. I think really setting the framework for what we're talking about here is to understand what is the drug discovery process. And that can be kind of broken down into, I would say, four different areas: there's the research and development space, the preclinical studies space, clinical trial and regulatory review. And if you're lucky, hopefully approval. Traditionally this is a slow, arduous process, it costs a lot of money, and there's a high amount of error. Right, however, this process by its very nature is highly iterative and has just huge amounts of data, right, it's very data intensive, right, and it's these characteristics that make this process a great target for kind of new approaches and different ways of doing things. Right, so for the sake of discussion, right, go ahead. >> Oh yes, so you mentioned data intensive, which brings to mind Artificial Intelligence, you know, so is Artificial Intelligence making the difference here in this process, is that so?
>> Right, and some of those novel approaches are actually based on Artificial Intelligence, whether it's deep learning and machine learning, et cetera. You know, a prime example, let's just say for the sake of discussion, let's say there's a brand new virus that causes flu-like symptoms, which shall not be named. If we focus kind of on the R and D phase, right, our goal is really to identify a target for the treatment and then screen compounds against it, see which, you know, which ones we take forward. Right, to this end, technologies like cryo-electron microscopy, cryogenic electron microscopy, just a form of microscopy, can provide us a near-atomic biomolecular map of the samples that we're studying, right, whether that's a virus, a microbe, the cell that it's attaching to and so on. Right, AI, for instance, has been used in the particle-picking aspect of this process. When you take all these images, you know, there are only certain particles that we want to take and study, right, whether they have good resolution or not, whether it's in the field of the frame, and image recognition is a huge part of this. It's massive amounts of data, and AI can very easily, you know, be used to approach that. Right, so with docking, you can take the biomolecular maps that you achieved from cryo-electron microscopy, and you can take those and input that into the docking application and then run multiple iterations to figure out which will give you the best fit. AI again, right, this is an iterative process, it's extremely data intensive, and it's an easy way to just apply AI and get that best fit, doing something in a very, you know, analog manner that would just take humans a very long time to do, or traditional computing a very long time to do. >> Oh, Ryan, Ryan, you work at the NCNPR, you know, very exciting, you know, after all, you know, at some point in history just about all drugs were from natural products, yeah, so it's great to have you here today. Please tell us a little bit about your work with the pharmaceutical companies, especially when it is often that drug cocktails, or what they call polypharmacology, are the answer to complete drug therapy. Please tell us a bit more about your work there. >> Yeah, thank you again for having me here this morning, Dr. Goh, it's a pleasure to be here. And as you said, I'm from the National Center for Natural Products Research, you'll hear me refer to it as the NCNPR, here in Oxford, Mississippi on the Ole Miss campus, a beautiful setting here in the South. And so, as you said, historically, what the drug discovery process has been, and it's really not a drug discovery process, it's really a therapy process, traditional medicine, is we've looked at natural products from medicinal plants, okay, and these extracts. And so where I'd like to begin is really sort of talking about the assets that we have here at the NCNPR. One of those prime assets, unique assets, is our medicinal plant repository, which comprises approximately 15,000 different medicinal plants. And what that allows us to do, right, is to screen, mine, that repository for activities, so whether you have a disease of interest or whether you have a target of interest, then you can use this medicinal plant repository to look for actives, in this case active plants.
It's really important in today's environment of drug discovery to really understand what are the actives in these different medicinal plants, which leads me to the second unique asset here at the NCNPR, and that is what I'll call a plant deconstruction laboratory. So without going into great detail, what that allows us to do, through a high-throughput workstation, right, is to facilitate rapid isolation and identification of phytochemicals in these different medicinal plants. Right, and so things that have historically taken us weeks and sometimes months, think acetylsalicylic acid from salicylic acid as a pain reliever in the willow bark, or Taxol, right, as an anti-cancer drug, now we can do that with this system in a matter of days or weeks, so now we're talking about going from activity in a plant extract down to phytochemical characterization on a timescale which starts to make sense in modern drug discovery, alright. And so now if you look at these phytochemicals, right, and you ask yourself, well, sort of who is interested in that and why, right, traditional pharmaceutical companies, right, which I've been working with for 20, over 25 years now, right, have historically used these natural products as starting points for new drugs. Right, so in other words, take this phytochemical and make synthetic chemical modifications in order to achieve a potential drug. But in the context of natural products, unlike the pharmaceutical realm, there is oftentimes a big knowledge gap between a disease and a plant; in other words, I have a plant that has activity, but how to connect those dots has been really laborious and time consuming, so it took us probably 50 years to go from salicylic acid in willow bark to synthesized acetylsalicylic acid, or aspirin. It just doesn't work in today's environment. So casting about, trying to figure out how we expedite that process, that's when, about four years ago, I read a really fascinating article in the Los Angeles Times about my colleague and business partner, Dr. Rangan Sukumar, describing all the interesting things that he was doing in the area of Artificial Intelligence. And one of my favorite parts of this story is, basically unannounced, I arrived at his doorstep in Oak Ridge, he was working at Oak Ridge National Labs at the time, and I introduced myself to him. He didn't know what was coming, didn't know who I was, right, and I said, hey, you don't know me, you don't know why I'm here, I said, but let me tell you what I want to do with your system, right. And so that kicked off a very fruitful collaboration and friendship over the last four years, using Artificial Intelligence, and it's culminated most recently in our COVID-19 project, collaborative research between the NCNPR and HP in this case. >> From what I can understand, also as Chris has mentioned, it's highly iterative, especially with these combinations, mixtures of chemicals, right, in plants that could affect a disease. We need to put in effort to figure out what the active components in that are, what affects it, yeah, the combination, given the layman's way of understanding it, you know, and therefore iterative and highly data intensive.
And I can see why Rangan can play a hugely significant role here. Rangan, thank you for joining us. So it's just a nice segue to bring you in here, you know, given your work with Ryan over so many years now. Tell us, I think I'm also quite interested in knowing a little about how it developed the first time you met, and the process and the things you all worked together on that culminated in the progress at the advanced level today. Please tell us a little bit about that history and also the current work. Rangan. >> So, Ryan, like he mentioned, walked into my office about four years ago, and he was like, hey, I'm working on this Omega-3 fatty acid, what can your system tell me about this Omega-3 fatty acid? And I didn't even know how to spell Omega-3 fatty acids. That's the disconnect between the technologist and the pharmacologist; they have terms of their own, right. Since then we've come a long way. I think I understand his terminologies now, and he understands that I throw out words like knowledge graphs and PageRank and then all kinds of weird stuff that he's probably never heard in his life before, right. So it's been a process of opening our minds to different domains and terminologies, in trying to accept each other's expertise and trying to work together on a collaborative project. I think the core of what Ryan's work and collaboration has led me to understanding is what happens with the drug discovery process, right. So when we think about the discovery itself, we're looking at companies that are trying to accelerate the process to market, right; an average drug is taking 12 years to get to market through the process that Chris just mentioned. Right, and so companies are trying to adopt what's called in silico simulation techniques and in silico modeling techniques into what was predominantly an in vitro, in vivo environment, right. And so the in silico techniques could include things like molecular docking, could include Artificial Intelligence, could include other data-driven discovery methods and so forth, and the essential component of all the things that, you know, the discovery workflows have is the ability to augment human experts to do their best, by assisting them with what computers do really, really well. So, in terms of what we've done as examples: Ryan walks in and he's asking me a bunch of questions, and a few that come to mind immediately, the first few are, hey, you are an Artificial Intelligence expert, can you sift through a database of molecules, the 15,000 compounds that he described, to prioritize a few for the next lab experiments? So that's question number one. And he's come back into my office and asked me about, hey, there's 30 million publications in PubMed and I don't have the time to read everything, can you create an Artificial Intelligence system that, once I've picked these few molecules, will tell me everything about the molecule, or everything about the virus, the unknown virus that shows up, right? Just trying to understand what are some ways in which he can augment his expertise, right. And then the third question, which I think he described better than I'm going to, was how can technology connect these dots. And typically it's not that the answer to a drug discovery problem sits in one database, right; he probably has to think about UniProt protein data, he has to think about phytochemical and chemical informatics properties and data and so forth. Then he talked about the phytochemical interactions, and that's probably in another database.
So when he is trying to answer another question, specifically in the context of an unknown virus that showed up late last year, the question was, hey, do we know what happened in this particular virus compared to all the previous viruses? Do we know of any substructure that was studied, or a different disease, that's part of this unknown virus, and can I use that information to go mine these databases to find out if these interactions can actually be used as a repurposing hook, say, does this drug interact with a subsequence of a known virus that also seems to be part of this new virus, right? So to be able to connect that dot, I think the abstraction that we are learning from working with pharma companies is that this drug discovery process is complex, it's iterative, and it's a sequence of needle-in-the-haystack search problems, right. And so one day, Ryan would be like, hey, I need to match genomes, I need to match protein sequences between two different viruses. Another day it would be like, you know, I need to sift through a database of potential compounds, identify side effects and whatnot. Another day it could be, hey, I need to design a new molecule that never existed in the world before, I'll figure out how to synthesize it later on, but I need a completely new molecule because of patentability reasons, right. So it goes through the entire spectrum. And I think where HP has differentiated multiple times, even in recent weeks, is that the technology infusion into drug discovery leads to several aha moments. And aha moments typically happen in the order of a few seconds, and not the hours, days, months that Ryan has to laboriously work through. And what we've learned is pharma researchers love their aha moments, and it leads to sound, valid, well-founded hypotheses. Isn't that true, Ryan? >> Absolutely. Absolutely. >> Yeah, at some point I would like to have a look at, to peek at the list of your aha moments, yeah, perhaps there's something quite interesting in there for other industries too, but we'll do it at another time. Chris, you know, with your regular work with pharmaceutical companies, especially the big pharmas, right, do you see botanicals coming up, being talked about more and more there? >> Yeah, we do, right. Looking at kind of biosimilars and drugs that are already really in existence is kind of an important point, and Dr. Yates and Rangan, with your work with databases, this is something important to bring up: much of the drug discovery in today's world isn't from going out and finding a brand new molecule per se. It's really looking at all the different databases, right, all the different compounds that already exist and sifting through those. Right, of course data is mined, and it is gold essentially, right, so a lot of companies don't want to share their data. A lot of those botanicals data sets are actually open to the public to use in many cases, and people are wanting to have more collaborative efforts around those databases, so that's really interesting to kind of see that being picked up more and more. >> Mm, well, and Ryan, that's where the NCNPR hosts many of those datasets, yeah, right. And it's interesting to me, right, you know, you were describing the traditional way of drug discovery, where you have a target and a compound, right, that can affect that target, very, very specific.
But from a botanical point of view, you really say, for example, I have an extract from a plant that has a combination of chemicals, and somehow, you know, it affects this disease, but then you have to reverse engineer what those chemicals are and which the active ones are. Is that very much the issue, the work that has to be put in for botanicals in this area? >> Yes, Dr. Goh, you hit it exactly. >> Now I can understand why it's highly iterative and data intensive, and perhaps that's why, Rangan, you're highly valuable here, right. So tell us about the challenge, right, the many-to-many intersection to try and find what the targets are, right, given these botanicals that seem to affect the disease; what methods do you use, right, in AI, to help with this? >> Fantastic question. I'm going to go a little bit deeper and speak like Ryan in terminology, but here we go. So going back to the start of our conversation, right, let's say we have a database of molecules on one side, and then we've got the database of potential targets in a particular, could be a virus, could be bacteria, could be whatever, a disease target that you've identified, right. >> Oh, on this process, so, for example, on a virus, you can have a number of targets on the virus itself, some have the spike protein, some have the other proteins on the surface, so there are about three different targets and others on a virus itself, yeah. So a lot of people focus on the spike protein, right, but there are other targets too on that virus, correct? >> That is exactly right. So for example, in the work that we did with Ryan, we realized that, you know, the COVID-19 protein sequence has an overlap, a significant overlap, with the previous SARS-CoV-1 virus; not only that, but it overlaps with MERS, it overlaps with bat coronaviruses that were studied before, and so forth, right. So knowing that, and it's actually broken down into multiple, and Ryan, I'm going to steal your words, non-structural proteins, envelope proteins, S proteins, there's a whole substructure that you can associate an amino acid sequence with, right. So on the one hand, you have different targets, and again, since we did the work, it's 160 different targets even on the COVID-19 front, right. And so you try to find a match: there are, say, around 36, 37 million molecules that are potentially synthesizable, and you try to figure out which one of those, or which few of those, is actually going to map to which one of these targets and actually have a mechanism of action that Ryan's looking for, that'll inhibit the symptoms in a human body, right. So that's the challenge there. And so I think the techniques that we can use go back to how much do we know about the target and how much do we know about the molecule, alright. And if you start off a problem with, I don't know anything about the molecule and I don't know anything about the target, you go with the traditional approaches of docking and molecular dynamics simulations and whatnot, right. But then, if you've done so much docking before on the same database for different targets, you'll learn some new things about the ligands, the molecules that Ryan's talking about, that can predict potential targets. So can you use that information of previous protein interactions, or previous binding to known existing targets with some of the structures and so forth, to build a model that will capture the essence of what we have learnt from the docking before? And so that's the second level of how we infuse Artificial Intelligence.
The third level is to say, okay, I can do this for a database of molecules, but then what if the protein-protein interactions are all over the literature, studied for millions of other viruses? How do I connect the dots across different mechanisms of action too? Right, and so this is where the knowledge graph component that Ryan was talking about comes in. So we've put together a database of about 150 billion medical facts from the literature, with which Ryan is able to connect the dots and say, okay, I'm starting with this molecule, what interactions do I know about the molecule? Is there a protein interaction that affects the mechanism or pathway for the symptoms that a disease is causing? And then he can go and figure out which protein or proteins in the virus could potentially be working with this drug, so that inhibiting certain activities would stop that progression of the disease from happening, right. So like I said, your menu of options, the options you've got, is going to be: how much do you know about the target? How much do you know about the drug database that you have, and how much information can you leverage from previous research as you go down this pipeline, right? So in that sense, I think we mix and match different methods, and we've actually found that, you know, mixing and matching different methods produces better synergies for people like Ryan. So. >> Well, synergy, I think, is a really important concept, Rangan, additivity, synergism, however you want to characterize it. Right. But it goes back to your initial question, Dr. Goh, which is this idea of polypharmacology, and historically what we've done with traditional medicines: there's more than one active, more than one network that's impacted, okay. You remember how I sort of put you on both ends of the spectrum, which is the traditional sort of approach, where we really don't know much about target-ligand interaction, to the other side of it, right, where now all we're focused on is a single molecule interacting with a target. And so where I'm going with this is, interestingly enough, pharma has sort of, has started to migrate back toward the middle, and what I mean by that, right, is we had this concept of polypharmacology, we had this idea, a regulatory pathway, of so-called fixed drug combinations. Okay, so now you start to see over the last 20 years pharmaceutical companies taking known, approved drugs and putting them in different combinations to impact different diseases. Okay. And so I think there's a really unique opportunity here for Artificial Intelligence, or as Rangan has taught me, Augmented Intelligence, right, to give you insight into how to combine those approved drugs to come up with unique indications. So there is that patentability, right, getting back to how it is that it becomes commercially viable for entities like pharmaceutical companies, but I think at the end of the day what's most interesting to me is sort of that almost movement back toward that complex mixture, a fixed drug combination, as opposed to a single drug entity, single target approach. I think that opens up some really neat avenues for us. As far as the expansion, the applicability of Artificial Intelligence, I'd like to talk briefly about one other aspect, right. So what Rangan and I have talked about is how do we take this concept of an active phytochemical and work backwards.
In other words, let's say you identify a phytochemical from an in silico screening process, right, which was done for COVID-19; one of the first publications out of a group, Dr. Jeremy Smith's group at Oak Ridge National Lab, right, identified a natural product as one of the interesting actives, right. And so it raises the question to our botanical guy, says, okay, where in nature do we find that phytochemical? What plants do I go after to try and source botanical drugs to achieve that particular end point, right? And so, what Rangan's system allows us to do is to say, okay, let's take this phytochemical, in this case a phytochemical flavanone called eriodictyol, and say, where else in nature is this found, right? That's a trivial question for an Artificial Intelligence system. But for a guy like me, left to my own devices without AI, I'd spend weeks combing the literature. >> Wow. So, this is brilliant, I've learned something here today, right. If you find a chemical that actually, you know, affects and addresses a disease, right, you can actually try and go the reverse way to figure out what botanicals can give you those chemicals, as opposed to trying to synthesize them. >> Well, there's that, and there's the other, I'm going to steal Rangan's thunder here, right, he always teaches me, Ryan, don't forget, everything we talk about has properties: plants have properties, chemicals have properties, et cetera. It's really understanding those properties and using those properties to make those connections, those edges, those sorts of interfaces, right. And so, yes, we can take something like an eriodictyol, right, that example I gave before, and say, okay, now, based upon the properties of eriodictyol, tell me other phytochemicals, other flavonoids in this case, such as the phytochemical class that eriodictyol is part of, right, now tell me what other phytochemicals match that profile, have the same properties. It might be more economically viable; right, in other words, this particular phytochemical is found in a unique Himalayan plant that I've never been able to source, but can we find something similar, or the same thing, growing in, you know, a bush found all throughout the Southeast, for example. >> Wow. So, Chris, on the pharmaceutical companies, right, are they looking at this approach of building, developing drugs? >> Yeah, absolutely, Dr. Goh. Really, what Dr. Yates is talking about, right, it doesn't help us if we find a plant and that plant lives on one mountain only, on the north side in the Himalayas; we're never going to be able to create enough of a drug to manufacture and to provide to the masses, right, assuming that the disease is widespread or affects a large enough portion of the population. Right, so understanding, you know, not only where that botanical or that compound is found, but understanding the chemical nature of the chemical interaction and the physics of it as well, which aspect affects the binding site, which aspect of the compound actually does the work, if you will, and then being able to make that at scale, right. If you go to these pharmaceutical companies today, many of them look like breweries, to be honest with you. It's large scale, it's large vats, everybody's in clean rooms, and they're making the microbes do the work for them, or they have these, you know, unique processes, right. So. >> So they're not brewing beer, okay, but drugs instead.
(Christopher laughs) >> Not quite, although there are pharmaceutical companies out there that have had a foray into the brewery business, and vice versa, so. >> We should, we should visit one of those, yeah. (chuckles) Right, so what's next, right? So you've described to us the process and how you developed your relationship with Dr. Yates, Ryan, over the years, right, five years, was it? And culminating in today's many-to-many fast screening methods, yeah. What would you think would be the next exciting things you would do, other than letting me peek at your aha moments, right? What would you say are the next exciting steps you're hoping to take? >> Thinking long term, again, this is where Ryan and I are working on this long-term project about, we don't know as much about botanicals as we know about the synthetic molecules, right. And so this is a story that's inspired by Simon Sinek's "Infinite Game" book: trying to figure out, if the human population has to survive for a long time, which we've done so far with natural products, we are going to need natural products, right. So what can we do to help organizations like the NCNPR to sequence genomes of natural products, to understand their evolution as we go, to map that evolution to drugs and so forth. So the vision is huge, right, so it's not something that we want to do as a one-off project and go away, but in the process, just like you are learning today, Dr. Goh, I'm going to be learning quite a bit, having fun with life. So, Ryan, what do you think? >> Ryan, we're learning from you. >> So my paternal grandfather lived to be 104 years of age. I've got a few years to get there, but back to "The Infinite Game" concept that Rangan had mentioned, he and I discuss that quite frequently. I'd like to throw out a vision for you that's well beyond the sort of time horizon that we have as humans, right, and it's this, right: our current strategy, and it's understandable, is really treatment centric. In other words, we have a disease, we develop a treatment for that disease. But we all recognize, whether you're a healthcare practitioner, whether you're a scientist, whether you're a business person, right, or whatever occupation, you realize that prevention, right, the old "an ounce of prevention is worth a pound of cure," right. So how can we use something like Artificial Intelligence to develop preventive sorts of strategies, ones that we are able to predict over time, right? That's why we don't have a preventive treatment approach today, right; we can't do a traditional clinical trial and say, did we prevent type 2 diabetes in an 18-year-old? Well, we can't do that on a timescale that is reasonable, okay. And then the other part of that is, why focus on botanicals? It's because, for the most part, and there are exceptions, I want to be very clear, I don't want to paint the picture that botanicals are all safe, that you should just take botanical dietary supplements and you'll be safe, right, there are exceptions, but for the most part botanicals, natural products, are in fact safe and have undergone testing, human testing, for thousands of years, right. So how do we connect those dots? A preventive strategy with existing, extant botanicals, to really develop a healthcare system that becomes prevention centric as opposed to treatment centric. If I could wave a magic wand, that's the vision that I would figure out how we could achieve, right. And I do think with guys like Rangan and Chris, and folks like yourself, Eng Lim, that that's possible.
Maybe it's in my lifetime, I've got 50 years to go to get to my grandfather's age, but you never know, right? >> You bring up two really good points there, Ryan. It's really a systems approach, right, understanding that things aren't just linear, right? And that as you go through it, nothing changes without an impact on everything else, right; taking that systems approach to understand every aspect of how things are being impacted. And then number two was really kind of the downstream. Really, we've been discussing the drug discovery process a lot, and kind of the preclinical in vitro studies and in vivo models, but once you get to the clinical trial, there are many drugs that just fail, just fail miserably, and with the botanicals, right, known to be safe, right, in many instances you can have a much higher success rate, and that would be really interesting to see, you know, more of, at least growing in the market. >> Well, these are very visionary statements from each of you, especially Dr. Yates, right, prevention better than cure, right, being proactive better than being reactive. Reactive is important, but we also need to focus on being proactive. Yes. Well, thank you very much, right, this has been a brilliant panel with brilliant panelists, Dr. Ryan Yates, Dr. Rangan Sukumar and Chris Davidson. Thank you very much for joining us on this panel, and for a highly illuminating conversation. Yeah. All for the future of drug discovery, that includes botanicals. Thank you very much. >> Thank you. >> Thank you.

Published Date : Oct 16 2020


Migrating Your Vertica Cluster to the Cloud


 

>> Jeff: Hello everybody, and thank you for joining us today for the virtual Vertica BDC 2020. Today's break-out session has been titled "Migrating Your Vertica Cluster to the Cloud." I'm Jeff Healey, and I'm in Vertica marketing. I'll be your host for this break-out session. Joining me here are Sumeet Keswani and Chris Daly, Vertica product technology engineers and key members of our customer success team. Before we begin, I encourage you to submit questions and comments during the virtual session. You don't have to wait, just type your question or comment in the question box below the slides and click Submit. As always, there will be a Q&A session at the end of the presentation. We'll answer as many questions as we're able to during that time. Any questions that we don't address, we'll do our best to answer offline. And alternatively, you can visit the Vertica forums at forum.vertica.com to post your questions there after the session. Our engineering team is planning to join the forums to keep the conversation going. Also, as a reminder, you can maximize your screen by clicking the double arrow button in the lower right corner of the slides. And yes, this virtual session is being recorded and will be available to view on demand this week. We'll send you a notification as soon as it's ready. Now let's get started. Over to you, Sumeet. >> Sumeet: Thank you, Jeff. Hello everyone, my name is Sumeet Keswani, and I will be talking about planning to deploy or migrate your Vertica cluster to the Cloud. So you may be moving an on-prem cluster or setting up a new cluster in the Cloud, and there are several design and operational considerations that will come into play. You know, some of these are cost, which industry you are in, or what expertise you have in which Cloud platform. And there may be a personal preference too. After that, you know, there will be some operational considerations like VM and cluster sizing, what Vertica mode you want to deploy, Eon or Enterprise; it depends on your use cases. What are the DevOps skills available, you know, what elasticity and separation you need, you know, what is your backup and DR strategy, what do you want in terms of high availability. And you will have to think about, you know, how much data you have and where it's going to live. And in order to understand the cost, or the cost and the benefit of deployment, you will have to understand the access patterns, and how you are moving data to and from the Cloud. So, things to consider before you move a Vertica deployment to the Cloud: one thing to keep in mind is that virtual CPUs, or CPUs in the Cloud, are not the same as the usual CPUs that you've been familiar with in your data center. A vCPU is half of a CPU, because of hyperthreading. There is definitely the noisy neighbor effect: depending on what other things are hosted in the Cloud environment, you may occasionally see performance issues. There are I/O limitations on the instance that you provision, so what that really means is you can't always scale up. You might have to scale out, basically, you have to add more instances rather than getting bigger or right-sized instances. Finally, there is an important distinction here. Virtualization is not free. There can be significant overhead to virtualization. It could be as much as 30%, so when you size and scale your clusters, you must keep that in mind.
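Since the hyperthreading and virtualization-overhead points above trip people up when they size clusters, here is a small back-of-the-envelope helper that encodes them. The one-half vCPU-to-core ratio and the up-to-30% overhead figure come from the talk itself; the formula combining them is just an illustrative sketch, not an official Vertica sizing rule.

```python
import math

# Rough cloud-sizing helper based on the rules of thumb above:
# a vCPU is a hyperthread (roughly half a physical core), and
# virtualization overhead can be as much as ~30%.

def effective_cores(vcpus: int, virt_overhead: float = 0.30) -> float:
    """Approximate on-prem core equivalents for a cloud VM."""
    physical_cores = vcpus / 2          # 2 vCPUs ~= 1 physical core
    return physical_cores * (1.0 - virt_overhead)

def vcpus_needed(onprem_cores: int, virt_overhead: float = 0.30) -> int:
    """Approximate vCPUs required to match an on-prem core count."""
    return math.ceil(onprem_cores * 2 / (1.0 - virt_overhead))

print(effective_cores(32))  # ~11.2 core equivalents from a 32-vCPU VM
print(vcpus_needed(16))     # ~46 vCPUs to roughly match a 16-core node
```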
Now the other important aspect is, you know, where you put your Vertica cluster is important. The choice of the region, how far it is from your various office locations, where the data will live with respect to the cluster. And remember, popular locations can fill up. So if you want to scale out, additional capacity may or may not be available. So these are things you have to keep in mind when picking or choosing your Cloud platform and your deployment. So at this point, I want to make a plug for Eon mode. Eon mode is the latest mode, a Cloud mode, from Vertica. It has been designed with Cloud economics in mind. It uses shared storage, which is durable, available, and very cheap, like S3 storage or Google Cloud storage. It has been designed for quick scaling, like scale out, and highly elastic deployments. It has also been designed for high workload isolation, where each application or user group can be isolated from the other ones, so that they can be paid for and monitored separately, without affecting each other. But there are some disadvantages, or perhaps, you know, there's a cost for using Eon mode. Accessing data in S3 is neither fast nor free: there is a high latency of I/O when accessing data from S3, and there is an API and data access cost associated with accessing your data in S3. Vertica in Eon mode has a pay-as-you-go model, which, you know, works for some people and does not work for others, and so therefore it is important to keep that in mind. And performance can be a little bit variable here, because it depends on cache, it depends on the local depot, which is a cache, and it is not as predictable as EE mode, so that's another trade-off. So let's spend about a minute and see what a Vertica cluster in Eon mode looks like. A Vertica cluster in Eon mode has S3 as the durability layer where all the data sits. There are subclusters, which are essentially just aggregation groups of separated compute, which will service different workloads. So in this example, you may have two subclusters, one servicing an ETL workload and the other one servicing (mic interference obscures speaking). These subclusters are isolated, and they do not affect each other's performance. This allows you to scale them independently and isolate workloads. So this is the new Vertica Eon mode, which has been specifically designed by us for use in the Cloud. But beyond this, you can use EE mode or Eon mode in the Cloud, it really depends on what your use case is. But both of these are possible, and we highly recommend Eon mode wherever possible. Okay, let's talk a little bit about what we mean by Vertica support in the Cloud. Now as you know, a Cloud is a shared data center, right. Performance in the Cloud can vary. It can vary between regions, availability zones, time of the day, choice of instance type, what concurrency you use, and of course the noisy neighbor effect. You know, we in Vertica, we performance, load, and stress test our product before every release. We have a bunch of use cases, we go through all of them, make sure that we haven't, you know, regressed any performance, and make sure that it works up to standards and gives you the high performance that you've come to expect. However, your solution or your workload is unique to you, and it is still your responsibility to make sure that it is tuned appropriately. To do this, one of the easiest things you can do is, you know, pick a tested operating system and allocate the virtual machine, you know, with enough resources.
It's something that we recommend, because we have tested it thoroughly. It goes a long way in giving you predictability. So after this I would like to go into the various Cloud platforms that Vertica has worked on. And I'll start with AWS, and my colleague Chris will speak about Azure and GCP, and our thoughts going forward. So without further ado, let's start with the Amazon Web Services platform. So this is Vertica running on the Amazon Web Services platform. As you probably are all aware, Amazon Web Services is the market leader in this space, and indeed really our biggest provider by far, and they have been here for a very long time. And Vertica has a deep integration in the Amazon Web Services space. We provide a marketplace offering which has both a pay-as-you-go and a bring-your-own-license model. We have many, you know, knowledge base articles, best practices, scripts, and resources that help you configure and use a Vertica database in the Cloud. We have had several customers in the Cloud for many, many years now, and we have managed and console-based point-and-click deployments, you know, for ease of use in the Cloud. So Vertica has a deep integration in the Amazon space, and has been there for quite a bit now, so we have accumulated a lot of experience here. So let's talk about sizing on AWS. And sizing on any platform comes down to, you know, these four or five different things. It comes down to picking the right instance type, picking the right disk volume and type, tuning and optimizing your networking, and finally, you know, some operational concerns like security, maintainability, and backup. So let's go into each one of these in the AWS ecosystem. So the choice of instance type is one of the important choices that you will make. In Eon mode, you know, you don't really need persistent disk. You should probably choose ephemeral disk, because it gives you extra speed and it comes with the instance type. We highly recommend the i3.4x instance types, which are very economical and have a big, 4 terabyte depot or cache per node. The i3.metal is similar to the i3.4, but has got significantly better performance, for those subclusters that need this extra oomph. The i3.2 is good for scale out of small ad hoc clusters. You know, they have a smaller cache and lower performance, but it's cheap enough to use very indiscriminately. If you are in EE mode, well, we don't use S3 as the layer of durability; your local volumes are where we persist the data. Hence you do need an EBS volume in EE mode. In order to make sure that, you know, the instance or the deployment is manageable, you might have to use some sort of a software RAID array over the EBS volumes. The most common instance types you see in EE mode are the r4.4x, the c4, or the m4 instance types. And then of course for temp space and depot we always recommend instance volumes. They're just much faster. Okay. So let's talk about optimizing or tuning your network. So the best thing you can do about tuning your network, especially in Eon mode but in other modes too, is to get a VPC S3 endpoint. This is essentially a route table that makes sure that all traffic between your cluster and S3 goes over an internal fabric. This makes it much faster, and you don't pay egress cost, especially if you're doing external tables or using communal storage, but you do need to create it. Many times people will forget doing it. So you really do have to create it. And best of all, it's free.
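As a concrete illustration of that recommendation, here is a minimal boto3 sketch of creating the S3 gateway endpoint; the VPC ID, route table ID, and region are hypothetical placeholders, and this is just one way to do it rather than the procedure the presenters prescribe.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a gateway-type VPC endpoint so cluster-to-S3 traffic stays on
# the internal fabric instead of going out over the public internet.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",             # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",  # S3 in the cluster's region
    RouteTableIds=["rtb-0123456789abcdef0"],   # placeholder route table
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```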
It doesn't cost you anything extra. You just have to create it during cluster creation time, and there's a significant performance difference when using it. The next thing about tuning your network is, you know, sizing it correctly. Pick the geographical region closest to where you'll consume the data. Pick the right availability zone. We highly recommend using cluster placement groups. In fact, they are required for the stability of the cluster. A cluster placement group essentially approximates the notion of a rack. Nodes in a cluster placement group are, you know, physically closer to each other than they would otherwise be. And this allows, you know, a 10 Gbps, bidirectional, TCP/IP flow between the nodes, and this makes sure that, you know, you get a high number of gigabits per second. As you probably are all aware, the Cloud does not support broadcast or UDP broadcast. Hence you must use point-to-point UDP for spread in the Cloud, or in AWS. Beyond that, you know, point-to-point UDP does not scale very well beyond 20 nodes. So you know, as your cluster sizes increase, you must switch over to large cluster mode. And finally, use instances with enhanced networking or SR-IOV support. Again, it's free, it comes with the choice of the instance type and the operating system. We highly recommend it, it makes a big difference in terms of how your workload will perform. So let's talk a little bit about security, configuration, and orchestration. As I said, we provide CloudFormation scripts for ease of deployment. You can use the MC point and click. With regard to security, you know, Vertica does support instance profiles out of the box in Amazon. We recommend you use it. This is highly desirable, so that you're not passing access keys and secret keys around. If you use our marketplace image, we have picked the latest operating systems and we have patched them; Amazon actually validates everything on marketplace and scans them for security vulnerabilities. So you get that for free. We do some basic configuration, like we disable root ssh access, we disallow any password access, we turn on encryption. And we run a basic set of security checks to make sure that the image is secure. Of course, it could be made more secure. But we try to balance out security, performance, and convenience. And finally, let's talk about backups. Especially in Eon mode I get the question, "Do we really need to back up our system, since the data is in S3?" And the answer is yes, you do. Because you know, S3's not going to protect you against an accidental drop table. You know, S3 has a finite amount of reliability, durability, and availability. And you may want to be able to restore data differently. Also, backups are important if you're doing DR, or if you have an additional cluster in a different region. The other cluster can be considered a backup. And finally, you know, why not create a backup or a disaster recovery cluster, you know, storage is cheap in the Cloud. So you know, we highly recommend you use it. So with this, I would like to hand it over to my colleague Christopher Daly, who will talk about the other two platforms that we support, that is, Google and Azure. Over to you, Chris, thank you. >> Chris: Thanks, Sumeet, and hi everyone. So while there's no argument that we here at Vertica have a long history of running within the Amazon Web Services space, there are other alternative Cloud service providers where we do have a presence, such as Google Cloud Platform, or GCP.
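Before the GCP deep dive, one more AWS illustration: the cluster placement group Sumeet recommends can be created up front and referenced when launching the nodes. Again a hedged boto3 sketch; the AMI ID, key pair, and group name are placeholders, not values from the session.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# "cluster" strategy packs instances physically close together, which is
# what gives the rack-like, high-bandwidth node-to-node flows described above.
ec2.create_placement_group(GroupName="vertica-nodes", Strategy="cluster")

# Launch the Vertica nodes into the placement group.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="i3.4xlarge",         # the economical Eon-mode choice above
    MinCount=3,
    MaxCount=3,
    KeyName="my-keypair",              # placeholder key pair
    Placement={"GroupName": "vertica-nodes"},
)
```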
For those of you who are unfamiliar with GCP, it's considered the third-largest Cloud service provider in the marketspace, and it's priced very competitively to its peers. It has a lot of similarities to AWS in the products and services that it offers, but it tends to be the go-to place for newer businesses or startups. We officially started supporting GCP a little over a year ago with our first entry into the GCP marketplace, a solution that deployed a fully functional and ready-to-use Enterprise mode cluster. We followed up on that with the release and the support of Google storage buckets, and now I'm extremely pleased to announce that with the launch of Vertica 10, we're officially supporting Eon mode architecture in GCP as well. But that's not all, as we're adding additional offerings into the GCP marketplace. With the launch of version 10 we'll be introducing a second listing in the marketplace that allows for the deployment of an Eon mode cluster, all driven by our own Management Console. This will allow customers to quickly spin up Eon-based clusters within the GCP space. And if that wasn't enough, I'm also pleased to tell you that very soon after the launch we're going to be offering Vertica by the hour in GCP as well. And while we've done a lot to automate the solutions coming out of the marketplace, we recognize the simple fact that for a lot of you, building your cluster manually is really the only option. So with that in mind, let's talk about the things you need to understand in GCP to get that done. So flag me if you think this slide looks familiar. Well nope, it's not an erroneous duplicate slide from Sumeet's AWS section, it's merely an acknowledgement of all the things you need to consider for running Vertica in the Cloud. In Vertica, the choice of the operational mode will dictate some of the choices you'll need to make in the infrastructure, particularly around storage. Just like on-prem solutions, you'll need to understand the disk and networking capacities to get the most out of your cluster. And one of the most attractive things in GCP is the pricing, as it tends to run a little less than the others. But it does translate into fewer choices and options within the environment. If nothing else, I want you to take one thing away from this slide, and Sumeet said this about AWS earlier: VMs running in the GCP space run on top of hardware that has hyperthreading enabled, and a vCPU doesn't equate to a core, but rather a processing thread. This becomes particularly important if you're moving from an on-prem environment into the Cloud, because a physical Vertica node with 32 cores is not the same thing as a VM with 32 vCPUs. In fact, with 32 vCPUs, you're only getting about 16 cores' worth of performance. GCP does offer a handful of VM types, which they categorize by letter, but for us, most of these don't make great choices for Vertica nodes. The M series, however, does offer a good core-to-memory ratio, especially when you're looking at the high-mem variants. Also keep in mind, performance in I/O, such as network and disk, is partially dependent on the VM size, so customers in the GCP space should be focusing on 16 vCPU VMs and above for their Vertica nodes. Disk options in GCP can be broken down into two basic types: persistent disks, and local disks, which are ephemeral. Persistent disks come in two forms, standard or SSD.
For Vertica in Eon mode, we recommend that customers use persistent SSD disks for the catalog, and either local SSD disks or persistent SSD disks for the depot and the temp space. A couple of things to think about here, though. Persistent disks are provisioned as a single device with a settable size; local disks are provisioned as multiple disk devices with a fixed size, requiring you to use some kind of software RAID to create a single storage device. So while local SSD disks provide much more throughput, you're spending CPU resources to maintain that RAID set — it's a bit of a trade-off. Persistent disks offer redundancy either within the zone they exist in or within the region, and if you select regional redundancy, the disks are replicated across multiple zones in the region. That does have an effect on the performance of the VM, so we don't recommend it. What we do recommend is zonal redundancy for persistent disks, as it gives you that redundancy level without actually affecting performance. Remember also that in the Cloud space, all I/O is network I/O, as disks are basically network-attached block storage devices. This means that disk activity can and will slow down network traffic. And finally, storage bucket access in GCP is based on GCP's interoperability mode, which means it's basically compliant with the AWS S3 API. In interoperability mode, access to the bucket is granted by a key pair that GCP refers to as HMAC keys. HMAC keys can be generated for individual users or for service accounts. We recommend that when you create HMAC keys, you choose a service account, to ensure that the keys are not tied to a single employee. When thinking about storage for Enterprise mode, things change a little bit. We still recommend persistent SSD disks over standard ones; however, the use of local SSD disks for anything other than temp space is highly discouraged. As I said before, local SSD disks are ephemeral, meaning the data is lost if the machine is turned off or goes down — not really a place you want to store your data. In GCP, placing multiple persistent disks into a software RAID set does not create more throughput like it can in other Clouds; I/O saturation usually hits the VM limit long before it hits the disk limit. In fact, the performance of a persistent disk is determined not just by the size of the disk but also by the size of the VM. So a good rule of thumb for maximizing persistent-disk I/O throughput in GCP: size tends to max out at two terabytes for SSDs and 10 terabytes for standard disks. Network performance in GCP can be thought of in two distinct ways: there's node-to-node traffic, and then there's egress traffic. Node-to-node performance in GCP is really good within the zone, with typical traffic between nodes falling in the 10-15 gigabits per second range. This might vary a little from zone to zone and region to region, but usually it's only limited by the existing traffic where the VMs exist — kind of a noisy-neighbor effect. Egress traffic from a VM, however, is subject to throughput caps based on the size of the VM: the speed is set at two gigabits per second per vCPU, and tops out at 32 gigabits per second. So the larger the VM, the more vCPUs you get, and the larger the cap.
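As a sketch of the HMAC-key recommendation above, the google-cloud-storage client can mint interoperability keys for a service account; the project and service-account names here are hypothetical:

```python
from google.cloud import storage

client = storage.Client(project="my-vertica-project")  # placeholder project

# Tie the HMAC key to a service account, not an individual employee.
metadata, secret = client.create_hmac_key(
    service_account_email="vertica-eon@my-vertica-project.iam.gserviceaccount.com"
)

print("Access ID:", metadata.access_id)  # behaves like an S3 access key ID
print("Secret:", secret)                 # shown once; store it securely
```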
So, some things to consider in the networking space for your Vertica cluster. Pick a region that's physically close to you, even if you're connecting to the GCP network from a corporate LAN as opposed to the internet — the further the packets have to travel, the longer it's going to take. Also, GCP, like most Clouds, doesn't support UDP broadcast traffic on its virtual network, so you do have to use the point-to-point flag for spread when you're creating your cluster. And since the network cap on VMs is set at 32 gigabits per second per VM, maximize your network egress throughput by not using VMs smaller than 16 vCPUs for your Vertica nodes. And that gets us to the one question I get asked most often: how do I get my data into and out of the Cloud? GCP offers many different methods, supporting different speeds and different price points for data ingress and egress. There's the obvious one — across the internet, either directly to the VMs or into the storage bucket — or you can light up a VPN tunnel to encrypt all that traffic. Additionally, GCP offers direct network interconnects from your corporate network, provided either by Google or by a partner, and varying in speed. They also offer direct or carrier peering, which connects the edges of the networks between your network and GCP, and you can use a CDN interconnect, which creates an on-demand connection from your network to the GCP network, provided by a large host of CDN service providers. So GCP offers a lot of ways to move your data in and out of the GCP Cloud; it's really a matter of what price point works for you and what technology your corporation is looking to use. So we've talked about AWS, we've talked about GCP, and that really only leaves one more Cloud. Last, and by far not least, there's the Microsoft Azure environment. Holding strong to the number-two place among the major Cloud providers, Azure offers a very robust Cloud offering that's attractive to customers who already consume services from Microsoft. But what you need to keep in mind is that the underlying foundation of their Cloud is based on Microsoft Windows products, and this makes their Cloud offering a little bit different in the services and offerings that they have. The good news, though, is that Microsoft has done a very good job of getting their virtualization drivers baked into the modern kernels of most Linux operating systems, making running Linux-based VMs in Azure fairly seamless. So here's the slide again, but now you're going to notice some slight differences. First off, in Azure we only support Enterprise mode. This is because the Azure storage product is very different from Google Cloud Storage and S3 on AWS. We're working on getting this supported, and we're starting to focus on it, but we're just not there yet. Since we're only supporting Enterprise mode in Azure, getting the local disk performance right is one of the keys to success here, with the other major key being making sure that you're getting the appropriate networking speeds. Overall, Azure is a really good platform for Vertica, and its performance and pricing are very much on par with AWS. But keep in mind that the newer versions of Linux operating systems like RHEL and CentOS run much better here than the older versions.
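Chris's egress numbers reduce to a one-line formula, and a quick sketch makes the 16-vCPU recommendation obvious:

```python
def gcp_egress_cap_gbps(vcpus: int) -> int:
    # Egress scales at 2 Gbps per vCPU and is capped at 32 Gbps per VM.
    return min(2 * vcpus, 32)

assert gcp_egress_cap_gbps(8) == 16    # a small VM leaves bandwidth on the table
assert gcp_egress_cap_gbps(16) == 32   # 16 vCPUs already reach the per-VM cap
```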
Okay, so first things first again: just like GCP, Azure VMs run on top of hardware that has hyperthreading enabled. And because of the way Hyper-V, Azure's virtualization engine, works, you can actually see this — if you look at the CPU information of the VM, you'll see how it groups the vCPUs by core and by thread. Azure offers a lot of VM types, and is adding new ones all the time, but for us, three VM types make the most sense for Vertica. For customers looking to run production workloads in Azure, the Es_v3 and the Ls_v2 series are the two main recommendations. While they differ slightly in CPU-to-memory ratio and I/O throughput, the Es_v3 series is probably the best recommendation for a generalized Vertica node, with the Ls_v2 series recommended for workloads with higher I/O requirements. If you're just looking to deploy a sandbox environment, the Ds_v3 series is a very suitable choice that can really reduce your overall Cloud spend. VM storage in Azure is provided by a grouping of four different disk types, all offering different levels of performance. Introduced at the end of last year, the Ultra Disk option is the highest-performing disk type for VMs in Azure. It was designed for database workloads where high throughput and low latency are very desirable. However, the Ultra Disk option is not yet available in all regions, although that's been changing slowly since its launch. The Premium SSD option, which has been around for a while and is widely available, can also offer really nice performance, especially at higher capacities. And just like with other Cloud providers, the I/O throughput you get on a VM is dictated not only by the size of the disk, but also by the size and type of the VM. A good rule of thumb here: VM types with an S will have a much better throughput rate than ones that don't, and the larger VMs will have higher I/O throughput than the smaller ones. You can expand VM disk throughput by using multiple disks in Azure in a software RAID. This overcomes the limitations of single-disk performance, but keep in mind you're now using CPU cycles to maintain that RAID, so it is a bit of a trade-off. The other nice thing in Azure is that all their managed disks are encrypted by default on the server side, so there's really nothing you need to do to enable that. And of course, as I mentioned earlier, there is no native access to Azure storage yet, but it is something we're working on. We have seen folks use third-party applications like MinIO to access Azure storage as an S3 bucket, so it might be something you want to keep in mind and maybe even test out for yourself. Networking in Azure comes in two different flavors, standard and accelerated. In standard networking, the entire network stack is abstracted and virtualized. This works really well; however, there are performance limitations, as standard networking tends to top out around four gigabits per second. Accelerated networking in Azure is based on single-root I/O virtualization (SR-IOV) of the Mellanox adapter — basically the VM talking directly to the physical network card in the host hardware — and it can produce network speeds up to 20 gigabits per second, so much, much faster. Keep in mind, though, that not all VM types and operating systems support accelerated networking, and, just like disk throughput, network throughput is based on VM type and size.
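The MinIO pattern Chris mentions is not something demonstrated in the talk, but the idea is that once MinIO fronts Azure Blob storage with an S3-compatible endpoint, any S3 client can talk to it. A hedged boto3 sketch, with a hypothetical endpoint and credentials:

```python
import boto3

# MinIO, deployed alongside the cluster and backed by Azure Blob storage,
# exposes an S3-compatible API. The endpoint URL and keys are placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="http://minio.internal.example.com:9000",
    aws_access_key_id="MINIO_ACCESS_KEY",
    aws_secret_access_key="MINIO_SECRET_KEY",
)

# If the gateway is wired up correctly, the storage appears as S3 buckets.
print([b["Name"] for b in s3.list_buckets()["Buckets"]])
```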
So what do you need to think about for networking in the Azure space? Again, stay close to home: pick regions that are geographically close to your location. Yes, the backbones between the regions are very, very fast, but the more hops your packets have to make, the longer it takes. Azure offers two types of VM groupings, availability sets and availability zones. Availability zones offer good redundancy across multiple zones, but this actually increases the node-to-node latency, so we recommend you avoid them. Availability sets, on the other hand, keep all your VMs grouped together within a single zone, while making sure that no two VMs run on the same host hardware, for redundancy. And just like the other Clouds, UDP broadcast is not supported, so you have to use the point-to-point flag when you're creating your database to ensure that spread works properly. Spread timeout — okay, this is a good one. Recently, Microsoft started monthly rolling updates of their environment. What this looks like is that VMs running on top of hardware that's receiving an update can be paused, and this becomes problematic when the pause exceeds eight seconds, as the unpaused members of the cluster now think the paused VM is down. So consider adjusting the spread timeout for your clusters in Azure to 30 seconds; this will help you avoid a little of that. If you're deploying a large cluster in Azure — more than 20 nodes — use large cluster mode, as point-to-point spread doesn't really scale well with a lot of Vertica nodes. And finally, pick VM types and operating systems that support accelerated networking; the difference in node-to-node speeds can be very dramatic. So how do we move data around in Azure? Microsoft views data egress a little differently than the other Clouds, as it classifies any data transmitted by a VM as egress; however, it only bills for data egress that actually leaves the Azure environment. Egress speed limits in Azure are based entirely on the VM type and size, and then they're limited by your connection to them. While not offering as many pathways to access their Cloud as GCP, Azure does offer a direct network-to-network connection called ExpressRoute. Offered through a large group of third-party partners, ExpressRoute provides multiple tiers of performance, based on a flat charge for inbound data and a metered charge for outbound data. And of course you can still access Azure via the internet, and securely through a VPN gateway. So on behalf of Jeff, Sumeet, and myself, I'd like to thank you for listening to our presentation today, and we're now ready for Q&A.
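For the spread-timeout adjustment, something like the following should work from any SQL client. This sketch uses the vertica-python driver with placeholder credentials, and assumes the SET_SPREAD_OPTION function and its millisecond TokenTimeout argument as documented for Vertica releases of this era:

```python
import vertica_python

conn_info = {
    "host": "10.0.0.4",      # placeholder node address
    "port": 5433,
    "user": "dbadmin",
    "password": "example",   # placeholder
    "database": "vmart",     # placeholder
}

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    # Raise spread's token timeout to ~30 seconds so brief Hyper-V
    # maintenance pauses aren't mistaken for node failures.
    cur.execute("SELECT SET_SPREAD_OPTION('TokenTimeout', '30000');")
```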

Published Date : Mar 30 2020


Gabriel Chapman, Pure Storage | Vertica Virtual Big Data Conference 2020


 

>> Dave: Hi everybody, and welcome to this CUBE special presentation of the Vertica Virtual Big Data Conference. theCUBE is running in parallel with day one and day two of the Vertica Big Data event. By the way, theCUBE has been at every single Big Data event, and it's our pleasure to be here in the virtual/digital event as well. Gabriel Chapman is here — he's the director of FlashBlade product solutions marketing at Pure Storage. Gabe, great to see you, thanks for coming on. >> Gabe: Great to see you too. How's it going? >> Dave: It's going very well. I mean, I wish we were meeting in Boston at the Encore Hotel, and hopefully we'll be able to meet at Accelerate at some point this year, or one of the sub-shows, the regional shows that you guys are doing, because we've been covering that show as well. But I really want to get into it. At the last Accelerate, September 2019, Pure and Vertica announced a partnership. I remember someone ran up to me and said, "Hey, you've got to check this out": the separation of compute and storage via Eon mode, now available on FlashBlade. And I believe Pure is still the only company that can support that separation and independent scaling, both on prem and in the cloud. So Gabe, I want to ask you, what were the trends in analytical databases and cloud that led to this partnership? >> Gabe: Realistically, I think what we're seeing is that there's been kind of a larger shift when it comes to modern analytics platforms, moving away from the traditional Hadoop-type architecture, where we were leveraging a lot of direct-attached storage, primarily because of the limitations of how that solution was architected. When we look at the larger trends toward how organizations want to do this type of work on premises, they're looking at solutions that allow them to scale the compute and storage pieces independently, and therefore the FlashBlade platform ended up being a great solution to support Vertica in their transition to Eon mode, leveraging it essentially as an S3 object store. >> Dave: Okay, so let's circle back on that. In your announcement of FlashBlade, you make the claim that FlashBlade is the industry's most advanced file and object storage platform ever. That's a bold statement, so defend it. >> Gabe: I like to go beyond that and just say, we've really looked at this from the standpoint of how we've developed FlashBlade as a platform — and keep in mind it's been a product that's been around for over three years now, and it's been very successful for Pure Storage. The reality is that fast file and fast object as a combined storage platform is a direction that many organizations are looking to go, and we believe that we're a leader in that fast object and fast file storage space. Realistically, as we start to see more organizations look at building solutions that leverage cloud storage characteristics, but doing so on prem for a multitude of different reasons, we've built a platform that really addresses a lot of those needs: around simplicity — fast matters, and for us simple is smart — we can provide cloud integrations across the spectrum, and there's a subscription model that fits into that as well. All of that falls under the umbrella of what we consider the Modern Data Experience, and it's something that we've built into the entire Pure portfolio. >> Dave: Okay, so I want to get into the architecture of FlashBlade a little bit, and then better understand the fit for analytic databases generally, but specifically Vertica. So it is a blade — you've got compute and a network included. It's a key-value-store-based system, so you're talking about scale-out, unlike Pure's initial products, which were scale-up, and it's a fabric-based system. I want to understand what all that means, so take us through the architecture, some of the quote-unquote firsts that you guys talk about. Let's start with the blade aspect. >> Gabe: Yeah, we call it a FlashBlade because if you look at the actual platform, you have primarily a chassis with built-in networking components. So there's a fabric interconnect inside the platform that connects to each one of the individual blades, and the individual blades have their own compute that drives the Pure Storage flash components inside. It's not like we're just taking SSDs and plugging them into a system, like you would with a traditional commodity off-the-shelf hardware design. This is very much an engineered solution, built toward the characteristics that we believe are important for fast file and fast object: scalability, massive parallelization when it comes to performance, and the ability to grow and scale from essentially seven blades right now to a hundred and fifty. That's the kind of scale that customers are looking for, especially as we start to address these larger analytics pools — they have multi-petabyte datasets — with a single addressable object space, and file performance that is beyond what most traditional scale-up storage platforms are able to deliver. >> Dave: Yes, I interviewed Coz last September at Accelerate — and he's been attacked by some of the competitors for not having scale-out — and I asked him his thoughts on that. He said, "Well, first of all, our FlashBlade is scale-out," and he said, look, anything that adds complexity, we avoid; but for the workloads that are associated with FlashBlade, scale-out is the right sort of approach. Maybe you could talk about why that is. >> Gabe: Well, realistically, I think that approach is better when we're starting to work with large unstructured data sets. FlashBlade is uniquely architected to allow customers to achieve superior resource utilization for compute and storage, while at the same time significantly reducing the complexity that has arisen around the kind of bespoke or siloed nature of big data and analytics solutions. We really look at this from the standpoint that applications have been built and delivered in the public cloud space that address object storage and unstructured data, and for some organizations the important thing is bringing that on prem. We do see repatriation coming for a lot of organizations, as data egress charges continue to expand and grow, and organizations that want even higher performance than what we're able to get in the public cloud space are bringing that data back on prem. They're looking at it from the standpoint of: we still want to be able to scale the way we scale in the cloud, we still want to operate the same way we operate in the cloud, but we want to do it within control of our own borders. That's one of the bigger pieces of this: we look at how we address cloud characteristics, dynamics, and consumption models, as well as the benefits and efficiencies of scale that they afford, while allowing customers to do that inside their own data center. >> Dave: You were talking about the trends earlier — you had these cloud-native databases that allowed the scaling of compute and storage independently, and Vertica comes in with Eon. A lot of times we talk about these partnerships as Barney deals — "I love you, you love me," here's a press release and then we go on — or they're just straight go-to-market. Are there other aspects of this partnership that are non-Barney-deal-like? In other words, any specific engineering, or other go-to-market programs? Can you talk about that a little bit? >> Gabe: It's more than just what we'd consider a channel meet-in-the-middle, or that Barney type of deal. Realistically, we've done some firsts with Vertica that I think are really important, if you look at the architecture and how we've brought this to market together. We have solutions teams on the back end who are subject-matter experts in this space. If you talk to Joy and the people from Vertica, they're very excited about the partnership, because it opens up a new set of opportunities for their customers to leverage Eon mode and get into some of the nuanced aspects of how they leverage the depot within each individual compute node, making adjustments there to reach additional performance gains for customers on prem — and at the same time, for them, there's still the ability to go to that cloud model if they wish to. So I think a lot of it is around how we partner as two companies: how we do joint selling motions, how we show up and do white papers and all the traditional marketing aspects that we bring to the market, and then joint selling opportunities where they exist. Realistically, like any other organization going to market with a partner or an ISV that they have a strong partnership with, you'll continue to see us talking about these mutually beneficial relationships and the solutions that we're bringing to the market. >> Dave: You know, of course, you used to be a Gartner analyst, and you've gone over to the vendor side now, but as a Gartner analyst you're obviously objective — you see it all. There are a lot of ways to skin a cat; there are strengths, weaknesses, opportunities, threats, et cetera, for every vendor. So you have Vertica, who's got a very mature stack, and talking to a number of the customers out there who are using Eon mode, there are certain workloads where these cloud-native databases make sense. It's not just the economics of scaling compute and storage independently — I want to talk more about that — there are flexibility aspects as well. But Vertica really has to play its trump card, which is: look, we've got a big on-prem estate, and we're going to bring that Eon capability both on prem and embracing the cloud. Now, obviously they had to play catch-up in the cloud, but at the same time they've got a much more mature stack than a lot of these other cloud-native databases that might have just started a couple of years ago. So there are trade-offs that customers have to make. How do you sort through that? Where do you see the interest in this, and what's the sweet spot for this partnership? >> Gabe: You know, we've been really excited to build the partnership with Vertica, and we're really proud to provide pretty much the only on-prem storage platform that's validated with Vertica Eon mode, to deliver a modern data experience for our customers together. It's that partnership that allows us to go into that on-prem customer space. Not to say that not everybody wants to go to the cloud — I think there are aspects and solutions that work very well there — but for the vast majority, your data center is not going away, and you do want to have control over many of the different facets inside the operational confines. So we look at how we can do the best of what cloud offers, but on prem, and that's realistically where we see the stronger push: customers who still want to manage their data locally, and maybe even work around some of the restrictions they might have around cost, complexity, and hiring the different types of skill sets that are required to build applications purely cloud-native. It's still that larger part of the digital transformation that many organizations are going forward with, and realistically I think they're weighing the pros and cons. We've been doing cloud long enough for people to recognize that it's not perfect for everything, and that there are certain things we still want to keep inside our own data center. So as we move forward, that's the better option when it comes to a modern architecture: we can deliver and address a diverse set of performance requirements and allow the organization to continue to grow, molding the platform to the data they're actually trying to leverage. That's really what FlashBlade was built for: a platform that can address small files or large files, high throughput, low latency, scaling to petabytes in a single namespace — in a single rack, as we like to put it. We see customers that have put 150 FlashBlades into production as a single namespace. It's significant for organizations that are making that drive toward a modern data experience with modern analytics platforms, and Pure and Vertica have delivered an experience that can address that for a wide range of customers implementing the Vertica technology. >> Dave: I'm interested in exploring the use case a little bit further. You just gave some parameters and some examples and some of the flexibility that you have, but take us through what the customer discussions are like. Obviously you've got a big customer base, you and Vertica, that's on prem — that's the unique advantage of this — but there are others. It's not just the economics of the granular scaling of compute and storage independently; there are other aspects. So take us through a primary use case, or use cases. >> Gabe: Yeah, I can give you a couple of customer examples. We have a large SaaS analytics company which uses Vertica on FlashBlade to authenticate the quality of digital media in real time, and for them it makes a big difference, as they're doing their streaming and whatnot, that they can fine-tune and granularly control that. So that's one aspect that we can address. We have a multinational car company which uses Vertica on FlashBlade to make thousands of decisions per second for autonomous-vehicle decision-making trees — that's what these new modern analytics platforms were really built for. There's another healthcare organization that uses Vertica on FlashBlade to enable healthcare providers to make decisions in real time, and the impact of that — especially when we look at the current state of affairs with COVID and the coronavirus — those types of technologies are really going to help us bend that curve downward. So there are all these different areas where we can address the goals and achievements that organizations are trying to move forward with, with real-time analytic decision-making tools like Vertica. And realistically, as we have these conversations with customers, they're looking to get beyond just a data scientist or a data architect digging out information. We were talking about Hadoop earlier — we're going well beyond that now. >> Dave: And I guess what I'm saying is that in the first phase of cloud, it was all about infrastructure — it was about spinning up compute and storage, a little bit of networking in there. It seems like the next new workload that's clearly emerging started with the cloud databases, but then it's bringing in AI and machine learning tooling on top of that, and being able to really drive these new types of insights. It's really about taking this bog of data that we've collected over the last ten years — a lot of that driven by Hadoop — bringing machine intelligence into the equation, scaling it with the public cloud or bringing that cloud experience on prem, and scaling across your organization and across your partner network. That really is a new emerging workload. Do you see that? Maybe talk a little bit about what you're seeing with customers. >> Gabe: Yeah, it really is. We see several trends. One of those is the ability to take this approach and move it out of the lab and into production, especially when it comes to data science and machine learning projects that traditionally start out as small proofs of concept, easy to spin up in the cloud. But when a customer wants to scale and move toward deriving significant value from it, they do want to be able to control more characteristics. And we know machine learning needs to learn from massive amounts of data to provide accuracy; there's just too much data to retrieve from the cloud for every training job. At the same time, predictive analytics without accuracy is not going to deliver the business advantage that everyone is seeking. The utilization of data analytics has traditionally been deployed on a continuum, with the things we've been doing in the past — data warehousing, data lakes — at one end, and AI at the other. But the way it's starting to manifest, organizations are looking toward getting more utility and better elasticity out of the data that they're working with. So they're not looking to just build up silos of bespoke AI environments; they're looking to leverage a platform that can allow them to do AI for one thing and machine learning for another, and to leverage multiple protocols to access that data, because the tools are so different. It's a growing diversity of use cases that you can put on a single platform, which I think organizations are looking for as they try to scale these environments. I think that's going to be a big growth area in the coming years. >> Dave: Gabe, I wish we were in Boston together — you would have painted your little corner of Boston orange, I know. But I really appreciate you coming on theCUBE. Wall-to-wall coverage, two days, at the Vertica Virtual Big Data Conference. Keep it right there; we'll be right back after this short break. (upbeat music)

Published Date : Mar 30 2020


UNLISTED DO NOT PUBLISH Woicke Edit Suggestions


 

six five four three two one hi everybody and welcome to this cube special presentation of the verdict of virtual big data conference the cube is running in parallel with day 1 and day 2 of the verdict the big data event by the way the cube has been at every single big data event and it's our pleasure to be here in the virtual / digital event as well Gabriel Chapman is here is the director of flash blade product solutions marketing at pure storage Gabe great to see you thanks for coming on great to see you - how's it going it's going very well I mean I wish we were meeting in Boston at the Encore hotel but you know and and hopefully we'll be able to meet it accelerate at some point you cheer or one of the the sub shows that you guys are doing the regional shows but because we've been covering that show as well but I really want to get into it and the last accelerate September 2019 pure and Vertica announced a partnership I remember a joint being ran up to me and said hey you got to check this out the separation of Butte and storage by a Eon mode now available on flash played so and and I believe still the only company that can support that separation and independent scaling both on prime and in the cloud so gave I want to ask you what were the trends in analytical database and plowed that led to this partnership you know realistically I think what we're seeing is that there's been kind of a larger shift when it comes to modern analytics platforms towards moving away from the the traditional you know Hadoop type architecture where we were doing on and leveraging a lot of direct mass storage primarily because of the limitations of how that solution was architected when we start to look at the larger trends towards you know how organizations want to do this type of work on premises they're looking at solutions that allow them to scale the compute storage pieces independently and therefore you know the flash blade platform ended up being a great solution to support Vertica in their transition to Eon mode leveraging >> essentially as an s3 object store okay so let's let's circle back on that you guys in your in your announcement of a flash blade you make the claim that flash blade is the industry's most advanced file and object storage platform ever that's a bold statement I defend that it's supposed to yeah I I like to go beyond that and just say you know so we've really kind of looked at this from a standpoint of you know as as we've developed flash Wade as a platform and keep in mind it's been a product that's been around for over three years now and has you know it's been very successful for pure storage the reality is is that fast file and fast object as a combined storage platform is a direction that many organizations are looking to go and we believe that we're a leader in that fast object of best file storage place in realistically which we start to see more organizations start to look at building solutions that leverage cloud storage characteristics but doing so on prem for a multitude of different reasons we've built a platform that really addresses a lot of those needs around simplicity around you know making things assure that you know vast matters for us simple is smart we can provide you know cloud integrations across the spectrum and you know there's a subscription model that fits into that as well we fall that falls into our umbrella of what we consider the modern day day experience and it's something that we've built into the entire pure portfolio okay so I want to get into the 
>> Okay, so I want to get into the architecture of FlashBlade a little bit, and then better understand the fit for analytic databases generally, but specifically for Vertica. So it is a blade, so you've got compute and a network included. It's a key-value-store-based system, so you're talking about scale-out, unlike Pure's sort of initial products, which were scale-up. And it's a fabric-based system, so I want to understand what that all means. Take us through the architecture, some of the quote-unquote firsts that you guys talk about. Let's start with the blade aspect. >> Yeah, the blade aspect, I mean, we call it a FlashBlade because if you look at the actual platform, you have primarily a chassis with built-in networking components, right? So there's a fabric interconnect inside the platform that connects to each one of the individual blades, and the individual blades have their own compute that drives basically Pure Storage flash components inside. It's not like we're just taking SSDs and plugging them into a system, like you would with a traditional commodity off-the-shelf hardware design. This is very much an engineered solution that is built toward the characteristics that we believe are important for fast file and fast object: scalability, massive parallelization when it comes to performance, and the ability to really grow and scale from essentially seven blades right now to 150. That's the kind of scale that customers are looking for, especially as we start to address these larger analytics pools, multi-petabyte datasets, that single addressable object space, and file performance that is beyond what most of your traditional scale-up storage platforms are able to deliver. >> Yeah, I saw the interview with Coz last September at Accelerate, and he's been attacked by some of the competitors as not having scale-out. I asked him his thoughts on that. He said, well, first of all, our FlashBlade is scale-out. He said, look, anything that adds complexity, we avoid, but for the workloads that are associated with FlashBlade, scale-out is the right sort of approach. Maybe you could talk about why that is. >> Well, realistically, I think that approach is better when we're starting to work with large unstructured datasets. I mean, FlashBlade is uniquely architected to allow customers to achieve superior resource utilization for compute and storage, while at the same time significantly reducing the complexity that has arisen around the kind of bespoke or siloed nature of big data and analytics solutions. We really look at this from the standpoint of, you have built and delivered or created applications in the public cloud space that address object storage and unstructured data, and for some organizations the importance is bringing that on-prem. I mean, we do see repatriation coming on for a lot of organizations as these data egress charges continue to expand and grow, and then organizations that want even higher performance than what they're able to get in the public cloud space are bringing that data back on-prem. They're looking at it from the standpoint of, we still want to be able to scale the way we scale in the cloud, we still want to operate the same way we operate in the cloud, but we want to do it within control of our own borders.
And so one of the bigger pieces to that is, we start to look at how do we address cloud characteristics and dynamics, and consumption metrics and models, as well as the benefits and efficiencies of scale that the cloud is able to afford, but allow customers to do that inside their own data center. >> So you were talking about the trends earlier. You had these cloud-native databases that allowed the scaling of compute and storage independently, and Vertica comes in with Eon. A lot of times we talk about these partnerships as Barney deals, you know, I love you, you love me, here's a press release and then we go on, or they're just straight go-to-market. Are there other aspects of this partnership that are non-Barney-deal-like? In other words, any specific engineering, or other go-to-market programs? Could you talk about that a little bit? >> Yeah, it's more than just what we consider a channel meet-in-the-middle or that Barney type of deal. Realistically, we've done some firsts with Vertica that I think are really important. If you look at the architecture and how we've brought this to market together, we have solutions teams on the back end who are subject matter experts in this space. If you talk to Joy and the people from Vertica, they're very high on, very excited about the partnership, because it opens up a new set of opportunities for their customers to leverage Eon Mode and get into some of the nuanced aspects of how they leverage the depot within each individual compute node, and make adjustments in there to reach additional performance gains for customers on-prem. At the same time, for them there's still the ability to go into that cloud model if they wish to. And so I think a lot of it is around how do we partner as two companies, how do we do joint selling motions, how do we show up and do white papers and all of the traditional marketing aspects that we bring into the market, and then joint selling opportunities exist where they are. So realistically, I think, like any other organization that's going to market with a partner or an ISV they have a strong partnership with, you'll continue to see us talking about our mutually beneficial relationship and the solutions that we're bringing to the market.
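(A brief illustration of the depot tuning Gabe mentions: in Eon Mode, each compute node keeps a local cache of communal storage called the depot, and its size can be inspected and adjusted per node. The sketch below uses the vertica-python client; the connection details and node name are hypothetical, and the SQL relies on Vertica's documented storage_locations table and ALTER_LOCATION_SIZE function, so treat it as illustrative rather than tuning advice.)

```python
# Minimal sketch: inspect and resize the Eon Mode depot (the per-node cache).
# Host, credentials, and node name v_verticadb_node0001 are hypothetical placeholders.
import vertica_python

conn_info = {
    "host": "vertica.example.internal",  # hypothetical cluster address
    "port": 5433,
    "user": "dbadmin",
    "password": "changeme",
    "database": "verticadb",
}

conn = vertica_python.connect(**conn_info)
cur = conn.cursor()

# See how much local depot space each node has today.
cur.execute(
    "SELECT node_name, location_path, max_size "
    "FROM storage_locations WHERE location_usage = 'DEPOT';"
)
for node_name, path, max_size in cur.fetchall():
    print(node_name, path, max_size)

# Grow one node's depot, e.g. to 80% of its local disk, for a hot working set.
cur.execute("SELECT alter_location_size('depot', 'v_verticadb_node0001', '80%');")
conn.close()
```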
>> Okay. You know, of course you used to be a Gartner analyst, and you've gone over to the vendor side now, but as a Gartner analyst you're obviously objective, you see it all. Well, there are a lot of ways to skin a cat; there are strengths, weaknesses, opportunities, threats, et cetera, for every vendor. So you have Vertica, who's got a very mature stack, and talking to a number of the customers out there who are using Eon Mode, there are certain workloads where these cloud-native databases make sense. It's not just the economics of scaling compute and storage independently, I want to talk more about that, there are flexibility aspects as well. But Vertica really has to play its trump card, which is, look, we've got a big on-prem estate, and we're going to bring that Eon capability both on-prem and embracing the cloud. Now, obviously they had to play catch-up in the cloud, but at the same time they've got a much more mature stack than a lot of these other cloud-native databases that might have just started a couple years ago. So there are trade-offs that customers have to make. How do you sort through that? Where do you see the interest in this, and what's the sweet spot for this partnership? >> You know, we've been really excited to build the partnership with Vertica, and we're really proud to provide pretty much the only on-prem storage platform that's validated with Vertica Eon Mode, to deliver a modern data experience for our customers together. It's that partnership that allows us to go into customers in that on-prem space. Not to say that nobody wants to go to the cloud, I think there are aspects and solutions that work very well there, but for the vast majority, I still think your data center is not going away, and you do want to have control over many of the different facets inside the operational confines. So we start to look at how we can do the best of what cloud offers, but on-prem, and that's realistically where we see the stronger push from those customers that still want to manage their data locally, as well as maybe even work around some of the restrictions that they might have around cost, complexity, and hiring the different types of skill sets that are required to bring applications purely cloud native. It's still that larger part of the digital transformation that many organizations are going forward with, and realistically, I think they're taking a look at the pros and cons. We've been doing cloud long enough that people recognize it's not perfect for everything, and there are certain things that we still want to keep inside our own data center. So as we move forward, that's the better option when it comes to a modern architecture. We can deliver and address a diverse set of performance requirements and allow the organization to continue to grow and mold to the data that they're actually trying to leverage. And that's really what FlashBlade was built for. It was built as a platform that can address small files or large files, high throughput, low latency, scale to petabytes in a single namespace, in a single rack, as we like to put it. I mean, we see customers that have put 150 FlashBlades into production as a single namespace. It's significant for organizations that are making that drive toward a modern data experience with modern analytics platforms, and Pure and Vertica have delivered an experience that can address that for a wide range of customers that are implementing the Vertica technology. >> I'm interested in exploring the use case a little bit further. You just gave some parameters and some examples of the flexibility that you have, but take us through what the customer discussions are like. Obviously you've got a big customer base, you and Vertica, that's on-prem, that's the unique advantage of this, but there are others. It's not just the economics of the granular scaling of compute and storage independently, there are other aspects. So take us through a primary use case, or use cases. >> Yeah, I can give you a couple of customer examples. We have a large SaaS analytics company which uses Vertica on FlashBlade to authenticate the quality of digital media in real time.
For them it makes a big difference: as they're doing their streaming and whatnot, they can fine-tune and granularly control that. So that's one aspect that we can address. We have a multinational car company which uses Vertica on FlashBlade to make thousands of decisions per second for autonomous vehicle decision-making trees. That's what these new modern analytics platforms were really built for. There's another healthcare organization that uses Vertica on FlashBlade to enable healthcare providers to make decisions in real time that impact lives, especially when we look at the current state of affairs with COVID and the coronavirus. Those types of technologies are really going to help us lower and bend that curve downward. So there are all these different areas where we can address the goals and achievements that we're trying to move forward with, with real-time analytic decision-making tools like Vertica. And realistically, as we have these conversations with customers, they're looking to get beyond the model of just a data scientist or a data architect driving information, you know, I'm going to set this model up and we'll come back in a day. Now we need to make these decisions quickly, and the performance characteristics that Eon Mode and Vertica allow for can get us toward this almost near-real-time analytics decision-making process. Those are the kinds of conversations we're having with customers who really need to be able to turn this around very quickly instead of waiting. >> Well, I think you're hitting on something that is actually pretty relevant, and that is that near-real-time analytic database. We were talking about Hadoop earlier; we're going well beyond that now. And I guess what I'm saying is that in the first phase of cloud it was all about infrastructure, it was about spinning up compute and storage, a little bit of networking in there. It seems like the next new workload that's clearly emerging, it started with the cloud-native databases, but then bringing in AI and machine learning tooling on top of that, and then being able to really drive these new types of insights. It's really about taking this bog of data that we've collected over the last 10 years, a lot of that driven by Hadoop, bringing machine intelligence into the equation, scaling it with either public cloud or bringing that cloud experience on-premises, scaling across your organization and across your partner network. That really is a new emerging workload. Do you see that? And maybe talk a little bit about what you're seeing with customers. >> Yeah, it really is. We see several trends. One of those is the ability to take this approach and move it out of the lab and into production, especially when it comes to data science and machine learning projects that traditionally start out as small proofs of concept, easy to spin up in the cloud. But when a customer wants to scale and move toward deriving significant value from it, they do want to be able to control more characteristics, right? And we know machine learning needs to learn from massive amounts of data to provide accuracy; there's just too much data to retrieve from the cloud for every training job. At the same time, predictive analytics without accuracy is not going to deliver the business advantage that everyone is seeking.
We see the visualization of data analytics as traditionally deployed on a continuum, with the things that we've been doing in the past, data warehousing and data lakes, on one end, and AI on the other. But the way it's starting to manifest in organizations, they're looking toward getting more utility and better elasticity out of the data that they're working with. So they're not looking to just build up silos of bespoke AI environments; they're looking to leverage a platform that can allow them to do AI for one thing, machine learning for another, and leverage multiple protocols to access that data, because the tools are so different. It is a growing diversity of use cases that you can put on a single platform, which I think organizations are looking for as they try to scale these environments. I think that's going to be a big growth area in the coming years. >> Gabe, well, I wish we were in Boston together. You would have painted your little corner of Boston orange, I know you guys at Pure would have. But I really appreciate you coming on theCUBE, and thank you very much. Have a great day. >> You too. Thank you. >> Okay, thank you everybody for watching. This is theCUBE's wall-to-wall coverage, two days of the Vertica Virtual Big Data Conference. Keep it right there. We'll be right back after this short break.

Published Date : Mar 30 2020

**Summary and Sentiment Analysis are not shown because of an improper transcript**

ENTITIES

| Entity | Category | Confidence |
| --- | --- | --- |
| Boston | LOCATION | 0.99+ |
| September 2019 | DATE | 0.99+ |
| Gabriel Chapman | PERSON | 0.99+ |
| Barney | ORGANIZATION | 0.99+ |
| two companies | QUANTITY | 0.99+ |
| Vertica | ORGANIZATION | 0.99+ |
| two days | QUANTITY | 0.99+ |
| Gabe | PERSON | 0.99+ |
| Woicke | PERSON | 0.98+ |
| Gartner | ORGANIZATION | 0.98+ |
| last September | DATE | 0.97+ |
| over three years | QUANTITY | 0.97+ |
| one aspect | QUANTITY | 0.96+ |
| first phase | QUANTITY | 0.96+ |
| pure | ORGANIZATION | 0.96+ |
| Christopher | PERSON | 0.95+ |
| one | QUANTITY | 0.95+ |
| single rack | QUANTITY | 0.95+ |
| a hundred and fifty | QUANTITY | 0.95+ |
| day 2 | QUANTITY | 0.95+ |
| both | QUANTITY | 0.93+ |
| seven blades | QUANTITY | 0.93+ |
| Depot | ORGANIZATION | 0.93+ |
| 150 flash blades | QUANTITY | 0.92+ |
| Hadoop | ORGANIZATION | 0.92+ |
| single namespace | QUANTITY | 0.92+ |
| single platform | QUANTITY | 0.92+ |
| day 1 | QUANTITY | 0.92+ |
| coronavirus | OTHER | 0.91+ |
| firsts | QUANTITY | 0.91+ |
| first | QUANTITY | 0.9+ |
| flash Wade | TITLE | 0.89+ |
| single | QUANTITY | 0.88+ |
| each one | QUANTITY | 0.88+ |
| a day | QUANTITY | 0.87+ |
| a couple years ago | DATE | 0.85+ |
| thousands of decisions per second | QUANTITY | 0.83+ |
| Prem | ORGANIZATION | 0.77+ |
| prime | COMMERCIAL_ITEM | 0.77+ |
| Encore | LOCATION | 0.74+ |
| single addressable | QUANTITY | 0.72+ |
| Big Data | EVENT | 0.72+ |
| each individual | QUANTITY | 0.71+ |
| Aeon | ORGANIZATION | 0.68+ |
| Boston Orange | LOCATION | 0.65+ |
| Vertica | TITLE | 0.62+ |
| egress | ORGANIZATION | 0.62+ |
| every single | QUANTITY | 0.6+ |
| last 10 years | DATE | 0.6+ |
| a couple customer | QUANTITY | 0.59+ |
| Eon | TITLE | 0.55+ |
| pieces | QUANTITY | 0.54+ |
| petabytes | QUANTITY | 0.53+ |
| flash blade | ORGANIZATION | 0.52+ |
| Eon | ORGANIZATION | 0.51+ |
| sub shows | QUANTITY | 0.5+ |
| Hadoop | TITLE | 0.49+ |
| six | QUANTITY | 0.49+ |
| petabyte | QUANTITY | 0.48+ |
| lot | QUANTITY | 0.47+ |
| big | EVENT | 0.43+ |
| vertigo | PERSON | 0.34+ |

Keynote Analysis | AnsibleFest 2019


 

>> Announcer: Live from Atlanta, Georgia. It's theCUBE covering Ansible Fest 2019. Brought to you by Red Hat. >> Hello everyone, welcome to theCUBE. We are broadcasting live here, in Atlanta, Georgia. I'm John Furrier with Stu Miniman, my co-host, theCUBE's coverage of Red Hat, Ansible Fest. This is probably one of the hottest topic areas that we've been seeing in Enterprise Tech emerging, along with observability. Automation and observability are the key topics here. Automation is the theme, Stu. Ansible just finished their keynotes, keynote analysis, general availability of their new platform, the Ansible Automation Platform is the big news. It seems nuanced for the general tech practitioner out there, what's Ansible doing? Why are we here? We saw the rise of network management turn into observability as the hottest category in the cloud 2.0, companies going public, lot of M&A activity, observability is data driven. Automation's this other category that is just exploding in growth and change. Huge impact to all industries, and it's coming from the infrastructure scale side where the blocking and tackling of DevOps has been. This is the focus of Ansible and their show, Automation for All. Your analysis of the keynote, what's the most important thing going on here? >> So John, as you said, automation is a super hot topic. I was just at the New Relic show talking about observability last week, we've got the Pager Duty show also going on this week. The automation is so critical. We know that IT can't keep up with things if they can't automate it. It's not just replacing some scripting. I loved in the keynote they talked about strategically thinking about automation. We've been watching the RPA companies talk about automation. There's lots of different automation, there's the right way to do it, and another angle, John, that we love covering is what's going on with Open Source? You were just at the Open Core summit in San Francisco. The Red Hat team is very clear: Open Source is not their business model. They use Open Source, and everything that Red Hat does is 100% Open Source, and that was core and key to what Ansible was and how it was created. This isn't a product pitch here, it is a community, John, this is the 6th most active repository in GitHub. Out of over 100 million repositories out there, the 6th most active. That tells you that this is being used by the community, it's not a couple of companies using this, but it's a broad ecosystem. We hear Microsoft and Cisco, F5, lots of companies that are contributing, as well as just all the end users. We heard JP Morgan in the keynote this morning, so a lot of participation there. But it is building out that suite with a platform that you talked about, and we're going to spend a lot of time the next few days understanding this maturation and growth. >> Yeah, the automation platform that they announced, that's the big news. The general availability of their automation platform, and Stu, the word they're using here is scale. This is something that you brought up. The Open Core summit, which I attended last week, was the inaugural conference, lot of controversy. And this is a generational shift we are seeing in the midst of our own eyes, right in front of us, on the ground floor of a shift in the Open Source community. How the platform of open source is evolving. What Amazon, now Azure and Google and the others are doing is showing that scale has changed the game in how Open Source is going to not only grow and evolve but shape application developers.
And the reason why Ansible is so important right now, and this conference, is that we all know that when you stand up stuff, infrastructure, you've got to configure the hell out of it. DevOps has always been infrastructure as code, and as more stuff gets scaled up, as more stuff gets provisioned, as more stuff gets built and created, the management and the controlling of the configurations, this has been a real hotspot. This has been an opportunity and a problem. Anyone who's here, they're active because, you know, this is a major pain point. This is a problem area that's an opportunity to take what is a blocking and tackling operational role, configuring, standing up infrastructure, enabling applications, and make it a competitive advantage. This is why the game is changing. We're starting to see platforms, not tools. Your analysis, are they positioned? Was this keynote successful? >> Yeah, John. I really liked what Robyn Bergeron came out and talked about, the key principles of what Ansible has done. It's simplicity, it's modularity, and it's learning from Open Source. This project was only started in 2012. One of the things I always look at is, in the old days you wanted to have that experience. There's no compression algorithm for experience. Today, if I could start from day one today, and build with the latest tools, heavily using DevOps, understanding all of the experience that's happened in Open Source, we can move forward. So from 2012, to 2015 when Red Hat acquired Ansible, to today in 2019, they're making huge growth and helping companies really leverage and mature their IT processes and move toward true business innovation with leveraging automation. >> Stu, this is not for the faint of heart either. These are rockstar DevOps infrastructure folks who are evolving in taking either network or infrastructure development to enable a software abstraction layer for applications. It's not a joke either. I mean, they've got some big names up on stage. One tweet I want to call out and get your reaction to. JP Morgan, the exec there, in his presentation, a tweet came out from Christopher Festa: "500 developers are working to automate business processes leading to among other benefits, 98% improvement in recovery times. What used to take 6 - 8 hours to recover, now takes 2 - 5 minutes." Christopher Festa. Stu. >> So John, that's what we wanted. How can we take these things that took hours, and I had to go through this ticketing process, and make that change. What I loved about what Chris from JP Morgan did, is he brought us inside and he said look, to make this change it took us a year of sorting through the security, the cyber, the control processes. We understand there's not just, oh hey, let's sprinkle a little DevOps on everything and it's wonderful. We need to get buy-in from the team, and it can spread between groups and change that culture. It's something that we've tracked in Red Hat for years and all of these environments. This is something that does require commitment, because it's not just John saying, oh, I scripted something, and that's good. We need to be able to really look at these changes, because automation, if we just automate a bad process, that's not going to help our business. We really need to make sure we understand what we're automating, the business value, and what are going to be the ramifications of what we're doing.
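(As an aside, the reviewable, repeatable change Chris describes is typically captured as a playbook and driven through tooling rather than ad hoc scripts. A minimal sketch follows, using the ansible-runner Python library; the project directory and playbook name are hypothetical placeholders, not anything JP Morgan described.)

```python
# Minimal sketch: run a playbook programmatically so every run is logged and
# gated, instead of hand-run scripts. Paths and playbook name are hypothetical.
import ansible_runner

result = ansible_runner.run(
    private_data_dir="/opt/automation/recovery",  # hypothetical project directory
    playbook="restore_service.yml",               # hypothetical playbook
)

# The run's status and per-task events can feed a change-control record.
print(result.status, result.rc)
for event in result.events:
    print(event.get("event"))
```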
>> Well, one of the things I want to share with folks watching is research that we did at SiliconAngle, theCUBE and Wikibon as part of our CUBE Insights. Stu, I know you're a part of this. We talked to a bunch of practitioners and customers, dozens of our community members, and we found that observability, we've just pointed out, has been an explosive category. That automation has been identified, and we're putting a stake in the ground, right here on theCUBE, as one of the next big sectors that will rise up, a small little white space that will become a massive market, automation. You watch that cloud 2.0 sector called automation. Why? The reasoning was this, here's the results of our survey. Automation's quickly becoming a critical foundational element of the network as enterprises focus on multi-cloud, the network being infrastructure, servers and storage, and multi-cloud rapid development and deployment. Software-defined everything is happening, pretty much we've been covering that on theCUBE. And most enterprises are just grappling with this concept and see opportunities. The benefits that people see in automation, as we've discovered, Stu, are the following: focused efforts for better results; efficiency; security is a top driver on all these things. You've got to have security built into the software. And then automation is creating job satisfaction for these guys. These are mundane tasks being automated away, so people are happier, so job satisfaction. And finally, this is an opportunity to re-skill. Stu, these are the key bullet points that we found in talking to our community. Your reaction to those results? >> Yeah John, I love that. Ultimately we want to be able to provide not only better value to my ultimate end user, but I need to look internal. As you said, John, how can I retool some of my sales force and get them engaged. And if you want to hire the millennials, they want to not be doing the drudgery, they want to do something where they feel that they are making a difference. You laid out a lot of good reasons why it would help and why people would want to get involved. John, you know I've talked to a number of government agencies, when we changed that 40 year old process and now we're doing things faster and better, that means I can really hire that next generation of workers, because otherwise I wouldn't be able to hire them to just do things the old way. >> Stu, this is about cloud 2.0 and this is about modernization. You mentioned Open Source, Open Core summit, that is a tell sign that Open Source is changing, the communities are changing, this is going to be a massive wave. Again, we've been chronicling this cloud 2.0, we coined that term, and we're trying to identify those key points, obviously observability, automation. But look, at the end of the day, you've got to have a focused effort to make the job go better, you heard JP Morgan pointing out minutes versus hours. This is the benefit of infrastructure as code. At the end of the day, employee satisfaction, the people that you want to hire that can be redeployed into new roles, analytics, math, quantitative analysis, versus the mundane tasks. Automation is going to impact all aspects of the stack. So final question, Stu. What are you expecting for the next two days? We're going to be here for two days, what do you expect to hear from our guests? >> So John, one of the things I'm going to really look at is, as you mentioned, infrastructure, where this all started.
So how do I use it to deploy a VM? Ansible's there. VMware, I've already talked to a number of people in the virtualization community, they love and embrace Ansible. We saw Microsoft up on stage, loving and embracing. As we move toward micro-service architectures, containerization and all of these cloud native deployments, how is Ansible and this community doing? Where are the stumbling blocks? To be honest, from what I hear coming into this, Ansible has been doing well. Red Hat has helped them grow even more, and the expectation is that IBM will help proliferate this even further. The traditional competitors to Ansible, you think about the Chefs and Puppets of the world, have been struggling with that cloud native world. John, I know I see Ansible when I go to the cloud shows, I hear customers talking about it. So Ansible seems to be making that transition to cloud native well, but there are other threats in the cloud native world. When I go to the serverless conference, I have not yet heard where this fits into the environment. So we always know that that next generation in technology, how will this automation move forward? >> As Red Hat starts getting much more proliferated in major enterprises with IBM, which will extend their lead even further in the enterprise, it's an opportunity for Ansible. The community angle is interesting. I want to get your community angle real quick. So I saw a tweet from NetApp, their tagline at their booth is Simplify, automate, orchestrate. Sounds like they're leaning into the Kubernetes world, containers, you've got the start of thinking about software abstractions, this ain't provisioning hardware anymore. Whole new ballgame. Your assessment of Ansible's community presence? I mentioned that was a tweet from Red Hat, I mean NetApp. What's your take on the community angle here? >> John, it's all about community. The GitHub stats speak for themselves, this is very much a community event. Kudos to the team here, a lot on the diversity and inclusion effort, so really pushing those things forward. So John, something we always notice at the tech shows, the gender ratio is way more diverse at an event like this. We know we see it in the developer communities, that there's more diversity in there, gender and ethnicity. >> Still a lot of guys though. >> Sure there is. By the way, when they took over this hotel, all of the bathrooms are gender-neutral, so you can use whatever bathroom you want there. >> I'll make sure I'm using the right pronouns when I'm saying hello to people. Stu, thanks for the commentary. Keynote analysis, I'm John Furrier with Stu Miniman, breaking down why we are here. Why Ansible? Why is automation important? We believe automation will be a killer category, we're going to see a lot of growth here, and again the impact is with machine learning and A.I. This is where it all starts: automating the data, the technology and the configuration is going to empower the next generation modern enterprise. More live coverage from Ansible Fest after this short break. (Upbeat techno music)

Published Date : Sep 24 2019

SUMMARY :

John Furrier and Stu Miniman break down the AnsibleFest 2019 keynote from Atlanta: the general availability of the Ansible Automation Platform, automation rising alongside observability as a key cloud 2.0 category, JP Morgan's account of cutting recovery times from hours to minutes through committed, culture-level change, CUBE Insights research on why enterprises are adopting automation (efficiency, security, job satisfaction, re-skilling), and what Red Hat's backing, and IBM's, means for Ansible's community and its cloud-native competition.

SENTIMENT ANALYSIS :

ENTITIES

| Entity | Category | Confidence |
| --- | --- | --- |
| John | PERSON | 0.99+ |
| Microsoft | ORGANIZATION | 0.99+ |
| Robyn Bergeron | PERSON | 0.99+ |
| 2012 | DATE | 0.99+ |
| John Furrier | PERSON | 0.99+ |
| IBM | ORGANIZATION | 0.99+ |
| Cisco | ORGANIZATION | 0.99+ |
| John Ferrier | PERSON | 0.99+ |
| 98% | QUANTITY | 0.99+ |
| Amazon | ORGANIZATION | 0.99+ |
| Stu Miniman | PERSON | 0.99+ |
| 2 | QUANTITY | 0.99+ |
| 6th | QUANTITY | 0.99+ |
| 40 year | QUANTITY | 0.99+ |
| GitHub | ORGANIZATION | 0.99+ |
| Christopher Festa | PERSON | 0.99+ |
| two days | QUANTITY | 0.99+ |
| Chris | PERSON | 0.99+ |
| last week | DATE | 0.99+ |
| 2019 | DATE | 0.99+ |
| 2015 | DATE | 0.99+ |
| 6 | QUANTITY | 0.99+ |
| Ansible | ORGANIZATION | 0.99+ |
| San Francisco | LOCATION | 0.99+ |
| Atlanta, Georgia | LOCATION | 0.99+ |
| 500 developers | QUANTITY | 0.99+ |
| Today | DATE | 0.99+ |
| 100% | QUANTITY | 0.99+ |
| Red Hat | ORGANIZATION | 0.99+ |
| JP Morgan | ORGANIZATION | 0.99+ |
| today | DATE | 0.99+ |
| Wikibon | ORGANIZATION | 0.99+ |
| Stu | PERSON | 0.99+ |
| TheCUBE | ORGANIZATION | 0.98+ |
| this week | DATE | 0.98+ |
| Google | ORGANIZATION | 0.98+ |
| theCUBE | ORGANIZATION | 0.97+ |
| SiliconAngle | ORGANIZATION | 0.97+ |
| F5 | ORGANIZATION | 0.97+ |
| one | QUANTITY | 0.97+ |
| over 100 million repositories | QUANTITY | 0.97+ |
| Red Hat | TITLE | 0.97+ |
| dozens | QUANTITY | 0.97+ |
| New Relic | ORGANIZATION | 0.96+ |
| Pager Duty | TITLE | 0.96+ |
| 5 minutes | QUANTITY | 0.95+ |
| One | QUANTITY | 0.95+ |
| day one | QUANTITY | 0.94+ |
| 8 hours | QUANTITY | 0.94+ |
| One tweet | QUANTITY | 0.93+ |
| Ansible Fest 2019 | EVENT | 0.93+ |
| Ansible Fest | EVENT | 0.93+ |
| Open Source | EVENT | 0.91+ |
| Open Core | EVENT | 0.89+ |


Deepak Chopra, Pioneer in personal transformation | Coupa Insp!re19


 

>> Announcer: From the Cosmopolitan Hotel in Las Vegas, Nevada, it's theCUBE, covering Coupa Inspire 2019. Brought to you by Coupa. >> Welcome to theCUBE from Coupa Inspire 19 at the Cosmopolitan in Las Vegas. I'm Lisa Martin, and I'm very pleased and honored to be joined by Dr. Deepak Chopra, world-renowned pioneer in integrative medicine and personal transformation. Dr. Chopra, what a pleasure to have you on theCUBE. >> It's wonderful to be with you. >> So here we are at a technology conference. I know you speak to a lot of different types of audiences, and if we look at technology these days, we can't get up without it, right? It's our alarm clock in the morning, we're listening to podcasts or the radio as we're getting ready for work. It's an essential component of our lives, but also something that, if you look on the other side, is bombarding us constantly with opportunities to talk to this person or to buy this or that. As an expert in the human brain and consciousness, what are some of the observations that you've seen where we can really tie together technology to help us be more mindful? >> First of all, you have to realize that technology is our creation. In my opinion, technology is actually an aspect of human evolution; what's happening now is part of our evolution. It's also an aspect of cultural evolution. So when you say we're constantly bombarded by it, that implies a certain element of victimization by our own creation. We don't need to do that. You know, technology is neutral. You can hack with it, you can mess up an election with it, you can cause destruction with it, you can increase inflammation in the body with it by sending somebody an emoticon that is upsetting to them. Or you can use technology to heal yourself and ultimately heal the ecosystem and the world. So, personally, I am a big fan of technology. If you don't relate to technology, you will become irrelevant. That's a Darwinian principle: either you adapt and use it, or you don't. >> That's a really interesting way of putting it. You're right, if you're not using it and adopting it and being receptive to the positive changes that it can bring in our lives, you will be irrelevant. What are some of your recommendations for everyday people to be able to use it for getting more centered, rather than, oh, I have to check my email, I have texts I have to respond to? >> So my approach is, I schedule the activity. Every day I have technology time, morning and afternoon. I have relationship time, I have meditation time, healthy eating time, playtime, recreation time, and sleep time. So whatever you're doing, you do it with full awareness. Whether it's technology or speaking to another person, the most important activity in your life is what you're doing right now. The most important person in your life is the one in front of you right now. The most important thing to do with technology is to be fully engaged only when you're doing it; otherwise, schedule it. >> I love that. I love that you have all of these great times scheduled. Part of me wonders how much of this is psychological, about actually controlling yourself? That's sort of common sense, but it's also, in this day and age, one of the hardest things to do. Here we are at a conference about business spend management, where Coupa is talking to businesses in every industry about, you need to have control over your budget, over your spend. It's sort of the same thing with technology.
How do we actually use it to establish those schedules, establish that control that allows us to take advantage of it but also allows us to sit back, relax and enjoy the now? >> You know, I don't like the word control, obviously. My word for that is: be aware. So be aware of yourself, and be aware of the fact that everything that's happening to you in the world is a reflection of yourself. So if you find the world insane, then question your sanity. If you find the world melodramatic and hysterical, question your aspect of melodrama and hysteria. If you find the world centered, it's because you're centered. And so the most important thing is self-awareness, period. >> I like that, and you're right. That's a much more useful word than control, awareness. It's a more peaceful, I think, more action-taking word. So, I listened, you started a podcast series this year, Infinite Potential. So I know that you're not only using technology to continue reaching the folks who've been following you for many years, but now a new audience, getting to tell stories in a different way. And I heard a two-part podcast where you were talking about AI, and so one of the things that I wanted to talk to you about is this: how are you leveraging AI to share your daily reflections, reach a bigger audience and help us become more aware? >> So my personal interest all my life, as you mentioned, is well-being and personal transformation. I'm using deep learning, artificial intelligence, augmented immersive experiences, virtual reality, biological feedback, neuroplasticity, epigenetics, all as a means for well-being and personal transformation. So the future of well-being is very precise, it's very personalized, because no two people react to the same stimulus, whether it's a diet or a compliment or an affront, in the same way. Artificial intelligence can, if you want, help me know everything about you. Everything: how your mind works, how your emotions work, how your body works, and the relationship with that. So one of the things I'm examining right now is the two million genes in our body which are not human, which are microbial. It's called the microbiome. It's actually as significant as human genes in determining your state of well-being. By analyzing the microbiome through artificial intelligence and deep learning, you can tailor well-being interventions very personally and very predictably, and of course, requiring your participation, you become your own healer, or co-healer in a sense. Artificial intelligence for deep learning of gene expression, not just genes, because genes are not nouns, they're verbs. What are they doing? What are they up to right now? Are the genes that are responsible for healing active? Are the genes that are responsible for inflammation or disease inactive? What most of your audience may not know is that only 5% of genetic mutations that give rise to disease are fully penetrant, which means only 5% guarantee the disease, if you have, say, a BRCA gene for breast cancer, you're going to get breast cancer. For that, with new technologies like CRISPR, you'll be able to read the barcode of a gene, cut out or delete the harmful gene, insert the healthy one, and so that will solve that problem. And it's happening very soon, it's in the works. But 95% of illness, even with the genetic mutations that predispose you to illness, is not predictable; it depends on your lifestyle. Now, in the past you couldn't measure that. Today you can. You can measure sleep.
You can measure dream sleep, deep sleep. You can measure exercise. You can measure heart rate variability. You can measure gene expression, and you can digitize the whole thing. So with that, we have an amazing new frontier in medicine. The traditional model of pharmaceuticals has very limited application, only in acute illness. The future of treatment will be through technology. So in five years, you go to a doctor's office, they might give you a VR session instead of writing a prescription. >> Well, a lot of advanced technologies are being utilized now in medicine, seeing a doctor virtually through a computer, exactly, telemedicine, being able to treat more people faster. But it's like we're in the first minute of it. >> We're in puberty. Yeah, you know, puberty is a time of challenge. >> True, and so we're in the adolescence of our use of technology, and it's getting richer. So when we look at all of the applications for the emerging technologies that you mentioned, there's so much good that can happen. We can become so much more aware of our own... and take, don't take control, I know you don't like that word, but take ownership. >> Influence. >> Influence, yes. If we look at some of the negative consequences of artificial intelligence and machine learning, I was fascinated by your podcast with Christopher Wylie and how incredibly potent Cambridge Analytica was in changing the course of American history. >> And it could ruin democracy. Yes. So we need to have surveillance, we need to have, you know, codes for keeping it secure. >> Yes. >> So even these problems, by the way, can be solved by technology. >> They can. It's sort of a catch-22, isn't it? >> Yes. >> But at the same time, here we are, freely, as just consumers. And one of the things that Coupa is talking about is making a purchasing decision, making buying management in business, as easy as it is for us consumers. You know you need something, you go on amazon.com and there it is, click to buy. It shows up so quickly you've forgotten what you ordered. It's like your birthday. So there are so many advantages. At the same time, it's creating a lot of challenges.
look at the data, start being receptive to the fact that change is happening, and that we could harness the power of it for so many good applications? >> It's not worth arguing with them. Data helps, but scientific data never created a broader revolution. You need data, you need science, but you also need collective emotional connection. If you don't have that emotional and spiritual connection, if you don't see that the air is your breath, if you don't see that the rivers and the waters in the ocean are your circulation, if you don't see that the earth's recycling is your body, if you don't see that what we call the environment is your extended body, you have a personal body and a universal body, and if you're not emotionally tied to that, then scientific data doesn't matter. >> Such an interesting concept. We just think, well, the data's there, it shows this, therefore it is. What you're saying is we have to have an emotional connection. >> Yes. Data by itself, science by itself, facts by themselves don't change the world. But when facts are tied to an emotional story, everything changes. >> So, wrapping things up here, I know that you are working to create a version of Dr. Deepak Chopra that will live forever, that will be able to continue to inspire many generations. >> I have been working on this. It's actually a stealth project, so I can't give details, but I've been working on this for more than a year now, and where we are is, I will soon have a version of myself, my mind twin, that will know everything that I've ever said, but will also, through deep learning, continue to learn, and will live for generations after I'm gone, or perhaps eternally, and will communicate with the world even when I'm physically not present. And because it will be learning as we go along and incorporating everything into my take on what is reality, what is fundamental reality, what is consciousness, it will be much smarter than I am. >> So you think that AI and consciousness are really going to be able to merge together, to continue to evolve? Rather than the way AI takes data from the past and the present to try to predict the future, you see them living symbiotically? >> I do. But we have to be careful here: it will never have subjective consciousness. Okay? Never. It may replicate insight and intuition and creativity and even vision, but it won't be able to fall in love. >> That's good, I was a little worried about that one. >> And it will not be able to address experientially what comes from meditation and other reflective inquiries that transcend human thought. So, you know, science is a system of thought, just like mythology, religion, philosophy, and theology are systems of thought. No system of thought can actually access reality till you go to the source of thought, which is consciousness. >> The source of thought. Dr. Deepak Chopra, what a pleasure to have you on the Cube. Thank you so much for joining me this morning. I know you've got to get to your keynote, but it was very much a pleasure. >> Thank you, my pleasure. >> Excellent. For Dr. Deepak Chopra, I'm Lisa Martin. You're watching the Cube from Coupa Inspire '19. Thanks for watching.

Published Date : Jun 26 2019



Teresa Carlson, AWS | AWS Public Sector Summit 2019


 

>> Live from Washington, D.C., it's the Cube, covering AWS Public Sector Summit, brought to you by Amazon Web Services. >> Welcome back, everyone, to the Cube's live coverage of AWS Public Sector Summit here in Washington, D.C., our nation's capital. I'm your host, Rebecca Knight, co-hosting alongside John Furrier. We're welcoming back to the Cube an esteemed Cube veteran, Teresa Carlson, vice president, Worldwide Public Sector, AWS. >> Thank you. I really appreciate always being on the Cube, and I appreciate you being here at our public sector summit. >> Thank you for having us. So give us the numbers. How many people are in this room? How many people are here? >> Well, for this time that we're here, there's probably about 13,000 people, and we'll expect a couple of thousand more. I think by the time it's all said and done, we'll have about 15,000 at the conference. Of course, you had my keynote today, and the sessions are all packed, and tomorrow you'll have Andy Jassy here with me doing a fireside chat at 11 o'clock on Wednesday, so I think that room will be overflowing as well, because everybody loves him. >> And Andy's just coming back from a conference for the Silicon Valley elites on the west coast, where he put in a big plug for public sector, which is awesome. You guys are kicking some serious butt. Congratulations. >> Thank you. Yeah, thank you. >> I mean, what's it like for you? You're the leader, you're the chief of the public sector business. You've grown it; it's now at cruising altitude, it seems. >> Well, first of all, none of this would've been possible without Andy Jassy actually believing in the mission of public sector when he hired me in 2010. And you're right, John, we started, you've covered the story, we started with two people at the end of 2010. And now we have thousands of people around the world in, you know, over 35 countries, and customers in 172 countries. And the business is growing at more than 41% every year at AWS, and we're a $31 billion business, with public sector an important component in that business. So for us, here today, it is very meaningful. And the reason it is so meaningful is that it is about our customers, and this is a testament to that: our customers love what AWS provides, and in the public sector business, it is a game changer for their mission. >> We were talking in our intro this morning, Rebecca and I, about this new generation of workers, and that's almost like a revolution against red tape. Why is it in the way? You've got to do better, whether it's management, cloud, health care, you name the vertical, there's a capacity to disrupt and create value. So you have this kind of shift happening, but you guys are also technology leaders. So when you see things like space, these are kind of tell signs: the CIA adopting, the DoD, look at the big contracts coming in, people are working it hard. These are tell signs that the growth is real. >> Yeah, the growth is real, and I like to talk to my leaders about how, while we've had phenomenal growth, and that's fantastic, we really are only getting started. Because now, in 2018, I really saw our customers doing unbelievable work, very hard, mission-critical work, that they were moving from a kind of old environment onto AWS, migrating and totally optimizing it.
Now, what's changing within the intelligence community and DoD is that, you know, in 2013, when the IC made this decision, it started changing even enterprise views of moving to the cloud from a security perspective. And that shift has happened. Now you see DoD moving toward JEDI, which will be announced hopefully in July or August, hopefully soon. But even without JEDI, DoD is making a massive move to cloud. And by the way, there are no blockers now. Like a year ago when we talked here, there were still some blockers for them; today, really pretty much every blocker has been removed so that they can move a lot faster. So even outside of JEDI, we see our DoD customers moving. You heard Kenny Bow today on stage, who's the CIO of the special access programs, talk about what they're doing and why cloud became an important element of their mission. And I can tell you, Kenny works on some very challenging and difficult mission programs for DoD. So these are kind of examples. On the flip side, I met with some CIOs yesterday from state and local government. Now, that has been a super surprising market for me. Actually, 2018 was a true change year for them: massive workloads in the state Medicaid systems that are moving off of legacy systems onto AWS, justice and public safety systems moving onto AWS. So that's where you're seeing moves. But you know what they shared with me yesterday, and my theme, as you saw today, was removing barriers: they talked about acquisition barriers still, that states still don't know how to buy cloud, and they were asking for help, can you help kind of educate and work with our acquisition officials? So it's nice when they're asking us for help in areas where they see their own blockers. >> So what accounts for the fact that these blockers are sort of disappearing? As you said up on the main stage this morning, cloud is the new normal, right? Everyone is really adopting this cloud-first approach. What accounts for the fact that these challenges are sort of slowly dissipating? >> Well, you know, some of the blockers have been very legacy, and I'd have to tell you that the kind of old guard helped create a lot of these models. And most of these models, acquisition as an example, were created so that governments had to pay up front. So these models were like, pay me a lot of money up front, and then let's hope you will use all that technology. So now we come along and say, actually, no, you don't need to pay us anything up front. You can try it and pay as you use it, and then scale that. And they're like, wait, wait a minute, we don't know how to do that model. So part of these things have been created because of old systems, and what's changing those systems is that you can't change gravity: we're at the point where cloud is the new normal, and you cannot change gravity. And they're seeing security; if you think about it, security is the number one reason they're moving to the cloud. Once you start having security issues, they, on their own, start removing blockers, because they're like, we've got to move faster because we want to be secure. >> I know you've got a lot of things going on, you've got customer visits, your time's very tight, appreciate you coming on. But I've got to ask, I want to talk about the Tech for Good program you launched, what happened at the breakfast, all the stories.
>> We could go for an hour on that, but I really want to dig into this Ground Station thing, one of the coolest things I saw at re:Invent when it got launched. It literally reminds me of the old Christopher Columbus days: is the world flat? Now we know the world is round. You have space, space and data. It's going to change the IoT edge to be the whole world, right? So this is a game changer, and I see it as a game changer. We had your GM on earlier, Brett. What's going on with Ground Station? How is that going to help? Because it's almost provisioning backhaul. It's going to help, certainly, rural areas. >> Yeah, we had our Earth and Space Day yesterday, so we kicked that off with two amazing speakers. And the reason Ground Station is so important: by the way, it was a customer of ours in the US intelligence community that told us, about six years ago, that we needed to create this. So you know how I said 95% of our services are customer-driven? It was a customer that said, why doesn't AWS have a ground station? And we really listened to them, worked backwards, and then we launched Ground Station. It became generally available in May, and it is really about creating a ubiquitous environment for everyone for space and satellite communications. So you can downlink and uplink data, but then there's the element of utilizing the cloud to process and analyze that data in real time, and being able to have that wherever you are. It truly is going to be an opportunity for both commercial enterprises and public sector customers. And you know, John, right now, with the pipeline that we have seen already for Ground Station, even I'm surprised at how many of our customers and partners are so interested in that data. >> Governments, think about, like, traffic lights, biosensors, now backhauling all that into a global... >> You know, many different ways. And if you saw the announcement with the Cloud Innovation Center at Cal Poly, we're going to be doing some research with them on space communications and programs around Ground Station. Chile is another location, you've heard me talk about it, that has the most telescopes in the world, and we're going to be working in Chile doing some work on Ground Station there, and in the Middle East. So this is, by the way, global. >> We'll go to Cal Poly together. We're going to go to Chile. >> Chile next. Yeah, Chile is great. So you can get the two best locations with me. I would love that, line it up. >> Thank you so much for coming back. >> And make sure we get all those other dates. >> Yes, because next time I've got to tell you about Tech for Good; there's too much not to talk about. So we have to convene again. >> Come to your office in the next couple months of summer. I'll make a trip down. We'll come to you. >> Thank you all for being here. >> Thank you so much. Thank you. >> Thanks so much, Teresa. I'm Rebecca Knight, for John Furrier. Stay tuned. You are watching the Cube.

Published Date : Jun 11 2019



Brett McMillen, AWS | AWS Public Sector Summit 2019


 

>> Live from Washington, D.C., it's the Cube, covering AWS Public Sector Summit, brought to you by Amazon Web Services. >> Welcome back, everyone, to the Cube's live coverage of AWS Public Sector Summit here in our nation's capital, Washington, D.C. I'm your host, Rebecca Knight, hosting alongside John Furrier. Always a pleasure being with you. >> So good to see you again. >> And we're joined by first-time Cube guest Brett McMillen. He is the GM of Ground Station at AWS. Thanks so much for coming on. >> Great to be here. Thank you. >> So why don't you start by telling our viewers a little bit about Ground Station, what it is and what it does. >> First of all, I'm really excited to be here at this conference. Yesterday we had our second annual Earth Science Day. Last year was really successful, and we're finding a huge amount of interest around space, primarily to help save the earth. And so AWS came out with a solution, which we made generally available last month, called Ground Station. If you think back about 15 years ago, before the commercial cloud came out, for a data center you either had to buy the data center or do a long-term lease. And then we came out with the commercial cloud, and from that point forward there was a tremendous number of innovations that came out of that. I don't think any of us back then could have predicted things like Pinterest or Spotify, or that Netflix would have gone from shipping you DVDs to being the online streaming company. With all those innovations happening, we think that we're at the beginning of that stage for the satellite industry. So what Ground Station is, is a service that you can use like any other cloud service: just pay for what you use, on demand. You can scale up, you can scale down. And we think that we're in the early stages of opening up innovations in this industry. >> And it's satellite-specific. So it's a satellite connectivity service. How does it work? >> So what happens is, you just go into the AWS console and you schedule a contact. Most of these early use cases are for low-earth-orbit satellites or medium-earth-orbit satellites, and we have deployed these satellite antennas. What's really important about this is we put them right next to our data centers, or availability zones, so now you're getting the entire power of the cloud. And so what happens is, you schedule a contact and either uplink or downlink your data during that contact period, and we just charge per minute. >> So it's like EC2 was for servers and S3 was for storage, and the use case was solving the provisioning problem. So you guys are doing it for uplink and downlink, for satellite usage and data over satellite. Pretty direct. >> Correct. And the other thing that's really nice about it is, just like the cloud enabled people to go global in minutes, Ground Station allows you to go global also. Traditionally, what would happen is you would buy a satellite antenna, or you'd lease a satellite antenna somewhere in the world, and you're only catching so many passes of those satellites. We are deploying these at our data centers throughout the world, and so you're able, at a very low cost, to now touch these passes of the satellites. >> You know, Brett, Rebecca and I were talking in the intro around the role of technology, how it's causing a lot of change.
You mentioned that window of 10 years where, before YouTube, after YouTube, all these new services came on. Think about it: those didn't exist before the 2004 time frame, roughly 2004 to 2005. Then the mobile revolution hit. A similar wave is coming into government, and we're seeing it; the Amazon Web Services Public Sector Summit is our fourth year, and it gets bigger. The inclusion of space is a tell sign of the commercialization of some of the tech coming in, infiltrating process change within government, and use cases. So I would agree with you that that's relevant. The next level is what? What's that window? What's going to happen in that 10 years? >> It is hard to predict, but we know from our past experience what we've done in the cloud. We know that when you remove the undifferentiated heavy lifting, like buying servers or doing networks and things like that, it frees people up to do innovations. And when you look at what's happening in the satellite industry, virtually every industry, every person, can benefit from a better understanding of this earth and from satellite imagery and satellite sensing. And so if you start moving forward with that and you ask what can happen: we've got governments throughout the world that are very concerned about deforestation. And so, for example, today they find out about deforestation after the trees are gone. What if you could instead, for a very low cost, download satellite images and get them on more of a real-time basis, or get them in the same hour that the satellite took the picture? Now what you can do is catch the deforestation when the bulldozers show up, not after the trees went down. >> So get in front of it, use the data; it's a data business. Just a question about other use cases, because again, early adopters are usually the developers that are hungry for the resource. We saw that with cloud in the industry I mentioned; now there are thousands and thousands of new services a year from AWS, and Andy Jassy loves to talk about that at re:Invent, and it's pretty impressive. But the early days were developers; they were the ones who found the value, they were thirsty for the resource. What are some of those resources? What's the low-hanging fruit coming in for Ground Station that you could share, that tell sign for where it's going? >> There's interest not only from new developers doing new things, but large, established satellite companies are very interested in it, because, as I was talking about earlier, you can cover areas with our service in ways that were very expensive to do before. Ground Station would have been a little hard for us to roll out had we not first built AWS, if you didn't first have things like EC2 and S3 and ways of storing your data, or our petabyte-scale worldwide network. And so when you look at that, you're able to get multiple different organizations doing some really cool things. We're in partnership with Cal Poly, and Cal Poly's been in the space industry for a long time. Back in 1999, they were one of the inventors of the original CubeSat, and today they have a satellite data solutions initiative, and they did a hackathon. And when you look at all the areas that could benefit from space and satellite data, all kinds of things pop up.
So, for example, if you're a cattle rancher and you have a very large area, sometimes cattle will get stuck in an area like a canyon or something, and you don't find out about it until it's too late. So Cal Poly did this hackathon, and what they came up with is, it's very inexpensive now to put an IoT device on the cows, and with Ground Station you can now download that information, you can communicate through a satellite, and now we can find out where those cows are and get them if they're in a dangerous situation. >> I think the IoT impact is going to be huge. Rebecca, think about what we talked about around IoT. IoT is the edge of the network, but the network's not flat, it's in space. The earth is round, right? So, you know, it's kind of like a Christopher Columbus moment where, if you have the data, all you need is power and connectivity. Battery power is getting stronger every day, long-life batteries, but the connectivity with Ground Station literally makes the new IoT surface area the earth. >> Absolutely. >> I mean, that's pretty groundbreaking. >> This is a really exciting time to be in the space industry. A couple of things are driving it. One is that the capability we're able to put up in space for the same amount of weight and the same amount of payload is increasing dramatically. The other thing that's happening is that the cost for lift, the cost to put satellites into orbit, is dropping dramatically. And so what's happening with those two things is, we're able to get a lot more organizations putting satellites up there. And what's turning out is that there's a tremendous amount of imaging and sensing capability coming down, actually more than humans are able to analyze. And that's where the cloud comes in: you download this information, and then you start using things like machine learning and artificial intelligence, and you can see anomalies and point them out to the humans and say, for example, these bulldozers just showed up, maybe we should go take a look at that. >> You know, imagery has always been a hot satellite thing. You see Google Earth, 3D mapping is getting better. How is that playing into it? Is that a use case for you guys? I mean, you talk about the impact; is that something we can all relate to? >> I would submit that we are in the early stages of that. It's amazing what we can do with imaging today, and everybody on their phones gets Google Maps and all the other things that are out there, but we're in the early stages of what we can do with it. So there are some areas that we're looking at very closely. For example, during the California wildfires last year, NASA worked on something to help out the people on the ground. With Ground Station, what you'll be able to do is do more downloads and get more information on a more real-time basis, and you'll actually be able to look at this and say the wildfires are happening in these areas, and help the citizens with escape routes, and help them understand things that were actually hard to determine from the ground. And so we're looking at this for natural disasters as well as day-to-day solutions. >> It's such an exciting time, and you're pointing at so many different use cases that have a lot of potential to really be game changers. What keeps you up at night about this, though?
>> I mean, I think, as we know, there are a lot of unintended consequences that come with these new technologies, and particularly with the explosion of these new technologies. What are your worries? What are the future perils that you see? >> So we definitely are working with these agencies of the federal government, and commercial ones, on making sure that you can secure the data. But again, that was one of the benefits of starting with AWS: we started with security being a primary part of what we did. And so with Ground Station, you do a satellite uplink or downlink, and then you immediately tell it where in the world you want the data to be stored. So, for example, we could download it in another part of the world, and then you can bring it back to the United States and store it in what we call a virtual private cloud; it's a way for our customers to be able to control their environment securely. And so we spend a lot of time explaining to people how they can do that and how they can do it securely. So, well, it doesn't keep me awake at night, but we spend a tremendous amount of time working with these organizations, making sure that they are using best practices when they're using our solution. >> Talk about the challenges you mentioned, storing the data securely, the role of policy. We're living in a world now where the confluence of policy, science, tech, and people is all kind of exploding; it's fueling innovation but also meeting challenges. What are some of the things that you guys are doing? Is the bar improving? I mean, I'll say it's early days, so you're seeing areas to improve. What are some of the areas that you're improving on, that are being worked on now, that have impact?
>> So you mentioned the policy side of it. What I'd like to say is, any time there's a new technology that comes out, we have to do some catching up from, you know, the policy and regulatory point of view. Right now, because the satellite industry is moving so fast, there are scale issues. So governments throughout the world are looking at the number of satellites that are going up and the number of communications that are happening, and they're working with that scale. And I'm very proud to say that they're reacting fairly quickly. That's one of the areas where I think we're going to see more as this industry evolves: having things like antennas and satellites certified quickly is one of the things that we need to tackle. >> Some base infrastructure challenges. I mean, consider space kind of infrastructure. At this point, there's plenty of room up there currently, but you can envision a day with a zillion satellites up there at some point. But that gets set up first. You're saying the posture of the government is pro-innovation in this area. >> Oh, we're seeing a lot of interest in that. When we launched Ground Station, governments both here in this country as well as throughout the world were very interested in this, and they see the potential in being able to make satellites and satellite imagery and detection available. And it's not just for the largest organizations like the governments; it's also that, when you commercialize this, we've made it so that small and medium-sized businesses can now get into this business and do innovative things. >> A question I want to ask; you know, we're tight on time, Rebecca, but we'll get this out. In your opinion, what do you think the modernization of public policy and governments means? Because it depends on your definition of what modernization is. This seems to be the focus of this conference here at AWS Public Sector Summit; this is the conversation we're having with other agencies. They want to modernize. What does that mean to you? >> It takes on many things, many perspectives. What I find a lot is, modernization is helping your workers be more productive. And we do this in a number of different ways. So when you look at Ground Station, the real benefit of it isn't, can I get the image, can I get the data, but how can I do something with it? And so when you start applying machine learning and artificial intelligence, now you can pinpoint anomalies that are happening, and now you can have the people really focus on the anomalies and not look at a lot of pictures that are exactly the same. So when you look at modernization, I think it's synonymous with, how do we make the workforce that's in place more productive? >> And find those missing cows. Brett McMillen, thank you so much for coming on the Cube. >> Thank you. It was a pleasure. >> We've got a lot of great guests coming up. We've got Teresa Carlson, Jay Carney. >> Yeah, General Keith Alexander, about how data is being used in the military. We've got Ground Station connectivity. I really think this is a great opportunity for IoT; wait to see how it progresses. >> Excellent. Thank you. I'm Rebecca Knight, for John Furrier. Stay tuned to the Cube.

Published Date : Jun 11 2019



Marco Bill-Peter, Red Hat & Dr. Christoph Baeck, Hilti | Red Hat Summit 2019


 

>> Live from Boston, Massachusetts, it's the Cube, covering Red Hat Summit 2019, brought to you by Red Hat. >> Welcome back to the Cube, continuing coverage here at Red Hat Summit 2019, day three of our three days of covering some nine thousand attendees, great keynotes, great educational sessions, and a couple of great guests for you to meet. I'm John Walls, and we're joined by Marco Bill-Peter, who is the senior vice president of customer experience and engagement at Red Hat. Good to see you, Marco; nice job on the keynote stage this morning. And Dr. Christoph Baeck, who is the head of infrastructure at Hilti. Christoph, thank you for being here as well, hailing from Liechtenstein. And we think you're the first guest from Liechtenstein; we'd have to check our database, but we may have set a new record today, so thanks for adding to that. First off, let's talk about Hilti. I'm sure people who don't know Hilti have still seen your tools, but this building probably wouldn't be here without you; I'd imagine half the city wouldn't be here without you. Just tell folks at home a little bit about where you fit into construction. >> Hilti was founded in the nineteen forties in the Principality of Liechtenstein and is today a leading supplier for the construction industry. We provide tools, consumables, services, and software solutions for professional construction companies. That is everything from hammer drills to anchors to calculation software, and overall complete services for the industry. That's what Hilti is doing. >> So you did a very good job this morning on the keynote of painting that picture about the scope of your work and the necessity of it, the vitality of it. Because construction projects, as we all know, have very strict deadlines. Sometimes they have unique needs, immediate needs, emergency needs, and you're in the center of all that. And so your technology is central to your general operation. >> Absolutely, yes. I mean, with twenty-nine thousand employees and twenty-five thousand users in our system, basically everybody, or the vast majority of users, is using SAP every day. We have ten thousand concurrent users every day on our system that deal with customer requests, with orders, with quotes, but also, of course, with complaints, with repair handling, and so on. >> So Marco, I hear SAP, and, you know, bring me back to when Linux was that stuff that sat on the sidelines. We're well past that; you've got so many customers that run their business, mission-critical, around the globe. Just give us a little bit of background on the partnership between Hilti and Red Hat, and solutions like SAP. >> Yes, sure. The partnership with Hilti goes back to, I think, two thousand seven for me. Personally, I started working with Hilti for another company in ninety-three, so I know Hilti quite well; I actually studied in the same town, just across the border from Liechtenstein. And it's amazing to see the journey: in two thousand nine going all SAP, mission-critical, on RHEL, and now actually moving to SAP S/4HANA. And yes, Hilti is one of the early clients, but it kind of shows that we can handle these mission-critical applications and mission-critical customers, and we built this good relationship to make sure they have the courage to actually do these bold jumps, like in the last six months.
>> Christoph, you know, you've got a broad role at the company. We talk to so many companies about becoming a tech company, about becoming a software company. Well, software is critical, but at the end of the day, infrastructure and running your business is core. You're not going to become a fully digital software company; you have real stuff in the physical world, lots of people and lots of physical things. Talk a little bit about that balance, and how the company has been changing over those last ten years. >> To be open with you, I was really excited when our executive board, a couple of years ago, besides tools, consumables, and services, also added software as a strategic pillar for Hilti. And while I believe that software will be an interesting pillar for us, one that will generate additional revenue and additional sales, software also becomes more and more important in the consumables and tools and services pieces. When you look at the journey of building a building like this, as you mentioned, John, it starts with specifying, it starts with the planning and CAD, and it ends at the end with asset management, where are the tools, and so on. So it's a complete life cycle throughout the construction of a building. >> You know, Marco had mentioned that you made this decision to migrate to HANA last year, right, twenty eighteen, or rather you migrated last year; wasn't the decision made before that? Talk about that a little bit, if you would, please, and where Red Hat fit into that. Because that's not a small decision, right? I mean, that's very calculated, and I wouldn't say risky, but it's just a big move, and so talk about the confidence you had as well that Red Hat was your partner to make that happen. >> Absolutely. I mean, the announcement by SAP to support HANA as the only database after twenty twenty-five was one of the factors that pushed us in that direction; it was then clear for us that we wanted to go there. And it was also pretty clear for us that, at our size, it was not that easy to move in twenty twenty-three or something like that, but that we had to be among the first movers to be fully supported by SAP and all these partners, because later on they will be busy migrating all the big shots. So we took the decision to move first and soon, and that allowed us to be in the focus of all these attached partners: SAP, but also Red Hat, also EMC for storage and HP for servers. That meant that we had confidence that we'd have the full attention of all these providers and partners to help us migrate. On the other hand, it was clear that the journey we started in two thousand nine, as indicated by Marco, where we moved to open software and commodity, Intel-based server hardware, was a move that had paid off in the past, and we didn't want to go away from that and move again to proprietary hardware or software solutions. So it was very clear that we wanted to do that jointly with Red Hat on commodity, Intel-based servers, and that's how we went there.
>> So, Christoph, a big theme we hear not only at this show but at almost every show we go to is that today customers live in a hybrid and multi-cloud world. I see SAP at all of the big cloud shows that we cover. Where does cloud fit into the overall discussion at your company? And then we can drill down to the specifics of SAP and Red Hat. Do you have a cloud strategy, as it were? >> Oh, yes. You know, we moved fairly soon to Amazon with all our customer-facing workloads. So when you go to hilti.com or any of our web pages, you typically land on an AWS-powered website, because that gave us the flexibility of operating systems, of databases, of whatever we needed. With our internal workload, however, so all the software we use internally to run the company, we have a world that is split between SAP, which runs entirely on Red Hat, and the rest of the workload, which is to a large degree Windows-based. So there, we decided a few years ago to move to the Microsoft Azure platform, to move the internal workload into Azure, as it is mainly Windows-based. >> So Marco, actually, I want to depart from Hilti for a second. Just give us a little bit of a broad view. You know, we've talked to you many times, and you talked about it on stage: the customer experience is a critical piece of Red Hat's mission out there. When I talk to customers today, one of the biggest changes they've seen in the last few years is, I'm managing a lot of stuff that's not in my environment. It's stuff I'm responsible for, and if something goes wrong, I'm absolutely getting a call, but it's not my network, it's not my servers, it's not my piece there, yet I have to deal with all of them. I've got to imagine that's been a transformation for Red Hat and the partnerships, and you're everywhere, so just give us a little context. >> Yeah, I mean, you described it very well, right? I think two years ago it was just some use cases in the public cloud, but today the hybrid cloud is here, right? And everybody does it. It's not just one company, from a customer experience standpoint; like I mentioned on the stage, it gets harder, right? And you've got to have these partnerships. We can talk about Azure: we have people in Redmond, right? And then everything changed with what we announced on stage here, but we have had support people at Microsoft for the last two or three years. Same with SAP, as an example: we actually built a fairly large team that works closely with them, whether that's on SAP, on cloud, or on regular RHEL cases in general. Yes, there are challenges; you mentioned networking, right, it gets tricky. But it's unavoidable: it shifted from, okay, we own and control the stack, to, yes, you need to know your open source software and have real partnerships. And I think the announcement with Microsoft, to have this managed services offering that we do jointly, that's what we're driving, so that we can do this better together with partners. >> Marco, it's great to hear that from you, but Christoph, pretend he's not listening. Tell us the reality. You've worked with Red Hat for ten years, you're going to cloud: how are they doing? How's the ecosystem, the vendors in general? They're all up on stage holding hands; I mean, it's seamless and nobody ever points fingers, I'm sure. >> To be very, very honest with you,
I appreciated last year hearing that Red Hat would be offered in Azure. I mean, it was not possible to mention those two company names in one sentence in the past, at least for us as customers, and it was a bold statement last year that those two parties would suddenly join. That fits very well into our strategy, because we believe the internal workload for Hilti should run in Azure. Seeing Microsoft and Red Hat last Tuesday shaking hands and moving even beyond that was, for me, almost the most exciting event here, the most exciting statement that I saw during these few days, because it reemphasized the close relationship that those two have, and that exactly fits our road map. That's exciting. >> And, you know, we heard that again from both CEOs, saying customers really kind of brought us together, they made this deal work, because we kept hearing that they love us and they love you, and they like us together. So we got that, we understand that. So, Marco, customers drove that to a certain degree. You've got a customer here who made this big HANA jump, which is, you say, the small guy, and I would beg to differ a little bit, but you had them before the big guys did. What does an initiative like that do for you at Red Hat, in terms of carrying that over to other customers? Now you've learned from one, you've seen what they've gone through. What kind of confidence does that give you? What kind of insight does it give you about how to approach this going forward? >> Absolutely. You know, I'll give you one example, right? If you move to HANA, and even closer to Christoph: Hilti uses systems that have twelve terabytes of memory. Think about it, those are fairly large systems, and with that footprint we tried to actually test our software. And then even think about the next journey: if you want to do this in the cloud, what does that mean, taking a twelve-terabyte image and running it in AWS? And so, since my team also does quality assurance and product security, for them it's as well, okay, we've seen what Hilti can do at work; how do we actually make this more robust? How do we test harder? And how do we do that on this journey? I think I'm pretty proud of how we actually learn from these instances, and Hilti is not the only one, it's just one of the public ones. I think that's the only way you survive in this industry, if you really learn continuously and also apply it. I mean, our whole setup evolved, or shifted completely, and not just on the people side. So we have people that do OpenShift, there are people that do Linux and performance, but also infrastructure, and we really make sure that they're set up for success and know what's next, because customers will obviously go through a journey like this over the next ten years. >> Christoph, obviously being on stage, you know, is good for the company, but on coming to Red Hat Summit, just give our audience, if they haven't come to it, some of the value you place in it, some of the activities that have excited you most here this week. >> I mean, one thing is, of course, hearing about the latest technologies, new releases of software, new possibilities and opportunities for us as customers from Red Hat. But also, it's great to see how, on the floor out there, other partners and customers mingle around the ecosystem that was created around open software, about not only the operating system but also containers, all these different technologies, which will have an important role for all of us in the future. >> Sure. Well, good week, that's for sure. Very nice job on the keynote stage to both of you, and good luck with the partnership on down the road. And again, I would make the distinction that while the little guy got in early, Hilti is no small fry in our world, that's for sure. Thanks for the time, Christoph, Marco. >> Thank you. Thank you very much. >> Back with more; we're live here in Boston, and we're covering Red Hat Summit 2019 on the Cube.

Published Date : May 9 2019



Teresa Carlson, AWS | AWS re:Invent 2018


 

>> Live from Las Vegas, it's theCUBE, covering AWS re:Invent 2018, brought to you by Amazon Web Services, Intel, and their ecosystem partners.

>> Hey, welcome back everyone, this is theCUBE's live day three coverage of Amazon Web Services AWS re:Invent 2018. We're here with two sets, Dave. Six years we've been covering Amazon, every single re:Invent since they've had this event except for the first year, and we've been following AWS really since its inception. One of my startups that I was trying to launch, and never got going years ago, went EC2 when it launched, when it was still command-line, so we know all about it. But what's really exciting is the global expansion of Amazon Web Services, the impact that not only the commercial business but the public sector and government business is having, changing the global landscape. And the person who I've written about many times on Forbes and on SiliconANGLE, Teresa Carlson. She's the vice president of Amazon Web Services public sector. Public sector, great to see you.

>> Hi, hi John, great to be here again, as always.

>> So the global landscape. Public sector used to be this "do this, do that" world, you've talked to us about it many times. The digital environment and software development growth is changing all industries, including public sector. You've been doing a great job leading the charge. The CIA was one of the most pivotal deals. When I asked Andy Jassy directly, in my one-on-one with him, about his proudest moments, one of them is the CIA deal. When I talked to the top execs in sales, Carla and other people in Amazon, they point to that seminal moment when the CIA deal happened. And now you've got the DoD, a lot of good stuff. So how do you top that? How do you raise the bar?

>> Well, you know, it still feels like day one, even with all that work and that effort and those customers. Kind of going back to go forward: in 2013 when we won the CIA opportunity, they were just an amazing customer, and the entire community is really growing. But there's so much more at this point that we're doing outside of that work, which is being additive around the world. And as you've always said, John, that was kind of a pivotal deal, but now we're seeing so many of our government customers. We now have customers in a hundred and seventy four countries, and I have teams on the ground in 28 countries, so we're seeing a global move. But you know, at my breakfast this week we talked a lot about one of the big changes I've seen in the last 18 months, which is state and local government, where we're seeing states actually making a big move: California, Arizona, New York, Ohio, Virginia. So we're starting to see those states really make big moves and really look at applications and solutions that can change that citizen-services engagement.

>> And I think these state and local governments, I won't say they're poorly funded, but they're not funded like the financial services sector. With less money, they've got to be very efficient. Cloud's a perfect opportunity for them, because they can be more productive and do a lot of good things.

>> And there's 20 new governors coming on this year. We've had a lot of elections: lots of new governors, lots of new local council members coming in. A lot of times you'll see a big shift when a governor comes in and takes over, or if there's one that stays in and maintains, you'll see that program continue. I was just in Arizona a couple weeks ago, and the governor of Arizona has a really big push toward modernization and utilization of information technology, and the CIO of the state of Arizona is awesome.
They're doing all this transformative work with the government. And then I was at Arizona State University the same day, where we just announced a Cloud Innovation Center for smart cities, and I went around their campus, and it's amazing, they're using IoT everywhere. You can go in their football stadium and you can see the movement of the people, how many seats are filled, where the parking spaces are, how much water's been used, where Sparky, their mascot, is. I got to meet Sparky, which was fun. But you're seeing these kinds of things, and all of that runs on AWS, and they're doing all the analytics. They're going to continue to do that, one, for efficiency and knowledge, but two, also to protect their students and citizens and make them safer through the knowledge of data analytics.
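The stadium picture Carlson paints here, with seat counts, parking, and water usage flowing into analytics, is the classic IoT telemetry pattern. As a rough illustration only (the topic name, payload fields, section number, and region below are hypothetical, not details from the interview), a venue sensor publishing one reading to AWS IoT Core might look like this minimal sketch:

```python
# Hypothetical sketch: a stadium sensor publishing occupancy telemetry
# to AWS IoT Core, where it can feed downstream analytics.
# Topic name, payload fields, and region are illustrative assumptions.
import json
import boto3

# "iot-data" is the data-plane client used for publishing MQTT messages.
iot = boto3.client("iot-data", region_name="us-west-2")

reading = {
    "section": "204",            # hypothetical stadium section
    "seats_filled": 412,
    "seats_total": 500,
    "water_liters_used": 1730.5,
}

# Publish one reading; IoT rules on this topic could route the message
# on to storage or analytics services.
iot.publish(
    topic="stadium/occupancy/section-204",  # illustrative topic
    qos=1,
    payload=json.dumps(reading),
)
```

The point of the sketch is how small the device-side surface is: a sensor only needs to publish a JSON payload to a topic, and everything Carlson describes downstream happens in rules and analytics on the cloud side.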
>> You know, to John's point about funding, and sometimes constricted funding at state and local levels, and even sometimes the federal level. We talked about this at the public sector summit, and I wonder if you could comment. Amazon in the early days helped startups compete with big companies; it gave them equivalent resources. It seems like the distance between public sector and commercial is closing because of the cloud. They're able to take advantage of resources at lower cost that they weren't able to before.

>> It's definitely becoming the new normal in governments, for sure, and we are seeing that gap closing. This year, 2018, for me was a year that I saw big moves to cloud, because in the early days it was website hosting, kind of dipping their toes in. This year we're talking about massive systems being moved to the cloud, big re-architecting and design. And a lot of people say, well, why do they do that, that costs money? Well, the reason is that they may have to re-architect and design, but then they get all the benefits of cloud through the things you saw examples of this week: new types of storage, new types of databases, data analytics, IoT, machine learning. In the old model they were just stagnated where they were with that application. So we're seeing massive moves with very large applications, and that's cool to see, our customers in public sector making those big moves. And then the outputs, the outcome for citizens, taxpayers, agencies, that's really the value. Sometimes that's harder to quantify or justify in public sector, but over the long term it's going to make a huge difference in services. And one of the things I announced at the breakfast was our work on helping the agencies with that ATO process, the authority to operate, which is a big deal, and it costs a lot of money, often with long timelines and processes. We've been working with companies like Smartsheet, which we helped get through this in less than 90 days to go live. So now we're working with our partners like Telos and Rackspace and our own model. That's one of the things you're also going to see.

>> So you're taking your knowledge of the process, trying to shrink that down time-wise, and extending it forward to the partners.

>> Yes, to help them through the journey. Fast, move fast, and just keep it going. That's really the goal, because they get very frustrated if they build an application that takes forever to get that security, that authority to operate, because they can't move out into full production unless that's completed.

>> And this could make or break these companies; these contracts are so big.

>> Oh yeah, I mean, it's significant, and they want to get paid for what they're doing and the good work, but they also want to see the outcome and the results.

>> Yeah. I've got to ask you, what's new on the infrastructure side? We were in Bahrain for the region announcement, exciting expansion there. You've got new clouds, GovCloud East. That's up and running?

>> That's been running, announced; customers are in there. They're doing their DR, their COOP, running applications. We're excited. Yes, that's our second region, based on a hundred and eighty five percent year-over-year growth of the GovCloud region west.
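For readers wondering what "doing their DR, their COOP" across the two GovCloud (US) regions looks like in practice, here is a minimal sketch of one common building block: copying an EBS snapshot from the west region into the east region. The snapshot ID is made up, and the code assumes credentials for the GovCloud partition; it is an illustration of the pattern, not anything Carlson specified.

```python
# Hypothetical sketch: one piece of a DR/COOP posture across the two
# AWS GovCloud (US) regions mentioned above -- copying an EBS snapshot
# from us-gov-west-1 into us-gov-east-1. Assumes GovCloud credentials.
import boto3

# The client is created in the *destination* region; copy_snapshot
# pulls the snapshot across from the source region.
ec2_east = boto3.client("ec2", region_name="us-gov-east-1")

resp = ec2_east.copy_snapshot(
    SourceRegion="us-gov-west-1",
    SourceSnapshotId="snap-0123456789abcdef0",  # illustrative ID
    Description="DR copy for continuity-of-operations testing",
)
print("Copied snapshot:", resp["SnapshotId"])
```

Running copies like this on a schedule, alongside replicated data stores and standby infrastructure in the second region, is the kind of disaster-recovery and continuity-of-operations work the new east region enables.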
>> That's been great to read about. I read an article on the web from General Keith Alexander; he wrote an op-ed on the rationale the government's taking in looking at the cloud, looking at the military, looking at the benefits for the country around how to do cloud. You guys are also competing for the JEDI deal, which is not just a single-source contract; they want to have one robust, consistent environment. So between General Keith Alexander's story and the public statements outlining the benefits of staying with one cloud, how is that going? How's that JEDI deal going?

>> Well, there are two points I'd like to make on this. First of all, we are really proud of DoD. They're just continuing to move, they're sticking with their model, and everything happening around JEDI is not slowing them down. So the one piece: yes, JEDI is out there, and they need to complete this transaction. But the second part is that it's not slowing us down to work with DoD. In fact, we've had great meetings with DoD customers this week, and they're actually launching really amazing cloud workloads now. What's going to be key for them is to have a platform where they can consistently develop and launch new mission applications very rapidly, because they were kind of behind. Their model right now is to take rapid advantage of cloud computing for those warriors, those war fighters out in the field, whom we can really help every day. So I think General Alexander is spot on; the benefits of the cloud are really going to show their merit at DoD.

>> I have to say, as an analyst, you know you guys can't talk about these big deals, but when competitors contest them, information becomes public. So in the case of the CIA, IBM contested; the Judge Wheeler ruling was just awesome reading, and it underscored Amazon's lead at the time. It forced IBM to go out and pay two billion dollars for SoftLayer. The recent Oracle contest, and the GAO's ruling there, gave a lot of insights. I would recommend reading it. My takeaway was that the DoD, the Pentagon, said a single cloud is more secure, more agile, and ultimately less costly. So that decision was on a very strong foundation, and we got insight that we never would have been able to get had they not contested.

>> Well, and remember, one of the points we were talking about earlier was the authority to operate, that ability to go through the security and compliance to get it launched. If you throw a whole bunch of stuff at an organization, and they're struggling with one model, how are they going to handle a hundred models all at once? So it's important for DoD to have a framework they can deliver in real time.

>> First of all, as a technical person, with operating systems as my background, it makes total sense to have that cohesiveness. But the FBI gave a talk at your breakfast on Tuesday morning, Christene Halverson.

>> Yeah, she's amazing.

>> And she pointed out the problems they're having keeping up with the bad actors, and she said, quote, "The FBI is in a data crisis." She pointed out the bad things that happened in Vegas, the Boston Marathon bombing, and how long it took to put the puzzle pieces together, and Amazon shrinks that down. If post-event is that hard, imagine what the DoD has to do in real time. So this is pointing to a new model; it's a new era.

>> Well, one of the themes was tech for good, and the FBI example is a perfect example of us helping them move faster to do their mission, versus continuing to do what they've always done: using old technologies that don't scale, buying things they may never use, instead of being able to test and try quickly and effectively. Test, fail fast, recover, and then use this data. And FBI, I will tell you, it is brilliant, the name of this program, Sandcastle, that they've used to do all this data analysis. She talked about time to mission, time to catch the bad guys, time to share that analysis and data with other groups, so they could quickly disseminate and get to the heart of the matter, and not sit there and say, wait on it, wait on this bad guy, while we go over here.

>> Time to value, completely. That's what Amazon is on, whether it's commercial or government. Talk about value, great. You guys have a short-term opportunity to nail all these workloads, but in the Amazon fashion there's always a wild card. I was so excited, Dave and I interviewed Lockheed Martin yesterday, and this whole Ground Station thing is so cool, because it's kind of a Christopher Columbus moment: the world isn't flat, it doesn't have an edge, satellites can power everything. There's space involved, there's a space company, Space Force right around the corner. You're in DC, what's the excitement around all this? What's going on?

>> We surprised a lot of people with that announcement, Lockheed Martin and DigitalGlobe. We even had DigitalGlobe in with Andy when we talked about AWS Ground Station and Lockheed Martin Verge. The benefit of this is two amazing companies coming together, one that knows cloud, analytics, and storage, and now we're taking a really hard problem with satellites and making it almost as a service, as well as Lockheed doing their CubeSats and making sure there is analysis of every satellite that moves, at all points in time, with no disruption. We're going to bring that all together for our customers, for missions that are critical at every level of government, research, and commercial entities, and it's going to help them move fast. That is the key: move very fast. Every mission leader you talk to who has these kinds of programs will say, we have to move faster, and that's our goal, bringing commercial best practices.
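Ground Station's "satellite almost as a service" idea eventually surfaced to customers as an ordinary AWS API. As a hedged sketch only, and note that the API itself postdates this 2018 announcement, while the ARNs, ground-station name, account number, and times below are invented for illustration, reserving antenna time for a satellite contact looks roughly like this:

```python
# Hypothetical sketch of "satellite as a service": reserving an antenna
# contact through the AWS Ground Station API. All ARNs, IDs, names, and
# times are illustrative assumptions, not values from the interview.
from datetime import datetime, timedelta, timezone

import boto3

gs = boto3.client("groundstation", region_name="us-east-2")

# Ask for a ten-minute contact window starting six hours from now.
start = datetime.now(timezone.utc) + timedelta(hours=6)

contact = gs.reserve_contact(
    satelliteArn=(
        "arn:aws:groundstation::123456789012:satellite/"
        "11111111-2222-3333-4444-555555555555"  # illustrative satellite
    ),
    missionProfileArn=(
        "arn:aws:groundstation:us-east-2:123456789012:"
        "mission-profile/66666666-7777-8888-9999-000000000000"
    ),
    groundStation="Ohio 1",  # illustrative ground-station ID
    startTime=start,
    endTime=start + timedelta(minutes=10),
)
print("Reserved contact:", contact["contactId"])
```

That is the whole "hard problem made easy" pitch in miniature: instead of building and operating an antenna site, a customer schedules time on a shared one with an API call and pays per contact.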
>> I know you've got to run, we've got less than a minute left, but I want you to do a quick plug for the work you're doing around space in general. You had a special breakout at your public sector summit. What's going on in the space area, your involvement? Give it to us quick.

>> Yeah, so we will have it again this year. Our first ever, the day before our public sector summit, we had an Earth and Space Day, where we really brought together all these thought leaders on how to take advantage of the commercial cloud services out there to help both these programs and research observatories, in any way, shape, or form, with their data sets. It went great. We worked with NASA, and while we were here we actually had a little control center, with a stream from NASA JPL, where we literally sat and watched the Mars landing, Mars InSight, which we were part of, and so was Lockheed Martin, and so was DigitalGlobe. That was a lot of fun. So you'll see us continue to really expand our efforts in the satellite and space arena around the world with these partnerships.

>> Well, you're super cool and relevant. Space is cool, and you're doing great, relevant work with Amazon. I wish we had more time to talk about all the mentoring you're doing with women, the tech for good work, so many great things going on.

>> I need to get you guys at all my public sector summits. In 2019 we're going to have eight of them around the world, and it was so fantastic having theCUBE in Bahrain this year. It was really busy there, and I think we got to see the level of innovation that's shaping up around the world with our customers.

>> Well, thanks to the leadership that you have; Amazon as a company is changing the industry. TheCUBE will be global, and we might see cube regions soon. If Lockheed Martin can do it, theCUBE could be there, and they have CubeSats. Thank you for coming on, Teresa Carlson, making it happen, really changing the game and raising the bar in public sector globally with cloud. Congratulations, great to have you on theCUBE. As always, more CUBE coverage; Andy Jassy coming up later in the program. Stay with us for day three coverage after this short break. (upbeat music)

Published Date : Nov 29 2018

**Summary and Sentiment Analysis are not shown because of an improper transcript**

ENTITIES

Entity | Category | Confidence
Christene Halverson | PERSON | 0.99+
Theresa Carlson | PERSON | 0.99+
2013 | DATE | 0.99+
Amazon Web Services | ORGANIZATION | 0.99+
Teresa Carlson | PERSON | 0.99+
Andy Jasmine | PERSON | 0.99+
Carla | PERSON | 0.99+
Dave | PERSON | 0.99+
Bahrain | LOCATION | 0.99+
Andy jassie | PERSON | 0.99+
Christopher Columbus | PERSON | 0.99+
NASA | ORGANIZATION | 0.99+
FBI | ORGANIZATION | 0.99+
Arizona | LOCATION | 0.99+
Amazon Web Services | ORGANIZATION | 0.99+
Amazon Web Services | ORGANIZATION | 0.99+
Rackspace | ORGANIZATION | 0.99+
Talos | ORGANIZATION | 0.99+
Amazon | ORGANIZATION | 0.99+
Keith Alexander | PERSON | 0.99+
John | PERSON | 0.99+
Tuesday morning | DATE | 0.99+
2019 | DATE | 0.99+
less than 90 days | QUANTITY | 0.99+
AWS | ORGANIZATION | 0.99+
IBM | ORGANIZATION | 0.99+
Lockheed | ORGANIZATION | 0.99+
second part | QUANTITY | 0.99+
Evan | PERSON | 0.99+
DC | LOCATION | 0.99+
Las Vegas | LOCATION | 0.99+
Lockheed Martin | ORGANIZATION | 0.99+
28 countries | QUANTITY | 0.99+
two billion dollars | QUANTITY | 0.99+
CIA | ORGANIZATION | 0.99+
two points | QUANTITY | 0.99+
yesterday | DATE | 0.99+
second region | QUANTITY | 0.99+
Oracle | ORGANIZATION | 0.99+
Earth | LOCATION | 0.99+
California | LOCATION | 0.99+
DoD | TITLE | 0.98+
Andy | PERSON | 0.98+
less than a minute | QUANTITY | 0.98+
20 new governor | QUANTITY | 0.98+
this week | DATE | 0.97+
one model | QUANTITY | 0.97+
six years | QUANTITY | 0.97+
Linux | TITLE | 0.97+
one | QUANTITY | 0.97+
eight | QUANTITY | 0.97+
both | QUANTITY | 0.96+
Arizona State University | ORGANIZATION | 0.96+
Forrest | ORGANIZATION | 0.96+
this year | DATE | 0.96+
first year | QUANTITY | 0.95+
Jon | PERSON | 0.95+
two amazing companies | QUANTITY | 0.95+
single source | QUANTITY | 0.95+
Boston Marathon bombing | EVENT | 0.95+
theresa carlson | PERSON | 0.94+
first | QUANTITY | 0.93+
DEFCON | ORGANIZATION | 0.93+
this year | DATE | 0.93+
DoD | ORGANIZATION | 0.93+
two cents | QUANTITY | 0.92+
this week | DATE | 0.92+