
Search Results for Ferrari:

Massimo Ferrari, Red Hat | AnsibleFest 2019


 

>> Announcer: Live from Atlanta, Georgia, it's theCUBE, covering AnsibleFest 2019, brought to you by Red Hat. >> Okay, welcome back, everyone, it's theCUBE's live coverage here in Atlanta, Georgia, for AnsibleFest 2019, and I'm John Furrier, with Stu Miniman, my co-host. Our next guest is Massimo Ferrari, product manager with Ansible Security. Welcome to theCUBE, thanks for coming on. >> Thanks very much. Thank you for having me. >> So, security, obviously, big part of the conversation in automation. >> Obviously. >> Making things more efficient, making security, driving a lot of automation, obviously, job performance, among other things. Red Hat's done a lot of automation in other areas outside of just configuration, network automation, now security looking kind of like the same thing, but security's certainly different and more critical. >> Massimo: It is, it's more time-sensitive-- >> Talk us through the security automation angle, what's going on? >> Well, basically, there are several things going on, right? I believe the main thing is that IT organizations are changing, well, honestly, IT organizations have been changing for the last, probably five years, 10 years, and as a consequence, the infrastructures to be protected are changing as well. And there are a couple of challenges that are kind of common to other areas. As you said, automation is all over the place, so clearly, there are some challenges that are common to IT operations or network operations, and something that is peculiar to the security space. What we are seeing, basically, is that there's a major problem of scale, right? If you think about the adoption of technologies like containers, or private and public cloud, if you are a large organization, you are introducing those technologies side by side with, for example, your legacy applications on bare metal or your fantastic virtual machines, but what they actually do is introduce a problem of size, a problem of scale, and a problem of complexity connected to that, and a problem of distribution which is just unmanageable without automation. And the other problem is just complexity, that I mentioned before; I wasn't specifically referring to the complexity of the infrastructure per se. If we think about adopting best practices, or practices like microservices, or adopting functions as a service, we can easily imagine how an old-school three-tier application can be re-engineered to become something made of hundreds of components, and those are microcomponents, very focused on single things, but from a security perspective, those are ingress points. And what automation did, what automation proved to be able to do, is to manage complexity for other areas. So you can be successful in IT operations, in network, and clearly, it can be successful in security, but what is unique to security is that professionals are facing a problem of speed, which means different things, but to give you an example, what we are seeing is that more and more cyberattacks are using automation and artificial intelligence, and the result of that is that the velocity and impact of those attacks is so big that you can't cope with human operators alone, so we are in a classic situation of fighting fire with fire. >> So, this is a great example. We had the service guys on earlier talking about the Automation Platform, and one comment was, "You don't want to boil the ocean. Focus on some things you can break down and show some wins."
Security professionals have that same problem, they want to throw automation and AI at the problem, "It's going to solve everything." >> Of course. >> And so, it's certainly very valuable, managing configurations, open ports, S3 buckets, there's a variety of things that are entry points for hackers and adversaries to come in, take down networks. What's the best practice? How would you see customers applying automation? What's the playbook, if you will? What's the formula for a customer to look at security and say, "Okay, how do I direct Ansible at my security problems, or opportunities, to manage that?" >> Well, when you discuss security automation with customers, it really depends on the kind of audience that you have. As you know, security organizations tend to be fairly structured, right? And depending on the person you are talking to, they may have a slightly different meaning for security automation. It's a broader practice in general. What we are trying to do with Ansible Security Automation is we are targeting a very specific problem. There is a well-known issue in the security world, which is the lack of integration. What we know is that if you are any large organization, you buy tens, sometimes hundreds, of security solutions, and those are great, they protect whatever they have to protect, but there is little to no integration between them, and the result of that is that security teams have an incredible amount of manual work to do just to correlate data coming from different dashboards, or to perform an investigation across different perimeters, or at some point, they have to remediate something that is going on and they have to apply this remediation across groups of devices that are sparse. And what we are trying to do with Ansible Security Automation is to propose Ansible as an integration layer, as a glue, between all those different technologies. On one hand, it's a matter of becoming more efficient and streamlining the process. On the other hand, it's the idea of having, truly, a way to plan, to use automation as your action plan, because security is obviously time-critical, and so automation, in this context, becomes even more important.
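As a concrete illustration of that "remediation as an action plan" idea, here is a minimal sketch of how a remediation playbook could be driven programmatically with the ansible-runner Python library. The playbook name, inventory file, and variables are hypothetical placeholders, not part of any Red Hat product.

```python
# Minimal sketch: trigger one remediation playbook across a group of devices
# via ansible-runner. The playbook, inventory, and variables are hypothetical.
import ansible_runner

result = ansible_runner.run(
    private_data_dir="/opt/secops",              # directory holding playbooks/inventory
    playbook="block_suspicious_source.yml",      # hypothetical remediation playbook
    inventory="inventory/perimeter_firewalls",   # the sparse group of devices to touch
    extravars={"src_ip": "203.0.113.7", "rule_state": "present"},
)

print("status:", result.status, "rc:", result.rc)   # e.g. "successful", 0
# Per-host summary of what actually changed on this run.
for host, changed in (result.stats or {}).get("changed", {}).items():
    print(f"{host}: {changed} task(s) changed")
```

The same call could be wired into a SIEM or SOAR workflow, which is the kind of integration-layer role described above.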
>> Massimo, with the launch of the Ansible Automation Platform, we see a real enhancement of how the ecosystem's participating here. Where does security fit into the collections that are coming from the partner ecosystem of Ansible? >> Well, in one way, we have been building on the shoulders of our friends in Network Automation. They did an amazing job over four years. They did two major things. The first one is that they expanded, for the first time, the footprint of Ansible outside the traditional IT operations space. That was amazing. And we did kind of the same thing, and we started working with some vendors that were already working with us for slightly different use cases, and we helped them to identify the right use cases for security, and expand even more what they were capable of doing through Ansible. And what we are doing now is basically working with customers, we have lighthouse customers, we call them, that guide us to understand which is the next step that we are supposed to perform, and we are gathering together a security community around Ansible. Because surprisingly, we all know that the security community has always been there, always been super vocal, but open-sourcing security is a fairly new thing, right? And so we have this ability, and the important thing is that we all know that Red Hat is not a security vendor, right? We don't want to be a security vendor. That's not the ambition that we have. We are automation experts, in the case of Ansible, and we are open-source experts across the board. So what we are doing with them, we are helping them to get there, to cooperate in the open-source world. And for security, the adoption of collections has proven to be very interesting, because it in some way allows them to deliver the content that they want to deliver in a very, I would say, focused way, and since security, again, is a matter of time to market or time to solve the problem, through collections they have more independence; they are able to deliver whatever they want to deliver, when they want to deliver it, according to their own needs. >> You know, one of the things you mentioned, glue layer, integration layer, and open source, your expertise on automation. It's interesting, and I want to get your reaction to this, 'cause we did a survey of CISOs in our community prior to the Amazon Web Services re:Inforce conference this past summer. It was their first, inaugural cloud security conference, so, yeah, cloud security was a big part of it. But with on-premise and hybrid and multi-cloud being discussed here, this notion of what cloud and the role of the enterprise is, is interesting to the CISOs, the chief information security officers. And the trend in the survey was that CISOs are re-hiring internal development teams to build stacks onsite in their own organizations, investing in their stack, and they're picking a cloud, and then a secondary cloud. So as that development team picks up, that seems to be a trend, one, do you agree with that? And if people want to have their own developers in-house, for security purposes, how does Ansible fit into that glue layer? Because if it's configuring all the gear and all the pipes and plumbing, it makes sense to kind of think about that. So this might be a trend that's helping you? >> So, there is a general trend in the corporate enterprise world that more technical people are coming into areas that are traditionally under the purview of other people or domains, right? So, more technical people coming into business lines. We are seeing more developers coming into security, that's certainly a trend. It is a matter of managing scale and complexity. You need to have technical people there. So, on one hand, that helps us to create a more efficient and more pervasive community around security. You have developers there, which means that you need to serve that corner case that we are not targeting at the moment, you have talented people that can cooperate with us and build those kinds of things. >> John: And use the open-source software. (laughs) >> Exactly, but that's the entire purpose, right? You want to drive people to contribute. They get the value back, we get the value back, that's the entire purpose. >> So you do see the trend of more developers being hired by enterprises in-house? >> It certainly is, and it's been going on for, probably, three to five years I've seen that, in other areas, mainly in the business area, because they want to gain that agility and want to be self-contained, in some way. Business wants to be self-contained, and security, in some sense, is going the same direction. That fits clearly one angle of Ansible, so you have more contribution in the community.
On the other hand, what we are trying to make sure is that we support the traditional security teams. Traditional security teams are not super developer-oriented yet, so they want to consume the content. >> Well, DevOps has always, as infrastructure as code implies, meant that the infrastructure has been coded, and if you look at all of the security breaches that have been big, a lot of them have been basic stuff. An exposed S3 bucket, is that Amazon's fault, or is that the operator's fault? Or patches that aren't deployed. You guys are winning with Ansible in these areas. This seems to be a nice spot for you guys to come in. I mean, can you elaborate on those points, and is that true, you guys winning in those areas? 'Cause, I mean, I could see automation just solving a lot of those problems. >> Well, I will say something that's not super popular, but as a security community, we've always been horrible at the basics, right? Like any other technical people, we're chasing the latest and greatest, the fun stuff; the basics, we've always been bad at that. Automation is a fairly new thing in security, and what we all know automation does is provide consistency and reduce human error. Most of this stuff is because somebody forgot to configure something, someone forgot to rotate a secret or something like that. >> They didn't bring their playbook to the game. (laughs) >> So, I'm not trying to guide the priorities here, but the point is that the same benefits that we get from automation-- >> There's just no excuse. If you have automation, you can basically-- >> Exactly. >> Load that patch, or configure that port properly, because a playbook exists. This only helps. >> Absolutely, but those are the basic values of automation. We're communicating them in a slightly different way to security, because they use a different language, and for them, automation is still a new thing. But what you heard during the keynote, so, the entire purpose of the platform is to help different areas in the IT organization cooperate with each other. As we know, security is not a problem of IT security anymore. It's a broader problem and needs a common tool to be solved.
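The "exposed S3 bucket" point above lends itself to a small illustration: a minimal sketch of the kind of basic hygiene check that automation can run on a schedule, flagging buckets whose public access is not fully blocked. This is an example of the idea under its own assumptions, not a complete security control.

```python
# Illustrative check only: flag S3 buckets that do not have a full
# public-access block configured (one of the "basics" discussed above).
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        conf = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        fully_blocked = all(conf.values())
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            fully_blocked = False   # nothing configured at all, worth flagging
        else:
            raise
    if not fully_blocked:
        print(f"[flag] {name}: public access is not fully blocked")
```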
>> In the demo in the keynote this morning, I thought that they did a good job showing how the various stakeholders in the organization can all collaborate and work together. I want you to explain how security fits into that discussion, and also, they hadn't added the hardening piece in there, but I would expect, for many companies, that I want to flag when I'm creating this image, that it's going to say, "Hey, have you put the right security policies on top of it," not something where they just say, "Oh, it's one of the steps that I do." How do we make sure that everybody follows those corporate edicts that we have? >> Well, it's mainly a matter, I don't want to play the usual card of cultural change, but the fact is that in security, especially, we are looking at two major shifts, and one of these shifts is that pretty much everyone, I would say private organizations and governments, kind of acknowledges that security, cybersecurity, is not an IT problem anymore, it's a business problem, right? Being a business problem means that the stakeholders involved are in all different parts of the organization, and that requires a different level of collaboration. Collaboration starts with training, and enablement of people to understand where the problems are, and to understand that they are part of the same process. We used to have security as a highly specialized function of IT; right now, what happens is that, if you think about a data breach, a data breach could be caused by an IT problem, but most of the impact is on the business, right? So right now, a lot of security processes are shifting to give responsibility to the business owners, and if the government is involved, I live in London, and in Europe, for another month, I guess, we have this fantastic thing, you know, it's called GDPR. GDPR forces you to have what is called a data breach notification process, which means that now, if you're investigating a cyberthreat, you want to have legal there to make sure that everything is fine, and if this data breach could become a media thing, you want to have PR there, because you want to have a plan to mitigate whatever kind of impact you may have on your corporate image. You may also want to have there, I don't know, customer care, just to handle the calls from customers worried about their data. So the point is that this is becoming a process that needs to involve people. People need to be aware that they are part of this process, and what we can do, as an automation provider, is try to enable, through the platform, the IT organizations to cooperate with each other. Having workflows, having the ability to contribute to the same process, allows you to be responsible for your piece. >> Massimo, the new security track here at the show this year, for those that didn't get to come, or maybe that didn't get to see all of it, some of the highlights you want to share with the audience? >> So, the general message this year is that it's the first time that we have this fantastic security track, and this is not a security conference, it is never going to be a security conference. So what we are trying to do is to enable security teams to talk with the automation experts, to introduce automation in that space. So the desire is to create a bridge between the Ansible practitioners, the Ansible heroes, whatever you want to call them, and the security folks, to understand what the problem is, what the problem could be, and have a sort of common language they can use to communicate. So the message that we have this year is, go back home, and sit down at the same table with your security folks, and make sure that they are aware that there's a new possibility, and you can help them, that you now have a common tool together. We had a couple of very interesting tracks. We have partners, a lot of partners contributing to the security space, we mentioned that before, and most of them have tracks here, and they are showing what they built with us, what the possibilities of those tools are. We have a couple of customer stories that are extremely interesting. I just came out from a session presenting one of our customer stories. And in general, we are trying to show also how you can integrate security in the broader processes, like the mythical DevSecOps process. >> What's been the feedback from customers specifically around the talk, and the security conversations here at AnsibleFest? >> It wasn't unexpected, but it's going particularly well. We have very good feedback. And we have, we kind of-- >> John: What are they saying? >> Well, they are saying, okay, the best quote that I can give you, a customer told me, "Oh, this year, I learned something new. I learned that we can do something in this space that we never thought about."
Which is good feedback to have at a conference. And a lot of people are attending these sessions. We have quite a lot of security professionals, which was kind of unexpected, so all the sessions are pretty full, but we are also seeing people that are just curious; they're coming in, and they are staying, they are paying attention. So there is a real opportunity, they see the same opportunity that we see, and hopefully, they will bring the message home. >> Massimo, thank you for coming on theCUBE and sharing your insights. Certainly, security is a main driver for automation, one of the key four bullet points that we outlined in our opening. Thanks for coming on, and sharing your insights. >> Thank you very much for having me. >> It's theCUBE coverage here at AnsibleFest 2019, where Red Hat's announced their Ansible Automation Platform. I'm John Furrier, with Stu Miniman. Stay with us for more after this short break. (upbeat music)

Published Date : Sep 25 2019



Seamus Jones & Milind Damle


 

>> Welcome to theCUBE's continuing coverage of AMD's fourth-generation EPYC launch. I'm Dave Nicholson and I'm joining you here in our Palo Alto studios. We have two very interesting guests to dive into some of the announcements that have been made, and maybe take a look at this from an AI and ML perspective. Our first guest is Milind Damle. He's a senior director for software and solutions at AMD, and we're also joined by Seamus Jones, who's a director of server engineering at Dell Technologies. Welcome, gentlemen. How are you? >> Very good, thank you. >> Welcome to theCUBE. So let's start out really quickly, Seamus: give us a thumbnail sketch of what you do at Dell. >> Yeah, so I'm the director of technical marketing engineering here at Dell, and our team really takes a look at the technical server portfolio and solutions and ensures that we can look at, you know, the performance metrics, benchmarks, and performance characteristics, so that way we can give customers a good idea of what they can expect from the server portfolio when they're looking to buy PowerEdge from Dell. >> Milind, how about you? What's new at AMD? What do you do there? >> Great to be here. Thank you for having me. At AMD, I'm the senior director of performance engineering and ISV ecosystem enablement, which is a long-winded way of saying we do a lot of benchmarks, improve performance, and demonstrate, with wonderful partners such as Seamus and Dell, the combined leverage that AMD fourth-generation processors and Dell systems can bring to bear on a multitude of applications across the industry spectrum. >> Seamus, talk about that relationship a little bit more. The relationship between AMD and Dell. How far back does it go? What does it look like in practical terms? >> Absolutely. So, you know, ever since AMD reentered the server space, we've had a very close relationship. You know, it's one of those things where we are offering solutions that are out there to our customers no matter what generation of portfolio, whether they're demanding it from their competitor or AMD, we offer a portfolio of solutions that are out there. What we're finding is that within their generational improvements, they're just getting better and better and better. Really exciting things happening from AMD at the moment, and we're seeing that as we engineer those CPU stacks into our server portfolio, you know, we're really seeing unprecedented performance across the board. So excited about the history; you know, my team and Milind's team work very closely together, so much so that we were communicating almost on a daily basis around portfolio platforms and updates around the benchmark testing and validation efforts. >> So, Milind, are you happy with these PowerEdge boxes that Seamus is building to house, to house your baby? >> We are delighted. You know, it's hard to find stronger partners than Seamus and Dell. With AMD's second-generation EPYC server CPUs, we already had undisputed industry performance leadership, and then with the third and now the fourth generation CPUs, we've just increased our lead over the competition. We've got so many outstanding features at the platform and at the CPU level; everybody focuses on the high core counts, but there's also DDR5, the memory, the I/O, and the storage subsystem.
So we believe we have a fantastic performance, performance-per-dollar, and performance-per-watt edge over the competition, and we look to partners such as Dell to help us showcase that leadership. >> Well, so Seamus-- >> Yeah, go ahead, Dave. >> What I'd add, Dave, is that through the partnership that we've had, you know, we've been able to develop subsystems and platform features that historically we couldn't have, really things around thermals, power efficiency, and efficiency within the platform. That means that customers can get the most out of their compute infrastructure. >> So this is gonna be a big question moving forward as next generation platforms are rolled out: there's the potential for people to have sticker shock. You talk about something that has eight or 12 cores in a physical enclosure versus 96 cores, and I guess the question is, do the ROI and TCO numbers look good for someone to make that upgrade? Seamus, you wanna hit that first, or you guys are integrated? >> Absolutely, yeah, sorry. Absolutely. So I'll tell you what, at the moment, customers really can't afford not to upgrade, right? We've taken a look at the cost basis of keeping older infrastructure in place, let's say five or seven year old infrastructure servers that are drawing more power, maybe are poorly utilized within the infrastructure, and take more and more effort and time to manage, maintain, and really keep in production. So as customers look to upgrade or refresh their platforms, what we're finding is that they can take a dynamic consolidation, sometimes 5-, 7-, 8-to-1 consolidation, depending on which platform they have historically and which one they're looking to upgrade to.
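As a rough back-of-the-envelope illustration of that consolidation math, here is a small sketch. Every input value is an assumption chosen purely for illustration; the 5-to-1 ratio is the low end of the range mentioned above, and none of the numbers are Dell or AMD figures.

```python
# Back-of-the-envelope consolidation sketch. All inputs are assumptions.
legacy_servers = 40      # assumed fleet of 5-7 year old servers
ratio          = 5       # 5:1 consolidation (low end of the range above)
legacy_watts   = 550.0   # assumed average draw per legacy server
new_watts      = 800.0   # assumed draw per new high-core-count server
usd_per_kwh    = 0.30    # assumed electricity price

def annual_energy_cost(count, watts):
    """Yearly electricity cost for `count` servers at a constant draw."""
    return count * watts * 24 * 365 / 1000.0 * usd_per_kwh

new_servers = legacy_servers / ratio
print(f"servers: {legacy_servers} -> {new_servers:.0f}")
print(f"energy cost/yr: ${annual_energy_cost(legacy_servers, legacy_watts):,.0f}"
      f" -> ${annual_energy_cost(new_servers, new_watts):,.0f}")
```

With these assumed inputs, the fleet shrinks from 40 to 8 servers and the annual power bill drops by roughly two thirds, which is the shape of the TCO argument being made here.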
Within AI specifically and machine learning frameworks, we're seeing really unprecedented performance. Milind's team partnered with us to deliver multiple benchmarks for the launch, some of which we're still continuing to see the goodness from, things like TPCx-AI as a framework, and I'm talking here specifically about the CPU-based performance. >> Even though in a lot of those AI frameworks you would also expect to have GPUs, all four of the platforms that we're offering in the AMD portfolio today offer multiple GPU options. So we're seeing a balance between a huge amount of CPU gain and performance, as well as more and more GPU offerings within the platform. That was a real challenge for us because of the thermal challenges. I mean, you think GPUs are going up to 300, 400 watts, and these CPUs at 96 cores are quite demanding thermally, but what we're able to do, through some unique smart cooling engineering within the PowerEdge portfolio, is take a look at those platforms and make the most efficient use case by having things like telemetry within the platform, so that way we can dynamically change fan speeds to get customers the best performance without throttling, based on their need. >> Milind, theCUBE was at the Supercomputing conference in Dallas this year, Supercomputing 2022, and a lot of the discussion was around not only advances in microprocessor technology, but also advances in interconnect technology. How do you manage that sort of research partnership with Dell when you aren't strictly just focusing on the piece that you are bringing to the party? It's kind of a potluck, you know; we mentioned PCIe Gen 5, or 5.0, whatever you want to call it, new DDR, storage cards, NICs, accelerators, all of those things. How do you keep that straight when those aren't things that you actually build? >> Well, excellent question, Dave. And you know, as we are developing the next platform, obviously the ongoing relationship is there with Dell, but we start way before launch, right? Sometimes it's multiple years before launch. So we are not just focusing on the super high core counts at the CPU level and the platform configurations, whether it's single socket or dual socket; we are looking at it from the memory subsystem, from the IO subsystem. PCIe lanes for storage are a big deal, for example, in this generation. So it's really a holistic approach. And look, core counts are, you know, more important at the higher end for some customers in the HPC space and some of the AI applications. But on the lower end you have database applications or some other ISV applications that care a lot about those other pieces. So it's, I guess, different things matter to different folks across verticals. >> So we partnered with Dell very early in the cycle, and it's really a joint co-engineering effort. Seamus talked about the focus on AI with TPCx-AI, so we set five world records in that space just on that one benchmark with AMD and Dell. So a fantastic kickoff to that across a multitude of scale factors. But TPCx-AI is not the only thing we are focusing on. We are also collaborating with Dell and des e i on some of the transformer-based natural language processing models that we worked on, for example. So it's not just a CPU story, it's CPU, platform, subsystem, software, the whole thing delivering goodness across the board to solve end user problems in AI and other verticals. >> Yeah, the two of you are at the tip of the spear from a performance perspective. So I know it's easy to get excited about world records, and they're fantastic. I know, Seamus, you know, that end user customers might immediately have the reaction, well, I don't need a Ferrari in my data center, or, you know, what I need is to be able to do more with less. Well, aren't we delivering that also? And you know, Milind, you mentioned natural language processing. Seamus, are you thinking in 2023 that a lot more enterprises are gonna be able to afford to do things like that? I mean, what are you hearing from customers on this front? >> I mean, while the adoption of the top bin CPU stack is definitely the exception, not the rule, today we are seeing marked performance even when we look at the mid bin CPU offerings from AMD; those are, you know, the most commonly sold SKUs. And when we look at customers' implementations, really what we're seeing is the fact that they're trying to make the most, not just of dollar spend, but also of the whole subsystem that Milind was talking about. You know, the fact that balanced memory configs can give you marked performance improvements, not just at the CPU level, but actually all the way through to the application performance. So it's trying to find the correct balance between the application needs, your budget, power draw, and infrastructure within the data center, right?
Because not only could you be purchasing and looking to deploy the most powerful systems, but if you don't have an infrastructure that's got the right power, right, that's a large challenge that's happening right now, and the right cooling to deal with the thermal differences of the systems, you want to ensure that you can accommodate those, not just for today but in the future, right? >> So it's planning that balance. >> If I may just add onto that, right? So when we launched, not just the fourth generation, but any generation in the past, there's a natural tendency to zero in on the top bin and say, wow, we've got so many cores. But as Seamus correctly said, it's not just that one core-count OPN, it's the whole stack. And we believe with our fourth-gen CPU processor stack, we've simplified things so much. We don't have, you know, dozens and dozens of offerings. We have a fairly simple SKU stack, but we also have a very efficient SKU stack. So even though at the top end we've got 96 cores, the thermal budget that we require is fairly reasonable. And look, with the energy crisis going on, especially in Europe, this is a big deal. Not only do customers want performance, but they're also super focused on performance per watt. And so we believe with this generation, we really delivered not just on raw performance, but also on performance per dollar and performance per watt. >> Yeah. And it's not just Europe. We are here in Palo Alto right now, which is in California, where we all know the cost of an individual kilowatt hour of electricity, because it's quite high. So thermals, power, cooling, all of that goes together, and that drives cost. So it's a question of how much you can get done per dollar. Seamus, you made the point that you don't just have a one-size-fits-all solution, that it's fit for function. I'm curious to hear from the two of you what your thoughts are from a general AI and ML perspective. We're starting to see right now, if you hang out on any kind of social media, the rise of these experimental AI programs that are being presented to the public. Some will write stories for you based on a prompt, some will create images for you. One of the more popular ones will create sort of your superhero alter ego; I can't wait to do it, I just got the app on my phone. So those are all fun and they're trivial, but they sort of get us used to this idea that, wow, these systems can do things. They can think on their own in a certain way. What do you see the future of that looking like over the next year in terms of enterprises, what they're going to do with it? Milind? >> Yeah, I can go first. >> Sure. Yeah. Good. >> So the couple of examples, Dave, that you mentioned are, I guess, a blend of novelty and curiosity. You know, people using AI to write stories or poems or, you know, even carve out little jokes, check grammar and spelling, very useful, but still, you know, kind of in the realm of novelty in the mainstream. In the enterprise, look, in my opinion, AI is not just gonna be a vertical, it's gonna be a horizontal capability.
We are seeing AI deployed across the board, once the models have been suitably trained, for disparate functions ranging from fraud detection or anomaly detection, both in the financial markets and in manufacturing, to things like the image classification or object detection that you talked about, in the sort of core AI space itself, right? So we don't think of AI necessarily as a vertical, although we are showcasing it with a specific benchmark for launch, but we really look at AI emerging as a horizontal capability, and frankly, companies that don't adopt AI on a massive scale run the risk of being left behind. >> Yeah, absolutely. AI as an outcome is really something that companies are adopting, and the frameworks that you're now seeing as the novelty pieces that Milind was talking about are really indicative of the under-the-covers activity that's been happening within infrastructures and within enterprises for the past, let's say, 5, 6, 7 years, right? The fact that you have object detection within manufacturing, to be able to do defect detection within manufacturing lines; now that can be done on edge platforms, all the way at the device. So you're no longer only having to have things done, you know, in the data center; you can bring it right out to the edge and have that high-performance, you know, inferencing and training of models. Now, not necessarily training at the edge, but the inferencing models especially, so that way you can, you know, have more and better use cases for some of these instances, things like, you know, smart cities with video detection. So that way they can see, and especially during COVID, we saw a lot of hospitals and a lot of customers that were using image and spatial detection within their video feeds to be able to determine which employees were at risk during COVID. So there's a lot of different use cases that have been coming around. I think the novelty aspect of it is really interesting, and I know my kids, my daughters, love that portion of it, but really what's been happening has been exciting for quite a period of time in the enterprise space. We're just now starting to actually see those come to light in more of a consumer-relevant kind of use case. So the technology that's been developed in the data center around all of these different use cases is now starting to feed in, because we do have more powerful compute at our fingertips, and we do have the ability to talk more about the framework and infrastructure that's right out at the edge. You know, I know, Dave, in the past you've said things like the data center of, you know, 20 years ago is now in my hand as my cell phone. That's right. And that's a fact, and it's exciting to think where it's gonna be in the next 10 or 20 years. >> One terabyte, baby. >> Yeah. One terabyte. >> Yeah. It's mind bo-- Exactly. It's mind-boggling. >> Yeah. And it makes me feel old. >> Yeah. >> Me too. And, Seamus, that all sounded great. All I want is a picture of me as a superhero, though, so you guys are already way ahead of the curve, you know, with that. On that note, Seamus, wrap us up with kind of a summary of the highlights of what we just went through in terms of the performance you're seeing out of this latest-gen architecture from AMD. >> Absolutely.
So within the TPCx-AI framework that Milind and my team have worked on together, you know, we're seeing unprecedented price performance. The fact that you can get a 220% uplift gen on gen for some of these benchmarks, and, you know, you can have a five-to-one consolidation, means that if you're looking to refresh platforms that are historically legacy, you can get a huge amount of benefit, both in reduction in the number of units that you need to deploy and in the amount of performance that you can get per unit. You know, Milind had mentioned earlier CPU performance and performance per watt: specifically, on the two-socket 2U platform using the fourth-generation AMD EPYC, you know, we're seeing 55% higher CPU performance per watt. That is, for people who aren't necessarily looking at these statistics every generation of servers, a huge leap forward. That, combined with 121% higher SPEC scores, you know, as a benchmark, those are huge. Normally we see, let's say, a 40 to 60% performance improvement on the SPEC benchmarks; we're seeing 121%. So while that's really impressive at the top bin, we're actually seeing, you know, large percentile improvements across the mid bins as well, you know, things in the range of 70 to 90% performance improvements in those standard bins. So it's a huge performance improvement and power efficiency, which means customers are able to save energy, space, and time based on their deployment size. >> Thanks for that, Seamus. Sadly, gentlemen, our time has expired. With that, I want to thank both of you. It's been a very interesting conversation. Thanks for being with us, both of you. Thanks for joining us here on theCUBE for our coverage of AMD's fourth-generation EPYC launch. Additional information, including white papers and benchmarks, plus editorial coverage, can be found on doeshardwarematter.com.

Published Date : Dec 9 2022



Chris Casey, AWS | AWS re:Invent 2022


 

>> Hello, wonderful humans, and welcome back to theCUBE. We are live from Las Vegas, Nevada, this week at AWS re:Invent. I am joined by analyst and 10-year re:Invent veteran John Furrier. John, pleasure to join you today. >> Great to see you, great event. This is 10 years. We've got great guests coming on theCUBE over three days of wall-to-wall coverage; we'll lose our voice every year by Thursday. >> Host: I can feel the energy. Can you feel the volume already? >> Yes. Everyone's getting bigger, stronger in the marketplace, seeing a lot more activity, new players coming into the cloud, ones that have been around for 10 years are growing up and turning into platforms, and just the growth of software in the industry is phenomenal. Our next guest is going to be great to chat about it. >> I know, it's funny you mentioned marketplace. We're going to be talking marketplace in our next segment. We're bringing back a CUBE alumni, Chris Casey. Welcome back to the show. How, how are you feeling today? >> Thank you for having me. Yeah, I mean, this week is the most exciting week of the year for us at AWS and, you know, it's just a fantastic energy. You mentioned it before, to be here in Las Vegas at re:Invent, and thank you very much for having me back. It was great to talk to John last year, and lovely to meet you and talk to you this year. >> It is, it is our pleasure. It is definitely the biggest event of the year. It's wild that Amazon would do this on the biggest online shopping day of the year as well. It goes to show the boldness and the bravery of the team, which is very impressive. So you cover a few different things at AWS, software alliances and the data exchange, and across industries as well. Can you talk to me a little bit about why the software alliances and the data exchange are so important to the partner organization at AWS? >> Yeah, it really comes back to the importance to the AWS customer. As we've been working with customers over, you know, the past few years especially, and they've been embarking on their enterprise transformation and their digital transformation, moving workloads to the cloud, they've really been asking us for more and more support from the AWS ecosystem, and that includes native AWS services as well as partners, to really help them start to solve some of the industry-specific use cases and challenges that they're facing, and really incorporate those as part of the enterprise transformation journey that they're embarking on with AWS. How that translates back to the AWS Marketplace and the partner organization is that customers have told us they're really looking for us to have the breadth and depth of an ecosystem of partners available to them that have the intellectual property that solves very niche use cases and workloads that they're looking to migrate to the cloud. A lot of the time that furnishes itself as an independent software vendor, and they have software that the customer is trying to use to solve, you know, an insurance workflow or an analytics workflow for a utility company, as well as third-party data that they need to feed into that software. And so my team's responsibility is helping work backwards from the customer need there and making sure that we have the partners available to them, ideally in the AWS Marketplace, so they can go and procure those products and make them part of solutions that they're trying to build or migrate to AWS.
>> A lot of success in marketplace over the past couple years, especially during the pandemic; people were buying and procuring through the marketplace. You guys have changed some of the operational things: data exchange, enterprise sellers, or your sales reps can sell in there. The partners have been glowingly saying great things about how it's just raining money for them if they do it right. And some are like, well, I don't get the marketplace. So there's kind of a new game in town in the marketplace with some of the successes. What is this new momentum that's happening? Is it just people getting more comfortable, they're doing it right? How does the marketplace work effectively? >> Yeah, I mean, marketplace has been around for 10 years, as well as the AWS partner organization. >> Host: It's like our coverage. >> Yes, just like it. >> Host: What a nice coincidence. Decades all around, happy anniversary everyone. >> Yeah, everyone's celebrating the 10-year birthday, but I think to your point, John, you know, we've continued to iterate on features and functionality that have made the partner experience a much more welcoming digital experience for them to go to market with AWS. So that certainly helped, and we've seen more and more customers start to adopt marketplace, especially for some of their larger applications that they're trying to transform on the cloud. And that extends into industry verticals as well as horizontal sort of business applications, whether they be ERP systems like Infor that customers are trying to procure through the marketplace. And I think even for our partners, it's customer driven. You know, we've heard from our customers that streamlining the payments and procurement process is a really key benefit for them procuring via the marketplace, and also the extra governance and control and visibility they get on their third-party licensing contracts is a really material benefit for them, which is helping our partners lean in to marketplace as a digital channel for them to go to market with us. >> And also you guys have this program, what's it called, enterprise buying or something, where clients can just take their spend and move it over into other products, like MongoDB, gimme some more Mongo, gimme some more Splunk, gimme some more influence. I mean, all these things are possible now, right, for some of the partners. Isn't that, that's like found money for the partners. >> Yeah, going back to what I said before about the AWS ecosystem, we're really looking to help customers holistically with regard to that, and certainly when customers are looking to make commitments to AWS and move a large swath of workloads to AWS, we want to make sure they can benefit from that commitment, not only from native AWS services but also from third-party data and software applications that they might be procuring through the marketplace. So certainly for the procurement teams, not only are there technical benefits for them on the marketplace; you know, Forrester's Total Economic Impact study really helped quantify that for us more recently. You know, a 66% time saving for procurement professionals. >> Host: Wow. >> Which, when you calculate that in hours, in person-weeks, over a year, is a lot of time on undifferentiated heavy lifting that they can now be spending on value-added activities. >> Host: That's a massive shift for-- >> Yeah, a massive shift.
So that, in addition, you know, to some of the more contractual and commercial benefits, is really helping customers look holistically at how AWS is helping them transform with third-party applications and data. >> I want to stick on customers for a second, 'cause in my show notes are some pretty well-known customers, and you mentioned Infor a moment ago. Can you tell us a little bit about what's going on with Ferrari? >> Chris: Sure. So Infor is one of our horizontal business application partners and sellers in the AWS Marketplace, and they sell ERP systems, so helping enterprises with resource planning. And Ferrari is obviously a very well-known brand and, you know, the oldest and most successful-- >> May have heard of them. >> Chris: Yes. Right. The most successful Formula One racing team. And Ferrari is, you know, a really meaningful customer for AWS from multiple angles, whether they're using AWS to enhance their car design, as well as their fan engagement, as well as their actual end-car consumer experience. But as it specifically relates to marketplace, as part of Ferrari's technical transformation they were looking to upgrade their ERP system. And so they went through a whole swath of vendors that they wanted to assess, and they actually chose Infor as their ERP system. And one of the reasons was-- >> Nice. >> Chris: because Infor actually has an automotive-specific instance of their SaaS application. So when we're talking about really solving for some of those niche challenges for customers who operate in an industry, that was one of the key benefits. And then as an added bonus for Ferrari, being able to procure that software through the AWS Marketplace gave them all the procurement benefits that we just talked about. So it's super exciting that we're able to play a part in accelerating that digital transformation with Ferrari, and also help Infor in terms of getting a really meaningful customer using their software services on AWS. >> Yeah. Putting a new meaning to turnkey, or push start. (laughing) >> You mentioned horizontal services earlier. What is it all about there? What's new there? We're hearing, I'm expecting to see that in the keynote tomorrow, horizontal and vertical solutions, and let's get the CEOs. What's the focus there? What's this horizontal focus for you? >> Yeah, I think the big thing is really helping line-of-business users, so people in operations or marketing functions at our customers, see the partners and the solutions that they use on a daily basis today and how they can actually help accelerate their overall enterprise transformation, with those partners now on AWS. Historically, you know, those line-of-business users might not have cared where an application ran, whether it was on-prem or on AWS, but now, given the depth of the transformation journeys their enterprises are on, that's really the next frontier of applications and use cases that many of our customers are saying they want to move to AWS. >> John: And what are some of those horizontal examples that you see emerging? >> So Salesforce is probably one of the best ones to call out there. And really there are two meaningful things Salesforce have done: there is a deep integration with our ML and AI services like SageMaker, so people can actually perform some of those activities without leaving the Salesforce application.
And then AWS and Salesforce have worked on a unified developer experience, which really helps remove friction in terms of data flows for anyone that's trying to build on both of those services. So the partnership with horizontal business applications like Salesforce is much deeper than just the go-to-market. It's also on the build side, to help make it much more seamless for customers as they're trying to migrate to Salesforce on AWS, as an example there. >> It's like having too many tabs open at once; everybody wants it all in one place, all at one time. >> Chris: Yeah. >> And it makes sense that you're doing so much in the partner marketplace. Let's talk a little bit more about the data exchange. How is this intertwined with the vertical and horizontal efforts that the team is driving, as well as with another big-name example that folks know, probably only because of the last few years, excuse me: Moderna? Can you tell us a little more about that? >> Sure. I think when we're talking to customers about their needs when they're operating in a specific industry, but it probably goes for all customers, and enterprise customers especially, when they're thinking about software: almost always that software also needs data to be analyzed or processed through it for the end business outcome to be achieved. And so we're really making a conscious effort to help our partners integrate with solutions that the AWS field teams and business development teams are talking to customers about, and to help tie those solutions to customer use cases, rather than it being an engagement with a specific customer on a product-by-product basis. And certainly software and data going together is a really nice combination that many customers are looking for us to solve for, and they are looking for us to create pairings based on other customer needs or use cases that we've historically solved for in the past. >> I mean, with over a million customers, it's hard to imagine anyone could have more use cases to pull from when we're talking about these different instances. >> Right. The challenge actually is identifying which are the key ones for each of the industries and which are the ones that are going to help move the needle the most for customers there; it's not an absence of selection in that case. >> Host: Right. (laughter) I can imagine. I can imagine that's actually the challenge. >> Chris: Yeah. >> Yeah. >> But it's really important. And then, more specifically on the data exchange, you know, I think it goes back to one of the leadership principles that we launched last year, the two new leadership principles: success and scale bring broad responsibility. You know, we take that very seriously at AWS, and we think about that in our actions with our native services, but also in terms of, you know, the availability of partner solutions and then ultimately the end customer outcomes that we can help achieve. And I think Moderna's a great example of that. Moderna have been using their mRNA technology, and they're using it to develop a new vaccine for the RSV virus. And they're actually using the data exchange to procure and then analyze real-world evidence data. And what that helps them do is identify and analyze, in almost real time, using data on Redshift, who the best vaccine candidates are for the trials, based on geography and demographics. So it's really helping them save costs, but not only cost; it's really helping them optimize and be much more efficient in terms of how they're going about their trials, from a time-to-market-- >> Host: Time to market. >> --vaccine perspective, yeah. And more importantly, getting the analysis and the results back from those trials as fast as they possibly can. >> Yeah.
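For readers curious what that Data Exchange-to-Redshift flow can look like in practice, here is a minimal, hypothetical sketch using the boto3 dataexchange client: it finds an entitled data set and exports its latest revision to S3, where it could then be loaded into Redshift. The data set name and bucket are made-up placeholders, and this illustrates the general pattern rather than Moderna's actual pipeline.

```python
# Illustrative sketch only: export an entitled AWS Data Exchange data set
# revision to S3 for downstream analysis (e.g. a Redshift COPY).
# The data set name and bucket below are hypothetical placeholders.
import boto3

dx = boto3.client("dataexchange")

# Find a subscribed ("entitled") data set by name.
data_sets = dx.list_data_sets(Origin="ENTITLED")["DataSets"]
target = next(ds for ds in data_sets if "Real World Evidence" in ds["Name"])

# Pick a revision and export its assets to an S3 bucket we control.
revision = dx.list_data_set_revisions(DataSetId=target["Id"])["Revisions"][0]
job = dx.create_job(
    Type="EXPORT_REVISIONS_TO_S3",
    Details={
        "ExportRevisionsToS3": {
            "DataSetId": target["Id"],
            "RevisionDestinations": [{
                "Bucket": "example-analytics-bucket",   # hypothetical bucket
                "RevisionId": revision["Id"],
                "KeyPattern": "adx/${Asset.Name}",
            }],
        }
    },
)
dx.start_job(JobId=job["Id"])  # Redshift can then COPY the exported objects
```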
So it's really helping them save costs, but not only cost, really helping them optimize and be much more efficient in terms of how they're going about their trials, from a time to market... >> Host: Time to market. >> ...vaccine perspective, yeah. And more importantly, getting the analysis and the results back from those trials as fast as they possibly can. >> Yeah. >> And data exchange, great with the trend that we're going to hear in the keynote tomorrow. More data exchanging, more data being more fluid and addressable shows those advantages. That's a great example, great call out there. Chris, I got to get your thoughts on the ecosystem. You know, Ruba Borno is the new head of partners; APN, the Amazon Partner Network, and marketplace come together. How you guys serve your partners is also growing and evolving. What's the biggest thing going on in the ecosystem that you see from your perspective? You can put your Amazon hat on, or take your Amazon hat off and put a personal hat on. What's going on? There's real growth. I mean, seeing people getting bigger and stronger as partners, there's more learning, there's more platforms developing. It's kind of the next gen wave coming. What's going on there? What's the keynote going to be like? What's this re:Invent going to be for partners? Share your thoughts. >> Yeah, certainly. I, I think, you know, we are really trying to make sure that we're simplifying the partner experience as much as we possibly can, to really help our partners become, you know, more profitable, or the most profitable they can be, with AWS. And so, you know, certainly in Ruba's keynote on Wednesday you're going to hear a little bit about what we've done there from a programs perspective, what we're doing there from feature and capability perspectives to help, you know, really push the digital custom... the digital partner experience, I should say, as much as possible. And really looking holistically at that partner experience and listening to our partners as much as we possibly can, to adapt partner pathways to ultimately simplify how they're going to market with AWS. Not only on the co-sell side of things and how we interact with our field teams and actually interact with the end customer, but also on how we build and help co-build with them on AWS, to make their solutions, whether that be software, whether that be machine learning models, whether that be data sets, most optimized to operate in the AWS ecosystem. So you're going to hear a lot of that in Ruba's keynote on Wednesday. There's certainly some really fantastic partner stories and partner launches that'll be featured, also some customer outcomes that have been realized as a result of partners. So make sure you don't miss it. >> John: More action than ever before, right now. >> It's jam-packed, certainly, and throughout the week you're going to see multiple launches and releases related to what we're doing with partners on marketplace, but also more generally to help achieve those customer outcomes. >> Well said, Brian. So your hot take: what is the future of partnerships, the future of the cloud if you want to throw it in, what are you going to be saying to us, hopefully the next time you get to sit down with John and I here on theCUBE at re:Invent next year? >> Chris: Yeah, I think Adam was quoted today, as you know, saying that the partner ecosystem is going to be around and a foundation for decades.
I think is a hundred percent right for me in terms of the industry verticals, the partner ecosystem we have and the availability of these niche solutions that really are solving very specific but mission critical use cases for our customers in each of the industries is super important and it's going to be a a foundation for AWS's growth strategy across all the industry segments for many years to come. So we're super excited about the opportunity ahead of us and we're ready to get after it. >> John: If you, if you could do an Instagram reel right now, what would you say is the most important >> The Insta challenge by go >> The Insta challenge, real >> Host: Chris's Insta challenge >> Insta challenge here, what would be the the real you'd say to the audience about why this year's reinvent is so important? >> I think this year's reinvent is going to give you a clear sense of the breadth and depth of partners that are available to you across the AWS ecosystem. And there's really no industry or use case that we can't solve with partners that we have available within the partner organization. >> Anything is possible. What a note to close on. Chris Casey, thank you so much for joining us for the second time here on theCUBE. John >> He nailed Instagram challenge. >> Yeah, he did. Did he pass the John test? >> I'd say, I'd say so. >> I'd say so. And and and he certainly teased us all with the content to come this week. I want to see all the keynotes here about some of those partners. You tease them in the gaming space with us earlier. It's going to be a very exciting week. Thank you John, for your commentary. Thank you Chris, one more time. >> Thanks for having me. >> And thank you all for tuning in here at theCUBE where we are the leader in high tech coverage. My name is Savannah Peterson, joined by John Furrier with Cube Team live from Las Vegas, Nevada. AWS Reinvent will be here all week and we hope you stay tuned.

Published Date : Nov 29 2022


Daniel Newman, Futurum Research | AnsibleFest 2022


 

>>Hey guys. Welcome back to theCUBE's coverage of AnsibleFest 2022. This is day two of our wall to wall coverage. Lisa Martin here with John Furrier. John, we're seeing this world where companies are saying if we can't automate it, we need to. The automation market is transforming, there's been a lot of buzz about that, and a lot of technical chops here at AnsibleFest. >>Yeah, I mean, we've got a great guest here coming on, a CUBE alumni, Daniel Newman of Futurum. He travels to every event, he's got his nose to the grindstone, ear to the ground. Great analysis. I mean, we're gonna get into why it's important, how does Ansible fit into the big picture? It's really gonna be a great segment. >>You do it well, John, you just did my job for me, but I'll introduce him again. Daniel Newman, one of our alumni, is back, Principal Analyst at Futurum Research. Great to have you back on theCUBE. >>Yeah, it's good to join you. Excited to be back in Chicago. I don't know if you guys knew this, but for 40 years this was my hometown. Now I don't necessarily brag about that anymore. I live in Austin now, I'm a proud Texan, but I did grow up here, actually out in the west suburbs. I got off the plane, I felt the cold air, and I almost turned around and said, Does this thing go back? Yeah. 'Cause I've, I've grown thin skin. It did not take me long. I, I like the warm, come on. >>I'm the same, I'm from California and I got off the plane Monday. I went, Whoa, I need a coat. And I was in Miami a week ago and it was 85. >>Oh goodness. >>Crazy. So you just flew in. Talk about what's going on, your take on, on Ansible. We've talked a lot with the community, with partners, with customers, a lot of momentum. The flywheel of the community is going around and round and round. What are some of your perspectives that you see? >>Yeah, absolutely. Well, let's, you know, I'm gonna take a quick step back. We're entering an era where companies are gonna have to figure out how to do more with less. Okay? We've got exponential data growth, we've got more architectural complexity than ever before. Companies are trying to discern how to deal with many different environments. And just at a macro level, Red Hat is one of the companies that is almost certainly gonna be part of this multi-cloud, hybrid cloud era. So that should initially give a lot of confidence to the buying group that are looking at how to automate their environments. You're automating workflows, but really with, with Ansible, we're focused on automating IT, automating the network. So as companies kind of dig out, we're entering this recessionary period, okay, we're gonna call it what it is, the first thing that they're gonna look at is how do we tech our way out of it? >>I had a wonderful one-on-one conversation with ServiceNow CEO Bill McDermott, and we saw ServiceNow was in focus this morning in the initial opening session. This is the integration, right? Ansible integrating with ServiceNow. What we need to see is infrastructure automation, layers and applications working in concert to basically enable enterprises to be up and running all the time. Let's first fix the problems that are most common. Let's, let's automate 'em, let's script them. And then at some point, let's have them self-resolving, which we saw at the end with Project Wisdom.
So as I see it, automation is that layer that enterprises, boards, technologists, all can agree upon are basically here's something that can make our business more efficient, more profitable, and it's gonna deal with this short term downturn in a way that tech is actually gonna be the answer. Just like Bill and I said, let's tech our way out of it. >>If you look at the Red Hat being bought by ibm, you see Project Wisdom Project, not a product, it's a project. Project Wisdom is the confluence of research and practitioners kind of coming together with ai. So bringing AI power to the Ansible is interesting. Red Hat, Linux, Rel OpenShift, I mean, Red Hat's kind of position, isn't it? Kind of be in that right spot where a puck might be coming maybe. I mean, what do you think? >>Yeah, as analysts, we're really good at predicting the, the recent past. It's a joke I always like to make, but Red Hat's been building toward the future. I think for some time. Project Wisdom, first of all, I was very encouraged with it. One of the things that many people in the market probably have commented on is how close is IBM in Red Hat? Now, again, it's a $34 billion acquisition that was made, but boy, the cultures of these two companies couldn't be more different. And of course, Red Hat kind of carries this, this sort of middle ground layer where they provide a lot of value in services to companies that maybe don't use IBM at, at, for the public cloud especially. This was a great indication of how you can take the power of IBM's research, which of course has some of the world's most prolific data scientists, engineers, building things for the future. >>You know, you see things like yesterday they launched a, you know, an AI solution. You know, they're building chips, semiconductors, and technologies that are gonna power the future. They're building quantum. Long story short, they have these really brilliant technologists here that could be adding value to Red Hat. And I don't know that the, the world has fully been able to appreciate that. So when, when they got on stage and they kind of say, Here's how IBM is gonna help power the next generation, I was immediately very encouraged by the fact that the two companies are starting to show signs of how they can collaborate to offer value to their customers. Because of course, as John kind of started off with, his question is, they've kind of been where the puck is going. Open source, Linux hybrid cloud, This is the future. In the future. Every company's multi-cloud. And I said in a one-on-one meeting this morning, every company is going to probably have workloads on every cloud, especially large enterprises. >>Yeah. And I think that the secret's gonna be how do you make that evolve? And one of the things that's coming out of the industry over the years, and looking back as historians, we would say, gotta have standards. Well, with cloud, now people standards might slow things down. So you're gonna start to figure out how does the community and the developers are thinking it'll be the canary in the coal mine. And I'd love to get your reaction on that, because we got Cuban next week. You're seeing people kind of align and try to win the developers, which, you know, I always laugh cuz like, you don't wanna win, you want, you want them on your team, but you don't wanna win them. It's like a, it's like, so developers will decide, >>Well, I, I think what's happening is there are multiple forces that are driving product adoption. 
And John, getting the developers to support the utilization and adoption of any sort of stack goes a long way. We've seen how sticky it can be, how sticky it is with many of the public cloud pro providers, how sticky it is with certain applications. And it's gonna be sticky here in these interim layers like open source automation. And Red Hat does have a very compelling developer ecosystem. I mean, if you sat in the keynote this morning, I said, you know, if you're not a developer, some of this stuff would've been fairly difficult to understand. But as a developer you saw them laughing at jokes because, you know, what was it the whole part about, you know, it didn't actually, the ping wasn't a success, right? And everybody started laughing and you know, I, I was sitting next to someone who wasn't technical and, and you know, she kinda goes, What, what was so funny? >>I'm like, well, he said it worked. Do you see that? It said zero data trans or whatever that was. So, but if I may just really quickly, one, one other thing I did wanna say about Project Wisdom, John, that the low code and no code to the full stack developer is a continuum that every technology company is gonna have to think deeply about as we go to the future. Because the people that tend to know the process that needs to be automated tend to not be able to code it. And so we've seen every automation company on the planet sort of figuring out and how to address this low code, no code environment. I think the power of this partnership between IBM Research and Red Hat is that they have an incredibly deep bench of capabilities to do things like, like self-training. Okay, you've got so much data, such significant size models and accuracy is a problem, but we need systems that can self teach. They need to be able self-teach, self learn, self-heal so that we can actually get to the crux of what automation is supposed to do for us. And that's supposed to take the mundane out and enable those humans that know how to code to work on the really difficult and hard stuff because the automation's not gonna replace any of that stuff anytime soon. >>So where do you think looking at, at the partnership and the evolution of it between IBM research and Red Hat, and you're saying, you know, they're, they're, they're finally getting this synergy together. How is it gonna affect the future of automation and how is it poised to give them a competitive advantage in the market? >>Yeah, I think the future or the, the competitive space is that, that is, is ecosystems and integration. So yesterday you heard, you know, Red Hat Ansible focusing on a partnership with aws. You know, this week I was at Oracle Cloud world and they're talking about running their database in aws. And, and so I'm kind of going around to get to the answer to your question, but I think collaboration is sort of the future of growth and innovation. You need multiple companies working towards the same goal to put gobs of resources, that's the technical term, gobs of resources towards doing really hard things. And so Ansible has been very successful in automating and securing and focusing on very certain specific workloads that need to be automated, but we need more and there's gonna be more data created. The proliferation, especially the edge. So you saw all this stuff about Rockwell, How do you really automate the edge at scale? 
You need large models that are able to look and consume a ton of data that are gonna be continuously learning, and then eventually they're gonna be able to deliver value to these companies at scale. IBM plus Red Hat have really great resources to drive this kind of automation. Having said that, I see those partnerships with aws, with Microsoft, with ibm, with ServiceNow. It's not one player coming to the table. It's a lot of players. They >>Gotta be Switzerland. I mean they have the Switzerland. I mean, but the thing about the Amazon deal is like that marketplace integration essentially puts Ansible once a client's in on, on marketplace and you get the central on the same bill. I mean, that's gonna be a money maker for Ansible. I >>Couldn't agree more, John. I think being part of these public cloud marketplaces is gonna be so critical and having Ansible land and of course AWS largest public cloud by volume, largest marketplace today. And my opinion is that partnership will be extensible to the other public clouds over time. That just makes sense. And so you start, you know, I think we've learned this, John, you've done enough of these interviews that, you know, you start with the biggest, with the highest distribution and probability rates, which in this case right now is aws, but it'll land on in Azure, it'll land in Google and it'll continue to, to grow. And that kind of adoption, streamlining make it consumption more consumable. That's >>Always, I think, Red Hat and Ansible, you nailed it on that whole point about multicloud, because what happens then is why would I want to alienate a marketplace audience to use my product when it could span multiple environments, right? So you saw, you heard that Stephanie yesterday talk about they, they didn't say multiple clouds, multiple environments. And I think that is where I think I see this layer coming in because some companies just have to work on all clouds. That's the way it has to be. Why wouldn't you? >>Yeah. Well every, every company will probably end up with some workloads in every cloud. I just think that is the fate. Whether it's how we consume our SaaS, which a lot of people don't think about, but it always tends to be running on another hyperscale public cloud. Most companies tend to be consuming some workloads from every cloud. It's not always direct. So they might have a single control plane that they tend to lead the way with, but that is only gonna continue to change. And every public cloud company seems to be working on figuring out what their niche is. What is the one thing that sort of drives whether, you know, it is, you know, traditional, we know the commoditization of traditional storage network compute. So now you're seeing things like ai, things like automation, things like the edge collaboration tools, software being put into the, to the forefront because it's a different consumption model, it's a different margin and economic model. And then of course it gives competitive advantages. And we've seen that, you know, I came back from Google Cloud next and at Google Cloud next, you know, you can see they're leaning into the data AI cloud. I mean, that is their focus, like data ai. This is how we get people to come in and start using Google, who in most cases, they're probably using AWS or Microsoft today. >>It's a great specialty cloud right there. That's a big use case. I can run data on Google and run something on aws. 
>>And then of course you've got all kinds of, and this is a little off topic, but you got sovereignty, compliance, regulatory that tends to drive different clouds over, you know, global clouds like Tencent and Alibaba. You know, if your workloads are in China, >>Well, this comes back down at least to the whole complexity issue. I mean, it has to get complex before it gets easier. And I think that's what we're seeing companies opportunities like Ansible to be like, Okay, tame, tame the complexity. >>Yeah. Yeah, I totally agree with you. I mean, look, when I was watching the demonstrations today, my take is there's so many kind of simple, repeatable and mundane tasks in everyday life that enterprises need to, to automate. Do that first, you know? Then the second thing is working on how do you create self-healing, self-teaching, self-learning, You know, and, and I realize I'm a little broken of a broken record at this, but these are those first things to fix. You know, I know we want to jump to the future where we automate every task and we have multi-term conversational AI that is booking our calendars and driving our cars for us. But in the first place, we just need to say, Hey, the network's down. Like, let's make sure that we can quickly get access back to that network again. Let's make sure that we're able to reach our different zones and locations. Let's make sure that robotic arm is continually doing the thing it's supposed to be doing on the schedule that it's been committed to. That's first. And then we can get to some of these really intensive deep metaverse state of automation that we talk about. Self-learning, data replication, synthetic data. I'm just gonna throw terms around. So I sound super smart. >>In your customer conversations though, from an looking at the automation journey, are you finding most of them, or some percentage is, is wanting to go directly into those really complex projects rather than starting with the basics? >>I don't know that you're, you're finding that the customers want to do that? I think it's the architecture that often ends up being a problem is we as, as the vendor side, will tend to talk about the most complex problems that they're able to solve before companies have really started solving the, the immediate problems that are before them. You know, it's, we talk about, you know, the metaphor of the cloud is a great one, but we talk about the cloud, like it's ubiquitous. Yeah. But less than 30% of our workloads are in the public cloud. Automation is still in very early days and in many industries it's fairly nascent. And doing things like self-healing networks is still something that hasn't even been able to be deployed on an enterprise-wide basis, let alone at the industrial layer. Maybe at the company's on manufacturing PLAs or in oil fields. Like these are places that have difficult to reach infrastructure that needs to be running all the time. We need to build systems and leverage the power of automation to keep that stuff up and running. That's, that's just business value, which by the way is what makes the world go running. Yeah. Awesome. >>A lot of customers and users are struggling to find what's the value in automating certain process, What's the ROI in it? How do you help them get there so that they understand how to start, but truly to make it a journey that is a success. >>ROI tends to be a little bit nebulous. It's one of those things I think a lot of analysts do. Things like TCO analysis Yeah. Is an ROI analysis. 
I think the businesses actually tend to know what the ROI is gonna be because they can basically look at something like, you know, when you have an msa, here's the downtime, right? Business can typically tell you, you know, I guarantee you Amazon could say, Look for every second of downtime, this is how much commerce it costs us. Yeah. A company can generally say, if it was, you know, we had the energy, the windmills company, like they could say every minute that windmill isn't running, we're creating, you know, X amount less energy. So there's a, there's a time value proposition that companies can determine. Now the question is, is about the deployment. You know, we, I've seen it more nascent, like cybersecurity can tend to be nascent. >>Like what does a breach cost us? Well there's, you know, specific costs of actually getting the breach cured or paying for the cybersecurity services. And then there's the actual, you know, ephemeral costs of brand damage and of risks and customer, you know, negative customer sentiment that potentially comes out of it. With automation, I think it's actually pretty well understood. They can look at, hey, if we can do this many more cycles, if we can keep our uptime at this rate, if we can reduce specific workforce, and I'm always very careful about this because I don't believe automation is about replacement or displacement, but I do think it is about up-leveling and it is about helping people work on things that are complex problems that machines can't solve. I mean, said that if you don't need to put as many bodies on something that can be immediately returned to the organization's bottom line, or those resources can be used for something more innovative. So all those things are pretty well understood. Getting the automation to full deployment at scale, though, I think what often, it's not that roi, it's the timeline that gets misunderstood. Like all it projects, they tend to take longer. And even when things are made really easy, like with what Project Wisdom is trying to do, semantically enable through low code, no code and the ability to get more accuracy, it just never tends to happen quite as fast. So, but that's not an automation problem, That's just the crux of it. >>Okay. What are some of the, the next things on your plate? You're quite a, a busy guy. We, you, you were at Google, you were at Oracle, you're here today. What are some of the next things that we can expect from Daniel Newman? >>Oh boy, I moved Really, I do move really quickly and thank you for that. Well, I'm very excited. I'm taking a couple of work personal days. I don't know if you're a fan, but F1 is this weekend. I'm the US Grand Prix. Oh, you're gonna Austin. So I will be, I live in Austin. Oh. So I will be in Austin. I will be at the Grand Prix. It is work because it, you know, I'm going with a number of our clients that have, have sponsorships there. So I'll be spending time figuring out how the data that comes off of these really fun cars is meaningfully gonna change the world. I'll actually be talking to Splunk CEO at the, at the race on Saturday morning. But yeah, I got a lot of great things. I got a, a conversation coming up with the CEO of Twilio next week. We got a huge week of earnings ahead and so I do a lot of work on that. So I'll be on Bloomberg next week with Emily Chang talking about Microsoft and Google. Love talking to Emily, but just as much love being here on, on the queue with you >>Guys. Well we like to hear that. 
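(As a rough illustration of the downtime math Daniel describes, here is the back-of-the-envelope version; every figure below is hypothetical, and only the shape of the calculation matters.)

```python
# Hypothetical, back-of-the-envelope downtime/ROI math of the kind described above.
# Every number here is invented for illustration.
revenue_per_hour = 250_000          # e.g. an e-commerce property
outage_minutes_per_month = 45       # observed before automation
automation_cost_per_year = 400_000  # licenses plus engineering time

cost_per_minute = revenue_per_hour / 60
annual_downtime_cost = cost_per_minute * outage_minutes_per_month * 12

# Assume automated remediation cuts outage minutes by 70% (an assumption, not a benchmark).
avoided_cost = annual_downtime_cost * 0.70
simple_roi = (avoided_cost - automation_cost_per_year) / automation_cost_per_year

print(f"Annual downtime cost: ${annual_downtime_cost:,.0f}")
print(f"Cost avoided:         ${avoided_cost:,.0f}")
print(f"Simple ROI:           {simple_roi:.0%}")
```

The same arithmetic works for the windmill example, with lost megawatt-hours standing in for lost revenue.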
Who you're rooting for F one's your favorite driver. I, >>I, I like Lando. Do you? I'm Norris. I know it's not necessarily a fan favorite, but I'm a bit of a McLaren guy. I mean obviously I have clients with Oracle and Red Bull with Ball Common Ferrari. I've got Cly Splunk and so I have clients in all. So I'm cheering for all of 'em. And on Sunday I'm actually gonna be in the Williams Paddock. So I don't, I don't know if that's gonna gimme me a chance to really root for anything, but I'm always, always a big fan of the underdog. So maybe Latifi. >>There you go. And the data that comes off the how many central unbeliev, the car, it's crazy's. Such a scientific sport. Believable. >>We could have Christian, I was with Christian Horner yesterday, the team principal from Reside. Oh yeah, yeah. He was at the Oracle event and we did a q and a with him and with the CMO of, it's so much fun. F1 has been unbelievable to watch the momentum and what a great, you know, transitional conversation to to, to CX and automation of experiences for fans as the fan has grown by hundreds of percent. But just to circle back full way, I was very encouraged with what I saw today. Red Hat, Ansible, IBM Strong partnership. I like what they're doing in their expanded ecosystem. And automation, by the way, is gonna be one of the most robust investment areas over the next few years, even as other parts of tech continue to struggle that in cyber security. >>You heard it here. First guys, investment in automation and cyber security straight from two analysts. I got to sit between. For our guests and John Furrier, I'm Lisa Martin, you're watching The Cube Live from Chicago, Ansible Fest 22. John and I will be back after a short break. SO'S stick around.

Published Date : Oct 19 2022


Keith Norbie, NetApp & Brandon Jackson, CDW | VMware Explore 2022


 

>>Hey everyone. Welcome back to San Francisco. Lisa Martin and Dave Nicholson here. theCUBE is covering VMware Explore 2022, the first year with the new name, and there's about seven to 10,000 people here. So folks are excited to be back. I was in the keynote this morning. You probably were too, David. It was standing room only, lots of excitement, lots of news. We're gonna be unpacking some news next. We have Brandon Jackson joining us, SDDC architect at CDW, and Keith Norbie is back, one of our alumni, head of worldwide partner solution sales at NetApp. Guys, welcome back to the program. >>Hey, thank you. Reunion week. >>So let's talk about what's going on. Obviously lots of news this morning, lots of momentum at VMware, lots of momentum at NetApp and CDW. Keith, we'll start with you. Talk about what was announced yesterday with NetApp, VMware, AWS, and what's in it for customers and partners. >>Yeah, it's a new day. I talked about this in a blog that I wrote that, you know, for me, I started out with VMware and NetApp about 15 years ago when the ecosystem was still kind of emerging, back in the ESX 3 days, for those that remember those days, and NetApp had a really dominant position because of some of the things that they had delivered with VMware. And we're kind of at that same juncture now, where everyone needs to have, as they talk about today, multi-cloud, and there's been some things that people try to get through as they talk about cloud chaos today. It also is in some of the realms, the barriers, that you don't often see. So releasing this new FSx capability as a supplemental datastore within VMware Cloud on AWS is a real big opportunity. And it's not just a big opportunity for NetApp, it's a big opportunity for the people that actually deliver this for the customers, which is our partners. So for me, it's full circle. I started with a partner, I come back around, and I'm now in a great position to kind of work with our partners. And they're the real story here with us. Yeah. >>Brandon, talk about the value in this from CDW's perspective. What is the momentum that you and the company are excited to carry forward? >>Yeah, this is super exciting. I've been close to the VMware Cloud on AWS story since its inception, so, you know, almost four years building that practice out at CDW, and it's a great solution. But we spent all this time prior driving people to that HCI type of mentality where, hey, you can just scale the portions that you need, and that wasn't available in the cloud. And although it's a great solution, there's pain points there where it just can become cost prohibitive, because customers see what they need, but that storage piece is a heavy component, and when that adds to what that cluster size needs to be, that's a real problem. With this announcement, right, we can now use those supplemental data stores and be able to shrink that size. So it saves the customer massive amounts of money. I mean, we see like 25 to 50% in savings, without sacrificing anything. They're getting the operational efficiency that they know and love from NetApp, they get that control and that experience that they've been using or want to use in VMware Cloud, and they're just combining the two in a very cost friendly package. >>So I have one comment, and that is: finally. >>Right. Absolutely. >>We used to refer to it as the devil's triangle of CPU, memory and storage. And if those are, if those are inextricably linked to one another, you want a little bit more storage? Okay.
Here's your CPU and memory that you can pay for and power and cool that you don't need? No, no, no, no, no, no. I just need, I just need some storage over here. And in the VMware context, think of the affinity that VMware has had with NetApp forever. The irony being that EMC of course, owned VMware for a period of time, kind of owned their stock. Yeah. So you have this thing that is fundamentally built around VMFS that just fits perfectly into the filer methodology. Yeah. And now they're back together in the cloud. And, and the thing is if, if we were, if we were sitting here talking about this 5, 6, 7 years ago, an AWS person would've said we were all crazy. Yeah, yeah. AWS at the time would've said, nah, no, no, no, no. We're gonna figure that out. You, you, you, you guys are just gonna have to go away. It's >>Not lost on me that, you know, it was great seeing and hearing of NetApp in a day, one VMware keynote. >>It's amazing. >>That was great. And so we built off that because the, the, the great thing about kind of where this comes from is, you know, you built that whole HCI or converged infrastructure for simplicity and everyone is simplicity. And so this is just another evolution of the story. And as you do, so, you know, you've, you've freed up for all the workloads, all the scenarios, all the, all the operational situations that you've wanted to kind of get into. Now, if you can save anywhere from 25 to 50% of the costs of previous, you can unleash a whole nother set of workloads and do so by the way, with same consistent operational consistency from NetApp, in terms of the data that you have on-prem to cloud, or even if you don't have NetApp, on-prem, you know, we have the ways to get it to the cloud and VMware cloud and AWS, and, and, and basically give you that data simplicity for management. >>And, but again, it isn't just a NetApp part of this. There is, as everyone knows with cloud, a whole layer of infrastructure around the security networking, there's a ton of work that gets from the partner side to look at applications and workloads and understand sort of what's the composition of those, which ones are ready for the cloud. First, you know, seeing, you know, the AWS person with the SAP title, that's a big workload. Obviously that's making this journey to the cloud, along with all the rest of them. That's what the partners deliver. NetApp has done everything they can do to make that as frictionless as possible in the marketplace as a first party service, and now through VMware cloud. So we've done all we can do on, on that factor. Now it's the partners that could take it. And by the way, the reaction that we've seen kind of in some of, of the private previews are working, has been incredible. These guys bring really the true superhero muscle to what organizations are gonna need to have to take those workloads to VMware cloud and, and evolve it into this new cloud era that they're talking about at the keynote today. >>Yeah, don't get us wrong. We love vSphere eight and vs a, a and VSAN aid in particular, but there's a huge market need for this, for what you guys are delivering. >>Talk to us, Brandon, from your perspective about being able to, to part, to, to have the powerhouses of NetApp, VMware and AWS, and in terms of being able to meet your customers where they are and what they want. >>And I, that's huge, right? That the solution allows these things to come together in a seamless way, right? So we get the, the flexibility of cloud. 
We get the scalability of easy storage now, in a way we didn't have before, and we get the power that's VMware, right. And in that, in the virtualization platform, and that makes it easy for a customer to say, I need to be somewhere else. And maybe that's not, that's not a colo anymore. That's not a secondary data center. I want to be in the cloud, but I wanna do it on my terms. I wanna do it. So it works for me as a customer. This solution has that, right? And, and we come in as a partner and we look at, we kind of call it the full stack approach, where we really look at the entire, you know, ecosystem that we're talking. >>So from the application all the way down to the infrastructure and even below, and figure out how that's gonna work best for our customers and putting things together with the native cloud services, then with their VMware environment, living on VMware cloud, AWS, leveraging storage with a, you know, with the, the FSX in. So they can easily grow their storage and use all those operational efficiencies and the things that they love about NetApp already. And from a Dr. Use case, we can replicate from a NetApp to NetApp. And it's just, it makes it so easy to have that conversation with the customers and just, it clicks. And like, this is what I need. This is what I've been looking for. And all wrapped up in a really easy package. >>No wonder Dave's comment was finally right. >>Oh, absolutely. I mean, we've been, again, you know, we talked about the HCI, like that made sense. And three or four years ago, maybe even a little bit longer, right. That click, same thing was like, oh my gosh, this is the way infrastructure should work. And we're just having that same Nirvana moment that this is how easy cloud infrastructure can work and that I can have that storage without sacrificing the cost, throw more nodes into my cluster to be able to do so. >>Yeah. I I've just worked with so many customers who struggle to get to where they want to be BEC, and this is something that just feels like a nice worn in pair of shoes or jeans to folks who right now, you know, look, the majority of it spend is still on premises, right? So the typical deployment of VMware today is often VMware with NetApp appliances providing file storage. So this is something that I imagine will help accelerate some of your customers' moves. >>It absolutely will. And in fact, I have three customers off the hand that I know that I've been like, not wanting to say anything like let's talk next week. Right? There's this, there may be something we can talk about when, on, after Explorer waiting for the announcement, because we've been working with NetApp and, and doing some of the private preview stuff. Yeah. And our engineering teams, working with your engineering teams to build this out so that when the announcement came out yesterday, we can go back and say, okay, now let's have that conversation. Now let's talk about what this looks like, >>Where are you having customer conversation? So this is strictly an it conversation has this elevated up the stack, especially as we've seen the massive, I call it cloud migration adoption of the last couple of years. >>I, I I'll speak fairly from the partner level. It is an elevated conversation. So we're not only talking, at least I'm not only talking to it. Administrators, directors, C levels like this is a story that resonates because it's about business value, right? I have an initiative, I have a goal. And that goal is wrapped into that it solution. 
And typically has some sort of resource or financial cost to it. We want to hear that story. And so it resonates when we can talk about how you can achieve your goals, do it in a way with a specific solution that encompasses everything at a price point that you'll like, and then that can flow down to the directors and the IT administrators, and we can start talking about, you know, turning the screws and the knobs. >>Yeah. And for us, it does start with a partner, because the reality is that's who the customers all engage. And the reality is there's not just one partner type, there's many. In fact, the biggest thing that we've been really modernizing is how to address the different partner types. 'Cause you obviously have the Accentures of the world that are the big GSIs, the big SIs, you have folks that are hosting providers, you have Equinix in the middle of that, you've got partners that just do services, that might be influence-only partners that are influencing the design. And so if you look up and down, you know, VMware's partner ecosystem and NetApp's partner ecosystem overlap pretty well, but there's this factor with AWS about, you know, both born-in-the-cloud partners and partners, you know, like CDW, that have really, you know, taken the step forward to be relevant in that phase going forward. >>And that's what's exciting to us, is to see that kind of come forward. So when something like FSxN comes forward in this VMware Cloud and AWS scenario, they can take it and just have instant ignition with it. And for us, that's what it's about. Our job is really just to remove friction, back what they do and get outta the way, help them win. And last week we were in Chicago at the AWS reinvent thing, and seeing AWS with another partner in their whole briefing, and how they came to life with this whole anticipation for this week, you know, all the partners are very excited for it. So we're just gonna fuel that. And you know, I often wonder, we got the t-shirt that says, you know, two's company, three is a cloud; maybe it should have been four, because it takes the partner for the completion. >>We appreciate that for sure. >>It does. It sounds like there's tremendous momentum in the market, an appetite across all three companies, four if you include CDW. So in terms of the selling motion, it sounds like you've got folks that are gonna be eating out of your pocket, who've been waiting for this for quite a while. Yeah. >>I think, you know, the analogy used earlier, it's nice when the tires are already on the Ferrari, right? This thing could just go, yes. And we've got people that we're already talking to that this fits, we've got some great go to market strategies. As we start doing partner and sales enablement to make sure that our people behind the scenes are telling the story in the way that we want to jointly, so that all of us can, you know, come together and have that aligned common message to really, you know, make this win and make this pop. >>One correction though: technically we sponsor Aston Martin. So it's not a Ferrari. It's an Aston Martin. >>There you go. >>That's right. Point taken, not a car guy. Can you talk a little bit, Brandon, about the routes to market and the GTM that you guys are working on together, even at a high level? >>Yeah.
The nice thing about this is it's not relegated to a single industry vertical. It's not a single type of customer. We see this across the board and, and certainly with any of our cloud infrastructure solutions, it seems very, even from a regional standpoint and an industry vertical standpoint. So really it's just about how to get our sellers, you know, that get that message to them. So we had meetings here this week. We've been talking to your teams, oh, for probably six weeks now on what's that gonna look like? You know, what type of events are we gonna hold? Do we wanna do some type of road show? Yeah. We've done that with FlexPod very successfully, a few years ago where our teams working with your teams and VMware, we all came out and, and showed this to the world and doing something similar with this to show how easy it is to add supplemental storage to VMC. And just get that out to the masses through events, maybe through sales webinars. I mean, we're still in this world where maybe it's more virtual than on person, but we're starting to shift back, but it's just about telling the message and, and showing, Hey, here's how you do it. Come talk to us. We can help you. And we want to help >>Talk about the messaging from a, a multi-cloud perspective. Here we are at VMware Explorer, the theme, the center of the multi-cloud universe, how is this solution from NetApp's perspective? And then CDWs, how does it an enabler of customers that so many are living in the multi-cloud world by default? >>Yeah. And I think the big subtlety there that, that maybe was MIS missed was the private cloud being just so their cloud. The reality of that is probably a little bit short of, you know, of what people kind of deal with with their on on-prem data centers, just because of some of the applications, data sets they're trying to work through for AI ML and analytics. But that's what the partner's great at is, is helping them kind of leap forward and actually realize the on-prem to become the private cloud and really operate in this multi-cloud scenario and, and get beyond this cloud chaos factor. So again, you know, the beautiful part about all this is that, you know, the, the, the never ending sort of options, the optionality that you have on security, on networking, on applications, data sets, locations, governance, these are all factors that the partner deals with way better than we could even think of. So for us, it's really about just trying to connect with them, get their feedback and actually design in from the partner to take something like this and make it something that works for them >>Back to your shirt. What does it say? Two's company, three's a cloud that's right. But if you want rain, you need a fourth. Yeah. Right. We're here in California. I don't care about clouds. We need it to rain. All >>Right. So >>It's all well and good that yeah. If you know, a couple of you get together and offer something up, but where the rubber meets the road, you know, the customer relationship, the strategic seat at the customer table, there, aren't more of those than there have been in the past. And, and, and ecosystems have obviously gotten more complicated. I can't help thinking back as I think back on the history of, of NetApp and VMware and CDW, there was a time when, when things were bad, you get rid of marketing. And then, and then after that, it was definitely alliances and partnerships cuz who the heck are those people right now? Everything is an ecosystem. Yeah. Everything is an ecosystem. 
So talk about how CDW has changed through its history, in terms of where CDW has come from. >>Sure. >>And you know, not everybody knows that CDW is involved in as sophisticated an area as you are. >>And that's true. I mean, sometimes it's tongue in cheek, but you know, we've fulfilled a lot of needs throughout the years, and maybe at times just a fulfillment or a box pusher, but we're really so much more than that, and we've been so much more than that for years. And through some of our acquisitions, you know, Sirius last year, IGNW, our international arm with Kelway when it became CDW UK, we have a, you know, a premier experience around consultative services. And that's where we talk about that full stack, right? Yeah. From the application to the cloud, to the infrastructure, to the security around it, to the networking, we can help out with all of that. And we've got experts, you know, on the presales and post-sales side, and that's what they live for, it's their passion. And working with partners close in hand, we've had great relationships, with NetApp especially. And again, I've been with CDW for over 12 years, and in all 12 of those years I've been very close to NetApp in one way, shape or form, and have seen how we work together to solve our customers' challenges. It's less about what we want to do, it's more about what we're doing to help the customer. And I've seen that day in and day out from our relationship and, you know, kind of our partnership. >>So say we're back here in six months, or maybe we're back here at re:Invent, talking with you guys and a customer. What are some of the outcomes that, at this stage, you're expecting customers to be able to achieve? >>Be able to do more, put more out there, right? To not be limited by the construct of, I only have X amount of space. And so maybe the use case or the initiative is wrapped around that. Let's turn that around and say you're limitless, let's move what you need. And you're not gonna have to worry so much about the cost the way you did six months ago, or seven months ago, or six months and a day ago, and you can do more with it. And if we have an X amount in our bucket in July, we could do 200 VMs, you know, and now six months later we've done 500 VMs, because of those efficiency savings, because of that cost savings, and using supplemental storage. So I, I see that being a growth factor, and being able to say, hey, this was easy. We always knew this was a solution we liked, but now it's easy and bigger. >>Yeah. I think on our end of the spectrum, I'll just say what Phil Brons said previously, he was in the previous segment, which is, this could go pretty quick. Folks that have wanted to do this, now that they know this is something they can do, they can go at it. The partners, we already know, are very much in, like, ready to go mode. They've been waiting for this day, to just get the announcement out so they can kind of get going. And it's funny, because you know, when we've presented, we've kind of presented some of the tech behind what we're doing and then the ROI TCO calculator last, and everyone's feedback is the same. They said you should just lead with the calculator. So then, yeah, you can see exactly how much money you save. In fact, one of the jokes is there's not many times you've saved this much money in IT before. And so it's, it's a big wow factor. >>Big wow factor, big differentiator, guys.
Thank you so much for joining David, me talking about what NetApp, VMware, AWS are doing, how it's being delivered through CDW, the evolution of all these companies. We're excited to watch the solution. We better let you go because you probably have a ton of meeting. People are just chopping at the bit to get this. Yeah. >>It's, it's exciting times. I'm loving it being here and being able to talk about this finally, in a public setting. So this has been great. >>Awesome guys. Thank you again for your time. We appreciate it. Yep. For our guests and Dave Nicholson, I'm Lisa Martin. You're watching the cube live from VMware Explorer, 2022. We'll be back after a short break, stick around.
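(To show why the 25 to 50% savings figure quoted in this segment is plausible, here is the shape of the math an ROI or TCO calculator like the one Keith mentions would run. All capacities and prices below are invented placeholders rather than AWS or NetApp list pricing; the point is simply that decoupling storage from host count is what creates the savings.)

```python
# Hypothetical TCO sketch: a storage-bound VMware Cloud on AWS cluster versus a
# compute-sized cluster plus an external FSx for NetApp ONTAP datastore.
# Every number below is a made-up placeholder.
import math

storage_needed_tib = 400              # total datastore capacity required
hosts_for_compute = 6                 # hosts the workload needs for CPU and RAM
usable_storage_per_host_tib = 20      # assumed usable vSAN capacity per host
host_cost_per_month = 8_000           # placeholder host rate
fsx_cost_per_tib_month = 120          # placeholder FSx rate

# Option A: grow the cluster until vSAN can hold all of the data.
hosts_for_storage = math.ceil(storage_needed_tib / usable_storage_per_host_tib)
option_a_hosts = max(hosts_for_compute, hosts_for_storage)
option_a_cost = option_a_hosts * host_cost_per_month

# Option B: size the cluster for compute and put the bulk of the data on FSx.
option_b_cost = (hosts_for_compute * host_cost_per_month
                 + storage_needed_tib * fsx_cost_per_tib_month)

savings = 1 - option_b_cost / option_a_cost
print(f"Storage-bound cluster: {option_a_hosts} hosts, ${option_a_cost:,}/month")
print(f"Compute-sized + FSx:   {hosts_for_compute} hosts, ${option_b_cost:,.0f}/month")
print(f"Savings: {savings:.0%}")
```

With these placeholder inputs the savings land around 40%, squarely in the range the guests describe; the real calculator would substitute actual host counts, capacities, and rates.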

Published Date : Aug 31 2022

SUMMARY :

So folks are excited to be back. we'll start with you talk about what was announced yesterday, NetApp, VMware, I talked about this in a blog that I wrote that, you know, for me, type of mentality where, Hey, you can just scale the portions that you need and that wasn't available in I, And in the VMware context, think of the affinity that VMware has had with NetApp forever. Not lost on me that, you know, it was great seeing and hearing of NetApp in a day, And as you do, so, you know, you've, you've freed up for all the workloads, And by the way, the reaction that we've seen kind of in some of, of the private previews are working, a and VSAN aid in particular, but there's a huge market need for this, for what you guys are delivering. and in terms of being able to meet your customers where they are and what they want. And in that, in the virtualization platform, and that makes it easy for a with a, you know, with the, the FSX in. I mean, we've been, again, you know, we talked about the HCI, like that made sense. now, you know, look, the majority of it spend is still on premises, right? And our engineering teams, working with your engineering teams to build this out Where are you having customer conversation? And we can start talking about, you know, turning the screws and the knobs. And so if you look up and down between, you know, VMware's partner ecosystem and NetApp's partner ecosystem overlap to life with the, with this whole anticipation for this week, you know, it's, So in terms of, of the selling motion, it sounds like you've got folks that you know, come together and have that aligned common message to really, you know, So it's not a fry. That's right. You, can you talk a little bit Brendan about the, the routes to market and the, the GTM that you guys are And just get that out to the masses through events, And then CDWs, how does it an enabler of customers that so many are living in the multi-cloud world The reality of that is probably a little bit short of, you know, of what people But if you want rain, you need a fourth. So but where the rubber meets the road, you know, the customer relationship, the strategic seat at the customer table, I mean, sometimes it's tongue in cheek, but you know, we've fulfilled What are some of the outcomes that at this stage you were expecting customers to be able to achieve, the cost, the way you did six months ago or seven months ago, or six months in a day ago that you So then yeah, you can see exactly how much money you save. We better let you go because you probably have a ton of meeting. So this has been great. Thank you again for your time.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
David | PERSON | 0.99+
Dave Nicholson | PERSON | 0.99+
Lisa Martin | PERSON | 0.99+
California | LOCATION | 0.99+
Dave | PERSON | 0.99+
Keith | PERSON | 0.99+
Brandon Jackson | PERSON | 0.99+
Keith Norbie | PERSON | 0.99+
San Francisco | LOCATION | 0.99+
AWS | ORGANIZATION | 0.99+
Chicago | LOCATION | 0.99+
CDW | ORGANIZATION | 0.99+
July | DATE | 0.99+
last week | DATE | 0.99+
Ferrari | ORGANIZATION | 0.99+
Aston Martin | ORGANIZATION | 0.99+
CDWs | ORGANIZATION | 0.99+
next week | DATE | 0.99+
VMware | ORGANIZATION | 0.99+
yesterday | DATE | 0.99+
Brandon | PERSON | 0.99+
12 | QUANTITY | 0.99+
Phil Brons | PERSON | 0.99+
EMC | ORGANIZATION | 0.99+
NetApp | ORGANIZATION | 0.99+
500 VMs | QUANTITY | 0.99+
three customers | QUANTITY | 0.99+
200 VMs | QUANTITY | 0.99+
fourth | QUANTITY | 0.99+
NetApp | TITLE | 0.99+
two | QUANTITY | 0.99+
six months later | DATE | 0.99+
25 | QUANTITY | 0.99+
last year | DATE | 0.99+
First | QUANTITY | 0.99+
six months ago | DATE | 0.99+
one partner | QUANTITY | 0.99+
ESX | TITLE | 0.99+
Two | QUANTITY | 0.99+
5 | DATE | 0.99+
seven months ago | DATE | 0.99+
over 12 years | QUANTITY | 0.99+
six months | QUANTITY | 0.98+
Brendan | PERSON | 0.98+
three | DATE | 0.98+
2022 | DATE | 0.98+
this week | DATE | 0.98+
Keith normy | PERSON | 0.98+
six weeks | QUANTITY | 0.98+
25, 50% | QUANTITY | 0.97+
50% | QUANTITY | 0.97+
today | DATE | 0.97+
6 | DATE | 0.97+
four | QUANTITY | 0.97+
both | QUANTITY | 0.97+
one | QUANTITY | 0.97+
FSX | TITLE | 0.96+

Wasabi | Secure Storage Hot Takes


 

>> The rapid rise of ransomware attacks has added yet another challenge that business technology executives have to worry about these days; cloud storage, immutability, and air gaps have become must-have arrows in the quiver of organizations' data protection strategies. But the important reality that practitioners have embraced is that data protection can't be an afterthought or a bolt-on; it has to be designed into the operational workflow of technology systems. The problem is, oftentimes, data protection is complicated, with a variety of different products, services, software components, and storage formats. This is why object storage is moving to the forefront of data protection use cases: because it's simpler and less expensive. The put data, get data syntax has always been alluring, but object storage, historically, was seen as this low-cost niche solution that couldn't offer the performance required for demanding workloads, forcing customers to make hard tradeoffs between cost and performance. That has changed; the ascendancy of cloud storage generally, and the S3 format specifically, has catapulted object storage to become a first-class citizen and a mainstream technology. Moreover, innovative companies have invested to bring object storage performance to parity with other storage formats, but cloud costs are often a barrier for many companies as the monthly cloud bill, and egress fees in particular, steadily climb. Welcome to Secure Storage Hot Takes, my name is Dave Vellante, and I'll be your host of the program today, where we introduce our community to Wasabi, a company that is purpose-built to solve this specific problem with what it claims to be the most cost effective and secure solution on the market. We have three segments today to dig into these issues: first up is David Friend, the well known entrepreneur who co-founded Carbonite and now Wasabi; we'll then dig into the product with Drew Schlussel of Wasabi, and then we'll bring in the customer perspective with Kevin Warenda of The Hotchkiss School. Let's get right into it. We're here with David Friend, the President and CEO and Co-founder of Wasabi, the hot storage company, David, welcome to theCUBE. >> Thanks Dave, nice to be here. >> Great to have you, so look, you hit a home run with Carbonite back when building a unicorn was a lot more rare than it has been in the last few years, why did you start Wasabi? >> Well, when I was still CEO of Carbonite, my genius co-founder Jeff Flowers and our chief architect came to me and said, you know, when we started this company, a state of the art disk drive was probably 500 gigabytes, and now we're looking at eight terabyte, 16 terabyte, 20 terabyte, even 100 terabyte drives coming down the road and, you know, sooner or later the old architectures that were designed around these much smaller disk drives are going to run out of steam, because even though the capacities are getting bigger and bigger, the speed with which you can get data on and off of a hard drive isn't really changing all that much. And Jeff foresaw a day when the architectures of sort of legacy storage like Amazon S3 and so forth were going to become very inefficient and slow. And so he came up with a new, highly parallelized architecture, and he said, I want to go off and see if I can make this work.
So I said, you know, good luck, go to it, and they went off and spent about a year and a half in the lab designing and testing this new storage architecture, and when they got it working, I looked at the economics of this and I said, holy cow, we can sell cloud storage for a fraction of the price of Amazon, still make very good gross margins, and it will be faster. So this is a whole new generation of object storage that you guys have invented. So I recruited a new CEO for Carbonite and left to found Wasabi, because the market for cloud storage is almost infinite. You know, when you look at all the world's data, you know, IDC has these crazy numbers, 120 zettabytes or something like that, and if you look at that as, you know, the potential market size around that data, we're talking trillions of dollars, not billions, and so I said, look, this is a great opportunity. If you look back 10 years, all the world's data was on-prem; if you look forward 10 years, most people agree that most of the world's data is going to live in the cloud. We're at the beginning of this migration, we've got an opportunity here to build an enormous company. >> That's very exciting. I mean, you've always been a trend spotter, and I want to get your perspectives on data protection and how it's changed. It's obviously on people's minds with all the ransomware attacks and security breaches, but thinking about your experiences and past observations, what's changed in data protection and what's driving the current very high interest in the topic? >> Well, I think, you know, from a data protection standpoint, immutability, the equivalent of the old WORM tapes but applied to cloud storage, has, you know, become core to the backup strategies and disaster recovery strategies for most companies. And if you look at our partners who make backup software, like Veeam, Commvault, Veritas, Arcserve, and so forth, most of them are really taking advantage of immutable cloud storage as a way to protect customer data, customers' backups, from ransomware. So the ransomware guys are pretty clever and they, you know, they discovered early on that if someone could do a full restore from their backups, they're never going to pay a ransom. So, once they penetrate your system, they get pretty good at sort of watching how you do your backups, and before they encrypt your primary data, they figure out some way to destroy or encrypt your backups as well, so that you can't do a full restore from your backups. And that's where immutability comes in. You know, in the old days, you wrote what was called a WORM tape, you know, write once, read many, and those could not be overwritten or modified once they were written. And so we said, let's come up with an equivalent of that for the cloud, and it's very tricky software, you know, it involves all kinds of encryption algorithms and blockchain and this kind of stuff, but, you know, the net result is, if you store your backups in immutable buckets, in a product like Wasabi, you can't alter it or delete it for some period of time, so you could put a timer on it, say a year or six months or something like that; once that data is written, you know, there's no way you can go in and change it, modify it, or anything like that, including even Wasabi's engineers.
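To make that immutability model concrete, here is a minimal sketch of how an S3-compatible bucket with Object Lock might be set up and written to from Python with boto3. The endpoint URL, credentials, bucket and key names, and 180-day retention window are illustrative assumptions rather than values from the interview, and the exact settings a given provider supports should be checked against its documentation.

```python
# Minimal sketch: WORM-style immutable backups on an S3-compatible object store.
# The endpoint, credentials, bucket/key names, and retention window below are
# hypothetical placeholders, not values taken from the interview.
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-object-store.com",  # assumed endpoint
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Object Lock has to be enabled when the bucket is created.
s3.create_bucket(Bucket="backups-immutable", ObjectLockEnabledForBucket=True)

# Default retention: every new object version is locked for 180 days.
s3.put_object_lock_configuration(
    Bucket="backups-immutable",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 180}},
    },
)

# An individual object can also carry an explicit retain-until date.
s3.put_object(
    Bucket="backups-immutable",
    Key="backups/full-2022-07-01.bak",       # placeholder key
    Body=b"...backup archive contents...",   # in practice, the backup file
    ObjectLockMode="COMPLIANCE",
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=180),
)

# Until that date passes, deletes and overwrites of this version are rejected.
print(s3.get_object_retention(Bucket="backups-immutable",
                              Key="backups/full-2022-07-01.bak"))
```

In practice the backup application makes these calls on your behalf; the design point David describes is that the retention is enforced by the storage service itself, so neither the client, a compromised administrator account, nor the provider's own engineers can shorten it.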
>> So, David, I want to ask you about data sovereignty. It's obviously a big deal, I mean, especially for companies with a presence overseas, but really it's any digital business these days. How should companies think about approaching data sovereignty? Is it just large firms that should be worried about this? Or should everybody be concerned? What's your point of view? >> Well, all around the world countries are imposing data sovereignty laws, and if you're in the storage business, like we are, if you don't have physical data storage in-country, you're probably not going to get most of the business. You know, since Christmas we've built data centers in Toronto, London, Frankfurt, Paris, Sydney, Singapore, and I've probably forgotten one or two, but the reason we do that is twofold: one is, you know, if you're closer to the customer, you're going to get better response time, lower latency, and that's just a speed of light issue. But the bigger issue is, if you've got financial data, if you have healthcare data, if you have data relating to security, like surveillance videos, and things of that sort, most countries are saying that data has to be stored in-country, so you can't send it across borders to some other place. And if your business operates in multiple countries, you know, dealing with data sovereignty is going to become an increasingly important problem. >> So in May of 2018, that's when the fines associated with violating GDPR went into effect, and GDPR was like this mainspring of privacy and data protection laws, and we've seen it spawn other public policy things like the CCPA, and I think it continues to evolve; we see judgments in Europe against big tech, and this techlash that's in the news in the U.S., and the elimination of third party cookies. What does this all mean for data protection in the 2020s? >> Well, you know, every region and every country, you know, has their own idea about privacy, about security, about even the use of metadata surrounding, you know, customer data and things of this sort. So, you know, it's getting to be increasingly complicated, because GDPR, for example, imposes different standards from the kind of privacy standards that we have here in the U.S., Canada has a somewhat different set of data sovereignty issues and privacy issues, so it's getting to be an increasingly complex, you know, mosaic of rules and regulations around the world, and this makes it even more difficult for enterprises to run their own, you know, infrastructure. Because companies like Wasabi, where we have physical data centers in all kinds of different markets around the world, have already dealt with the business of how to meet the requirements of GDPR and how to meet the requirements of some of the countries in Asia and so forth, you know, rather than an enterprise doing that just for themselves. If you're running your applications or keeping your data in the cloud, you know, now a company like Wasabi with, you know, 34,000 customers can go to all the trouble of meeting these local requirements on behalf of our entire customer base, and that's a lot more efficient and a lot more cost effective than if each individual company has to go deal with the local regulatory authorities. >> Yeah, it's compliance by design, not by chance. Okay, let's zoom out for the final question, David, thinking about the discussion that we've had around ransomware and data protection and regulations, what does it mean for a business's operational strategy and how do you think organizations will need to adapt in the coming years?
>> Well, you know, I think there are a lot of forces driving companies to the cloud and, you know, I do believe that if you come back five or 10 years from now, you're going to see the majority of the world's data is going to be living in the cloud, and I think data storage is going to be a commodity, much like electricity or bandwidth, and it's going to be done right: it will comply with the local regulations, it'll be fast, it'll be local, and there will be no strategic advantage that I can think of for somebody to stand up and run their own storage, especially considering the cost differential. You know, most analysts think that the full, all-in costs of running your own storage are in the $20 to $40 per terabyte per month range, whereas, you know, if you migrate your data to the cloud, like Wasabi, you're talking probably $6 a terabyte per month, and so I think people are learning how to deal with the idea of an architecture that involves storing your data in the cloud, as opposed to, you know, storing your data locally. >> Wow, that's like six X more expensive than the cloud, more than six X. All right, thank you, David. >> In addition to which, you know, just finding the people to babysit this kind of equipment has become nearly impossible today. >> Well, and with a focus on digital business, you don't want to be wasting your time with that kind of heavy lifting. David, thanks so much for coming in theCUBE, a great Boston entrepreneur, we've followed your career for a long time and looking forward to the future. >> Thank you. >> Okay, in a moment, Drew Schlussel will join me and we're going to dig more into product, you're watching theCUBE, the leader in enterprise and emerging tech coverage, keep it right there. ♪ Whoa ♪ ♪ Brenda in sales got an email ♪ ♪ Click here for a trip to Bombay ♪ ♪ It's not even called Bombay anymore ♪ ♪ But you clicked it anyway ♪ ♪ And now our data's been held hostage ♪ ♪ And now we're on a sinking ship ♪ ♪ And a hacker's in our system ♪ ♪ Just 'cause Brenda wanted a trip ♪ ♪ She clicked on something stupid ♪ ♪ And our data's out of our control ♪ ♪ Into the hands of a hacker ♪ ♪ And he's a giant asshole. ♪ ♪ He encrypted it in his basement ♪ ♪ He wants a million bucks for the key ♪ ♪ And I'm pretty sure he's 15 ♪ ♪ And still going through puberty ♪ ♪ I know you didn't mean to do us wrong ♪ ♪ But now I'm dealing with this all week long ♪ ♪ To make you all aware ♪ ♪ Of all this ransomware ♪ ♪ That is why I'm singing you this song ♪ ♪ C'mon ♪ ♪ Take it from me ♪ ♪ The director of IT ♪ ♪ Don't click on that email from a prince in Nairobi ♪ ♪ 'Cuz he's not really a prince ♪ ♪ Now our data's locked up on our screen ♪ ♪ Controlled by a kid who's just fifteen ♪ ♪ And he's using our money to buy a Ferrari ♪ (gentle music) >> Joining me now is Drew Schlussel, who is the Senior Director of Product Marketing at Wasabi, hey Drew, good to see you again, thanks for coming back in theCUBE. >> Dave, great to be here, great to see you. >> All right, let's get into it. You know, Drew, prior to the pandemic, Zero Trust, just like kind of like digital transformation, was sort of a buzzword, and now it's become a real thing, almost a mandate. What's Wasabi's take on Zero Trust? >> So, absolutely right, it's been around a while and now people are paying attention. Wasabi's take is Zero Trust is a good thing. You know, there are too many places, right, where the bad guys are getting in. And, you know, I think of Zero Trust as kind of smashing laziness, right?
It takes a little work, it takes some planning, but you know, done properly and using the right technologies, using the right vendors, the rewards are, of course tremendous, right? You can put to rest the fears of ransomware and having your systems compromised. >> Well, and we're going to talk about this, but there's a lot of process and thinking involved and, you know, design and your Zero Trust and you don't want to be wasting time messing with infrastructure, so we're going to talk about that, there's a lot of discussion in the industry, Drew, about immutability and air gaps, I'd like you to share Wasabi's point of view on these topics, how do you approach it and what makes Wasabi different? >> So, in terms of air gap and immutability, right, the beautiful thing about object storage, which is what we do all the time is that it makes it that much easier, right, to have a secure immutable copy of your data someplace that's easy to access and doesn't cost you an arm and a leg to get your data back. You know, we're working with some of the best, you know, partners in the industry, you know, we're working with folks like, you know, Veeam, Commvault, Arc, Marquee, MSP360, all folks who understand that you need to have multiple copies of your data, you need to have a copy stored offsite, and that copy needs to be immutable and we can talk a little bit about what immutability is and what it really means. >> You know, I wonder if you could talk a little bit more about Wasabi's solution because, sometimes people don't understand, you actually are a cloud, you're not building on other people's public clouds and this storage is the one use case where it actually makes sense to do that, tell us a little bit more about Wasabi's approach and your solution. >> Yeah, I appreciate that, so there's definitely some misconception, we are our own cloud storage service, we don't run on top of anybody else, right, it's our systems, it's our software deployed globally and we interoperate because we adhere to the S3 standard, we interoperate with practically hundreds of applications, primarily in this case, right, we're talking about backup and recovery applications and it's such a simple process, right? I mean, just about everybody who's anybody in this business protecting data has the ability now to access cloud storage and so we've made it really simple, in many cases, you'll see Wasabi as you know, listed in the primary set of available vendors and, you know, put in your private keys, make sure that your account is locked down properly using, let's say multifactor authentication, and you've got a great place to store copies of your data securely. >> I mean, we just heard from David Friend, if I did my math right, he was talking about, you know, 1/6 the cost per terabyte per month, maybe even a little better than that, how are you able to achieve such attractive economics? >> Yeah, so, you know, I can't remember how to translate my fractions into percentages, but I think we talk a lot about being 80%, right, less expensive than the hyperscalers. And you know, we talked about this at Vermont, right? There's some secret sauce there and you know, we take a different approach to how we utilize the raw capacity to the effective capacity and the fact is we're also not having to run, you know, a few hundred other services, right? 
We do storage, plain and simple, all day, all the time, so we don't have to worry about overhead to support, you know, up and coming other services that are perhaps, you know, going to be a loss leader, right? Customers love it, right, they see the fact that their data is growing 40, 80% year over year, they know they need to have some place to keep it secure, and, you know, folks are flocking to us in droves, in fact, we're seeing a tremendous amount of migration actually right now, multiple petabytes being brought to Wasabi because folks have figured out that they can't afford to keep going with their current hyperscaler vendor. >> And immutability is a feature of your product, right? What the feature called? Can you double-click on that a little bit? >> Yeah, absolutely. So, the term in S3 is Object Lock and what that means is your application will write an object to cloud storage, and it will define a retention period, let's say a week. And for that period, that object is immutable, untouchable, cannot be altered in any way, shape, or form, the application can't change it, the system administration can't change it, Wasabi can't change it, okay, it is truly carved in stone. And this is something that it's been around for a while, but you're seeing a huge uptick, right, in adoption and support for that feature by all the major vendors and I named off a few earlier and the best part is that with immutability comes some sense of, well, it comes with not just a sense of security, it is security. Right, when you have data that cannot be altered by anybody, even if the bad guys compromise your account, they steal your credentials, right, they can't take away the data and that's a beautiful thing, a beautiful, beautiful thing. >> And you look like an S3 bucket, is that right? >> Yeah, I mean, we're fully compatible with the S3 API, so if you're using S3 API based applications today, it's a very simple matter of just kind of redirecting where you want to store your data, beautiful thing about backup and recovery, right, that's probably the simplest application, simple being a relative term, as far as lift and shift, right? Because that just means for your next full, right, point that at Wasabi, retain your other fulls, you know, for whatever 30, 60, 90 days, and then once you've kind of made that transition from vine to vine, you know, you're often running with Wasabi. >> I talked to my open about the allure of object storage historically, you know, the simplicity of the get put syntax, but what about performance? Are you able to deliver performance that's comparable to other storage formats? >> Oh yeah, absolutely, and we've got the performance numbers on the site to back that up, but I forgot to answer something earlier, right, you said that immutability is a feature and I want to make it very clear that it is a feature but it's an API request. Okay, so when you're talking about gets and puts and so forth, you know, the comment you made earlier about being 80% more cost effective or 80% less expensive, you know, that API call, right, is typically something that the other folks charge for, right, and I think we used the metaphor earlier about the refrigerator, but I'll use a different metaphor today, right? You can think of cloud storage as a magical coffee cup, right? It gets as big as you want to store as much coffee as you want and the coffee's always warm, right? 
And when you want to take a sip, there's no charge, you want to, you know, pop the lid and see how much coffee is in there, no charge, and that's an important thing, because when you're talking about millions or billions of objects, and you want to get a list of those objects, or you want to get the status of the immutable settings for those objects, anywhere else it's going to cost you money to look at your data, with Wasabi, no additional charge and that's part of the thing that sets us apart. >> Excellent, so thank you for that. So, you mentioned some partners before, how do partners fit into the Wasabi story? Where do you stop? Where do they pick up? You know, what do they bring? Can you give us maybe, a paint a picture for us example, or two? >> Sure, so, again, we just do storage, right, that is our sole purpose in life is to, you know, to safely and securely store our customer's data. And so they're working with their application vendors, whether it's, you know, active archive, backup and recovery, IOT, surveillance, media and entertainment workflows, right, those systems already know how to manage the data, manage the metadata, they just need some place to keep the data that is being worked on, being stored and so forth. Right, so just like, you know, plugging in a flash drive on your laptop, right, you literally can plug in Wasabi as long as your applications support the API, getting started is incredibly easy, right, we offer a 30-day trial, one terabyte, and most folks find that within, you know, probably a few hours of their POC, right, it's giving them everything they need in terms of performance, in terms of accessibility, in terms of sovereignty, I'm guessing you talked to, you know, Dave Friend earlier about data sovereignty, right? We're global company, right, so there's got to be probably, you know, wherever you are in the world some place that will satisfy your sovereignty requirements, as well as your compliance requirements. >> Yeah, we did talk about sovereignty, Drew, this is really, what's interesting to me, I'm a bit of a industry historian, when I look back to the early days of cloud, I remember the large storage companies, you know, their CEOs would say, we're going to have an answer for the cloud and they would go out, and for instance, I know one bought competitor of Carbonite, and then couldn't figure out what to do with it, they couldn't figure out how to compete with the cloud in part, because they were afraid it was going to cannibalize their existing business, I think another part is because they just didn't have that imagination to develop an architecture that in a business model that could scale to see that you guys have done that is I love it because it brings competition, it brings innovation and it helps lower clients cost and solve really nagging problems. Like, you know, ransomware, of mutability and recovery, I'll give you the last word, Drew. >> Yeah, you're absolutely right. You know, the on-prem vendors, they're not going to go away anytime soon, right, there's always going to be a need for, you know, incredibly low latency, high bandwidth, you know, but, you know, not all data's hot all the time and by hot, I mean, you know, extremely hot, you know, let's take, you know, real time analytics for, maybe facial recognition, right, that requires sub-millisecond type of processing. 
But once you've done that work, right, you want to store that data for a long, long time, and you're going to want to also tap back into it later, so, you know, other folks are telling you that, you know, you can go to these like, you know, cold glacial type of tiered storage, yeah, don't believe the hype, you're still going to pay way more for that than you would with just a Wasabi-like hot cloud storage system. And, you know, we don't compete with our partners, right? We compliment, you know, what they're bringing to market in terms of the software vendors, in terms of the hardware vendors, right, we're a beautiful component for that hybrid cloud architecture. And I think folks are gravitating towards that, I think the cloud is kind of hitting a new gear if you will, in terms of adoption and recognition for the security that they can achieve with it. >> All right, Drew, thank you for that, definitely we see the momentum, in a moment, Drew and I will be back to get the customer perspective with Kevin Warenda, who's the Director of Information technology services at The Hotchkiss School, keep it right there. >> Hey, I'm Nate, and we wrote this song about ransomware to educate people, people like Brenda. >> Oh, God, I'm so sorry. We know you are, but Brenda, you're not alone, this hasn't just happened to you. >> No! ♪ Colonial Oil Pipeline had a guy ♪ ♪ who didn't change his password ♪ ♪ That sucks ♪ ♪ His password leaked, the data was breached ♪ ♪ And it cost his company 4 million bucks ♪ ♪ A fake update was sent to people ♪ ♪ Working for the meat company JBS ♪ ♪ That's pretty clever ♪ ♪ Instead of getting new features, they got hacked ♪ ♪ And had to pay the largest crypto ransom ever ♪ ♪ And 20 billion dollars, billion with a b ♪ ♪ Have been paid by companies in healthcare ♪ ♪ If you wonder buy your premium keeps going ♪ ♪ Up, up, up, up, up ♪ ♪ Now you're aware ♪ ♪ And now the hackers they are gettin' cocky ♪ ♪ When they lock your data ♪ ♪ You know, it has gotten so bad ♪ ♪ That they demand all of your money and it gets worse ♪ ♪ They go and the trouble with the Facebook ad ♪ ♪ Next time, something seems too good to be true ♪ ♪ Like a free trip to Asia! ♪ ♪ Just check first and I'll help before you ♪ ♪ Think before you click ♪ ♪ Don't get fooled by this ♪ ♪ Who isn't old enough to drive to school ♪ ♪ Take it from me, the director of IT ♪ ♪ Don't click on that email from a prince in Nairobi ♪ ♪ Because he's not really a prince ♪ ♪ Now our data's locked up on our screen ♪ ♪ Controlled by a kid who's just fifteen ♪ ♪ And he's using our money to buy a Ferrari ♪ >> It's a pretty sweet car. ♪ A kid without facial hair, who lives with his mom ♪ ♪ To learn more about this go to wasabi.com ♪ >> Hey, don't do that. ♪ Cause if we had Wasabi's immutability ♪ >> You going to ruin this for me! ♪ This fifteen-year-old wouldn't have on me ♪ (gentle music) >> Drew and I are pleased to welcome Kevin Warenda, who's the Director of Information Technology Services at The Hotchkiss School, a very prestigious and well respected boarding school in the beautiful Northwest corner of Connecticut, hello, Kevin. >> Hello, it's nice to be here, thanks for having me. >> Yeah, you bet. Hey, tell us a little bit more about The Hotchkiss School and your role. 
>> Sure, The Hotchkiss School is an independent boarding school, grades nine through 12, as you said, very prestigious and in an absolutely beautiful location on the deepest freshwater lake in Connecticut, we have 500 acre main campus and a 200 acre farm down the street. My role as the Director of Information Technology Services, essentially to oversee all of the technology that supports the school operations, academics, sports, everything we do on campus. >> Yeah, and you've had a very strong history in the educational field, you know, from that lens, what's the unique, you know, or if not unique, but the pressing security challenge that's top of mind for you? >> I think that it's clear that educational institutions are a target these days, especially for ransomware. We have a lot of data that can be used by threat actors and schools are often underfunded in the area of IT security, IT in general sometimes, so, I think threat actors often see us as easy targets or at least worthwhile to try to get into. >> Because specifically you are potentially spread thin, underfunded, you got students, you got teachers, so there really are some, are there any specific data privacy concerns as well around student privacy or regulations that you can speak to? >> Certainly, because of the fact that we're an independent boarding school, we operate things like even a health center, so, data privacy regulations across the board in terms of just student data rights and FERPA, some of our students are under 18, so, data privacy laws such as COPPA apply, HIPAA can apply, we have PCI regulations with many of our financial transactions, whether it be fundraising through alumni development, or even just accepting the revenue for tuition so, it's a unique place to be, again, we operate very much like a college would, right, we have all the trappings of a private college in terms of all the operations we do and that's what I love most about working in education is that it's all the industries combined in many ways. >> Very cool. So let's talk about some of the defense strategies from a practitioner point of view, then I want to bring in Drew to the conversation so what are the best practice and the right strategies from your standpoint of defending your data? >> Well, we take a defense in-depth approach, so we layer multiple technologies on top of each other to make sure that no single failure is a key to getting beyond those defenses, we also keep it simple, you know, I think there's some core things that all organizations need to do these days in including, you know, vulnerability scanning, patching , using multifactor authentication, and having really excellent backups in case something does happen. >> Drew, are you seeing any similar patterns across other industries or customers? I mean, I know we're talking about some uniqueness in the education market, but what can we learn from other adjacent industries? >> Yeah, you know, Kevin is spot on and I love hearing what he's doing, going back to our prior conversation about Zero Trust, right, that defense in-depth approach is beautifully aligned, right, with the Zero Trust approach, especially things like multifactor authentication, always shocked at how few folks are applying that very, very simple technology and across the board, right? 
I mean, Kevin is referring to, you know, financial industry, healthcare industry, even, you know, the security and police, right, they need to make sure that the data that they're keeping, evidence, right, is secure and immutable, right, because that's evidence. >> Well, Kevin, paint a picture for us, if you would. So, you were primarily on-prem looking at potentially, you know, using more cloud, you were a VMware shop, but tell us, paint a picture of your environment, kind of the applications that you support and the kind of, I want to get to the before and the after Wasabi, but start with kind of where you came from. >> Sure, well, I came to The Hotchkiss School about seven years ago and I had come most recently from public K12 and municipal, so again, not a lot of funding for IT in general, security, or infrastructure in general, so Nutanix was actually a hyperconverged solution that I implemented at my previous position. So when I came to Hotchkiss and found mostly on-prem workloads, everything from the student information system to the card access system that students would use, financial systems, they were almost all on premise, but there were some new SaaS solutions coming in play, we had also taken some time to do some business continuity, planning, you know, in the event of some kind of issue, I don't think we were thinking about the pandemic at the time, but certainly it helped prepare us for that, so, as different workloads were moved off to hosted or cloud-based, we didn't really need as much of the on-premise compute and storage as we had, and it was time to retire that cluster. And so I brought the experience I had with Nutanix with me, and we consolidated all that into a hyper-converged platform, running Nutanix AHV, which allowed us to get rid of all the cost of the VMware licensing as well and it is an easier platform to manage, especially for small IT shops like ours. >> Yeah, AHV is the Acropolis hypervisor and so you migrated off of VMware avoiding the VTax avoidance, that's a common theme among Nutanix customers and now, did you consider moving into AWS? You know, what was the catalyst to consider Wasabi as part of your defense strategy? >> We were looking at cloud storage options and they were just all so expensive, especially in egress fees to get data back out, Wasabi became across our desks and it was such a low barrier to entry to sign up for a trial and get, you know, terabyte for a month and then it was, you know, $6 a month for terabyte. After that, I said, we can try this out in a very low stakes way to see how this works for us. And there was a couple things we were trying to solve at the time, it wasn't just a place to put backup, but we also needed a place to have some files that might serve to some degree as a content delivery network, you know, some of our software applications that are deployed through our mobile device management needed a place that was accessible on the internet that they could be stored as well. 
So we were testing it for a couple different scenarios and it worked great, you know, performance wise, fast, security wise, it has all the features of S3 compliance that works with Nutanix and anyone who's familiar with S3 permissions can apply them very easily and then there was no egress fees, we can pull data down, put data up at will, and it's not costing as any extra, which is excellent because especially in education, we need fixed costs, we need to know what we're going to spend over a year before we spend it and not be hit with, you know, bills for egress or because our workload or our data storage footprint grew tremendously, we need that, we can't have the variability that the cloud providers would give us. >> So Kevin, you explained you're hypersensitive about security and privacy for obvious reasons that we discussed, were you concerned about doing business with a company with a funny name? Was it the trial that got you through that knothole? How did you address those concerns as an IT practitioner? >> Yeah, anytime we adopt anything, we go through a risk review. So we did our homework and we checked the funny name really means nothing, there's lots of companies with funny names, I think we don't go based on the name necessarily, but we did go based on the history, understanding, you know, who started the company, where it came from, and really looking into the technology and understanding that the value proposition, the ability to provide that lower cost is based specifically on the technology in which it lays down data. So, having a legitimate, reasonable, you know, excuse as to why it's cheap, we weren't thinking, well, you know, you get what you pay for, it may be less expensive than alternatives, but it's not cheap, you know, it's reliable, and that was really our concern. So we did our homework for sure before even starting the trial, but then the trial certainly confirmed everything that we had learned. >> Yeah, thank you for that. Drew, explain the whole egress charge, we hear a lot about that, what do people need to know? >> First of all, it's not a funny name, it's a memorable name, Dave, just like theCUBE, let's be very clear about that, second of all, egress charges, so, you know, other storage providers charge you for every API call, right? Every get, every put, every list, everything, okay, it's part of their process, it's part of how they make money, it's part of how they cover the cost of all their other services, we don't do that. And I think, you know, as Kevin has pointed out, right, that's a huge differentiator because you're talking about a significant amount of money above and beyond what is the list price. In fact, I would tell you that most of the other storage providers, hyperscalers, you know, their list price, first of all, is, you know, far exceeding anything else in the industry, especially what we offer and then, right, their additional cost, the egress costs, the API requests can be two, three, 400% more on top of what you're paying per terabyte. >> So, you used a little coffee analogy earlier in our conversation, so here's what I'm imagining, like I have a lot of stuff, right? And I had to clear up my bar and I put some stuff in storage, you know, right down the street and I pay them monthly, I can't imagine having to pay them to go get my stuff, that's kind of the same thing here. >> Oh, that's a great metaphor, right? That storage locker, right? 
You know, can you imagine every time you want to open the door to that storage locker and look inside having to pay a fee? >> No, that would be annoying. >> Or, every time you pull into the yard and you want to put something in that storage locker, you have to pay an access fee to get to the yard, you have to pay a door opening fee, right, and then if you want to look and get an inventory of everything in there, you have to pay, and it's ridiculous, it's your data, it's your storage, it's your locker, you've already paid the annual fee, probably, 'cause they gave you a discount on that, so why shouldn't you have unfettered access to your data? That's what Wasabi does and I think as Kevin pointed out, right, that's what sets us completely apart from everybody else. >> Okay, good, that's helpful, it helps us understand how Wasabi's different. Kevin, I'm always interested when I talk to practitioners like yourself in learning what you do, you know, outside of the technology, what are you doing in terms of educating your community and making them more cyber aware? Do you have training for students and faculty to learn about security and ransomware protection, for example? >> Yes, cyber security awareness training is definitely one of the required things everyone should be doing in their organizations. And we do have a program that we use and we try to make it fun and engaging too, right, this is often the checking the box kind of activity, insurance companies require it, but we want to make it something that people want to do and want to engage with so, even last year, I think we did one around the holidays and kind of pointed out the kinds of scams they may expect in their personal life about, you know, shipping of orders and time for the holidays and things like that, so it wasn't just about protecting our school data, it's about the fact that, you know, protecting their information is something do in all aspects of your life, especially now that the folks are working hybrid often working from home with equipment from the school, the stakes are much higher and people have a lot of our data at home and so knowing how to protect that is important, so we definitely run those programs in a way that we want to be engaging and fun and memorable so that when they do encounter those things, especially email threats, they know how to handle them. >> So when you say fun, it's like you come up with an example that we can laugh at until, of course, we click on that bad link, but I'm sure you can come up with a lot of interesting and engaging examples, is that what you're talking about, about having fun? >> Yeah, I mean, sometimes they are kind of choose your own adventure type stories, you know, they stop as they run, so they're telling a story and they stop and you have to answer questions along the way to keep going, so, you're not just watching a video, you're engaged with the story of the topic, yeah, and that's what I think is memorable about it, but it's also, that's what makes it fun, you're not just watching some talking head saying, you know, to avoid shortened URLs or to check, to make sure you know the sender of the email, no, you're engaged in a real life scenario story that you're kind of following and making choices along the way and finding out was that the right choice to make or maybe not? So, that's where I think the learning comes in. >> Excellent. Okay, gentlemen, thanks so much, appreciate your time, Kevin, Drew, awesome having you in theCUBE. >> My pleasure, thank you. 
>> Yeah, great to be here, thanks. >> Okay, in a moment, I'll give you some closing thoughts on the changing world of data protection and the evolution of cloud object storage, you're watching theCUBE, the leader in high tech enterprise coverage. >> Announcer: Some things just don't make sense, like showing up a little too early for the big game. >> How early are we? >> Couple months. Popcorn? >> Announcer: On and off season, the Red Sox cover their bases with affordable, best in class cloud storage. >> These are pretty good seats. >> Hey, have you guys seen the line from the bathroom? >> Announcer: Wasabi Hot Cloud Storage, it just makes sense. >> You don't think they make these in left hand, do you? >> We learned today how a serial entrepreneur, along with his co-founder saw the opportunity to tap into the virtually limitless scale of the cloud and dramatically reduce the cost of storing data while at the same time, protecting against ransomware attacks and other data exposures with simple, fast storage, immutability, air gaps, and solid operational processes, let's not forget about that, okay? People and processes are critical and if you can point your people at more strategic initiatives and tasks rather than wrestling with infrastructure, you can accelerate your process redesign and support of digital transformations. Now, if you want to learn more about immutability and Object Block, click on the Wasabi resource button on this page, or go to wasabi.com/objectblock. Thanks for watching Secure Storage Hot Takes made possible by Wasabi. This is Dave Vellante for theCUBE, the leader in enterprise and emerging tech coverage, well, see you next time. (gentle upbeat music)

Published Date : Jul 11 2022

SUMMARY :

This program introduces Wasabi hot cloud storage. David Friend explains why he left Carbonite to found Wasabi, how immutability has become core to backup and disaster recovery strategies, and how data sovereignty shapes where data is stored. Drew Schlussel digs into Zero Trust, S3 Object Lock, and Wasabi's pricing model with no egress or API charges, and Kevin Warenda of The Hotchkiss School shares how the school uses Wasabi alongside Nutanix for affordable, ransomware-resilient backups and runs security awareness training.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
David | PERSON | 0.99+
Kevin | PERSON | 0.99+
Drew | PERSON | 0.99+
Kevin Warenda | PERSON | 0.99+
Dave Vellante | PERSON | 0.99+
Drew Schlussel | PERSON | 0.99+
Brenda | PERSON | 0.99+
Dave | PERSON | 0.99+
Paris | LOCATION | 0.99+
Jeff Flowers | PERSON | 0.99+
Sydney | LOCATION | 0.99+
Drew Schlussel | PERSON | 0.99+
Singapore | LOCATION | 0.99+
Toronto | LOCATION | 0.99+
London | LOCATION | 0.99+
Wasabi | ORGANIZATION | 0.99+
30-day | QUANTITY | 0.99+
Frankfurt | LOCATION | 0.99+
Amazon | ORGANIZATION | 0.99+
Bombay | LOCATION | 0.99+
Connecticut | LOCATION | 0.99+
Carbonite | ORGANIZATION | 0.99+
15 | QUANTITY | 0.99+
20 | QUANTITY | 0.99+
Jeff | PERSON | 0.99+
Red Sox | ORGANIZATION | 0.99+
Asia | LOCATION | 0.99+
Nairobi | LOCATION | 0.99+
80% | QUANTITY | 0.99+
The Hotchkiss School | ORGANIZATION | 0.99+
JBS | ORGANIZATION | 0.99+
16 terabyte | QUANTITY | 0.99+
Nate | PERSON | 0.99+
David Friend | PERSON | 0.99+
60 | QUANTITY | 0.99+
30 | QUANTITY | 0.99+
U.S. | LOCATION | 0.99+
S3 | TITLE | 0.99+
three | QUANTITY | 0.99+
May of 2018 | DATE | 0.99+
one | QUANTITY | 0.99+
2020s | DATE | 0.99+
two | QUANTITY | 0.99+
fifteen | QUANTITY | 0.99+
Hotchkiss School | ORGANIZATION | 0.99+
Zero Trust | ORGANIZATION | 0.99+
100 terabyte | QUANTITY | 0.99+
500 acre | QUANTITY | 0.99+
first | QUANTITY | 0.99+
200 acre | QUANTITY | 0.99+
Convo | ORGANIZATION | 0.99+
a year | QUANTITY | 0.99+
one terabyte | QUANTITY | 0.99+
34,000 customers | QUANTITY | 0.99+

Matthew Scullion, Matillion & Harveer Singh, Western Union | Snowflake Summit 2022


 

>> Hey everyone, welcome back to Las Vegas. This is theCUBE's live coverage of day one of Snowflake Summit 22, the fourth annual. We're very happy to be here; a lot of people here. I'm Lisa Martin, with Dave Vellante. Dave, it's always great to be at these events with you. This one is shot out of the cannon from day one: data, data, data, data. That's what you hear here. First, we have two guests joining us next. Please welcome Matthew Scullion, who's an alumni of theCUBE, CEO and founder of Matillion, and Harveer Singh, chief data architect and global head of data engineering at Western Union. Welcome, gentlemen. >> Thank you. Great to be here. >> We're gonna unpack the Western Union story in a second, I love that, but Matthew, I wanted to start with you. Give the audience who might not be familiar with Matillion an overview: your vision, your differentiators, your joint value statement with Snowflake. >> Of course. Well, first of all, thank you for having me on theCUBE again. Matillion's mission is to make the world's data useful, and we do that by providing a technology platform that allows our customers to load, transform, synchronize, and orchestrate data on the Snowflake data cloud, and on the cloud in general. We've been doing that for a number of years. We're co-headquartered in the UK and the US, hence the accent. And we work with all sorts of companies, from commercial scale to large enterprises, including of course, I'm delighted to say, our friends at Western Union. So that's why we're here today. >> And we're gonna talk about that in a second, but I wanna understand what's new with the data integration platform from a Matillion perspective. Lots of stuff coming out, give us an overview. >> Yeah, of course, it's been a really busy year, and it's great to be here at Snowflake Summit to be able to share some of what we've been working on. You know, the Matillion platform is all about making our customers as productive as possible in terms of time to value and insight on their analytics, data science, and AI projects, like getting you to value faster. And so the more technology we can put in the platform, and the easier we can make it to use, the better we can achieve that goal. So this year we've shipped a product that we call MDL 2.0: enterprise-focused, exquisitely easy-to-use batch data pipelines, so customers can load data even more simply into the Snowflake data cloud. Very excitingly, we've also launched Matillion CDC, and this is an industry-first, cloud-native, write-ahead-log-based change data capture. I haven't come up with a shorter way of saying that. But, no surprise, customers need this technology; it's been around for years, but mostly as pre-cloud technology that's been repurposed for the cloud, and so Matillion has rebuilt that concept for the cloud, and we launched that earlier this year. And of course we've continued to build out the core Matillion ETL platform that today over a thousand joint Snowflake and Matillion customers use, including Western Union. Of course, we've been adding features there such as universal connectivity. A challenge that all data integration vendors have is having the right connectors for their source systems; universal connectivity allows you to connect to any source system without writing code, point and click. We shipped that as well. So it's been a busy year. >> That was really simply put. Sorry, I love that you said that, and it also sounded great with your accent. I didn't wanna interrupt. >> Thank you.
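As a rough illustration of what write-ahead-log-based change data capture produces (a generic sketch with assumed field names, not Matillion's actual record format or API), a log-based CDC pipeline reads inserts, updates, and deletes from the source database's transaction log and emits one change event per affected row, which can then be merged into a cloud data warehouse table:

```python
# Generic sketch of a log-based CDC change event; the field names are
# illustrative assumptions, not any vendor's actual schema.
change_event = {
    "source": {"database": "orders_db", "table": "orders", "lsn": "0/16B3748"},
    "op": "u",                      # c = insert, u = update, d = delete
    "ts_ms": 1656633600000,         # when the change was committed
    "before": {"order_id": 42, "status": "PENDING"},
    "after":  {"order_id": 42, "status": "SHIPPED"},
}

def to_merge_row(event):
    """Flatten a change event into a row suitable for a MERGE/upsert into
    the target table, carrying a deletion flag for 'd' events."""
    row = dict(event["after"] or event["before"])
    row["_deleted"] = event["op"] == "d"
    row["_change_ts"] = event["ts_ms"]
    return row

print(to_merge_row(change_event))
```

Reading the transaction log rather than repeatedly querying the source tables is what keeps the load on the operational database low while still capturing every change in commit order.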
>> Excellent. Harveer, talk about your role at Western Union and what you've seen in terms of the evolution of the data stack. >> So, in the last few years... well, first, a little bit about Western Union: we're a 170-year-old company, and pretty much everybody knows what Western Union is, right? Drawing an interesting synergy from what Matthew says: when data moves, money moves. That's what we do; when we move the data, we move the money. That's the synergy between, you know, us and the organizations that support us from a data-movement perspective. So what I've seen in the last few years is obviously a shift towards the cloud, but, you know, within the cloud itself, obviously there are a lot of players as well. And we as customers have always wished for a smaller footprint of data, so that the movement becomes a little less. You know, interestingly enough, at this conference I've heard some very interesting stuff which is kind of helping me to bring that footprint down to a manageable number, to be more governed, to be more, you know, effective in terms of delivering more end results for my customers as well. So Matillion has been a great partner for us from a cloud adoption perspective. During the COVID times, we are, you know, a multi-channel organization; we have retail stores as well as our digital presence, but people just couldn't go to the retail stores. So we had to find ways to accelerate our adoption, make sure our systems were scaling, and make sure that we were delivering the same experience to our customers. And that's where, you know, tools like Matillion came in and really partnered up with us to kind of bring it up to that level. >> So talk specifically about the stack evolution, 'cause I have this sort of theory that everybody talks about injecting data and machine intelligence and AI and machine learning into apps, but the application development stack is like totally separate from the data analytics and the data pipeline stack, and the database is somewhere over here as well. How is that evolving? Are those worlds coming together? >> Some parts of those worlds are coming together, but where I still see the difference is, your heavy lifting will still happen on the data stack. You cannot have that heavy lifting on the app, because once the apps become heavy, you'll have trouble communicating with the organizations. You know, you need to be as lean as possible in the front end and make sure things are curated, things are available on demand as soon as possible. And that's why you see all these API-driven applications doing really, really well: because they're delivering those results back to the leaner applications much faster. So I'm a big proponent of, yes, it can be hybrid, but the majority of the heavy lifting still needs to happen down at the data layer, which is where I think Snowflake plays a really good role in. >> APIs are the connective tissue. >> APIs, connections, yes. >> Also, I think, you know, in terms of the data stack, there's another parallel that you can draw from applications, right? So with technologies, when they're new, we tend to do things in a granular way. We write a lot of code. We do a lot of sticking things together with plasters and sticky tape, and it's the purview of high-end engineers and people enthusiastic about that to get started. Then the business starts to see the value in this stuff, and we need to move a lot faster.
And technology solutions come in, and this is what the data cloud is all about, right? The technology getting out of the way and allowing people to focus on higher-order problems of innovating around analytics, data applications, AI, machine learning. You know, that's also where Matillion sits, as well as other companies in this modern enterprise data stack: technology vendors are coming in, allowing organizations to move faster and have high levels of productivity. So I think that's a good parallel to application development. >> And just to follow up on that: when you think about data prep and, you know, all the focus on data quality, you've got a data team in the data pipeline, very specialized, maybe even hyper-specialized data engineers, quality engineers, data quality engineers, data analysts, data scientists, and they serve a lot of different business lines. They don't necessarily have the business context, typically. So it's kind of this back and forth. Do you see that changing in your organization, or are the lines of business taking more responsibility for the data and addressing that problem? >> It's like you die by a thousand paper cuts or you just die, right? That's the kind of... >> Right. >> Because if I say it's good to be federated, it comes with its own flaws. But if I say it's good to be decentralized, then I'm the guy to choke, right? And in my role, I'm the guy to choke. So I've selectively tried to be a pseudo-federated organization, where I do have folks reporting into our organization, but they sit close to the line of business, because the business understands data better. We are working with them hand in glove. We have dedicated teams that support them. And our problem is, we are also regional. We are in 200 countries, so the regional needs are very different than our US needs. The majority of the organizations that you probably end up talking to are very US focused; more than 50% of our revenue is international. So we are dealing with people who are international, and their needs for data, their needs for quality, and their needs for the delivery of those analytics and the data are completely different. And so we have to be a little bit closer to the business than, traditionally, some organizations feel that they need to be. >> Is there a need for the underlying infrastructure and the operational details to be that diverse, or is that something that you bring standardization to? >> So the best part about the cloud that happened to us is exactly that, because at one point of time, I had infrastructure in one country, I had another infrastructure sitting in another country, regional teams making different decisions and bringing in different tools. Now I can standardize. I will say Matillion is our standard for doing ETL work if this is the use case, and then it gets deployed across the geographies, because the cloud, or the cloud platform, helps us to manage it sitting down here. I have three centers around the world, you know, Costa Rica, India, and the US. I can manage 24/7 sitting here, no problem. >> So the underlying infrastructure is global, but the data needs are dealt with locally. >> Yep. >> One of my favorite questions, and I was just thinking Harv is super well positioned for it, is around that business domain knowledge versus technical expertise.
'Cause again, early in technology journeys, things tend to be very technical, and therefore only high-end engineers can do it, but high-end engineers are scarce, right? And also, I mean, we survey our hundreds of large enterprise customers and they tell us they spend two thirds of their time doing stuff they don't really want to do, like reinventing the wheel, basic data movement and the low-order stuff. And so if you can make those people more productive and allow them to focus on higher value problems, but also bring pseudo-technical people into it, overall the business can go a lot faster. And the way you do that is by making it easier. That's why Matillion is a low-code, no-code platform, and Javier and Western Union are doing this right. >>I mean, I can't compete with AWS and Google to hire people. So I need to find people who are smart enough to figure out the products that we have and make them work. I don't want them to spend time on infrastructure, and I don't want them to spend time trying to manage platforms. I want them to deliver the data, deliver the results to the business, so that they can build and serve their customers better. So it's a little bit of a different approach, a different mindset. I used to be in consulting for 17 years. I thought I knew it all, but it changed overnight when I owned all of these systems. And I'm like, I need to be a little bit smarter than this. I need to be more proactive and figure out what my business needs, rather than just what the technology needs. It's more about what the business needs and how I can deliver on those needs for them. So, simple analogy: I can build the best architecture in the world, and it's gonna cost me an arm and a leg, but I can't drive it because the pipeline is not there. So I can have a Ferrari, but I can't drive it; it's still capped at 80 miles an hour. So rather than building one Ferrari, let me have 10 Toyotas or Fords, which will go further along and do better for my customers. >>So how do you see this whole thing? We're hearing about the data cloud, we hear about the marketplace, data products, now application development inside the data cloud. How do you see that affecting, not so much the productivity of the data teams, I don't wanna necessarily say, but the value that customers like you can get out of the data? >>So, data is moving closer to the business. That's the value I see, because you are injecting the business and the application much closer to the data. In the past, it was days and days of churning the data to actually get clear results. Now the data has moved much closer, so I have a much faster turnaround time. The business can adapt and actually react much, much faster. It took us like 16 to 30 days to deliver data for marketing. Now I can turn it around in four hours if I see something happening. I'll give you an example. The war in Ukraine happened. We shut down operations in Russia. Ukraine is cash-strapped; there's no cash in Ukraine. We have cash. We rolled out a campaign, $0 fee money transfers to Ukraine, within four hours of the war going on. That's the impact that we have. >>Massive impact. That's huge, especially with such a macro challenge going on in the world. Thank you so much for sharing the Matillion-Snowflake partnership story and how it's helping Western Union really transform into a data company.
We love hearing stories of organizations that are 170 years old, that have always really been technology focused, but to see it come to life so quickly is pretty powerful. Guys, thank you so much for your time. >>Thanks, guys. Thank you for having us. >>Thank you. >>For Dave Vellante and our guests, I'm Lisa Martin. You're watching theCUBE's live coverage of Snowflake Summit 22, live from Las Vegas. Stick around, we'll be back after a short break.

Published Date : Jun 14 2022

SUMMARY :

Matthew Scullion of Matillion and Western Union's data leader join theCUBE at Snowflake Summit 22 to discuss the evolution of the modern enterprise data stack: why the heavy lifting still belongs at the data layer, how Matillion's low-code data integration platform helps teams move faster, and how Western Union keeps data teams close to the business across 200 countries while standardizing on the cloud, cutting delivery of marketing data from weeks to four hours, including a zero-fee money-transfer campaign to Ukraine launched within four hours of the war beginning.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Matthew | PERSON | 0.99+
Dave Valante | PERSON | 0.99+
Lisa Martin | PERSON | 0.99+
Dave Velante | PERSON | 0.99+
Matthew Scalian | PERSON | 0.99+
Javier | PERSON | 0.99+
80 | QUANTITY | 0.99+
David | PERSON | 0.99+
Las Vegas | LOCATION | 0.99+
16 | QUANTITY | 0.99+
Matillion | ORGANIZATION | 0.99+
Ukraine | LOCATION | 0.99+
$0 | QUANTITY | 0.99+
AWS | ORGANIZATION | 0.99+
Western union | ORGANIZATION | 0.99+
70 | QUANTITY | 0.99+
UK | LOCATION | 0.99+
17 years | QUANTITY | 0.99+
India | LOCATION | 0.99+
Google | ORGANIZATION | 0.99+
two guests | QUANTITY | 0.99+
Matthew Scullion | PERSON | 0.99+
Adam | PERSON | 0.99+
First | QUANTITY | 0.99+
Russia | LOCATION | 0.99+
four hours | QUANTITY | 0.99+
30 days | QUANTITY | 0.99+
Costa Rica | LOCATION | 0.99+
Ferrari | ORGANIZATION | 0.99+
Matillion | PERSON | 0.99+
today | DATE | 0.99+
One | QUANTITY | 0.99+
Western Union | LOCATION | 0.98+
Jer | ORGANIZATION | 0.98+
first | QUANTITY | 0.98+
one country | QUANTITY | 0.97+
two thirds | QUANTITY | 0.97+
earlier this year | DATE | 0.97+
one | QUANTITY | 0.97+
JVE | ORGANIZATION | 0.96+
200 countries | QUANTITY | 0.96+
one point | QUANTITY | 0.96+
10 | QUANTITY | 0.96+
Harveer Singh | PERSON | 0.95+
three centers | QUANTITY | 0.95+
this year | DATE | 0.95+
day one | QUANTITY | 0.94+
Snowflake Summit 2022 | EVENT | 0.92+
170 years old | QUANTITY | 0.91+
170 year old | QUANTITY | 0.91+
Western union | LOCATION | 0.9+
50 per | QUANTITY | 0.9+
22 | QUANTITY | 0.86+
80 miles an hour | QUANTITY | 0.85+
thousand paper cuts | QUANTITY | 0.84+
Ukrai | LOCATION | 0.83+
fourth annual | QUANTITY | 0.83+
over a thousand joint snowflake | QUANTITY | 0.82+
more than 50% | QUANTITY | 0.8+
Cube | ORGANIZATION | 0.76+
Matillion CDC | ORGANIZATION | 0.76+
last | DATE | 0.75+
hundreds of large enterprise customers | QUANTITY | 0.74+
Jer | PERSON | 0.73+
24 7 | QUANTITY | 0.72+
MDL 2.0 | OTHER | 0.71+
years | DATE | 0.71+
Western | ORGANIZATION | 0.65+
cube | ORGANIZATION | 0.64+
10 | COMMERCIAL_ITEM | 0.62+
second | QUANTITY | 0.61+
Toyotas | ORGANIZATION | 0.58+

George Axberg, VAST Data | VeeamON 2022


 

>>Welcome back to theCUBE's coverage of VeeamON 2022. Nice to be at the Aria. My co-host Dave Nicholson is here. We spend a lot of time at the Venetian convention center, formerly the Sands, so it's nice to have a more intimate venue. I really like it here. George Axberg is joining us. He's the vice president of data protection at VAST Data, a company that some of you may not know about. George. >>Welcome. A pleasure. Thank you so much for having me. >>So VAST is smoking hot, raised a ton of dough. You've got great founders, hard charging, interesting tech. We've covered it a little bit on the Wikibon research side, but give us the overview of the company. >>Yeah, if I could, please. So we're here at the Veeam show and the theme is modern data protection, and I don't think there's any company that epitomizes modern data protection more than VAST Data. The fact that we're able to do an all-flash system at exabyte scale, but with the economics of cloud object based, cheap-and-deep archive type solutions, and an extremely resilient platform, is really game changing for the marketplace. And quite frankly, it's a marketplace, the data protection target space, that I think is ripe for change and in need of change based on the things that are going on in the marketplace today. >>Yeah. So a lot of what you said is gonna be surprising to people: wait a minute, you're talking about data protection and all flash? I thought you'd use cheap and deep disk, or even tape for that, or spin it up in the cloud in a deep archive or a glacier. Explain your approach and architecture. >>At a high level, yeah. So great question. We get that question every day, and got it in the booth yesterday probably about 40 or 50 times: how could it be all flash at an economic point that is befitting of data protection? >>What is this Ferrari minivan of which you speak? >>Yeah, the minivan that goes 180 miles an hour, right. It's really all about the architecture, right? The componentry is somewhat similar to what you'll see in other devices. However, it's how we're leveraging it in the architecture and design, from our founders years ago, building a solution that just was not available in the marketplace. So sure, we're using all-flash QLC drives, but the technology, the advanced next generation algorithms for erasure coding and RAID striping, allows us to be extremely efficient. We also have some technologies around what we call similarity, some advanced data reduction, so you need less capacity, if you will, with a VAST system. So that obviously helps us out tremendously with the economics. But the other thing is I could sell a customer exactly what they need. If you think about the legacy data protection market, purpose-built backup appliances, for example, you know, ALA, Adele, Aita, and HP, they're selling systems that are somewhat rigid. There's always a controller and a capacity, and it's tied to a model number, right. As soon as you need more performance, you buy another; as soon as you need more capacity, you buy another. It's really not modular in any way. >>It's a great model, if you want to just keep billing the customer. >>Yeah, if that... yeah. And I think at this point, Dave, the purpose-built backup appliance market is hungry for a change. Right.
You know, there's not anyone that has one. It doesn't exist. And I'm not just talking about having two because of replication; it's because of organic growth. Ransomware needs to have a second unit, a second copy. And just scalability. >>Well, you guys saw that fatigue with that model of, oh, you need more, buy more. >>Right? Oh, without a doubt, and you said we're gonna attack that. Without a doubt. So we can configure a solution exactly to the need. 'Cause let's face it, every single data center, every single vertical market, it's a work of art. Everyone's retention policies are different. Everyone's compliance needs are different. There might be some things that are self mandated or government mandated, and they're all gonna be somewhat different, right? The fact of the matter is the way that our architecture works, a disaggregated, shared everything architecture, is different, because when we go back to those model numbers and those more rigid purpose-built backup appliances, or maybe arrays designed specifically for data protection, they don't offer that flexibility. And I think our entry point is sized to exactly what the need is, plus our ease of scalability. You need more performance? We just add another compute box, what we call our C box. If you need more capacity, we just add another data box, a D box, where the data resides. And especially here at Veeam, I think customers are really clamoring for that next generation solution. They love the idea that there's a low point of entry, but they also love the idea that it's easy to scale on demand, on an as-needed basis. >>So I just want to go down another layer on that architecturally, 'cause I think it's important for people to understand exactly what you're saying. When you're talking about scaling, there's this concept of the sort of devil's triangle, the tyranny of this combination of memory, CPU and storage. Sure. And if you're too rigid, like in an appliance, you end up paying for things you don't need. Correct. When all I need is a little more capacity. Correct. All I need is a little more horsepower. Well, you want horsepower? No, you gotta buy a bunch of capacity. Exactly. Oh, I need capacity? No, no, you need to buy expensive CPUs and suck a bunch of power. All I need is capacity. So go through that in just a little more detail, in terms of how you cobble these systems together. The way my brain works, it's always about Legos, so feel free to use Legos. >>Yeah. So with our disaggregated solution, we've separated basically hardware from software, right? So that's a good thing, from an economic standpoint but also a design and architecture standpoint. But also an underlying underpinning of that solution is we've also separated the capacity from the performance. And as you just mentioned, relatively speaking, for every other solution on the planet those are typically tied together, right? So we've disaggregated that as well within our architecture. So we again have basically three tiers, tier's not the right word, three components that build out a VAST cluster. And again, we don't sell, like, a solution designed by a model number.
And that's typically our C boxes connected via NVMe over fabric to a D box. C is all the performance, D is all the capacity, and because they're modular, our baseline product would start out as a one by one, one C box and one D box, right? Connected, again, via different-sized NVMe fabrics. And that could scale to hundreds. We do have customers with dozens of C boxes meeting high performance requirements. Keep in mind, when VAST Data came to market, our founders brought it to the market for high performance computing, machine learning and AI; data protection was an afterthought. But those foundational things that we were able to build in, that modularity with performance at scale, make it a perfect fit for data protection. So we see in clients today, just yesterday, two clients standing next to each other in the same market, in the same vertical: I have a 30 day retention, I have a 90 day retention; I have to keep one year's worth of full backups, I have to keep seven years' worth of full backups. We can accommodate both and size it to exactly what the need is.
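As a purely illustrative aside, with hypothetical numbers rather than any vendor's actual specifications, the modularity described above can be sketched as counting and scaling compute nodes and capacity enclosures independently, instead of in fixed controller-plus-capacity pairs:

```python
# Illustrative sketch only: hypothetical numbers, not vendor specifications.
# Models a disaggregated cluster in which compute nodes ("C-boxes") and
# capacity enclosures ("D-boxes") are counted, and therefore scaled,
# independently of one another.
from dataclasses import dataclass

@dataclass
class Cluster:
    c_boxes: int                            # compute nodes (performance)
    d_boxes: int                            # capacity enclosures (where data resides)
    tb_per_d_box: float = 500.0             # assumed usable TB per enclosure
    restore_tb_per_hr_per_c: float = 40.0   # assumed restore throughput per compute node

    @property
    def usable_tb(self) -> float:
        return self.d_boxes * self.tb_per_d_box

    @property
    def restore_tb_per_hr(self) -> float:
        return self.c_boxes * self.restore_tb_per_hr_per_c

# Start as a one-by-one baseline ...
cluster = Cluster(c_boxes=1, d_boxes=1)

# ... then grow only the dimension a given workload actually needs:
cluster.d_boxes += 3   # a longer retention policy needs more capacity only
cluster.c_boxes += 1   # a tighter recovery-time objective needs more compute only

print(f"{cluster.usable_tb:.0f} TB usable, "
      f"{cluster.restore_tb_per_hr:.0f} TB/hr restore throughput")
```

The only point of the sketch is that the two dimensions move separately; real sizing obviously depends on the platform's own performance and capacity characteristics.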
We wanna move the data off our primary systems, our, our primary applications and we needed to fit within a backup window. Restore was an afterthought. Restore was, I might occasionally need to restore something. Something got lost, something got re corrupted. I have to restore something today with the, you know, let's face it, the digital pandemic of, of, of cyber threats and, and ransomware it's about sometimes restoring everything. So if you look at a legacy system, they ingest, I'm sorry. They, they, they write very fast. They, they, they can bring the data in very quickly, but their restore time is typically about 20 to 25%. >>So their reading at only 20, 25% of their right speed, you know, is their rate speed. We flip the script on that. We actually read eight times faster than we write. So I could size again to the performance that you need. If you need 40 terabytes, an hour 50 terabytes an hour, we can do that. But those systems that write at 40 terabytes an hour are restoring at only eight. We're writing at a similarly size system, which actually comes out about 51 terabytes an hour 54 terabytes. We're restoring at 432 terabytes an hour. So we've broken the mold of data protection targets. We're no longer the bottleneck. We're no longer part of your recovery plan going to be the issue right now, you gotta start thinking about network connectivity. Do I have, you know, you know, with the, with our Veeam partners, do we have the right data movers, whether virtual or physical, where am I gonna put the data? >>We've really helped customer aided customers to rethinking their whole Dr. Plan, cuz let's face it. When, when ransomware occurs, you might not be able to get in the building, your phones don't work. Who do you call right? By the time you get that all figured out and you get to the point where you're start, you want to start recovering data. If I could recover 50 times faster than a purpose built backup appliance. Right? Think about it. Is it one day or is it 50 days? Am I gonna be back online? Is it one hour? Is it 50 hours? How many millions of dollars, tens of thousands of dollars were like, will that cost us? And that's why our architecture though our thought process and how the system was designed lends itself. So well for the requirements of today, data protection, not backup it's about data protection. >>Can you give us a sense as to how much of your business momentum is from data protection? >>Yeah, sure. So I joined VAs as we were talking chatting before I come on about six months ago. And it's funny, we had a lot of vast customers on their own because they wanted to leverage the platform and they saw the power of VAs. They started doing that. And then as our founders, you know, decided to lean in heavily into this marketplace with investments, not just in people, but also in technology and research and development, and also partnering with the likes of, of Veeam. We, we don't have a data mover, right. We, we require a data mover to bring us the data we've leaned in tremendously. Last quarter was really our, probably our first quarter where we had a lot of marketing and momentum around data protection. We sold five X last quarter than we did all of last year. So right now the momentum's great pipeline looks phenomenal and you know, we're gonna continue to lean in here. >>Describe the relationship with Veeam, like kind of, sort of started recently. It sounds like as customer demand. Yeah. 
But what's that like, what are you guys doing in terms of engineering integration go to market? >>Yeah. So, so we've gone through all the traditional, you know, verifications and certifications and, and, and I'm proud to say that we kind of blew the, the, the roof off the requirements of a Veeam environ. Remember Veeam was very innovative. 10, 12 years ago, they were putting flash in servers because they, they, they want a high performing environment, a feature such as instant recovery. We've now enabled. When I talked about all those things about re about restore. We had customers yesterday come to us that have tens of thousands of VMs. Imagine that I can spin them up instantaneously and run Veeam's instant recovery solution. While then in the background, restoring those items that is powerful and you need a very fast high performance system to enable that instant. Recovery's not new. It's been in the market for very long, but you can ask nine outta 10 customers walk in the floor. >>They're not able to leverage that today in the systems that they have, or it's over architected and very expensive and somewhat cost prohibitive. So our relationship with Veeam is really skyrocketing actually, as part of that, that success and our, our last quarter, we did seven figure deals here in the United States. We've done deals in Australia. We were chatting. I, I, I happened to be in Dubai and we did a deal there with the government there. So, you know, there's no, there's no specific vertical market. They're all different. You know, it's, it's really driven by, you know, they have a great, you know, cyber resilient message. I mean, you get seen by the last couple of days today and they just want that power that vast. Now there are other systems in the marketplace today that leverage all flash, but they don't have the economic solution that we have. >>No, your, your design anticipated the era that we're we're in right now from it, it anticipated the ability to scale in, to scale, you know, in >>A variety. Well, listen, anticipation of course, co coincidental architecture. It's a fantastic fit either way, either way. I mean, it's a fantastic fit for today. And that's the conversations that we're having with, with all the customers here, it's really all about resiliency. And they know, I mean, one of the sessions, I think it was mentioned 82 or 84% of, of all clients interviewed don't believe that they can do a restore after a cyber attack or it'll cost them millions of dollars. So that there's a tremendous amount of risk there. So time is, is, is ultimately equals dollars. So we see a, a big uptick there, but we're, we're actually continuing our validation work and testing with Veeam. They've been very receptive, very receptive globally. Veeam's channel has also been very receptive globally because you know, their customers are, you know, hungry for innovation as well. And I really strongly believe ASBO brings that >>George, we gotta go, but thank you. Congratulations. Pleasure on the momentum. Say hi to Jeff for us. >>We'll we'll do so, you know, and we'll, can I leave you with one last thought? Yeah, >>Please do give us your final thought. >>If I could, in closing, I think it's pretty important when, when customers are, are evaluating vast, if I could give them three data points, 100% of customers that Triva test vast POC, vast BVAs 100% Gartner peer insights recently did a survey. 
You know, they, they do it with our, you know, blind survey, dozens of vast customers and never happened before where 100% of the respondents said, yes, I would recommend VA and I will buy VAs again. It was more >>Than two respondents. >>It was more, it was dozens. They won't do it. If it's not dozens, it's dozens. It's not dozen this >>Check >>In and last but not. And, and last but not least our customers are, are speaking with their wallet. And the fact of the matter is for every customer that spends a dollar with vast within a year, they spend three more. So, I mean, if there's no better endorsement, if you have a customer base, a client base that are coming back and looking for more use cases, not just data protection, but again, high performance computing machine learning AI for a company like VA data. >>Awesome. And a lot of investment in engineering, more investment in engineering than marketing. How do I know? Because your capacity nodes, aren't the C nodes. They're the D nodes somehow. So the engineers obviously won that naming. >>They'll always win that one and we, and we, and we let them, we need them. Thank you. So that awesome product >>Sales, it's the golden rule. All right. Thank you, George. Keep it right there. VEON 20, 22, you're watching the cube, Uber, Uber right back.

Published Date : May 18 2022

SUMMARY :

George Axberg of VAST Data joins theCUBE at VeeamON 2022 to explain how VAST's disaggregated, all-flash architecture brings archive-class economics to exabyte-scale data protection, why restore speed rather than backup speed is what matters in the ransomware era, and how the partnership with Veeam and rapid customer adoption are accelerating the company's data protection business.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
David | PERSON | 0.99+
Jeff | PERSON | 0.99+
Dave Nicholson | PERSON | 0.99+
Australia | LOCATION | 0.99+
George | PERSON | 0.99+
50 days | QUANTITY | 0.99+
dozens | QUANTITY | 0.99+
George Burg | PERSON | 0.99+
30 day | QUANTITY | 0.99+
George Axberg | PERSON | 0.99+
90 day | QUANTITY | 0.99+
50 hours | QUANTITY | 0.99+
50 times | QUANTITY | 0.99+
Dubai | LOCATION | 0.99+
40 terabytes | QUANTITY | 0.99+
seven years | QUANTITY | 0.99+
one year | QUANTITY | 0.99+
10 | QUANTITY | 0.99+
yesterday | DATE | 0.99+
two clients | QUANTITY | 0.99+
HP | ORGANIZATION | 0.99+
82 | QUANTITY | 0.99+
IBM | ORGANIZATION | 0.99+
432 terabytes | QUANTITY | 0.99+
Veeam | ORGANIZATION | 0.99+
United States | LOCATION | 0.99+
100% | QUANTITY | 0.99+
last year | DATE | 0.99+
one hour | QUANTITY | 0.99+
last quarter | DATE | 0.99+
ALA | ORGANIZATION | 0.99+
one day | QUANTITY | 0.99+
Uber | ORGANIZATION | 0.99+
seven figure | QUANTITY | 0.99+
two | QUANTITY | 0.99+
Ferrari | ORGANIZATION | 0.99+
84% | QUANTITY | 0.99+
ASBO | ORGANIZATION | 0.99+
Dave | PERSON | 0.99+
David Flos | PERSON | 0.99+
today | DATE | 0.99+
eight times | QUANTITY | 0.99+
100 terabyte | QUANTITY | 0.99+
nine | QUANTITY | 0.99+
a year later | DATE | 0.99+
three | QUANTITY | 0.99+
tens of thousands of dollars | QUANTITY | 0.99+
both | QUANTITY | 0.99+
late 1980s | DATE | 0.98+
2022 | DATE | 0.98+
second unit | QUANTITY | 0.98+
second copy | QUANTITY | 0.98+
one | QUANTITY | 0.98+
Flo | PERSON | 0.98+
millions of dollars | QUANTITY | 0.98+
180 miles an hour | QUANTITY | 0.98+
Wikibon | ORGANIZATION | 0.98+
Aita | ORGANIZATION | 0.98+
10 | DATE | 0.98+
one more terabyte | QUANTITY | 0.97+
10 customers | QUANTITY | 0.97+
hundreds | QUANTITY | 0.97+
third one | QUANTITY | 0.97+
Gartner | ORGANIZATION | 0.97+
12 years ago | DATE | 0.97+
Legos | ORGANIZATION | 0.96+
single | QUANTITY | 0.96+
Mabo | LOCATION | 0.96+

Breaking Analysis: What you May not Know About the Dell Snowflake Deal


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you Data Driven Insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> In the pre-cloud era, hardware companies would run benchmarks showing how database and/or application performance ran better on their systems relative to competitors or previous generation boxes, and they would make a big deal out of it. And the independent software vendors, you know, they'd do a little golf clap, if you will, in the form of a joint press release. It became a game of leapfrog amongst hardware competitors. That was pretty commonplace over the years. The Dell Snowflake deal underscores that the value proposition between hardware companies and ISVs is changing and has much more to do with distribution channels, volumes and the amount of data that lives On-Prem in various storage platforms. Cloud native ISVs like Snowflake are realizing that despite their Cloud-only dogma, they have to grit their teeth and deal with On-premises data or risk getting shut out of evolving architectures. Hello and welcome to this week's Wikibon Cube Insights powered by ETR. In this Breaking Analysis, we unpack what little is known about the Snowflake announcement from Dell Technologies World and discuss the implications of a changing Cloud landscape. We'll also share some new data for Cloud and Database platforms from ETR that shows Snowflake has actually entered the Earth's orbit when it comes to spending momentum on its platform. Now, before we get into the news, I want you to listen to Frank Slootman's answer to my question as to whether or not Snowflake would ever architect the platform to run On-Prem, because it's doable technically. Here's what he said, play the clip. >> Forget it, this will only work in the Public Cloud. Because this is how the utility model works, right. I think everybody is coming through this realization, right? I mean, excuses are running out at this point. We think that people will come to the Public Cloud a lot sooner than we will ever come to the Private Cloud. It's not that we can't run a private Cloud; it just diminishes the potential and the value that we bring. >> So you may be asking yourselves, how do you square that circle? Because basically the Dell Snowflake announcement is about bringing Snowflake to the private cloud, right? Or is it? Let's get into the news and we'll find out. Here's what we know. At Dell Technologies World, one of the more buzzy announcements, and by the way this was a very well attended event, I should say about 8,000 people by my estimates, but anyway, one of the more buzzy announcements was Snowflake can now run analytics on non-native Snowflake data that lives On-prem in a Dell object store, Dell's ECS to start with, and eventually its software-defined object store. Here's Snowflake's Clark Patterson describing how it works this past week on theCUBE. Play the clip. The way it works is I can now access non-native Snowflake data using, what, materialized views, external tables? How does that work? >> Some combination of all the above. So we've had in Snowflake a capability called External Tables, which you referred to. It goes hand in hand with this notion of external stages. Basically, through the combination of those two capabilities, it's a metadata layer on data, wherever it resides. So customers have actually used this in Snowflake for data lake data outside of Snowflake, in the Cloud, up until this point.
So it's effectively an extension of that functionality into the Dell On-Premises world, so that we can tap into those things. So we use the external stages to expose all the metadata about what's in the Dell environment, and then we build external tables in Snowflake so that data looks like it is in Snowflake. And then the experience for the analyst, or whomever it is, is exactly as though that data lives in the Snowflake world. >> So as Clark explained, this capability of External Tables has been around in the Cloud for a while, mainly to suck data out of Cloud data lakes. Snowflake External Tables use file level metadata, for instance the name of the file and the versioning, so that data can be queried in a stage. A stage is just an external location outside of Snowflake. It could be an S3 bucket or an Azure Blob, and it soon will be a Dell object store. And in using this feature, the Dell data looks like it lives inside of Snowflake. Clark is essentially correct to say that to an analyst it looks exactly like the data is in Snowflake. But not exactly: the data's read only, which means you can't do what are called DML operations. DML stands for Data Manipulation Language and allows for things like inserting data into tables or deleting and modifying existing data. But the data can be queried. However, the performance of those queries to External Tables will almost certainly be slower. Now, users can build things like materialized views, which are going to speed things up a bit, but at the end of the day, it's going to run faster in the Cloud, and you can be almost certain that's where Snowflake wants it to run. But some organizations can't or won't move data into the Cloud for a variety of reasons: data sovereignty, compliance, security policies, culture, you know, whatever. So data can remain in place On-prem, or it can be moved into the Public Cloud with this new announcement. Now, the compute today presumably is going to be done in the Public Cloud. I don't know where else it's going to be done. They really didn't talk about the compute side of things. Remember, one of Snowflake's early innovations was to separate compute from storage, and what that gave them is you could more efficiently scale with unlimited resources when you needed them, and you could shut off the compute when you don't need it. If you needed more storage, you didn't have to buy more compute, and vice versa. So everybody in the industry has copied that, including AWS with Redshift, although, as we've reported, not as elegantly as Snowflake did. Redshift's more of a storage tiering solution which minimizes the compute required, but you can't really shut it off. And there are companies like Vertica with Eon Mode that have enabled this capability to be done On-prem, but of course in that instance you don't have unlimited elastic compute scale On-prem. But with solutions like Dell Apex and HPE GreenLake, you can certainly start to simulate that Cloud elasticity On-prem. I mean, it's not unlimited, but it sort of gets you there. According to a Dell Snowflake joint statement, quote, the companies will pursue product integrations and joint go to market efforts in the second half of 2022. So that's a little vague and kind of benign.
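To make the mechanics described above a little more concrete, here is a minimal sketch of the external stage, external table and materialized view pattern, written against the Snowflake Python connector. All of the names, the stage URL and the column mappings are hypothetical placeholders, and it uses a generic S3 bucket as the external location since the exact Dell ECS integration details weren't public; treat it as an illustration of the general pattern rather than the announced product.

```python
# Minimal, illustrative sketch of the external stage / external table /
# materialized view pattern. All identifiers, URLs and credentials below are
# hypothetical placeholders, not the actual Dell integration.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",        # placeholder account identifier
    user="analyst",
    password="********",
    warehouse="ANALYTICS_WH",
    database="DEMO_DB",
    schema="PUBLIC",
)
cur = conn.cursor()

# 1) An external stage: a named pointer to storage that lives outside Snowflake.
#    Here it's a generic S3 bucket; an on-prem object store would presumably be
#    reached through an S3-compatible endpoint.
cur.execute("""
    CREATE STAGE IF NOT EXISTS onprem_stage
      URL = 's3://example-bucket/backups/'
      FILE_FORMAT = (TYPE = PARQUET)
""")

# 2) An external table: file-level metadata over the staged files, so the data
#    can be queried in place. From Snowflake's point of view it is read only.
cur.execute("""
    CREATE EXTERNAL TABLE IF NOT EXISTS ext_orders (
      order_id   VARCHAR AS (value:order_id::VARCHAR),
      order_date DATE    AS (value:order_date::DATE),
      amount     NUMBER  AS (value:amount::NUMBER)
    )
    LOCATION = @onprem_stage
    FILE_FORMAT = (TYPE = PARQUET)
""")

# 3) A materialized view over the external table, the technique mentioned above
#    for speeding up repeated queries against external data.
cur.execute("""
    CREATE MATERIALIZED VIEW IF NOT EXISTS mv_daily_totals AS
    SELECT order_date, SUM(amount) AS total
    FROM ext_orders
    GROUP BY order_date
""")

# To the analyst, ext_orders and mv_daily_totals now query like any other table.
cur.execute("SELECT * FROM mv_daily_totals ORDER BY order_date DESC LIMIT 10")
print(cur.fetchall())

cur.close()
conn.close()
```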
It's not really clear when this is going to be available based on that statement from the two firms, but, you know, we're left wondering: will Dell develop an On-Prem compute capability and enable queries to run locally, maybe as part of an extended Apex offering? I mean, we don't know, really. Not sure there's even a market for that, but it's probably a good bet that, again, Snowflake wants that data to land in the Snowflake Data Cloud. It kind of makes you wonder how this deal came about. You heard Slootman earlier: Snowflake has always been pretty dogmatic about getting data into its native Snowflake format to enable the best performance, as we talked about, but also data sharing and governance. But you could imagine that data architects, they're building out their data mesh, we've reported on this quite extensively, and their data fabric and those visions around that, and they're probably telling Snowflake, hey, if you want to be a strategic partner of ours, you're going to have to be more inclusive of our data that, for whatever reason, we're not putting in your Cloud. So Snowflake had to kind of hold its nose and capitulate. Now, the good news is it further opens up Snowflake's TAM, the total available market. It's obviously good marketing posture. And ultimately it provides an on-ramp to the Cloud. And we're going to come back to that shortly, but let's look a little deeper into what's happening with data platforms, and to do that we'll bring in some ETR data. Now, let me just say, as companies like Dell, IBM, Cisco, HPE, Lenovo, Pure and others build out their hybrid Clouds, the cold hard fact is not only do they have to replicate the Cloud Operating Model, you will hear them talk about that a lot, but they've got to do that, and that's critical from a user experience. But in order to gain that flywheel momentum they need to build a robust ecosystem that goes beyond their proprietary portfolios. And, you know, honestly, most companies are really not even in the first inning. And for the likes of Snowflake to sort of flip this, they've had to recognize that not everything is moving into the Cloud. Now, let's bring up the next slide. One of the big areas of discussion at Dell Tech World was Apex. That's essentially Dell's nascent as-a-service offering. Apex is Infrastructure as a Service, Cloud On-prem, and obviously has the vision of connecting to the Cloud, across Clouds and out to the Edge. And it's no secret that database is one of the most important ingredients of infrastructure as a service generally, and in Cloud Infrastructure specifically. So this chart here shows the ETR data for data platforms inside of Dell accounts. The beauty of the ETR platform is you can cut the data a million different ways. So we cut it. We said, okay, give us the Cloud platforms inside Dell accounts, how are they performing? Now, this is a two dimensional graphic. You've got net score, or spending momentum, on the vertical axis, and what ETR now calls Overlap, formerly called Market Share, which is a measure of pervasiveness in the survey, on the horizontal axis. That red dotted line at 40% represents highly elevated spending on the Y. The table insert shows the raw data for how the dots are positioned. Now, the first call out here is Snowflake. According to ETR, quote, after 13 straight surveys of astounding net scores, Snowflake has finally broken the trend with its net score dropping below the 70% mark among all respondents. Now, as you know, net score is measured by asking customers: are you adding the platform new?
That's the lime green in the bar that's pointing from Snowflake in the graph and or are you increasing spend by 6% or more? That's the forest green is spending flat that's the gray is you're spend decreasing by 6% or worse. That's the pinkish or are you decommissioning the platform bright red which is essentially zero for Snowflake subtract the reds from the greens and you get a net score. Now, what's somewhat interesting is that snowflakes net score overall in the survey is 68 which is still huge, just under 70%, but it's net score inside the Dell account base drops to the low sixties. Nonetheless, this chart tells you why Snowflake it's highly elevated spending momentum combined with an increasing presence in the market over the past two years makes it a perfect initial data platform partner for Dell. Now and in the Ford versus Ferrari dynamic. That's going on between the likes of Dell's apex and HPE GreenLake database deals are going to become increasingly important beyond what we're seeing with this recent Snowflake deal. Now noticed by the way HPE is positioned on this graph with its acquisition of map R which is now part of HPE Ezmeral. But if these companies want to be taken seriously as Cloud players, they need to further expand their database affinity to compete ideally spinning up databases as part of their super Clouds. We'll come back to that that span multiple Clouds and include Edge data platforms. We're a long ways off from that. But look, there's Mongo, there's Couchbase, MariaDB, Cloudera or Redis. All of those should be on the short list in my view and why not Microsoft? And what about Oracle? Look, that's to be continued on maybe as a future topic in a, in a Breaking Analysis but I'll leave you with this. There are a lot of people like John Furrier who believe that Dell is playing with fire in the Snowflake deal because he sees it as a one way ticket to the Cloud. He calls it a one way door sometimes listen to what he said this past week. >> I would say that that's a dangerous game because we've seen that movie before, VMware and AWS. >> Yeah, but that we've talked about this don't you think that was the right move for VMware? >> At the time, but if you don't nurture the relationship AWS will take all those customers ultimately from VMware. >> Okay, so what does the data say about what John just said? How is VMware actually doing in Cloud after its early missteps and then its subsequent embracing of AWS and other Clouds. Here's that same XY graphic spending momentum on the Y and pervasiveness on the X and the same table insert that plots the dots and the, in the breakdown of Dell's net score granularity. You see that at the bottom of the chart in those colors. So as usual, you see Azure and AWS up and to the right with Google well behind in a distant third, but still in the mix. So very impressive for Microsoft and AWS to have both that market presence in such elevated spending momentum. But the story here in context is that the VMware Cloud on AWS and VMware's On-Prem Cloud like VMware Cloud Foundation VCF they're doing pretty well in the market. Look, at HPE, gaining some traction in Cloud. And remember, you may not think HPE and Dell and VCF are true Cloud but these are customers answering the survey. So their perspective matters more than the purest view. 
And the bad news is the Dell Cloud is not setting the world on fire from a momentum standpoint on the vertical axis but it's above the line of zero and compared to Dell's overall net score of 20 you could see it's got some work to do. Okay, so overall Dell's got a pretty solid net score to you know, positive 20, as I say their Cloud perception needs to improve. Look, Apex has to be the Dell Cloud brand not Dell reselling VMware. And that requires more maturity of Apex it's feature sets, its selling partners, its compensation models and it's ecosystem. And I think Dell clearly understands that. I think they're pretty open about that. Now this includes partners that go beyond being just sellers has to include more tech offerings in the marketplace. And actually they got to build out a marketplace like Cloud Platform. So they got a lot of work to do there. And look, you've got Oracle coming up. I mean they're actually kind of just below the magic 40% in the line which is pro it's pretty impressive. And we've been telling you for years, you can hate Oracle all you want. You can hate its price, it's closed system all of that it's red stack shore. You can say it's legacy. You can say it's old and outdated, blah, blah, blah. You can say Oracle is irrelevant in trouble. You are dead wrong. When it comes to mission critical workloads. Oracle is the king of the hill. They're a founder led company that knows exactly what it's doing and they're showing Cloud momentum. Okay, the last point is that while Microsoft AWS and Google have major presence as shown on the X axis. VMware and Oracle now have more than a hundred citations in the survey. You can see that on the insert in the right hand, right most column. And IBM had better keep the momentum from last quarter going, or it won't be long before they get passed by Dell and HP in Cloud. So look, John might be right. And I would think Snowflake quietly agrees that this Dell deal is all about access to Dell's customers and their data. So they can Hoover it into the Snowflake Data Cloud but the data right now, anyway doesn't suggest that's happening with VMware. Oh, by the way, we're keeping an eye close eye on NetApp who last September ink, a similar deal to VMware Cloud on AWS to see how that fares. Okay, let's wrap with some closing thoughts on what this deal means. We learned a lot from the Cloud generally in AWS, specifically in two pizza teams, working backwards, customer obsession. We talk about flywheel all the time and we've been talking today about marketplaces. These have all become common parlance and often fundamental narratives within strategic plans investor decks and customer presentations. Cloud ecosystems are different. They take both competition and partnerships to new heights. You know, when I look at Azure service offerings like Apex, GreenLake and similar services and I see the vendor noise or hear the vendor noise that's being made around them. I kind of shake my head and ask, you know which movie were these companies watching last decade? I really wish we would've seen these initiatives start to roll out in 2015, three years before AWS announced Outposts not three years after but Hey, the good news is that not only was Outposts a wake up call for the On-Prem crowd but it's showing how difficult it is to build a platform like Outposts and bring it to On-Premises. I mean, Outpost isn't currently even a rounding era in the marketplace. It really doesn't do much in terms of database support and support of other services. 
And, you know, it's unclear where that that is going. And I don't think it has much momentum. And so the Hybrid Cloud Vendors they've had time to figure it out. But now it's game on, companies like Dell they're promising a consistent experience between On-Prem into the Cloud, across Clouds and out to the Edge. They call it MultCloud which by the way my view has really been multi-vendor Chuck, Chuck Whitten. Who's the new co-COO of Dell called it Multi-Cloud by default. (laughing) That's really, I think an accurate description of that. I call this new world Super Cloud. To me, it's different than MultiCloud. It's a layer that runs on top of hyperscale infrastructure kind of hides the underlying complexity of the Cloud. It's APIs, it's primitives. And it stretches not only across Clouds but out to the Edge. That's a big vision and that's going to require some seriously intense engineering to build out. It's also going to require partnerships that go beyond the portfolios of companies like Dell like their own proprietary stacks if you will. It's going to have to replicate the Cloud Operating Model and to do that, you're going to need more and more deals like Snowflake and even deeper than Snowflake, not just in database. Sure, you'll need to have a catalog of databases that run in your On-Prem and Hybrid and Super Cloud but also other services that customers can tap. I mean, can you imagine a day when Dell offers and embraces a directly competitive service inside of apex. I have trouble envisioning that, you know not with their historical posture, you think about companies like, you know, Nutanix, you know, or Cisco where they really, you know those relationships cooled quite quickly but you know, look, think about it. That's what AWS does. It offers for instance, Redshift and Snowflake side by side happily and the Redshift guys they probably hate Snowflake. I wouldn't blame them, but the EC Two Folks, they love them. And Adam SloopesKy understands that ISVs like Snowflake are a key part of the Cloud ecosystem. Again, I have a hard time envisioning that occurring with Dell or even HPE, you know maybe less so with HPE, but what does this imply that the Edge will allow companies like Dell to a reach around on the Cloud and somehow create a new type of model that begrudgingly accommodates the Public Cloud but drafts of the new momentum of the Edge, which right now to these companies is kind of mostly telco and retail. It's hard to see that happening. I think it's got to evolve in a more comprehensive and inclusive fashion. What's much more likely is companies like Dell are going to substantially replicate that Cloud Operating Model for the pieces that they own pieces that they control which admittedly are big pieces of the market. But unless they're able to really tap that ecosystem magic they're not going to be able to grow much beyond their existing install bases. You take that lime green we showed you earlier that new adoption metric from ETR as an example, by my estimates, AWS and Azure are capturing new accounts at a rate between three to five times faster than Dell and HPE. And in the more mature US and mere markets it's probably more like 10 X and a major reason is because of the Cloud's robust ecosystem and the optionality and simplicity of transaction that that is bringing to customers. Now, Dell for its part is a hundred billion dollar revenue company. And it has the capability to drive that kind of dynamic. 
If it can pivot its partner ecosystem mindset from kind of resellers to Cloud services and technology optionality. Okay, that's it for now? Thanks to my colleagues, Stephanie Chan who helped research topics for Breaking Analysis. Alex Myerson is on the production team. Kristen Martin and Cheryl Knight and Rob Hof, on editorial they helped get the word out and thanks to Jordan Anderson for the new Breaking Analysis branding and graphics package. Remember these episodes are all available as podcasts wherever you listen. All you do is search Breaking Analysis podcasts. You could check out ETR website @etr.ai. We publish a full report every week on wikibon.com and siliconangle.com. You want to get in touch. @dave.vellente @siliconangle.com. You can DM me @dvellante. You can make a comment on our LinkedIn posts. This is Dave Vellante for the Cube Insights powered by ETR. Have a great week, stay safe, be well. And we'll see you next time. (upbeat music)

Published Date : May 7 2022

SUMMARY :

Dave Vellante unpacks the Dell-Snowflake announcement from Dell Technologies World: how Snowflake external stages and external tables will expose data sitting in Dell object storage, what ETR spending data says about Snowflake, VMware and the hybrid-cloud vendors inside Dell accounts, and why Dell must build a true cloud operating model and ecosystem, not just resell VMware, to compete.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Jordan Anderson | PERSON | 0.99+
Stephanie Chan | PERSON | 0.99+
IBM | ORGANIZATION | 0.99+
Dell | ORGANIZATION | 0.99+
Clark Patterson | PERSON | 0.99+
Alex Myerson | PERSON | 0.99+
Dave Vellante | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
Rob Hof | PERSON | 0.99+
Lenovo | ORGANIZATION | 0.99+
Cisco | ORGANIZATION | 0.99+
John | PERSON | 0.99+
Microsoft | ORGANIZATION | 0.99+
John Furrier | PERSON | 0.99+
Oracle | ORGANIZATION | 0.99+
2015 | DATE | 0.99+
Google | ORGANIZATION | 0.99+
Cheryl Knight | PERSON | 0.99+
Clark | PERSON | 0.99+
HP | ORGANIZATION | 0.99+
Palo Alto | LOCATION | 0.99+
Boston | LOCATION | 0.99+
HPE | ORGANIZATION | 0.99+
6% | QUANTITY | 0.99+
Ford | ORGANIZATION | 0.99+
three | QUANTITY | 0.99+
40% | QUANTITY | 0.99+
Chuck Whitten | PERSON | 0.99+
VMware | ORGANIZATION | 0.99+
Nutanix | ORGANIZATION | 0.99+
Kristen Martin | PERSON | 0.99+
Ferrari | ORGANIZATION | 0.99+
Adam SloopesKy | PERSON | 0.99+
Earth | LOCATION | 0.99+
13 straight surveys | QUANTITY | 0.99+
70% | QUANTITY | 0.99+
first | QUANTITY | 0.99+
68 | QUANTITY | 0.99+
last quarter | DATE | 0.99+
Redshift | TITLE | 0.99+
siliconangle.com | OTHER | 0.99+
theCUBE Studios | ORGANIZATION | 0.99+
Snowflake | EVENT | 0.99+
Snowflake | TITLE | 0.99+
8,000 people | QUANTITY | 0.99+
both | QUANTITY | 0.99+
20 | QUANTITY | 0.99+
VCF | ORGANIZATION | 0.99+
Snowflake | ORGANIZATION | 0.99+

Nick Volpe, Accenture and Kym Gully, Guardian Life | AWS Executive Summit 2021


 

>>And welcome back to theCUBE's coverage of the AWS Executive Summit at re:Invent 2021. I'm John Furrier, host of theCUBE. This segment is about surviving and thriving with the digital revolution that's happening, the digital transformation that's changing businesses. We've got two great guests here with Guardian Life: Nick Volpe, CIO of individual markets at Guardian Life, and Kym Gully, CTO of life and annuities at Accenture. Accenture, obviously, is doing a lot of cutting-edge work, and Guardian is changing the game. Nick, thanks for coming on. Kym, thanks for coming on. >>Thanks John. Good to be here. >>So before I get into the questions, I want to just set the table a little bit. The pandemic has given everyone a mandate: the good projects are exposed, the bad projects are exposed. Everyone can kind of see what's happening, because the pandemic forced everyone to identify what's working and what's not working. Doubling down on innovation for customers is a big focus, and now, with the pandemic kind of relieving and coming out of it, the world's changed. This is an opportunity for businesses. Nick, this is something that you guys are focused on. Can you take us through what Guardian Life is doing in this post-pandemic changeover as cloud goes next level? >>Yeah, thanks John. So the immediate need in the pandemic situation was about the new business capability. For those familiar with insurance, traditionally life insurance and disability underwriting is very in-person: fluids, labs, attending physician statements. And when March of 2020 broke, that all came to an abrupt halt. Doctors' offices were closed, and testing centers were either closed or inundated with COVID testing. So we had to come up with some creative ways to digitize our new business, adopt the electronic application and our new medical questionnaires, and also get creative on some of our underwriting standards that set certain limits and levels on when we needed fluids. We moved pretty quickly, we were agile about decisions there, and we went from about a 40 to 50% adoption rate of our electronic applications to north of 98% across the board. In addition, we saw some opportunities for products and more capabilities beyond new business. So after we weathered the storm, we started taking a step back and, like you said, looking at what we were doing, having a start, stop, continue conversation internally to say, this digitization is the new norm. How do we meet it from every angle, not just new business? And that's where we started to look at our policy administration systems, moving more to the cloud and leveraging the cloud to its fullest extent versus just a lift and shift. >>Kym, I want to get your perspective at Accenture. I've done a lot of interviews over the past 18 months, lots of use cases with Accenture in almost every vertical, where you guys are almost like the firefighters who get called in to help out, 'cause the cloud now is an enabler. How do you see the impact of the pandemic reverberating through? Obviously you guys come to the table and bring it in. What's your perspective on this? >>So, yeah, it's really interesting. I think the most interesting fact is, you know, Nick raised such a strong area in our business, underwriting, and how can we expedite that? It's been talked about at the table for a number of years, but the industry has been very slow, or reluctant, to embrace it. And the pandemic became an enforcer of it, to be honest. A lot of the companies were thinking about it prior, but that's it, they'll think about it. I mean, even at Accenture, we launched a huge three-year investment to get clients into cloud and digital transformation, but the pandemic just expedited everything. Now, the upside is clients that were in a well-advanced stage of planning were easily able to adopt, but clients that weren't were really left behind. So we became very, very busy just supporting the clients that didn't have as much forethought as the likes of Guardian. >>Nick, that brings up a good point. I want to get your reaction to see if you agree. People who didn't put their toe in the cloud, or just jump in the deep end, really got flat-footed when the pandemic hit, because they weren't prepared. People who were either ingratiated in with the cloud, or had many active projects or even full deployments there, did well. What's your take on that? >>Yeah, the enablement we had and the gift we were given by starting our cloud journey, I want to say in 2016 or '17, was that we really started moving to the cloud early. And I think we were the only insurer that moved production load to the cloud at that point. Most insurers were putting their development environments, maybe even their test environments there, but Guardian had a strategy of getting out of the data center and moving to a much more flexible, scalable environment and architecture using the AWS cloud. So we completed our journey into the cloud by 2018, '19, and we were at the point of really capitalizing versus moving. So we were able to move very quickly, very nimbly, when the pandemic hit, or in any digital situation. We have that flexibility and capacity that AWS provides us to really respond to our customers and our customers' needs. So we were one of the more fortunate insurers that were well into our cloud journey and at the point of optimization versus the point of moving. >>So let's talk about the connection with Accenture's life insurance and annuity platform, also known as, I think the acronym is... what was that? Why was that relevant? What was that all about? >>Yeah, so I'll go first, and then Kym, you can jump in and see if you agree with me. >>It's essentially... >>I suspect you would, right, John. Like I said, our new business focus was the original, like the emergency situation when the pandemic hit. But as we went further into it and realized the mortality and morbidity and the needs and wants of our customers, which is a major focus of Guardian, really having the client at the center of every conversation we have, we realized that there was a real opportunity for product, and this product continues to change. And you had regulations like 7702 coming out, where you had to reprice the entire portfolio to be able to sell it by January 1st, 2022. We realized our current systems for policy admin were not matching the digital capabilities that we had moved to the cloud. So we embarked on a very extensive RFP with Accenture and a few other vendors that would come to the table and work with us. And we just really got to a place where the combination of our desire to be on the cloud, be flexible and be capable for our customers married really well with the industry knowledge and the capabilities that Accenture brought to the table with the ALIP platform. Their book of business, their current infrastructure, their configuration-versus-development approach all aligned with our need for flexible, fast time to market. We're looking to cut development times significantly. We're looking to cut testing times significantly. And as of right now, it's all proving true between the cloud capability and the ALIP capability. We are reaping the benefits of having this new platform coming up live very soon here. >>Well, let me get Accenture's perspective in a moment, but I want to just ask you a quick follow-up on that, Nick, if you don't mind. You basically talked us through, okay, I can see what's happening here: you get with Accenture, take advantage of what they've got going on, you get into the cloud, you start getting the efficiencies, you get the cultural change. What refactoring have you seen? What's your vision, I should say, around what's next? Because clearly there's a playbook: you get in the cloud, replatform, you get the cultural fit, you understand the personnel issues, how to tap the resources. Then you gotta look for innovation, where you can start changing how you do things to refactor the business model. >>Yeah, so I think that, specifically to this conversation, that's around the product capability, right? For all too long, the insurance companies have had three specific sleeves of insurance products. We've had individual life, we have individual disability, and we have individual annuities, each of them serving a specific purpose in the customer's life. What this platform and this cloud platform allow us to do is start to think about: can we create the concept of a single wrapper? Can we bring some of these products together? Can we centralize the buying process? Without ALIP behind the scenes, you don't have that. I kind of equate it to building a Ferrari and attaching a trailer to it, right? And that's what we were doing today. Our digital front ends, our new business capabilities, are all being anchored down or slowed down by our traditional mainframe back ends. By introducing Accenture on the cloud in AWS, we now have our Ferrari fully free to run as fast as it can, versus anchoring this massive trailer to it. So it really was a matter of bringing our product innovation up to the digital front end innovation that we had been working on for two or three years prior. >>I mean, this is kind of the Amazon way, right? You decouple things, you decompose, you don't want to have a drag. And with containers, we're seeing companies look at existing legacy in a way that's different. Can you talk about how you guys look at that, Nick? Because a lot of CEOs are saying, hey, you know what, I can have the best of both worlds. I don't have to kill the old to bring in the new, but I can certainly modernize everything. What's your reaction to that? >>Yeah, and I think that's our exact path forward, right? We don't feel like we need to boil the ocean. We're going after, surgically, the things that we think are going to be most impactful to our customers. So legacy blocks of business that are sitting out there, completely closed, they're not our concern. It's really hitching this new ALIP capability to the next generation of products and the next generation of customer needs. Understanding data, data capture, is very important. If you look at the mainframes and what we're living on now, it's all about the owner of the policy; you lose connection with the beneficiary or the insured. What these new platforms allow us to do is really understand the household around the products that they're buying. I know it sounds simple, but that data architecture, that data infrastructure, on these newer platforms and in the cloud, you can turn it faster. You have scale to do more analysis, but you're also able to capture data in a much cleaner way. On the traditional systems, you're talking about what we intimately call the blob on the mainframe, that has your name, your first name, your last name, your address, all in one free-form field sitting in some database. It's very hard to discern. On these new platforms, given our need and our desire to be deeper into the clients' lives and understand their needs, ALIP coupled with AWS, with our new business capabilities on the front end, really puts together that true customer value chain that's going to differentiate us. >>Okay, Kym, CTO of ALIP, as he calls it, the acronym for the service you have, this is a great example. I hate to use the word on-ramp, 'cause that sounds so old, right? But in a way, in vertical markets you're seeing the power of the cloud, because the data and the AI can be freed up, and you can take advantage of all the heavy lifting by providing some platform or some support, with Amazon and your expertise. This is a great use case of that, I think. And I think this is a future trend, where the development can be faster, the value can be faster, and your customers don't have to build all those lower level abstractions, if you will. Can you describe the Accenture relationship with your customers? 'Cause this is a really great use case. >>Yeah, it is. You know, our philosophy is simple: let's not reinvent the wheel. With the cloud and native services that AWS provides, we want to focus on the business of what the system needs to do and not all the little side bets. We can get a great service that's fully managed, that has security patches and updates. We want to focus on the real deal. Like, Nick wants to focus on the business and not so much what's underneath it; that's my problem, and I'm focusing on that. And we work together in a nice little gel. You've heard the relatively new term no-code, low-code. It's strange: a modern system like ALIP has been that way for a number of years. Basically it means I don't want to make code changes, I just want to be able to configure it. So now more people can have access to make change, and we can even get it to the point where it's the people who are sitting there dealing with the clients, that would be the ultimate, where they can innovate and come up with ideas and try things, because we've got it so simple. We're not there yet, but that's the ultimate goal. So in ALIP, no-code has been around for quite some time, and maybe we should take advantage of that, but I think we're missing one thing.
So as good as the platform is the cloud moving in calculating native services, using the built-in security that comes with all that, um, and extending the function and then being able to tap into, you know, the InsureTech FinTech internet of things, and quickly adapt. I think the partnership is big. Okay. Uh, it's, it's very strong part of the exercise, so you can have the product, but without the people that work well together, I think it's also a big challenge. >>You know, all programs have their idiosyncrasies and there's a lot of challenges along the way. You know, there's one really small, simple example I can use. Um, I'd say guardian is one of our industries, market leaders, when, and when they approach the security, they really do lead the way out there. They're very strict, very, um, very responsible, which is such a pleasure to say, but at the end of the day, you still need to run a business. So, you know, because we're a partnership because we all have the same challenges we want to get to success. We were able to work together quite quickly. We planned out the right approach that maximize the security, but it also progressed the business. So, and we applied that into the overall program. So I think it is the product. Definitely. I think it is, uh, everything Nick said you actually elaborated on, but I'd like to point out there's a big part of the partnership to make it a success. >>Yeah. Great, great call out there, Nick, let's get your reaction on that because I want to get into the customer side of it. This enablement platform is kind of the new platform has been around for awhile, but the notion of buying tools and having platforms are now interesting because you have to take this kind of low code, no code capability, and you still got to code. I mean, there's some coding going on, but what it means is ease of use composing and being fast, um, platforms are super important. That requires real architecture and partnership. What's your reaction. >>Yeah. So I think, you know, I'll, I'll tie it all together between AWS and ALA, right? And here's the beauty of it. So we have something called launchpad where we're able to quickly stand up in AIDAP instance for development capabilities because of our Amazon relationship. And then to Kim's point, we have been successful 85% or more of all the work we've done with Inala is configuration versus code. And I'd actually I'd venture to say 90%. So that's extremely powerful when you think about the speed to market and our need to be product innovative. Um, so if our developers and even our, our analysts that sit on the business side could come in and quickly stand up a development buyer and start to play with, um, actuarial calculations, new product features and function, and then spin that to a more higher end development environment. You now have the perfect coupling of a new policy administration system that has the flexibility and configuration with a cloud provider like Amazon and AWS that allows us to move quickly with environments. Whereas in days past you'd have to have an architecture team come in and stand up the servers. And, you know, I'm going way back, but like buy the boxes, put the boxes in place and wire them down. This combination available in AWS has really a new capability to guardian that we're really excited about. >>I love that little comparison. 
Let me just quickly ask you compared to the old way, give us an order of magnitude of pain and timing involved versus what you just described as standing up something very quickly and getting value and having people shift their, their intellectual capital into value activities versus undifferentiated heavy lifting. >>Yes. I'll, I'll give you real dates. Right? So we engage really engaged with Accenture on the ALA program. Right before Thanksgiving of last year, we had our environment stood up and running all of our vitamins dev set UAT up by February, March timeframe on AWS. And we are about to launch our first product configuration into the, of the platform come November. So within a year we've taken arguably decades of product innovation from our mainframes and built it onto the Ayla platform on the Amazon cloud. So I don't know that you can do that in any other type of environment or partnership. >>It's amazing. You know, that's just great example to me, uh, where cloud scale and real refactoring and business agility is kinda plays out. So congratulations. I got to ask you now on the customer side, you mentioned, um, you guys love, uh, providing value to the customers. What is the impact of the customer? Okay, now you're a customer guardian life's customer. What's the impact of them. Can you share how you see that rendering itself in the marketplace? >>Yeah, so, so clearly AWS has rendered tons of value to the customer across the value stream, right? Whether it be our new business capability, our underwriting capability, our ability to process data and use their scale. I mean, it just goes on and on about the AWS, but specifically around ad-lib, um, the new API environment that we have, the connectivity that we can now make with the new backend policy admin systems has really brought us to a new, a new level. Um, whether it be repricing, product innovation, um, responding to claims capabilities, responding to servicing capabilities that the customer may need. You know, we're able to introduce more self-service. So if you think about it from the back end policy admin, going forward to our client portal, we're able to expose more transactions to self-serve. So minimize calls to the call center, minimize frustration of hold times and allow them to come onto the portal and do more and interact more with their policies because we're on this new, more modern cloud environment and a new, more modern policy admin. So we're delivering new capabilities to the customer from beginning to end being on the cloud with, with, >>Okay, final question. What's next for guardian life's journey year with Accenture. What's your plans? What do you want to knock down for the next year? What's what's on your mind? What's next? >>Uh, so that's an easy question. We've had this roadmap plan since we first started talking to Excentra, at least I've had it in my head. Um, we, we want off all of our policy admin systems for new business come end of 2025. So we've got about four policy admin systems maintaining our different lines of business, our individual disability or life insurance, and our newest, um, four systems that are kind of weighing us down a little bit. We have a glide path and a roadmap with Accenture as a partner to get off of all of these, for new business capability, um, by end of 2024. 
And that's, you know, I'm being gracious to my teams when I say that I'd like to go a little bit sooner, and then we begin to migrate the, the most important blocks of business that caused the most angst and most concerned with the executive leadership team and then, you know, complete the product. >>But along the way, you know, given regulation, given new, uh, customer customer needs, you know, meeting the needs of the customers changing life, we're going to have parallel tracks, right? So I envision we continue to have this flywheel turning of moving, but then we begin another flywheel right next to it that says we're going to innovate now on the new platform as well. So ultimately John, next year, if I could have my entire whole life block, as it stands today on the new admin platform and one or two new product innovations on the platform as well, by the third quarter, fourth quarter of next year, that would be a success. As far as that. >>Awesome. You guys had all planned out. I love, and I have such a passion for how technology powers business. And this is such a great story for next gen kind of where the modernization trend is today and kind of where it's going. It's the Nick. Appreciate it, Kim. Thanks for coming out with a censure Nixon. It's an easy question for you. I have to ask you another one. Um, this is, I got you here. You know, you guys are doing a lot of great work for other CEOs out there that are going through this right now, whether whatever they are on the spectrum missed the cloud way of getting in. Now this notion of refactoring and then replatforming, and then refactoring business is a playbook we're seeing emerge. People can get the benefits of going to the cloud, certainly for efficiency, but now it opens up the aperture for different kinds of business models. With more data access with machine learning. This refactoring seems to be the new hot thing where the best minds are saying, wow, we could do more, even more. What's your vision? How would you share those folks out there, out there, or the CEOs? What should they be thinking? What's their approach? What advice would you give? >>Yeah, so a lot of the mistakes we make as CEOs, we go for the white hot core first, right? We went the other way. We went for the newer digital assets. We went for the stuff that wasn't as concerning to the business should be fall over. Should there be an outage? Should there be anything? Right? So if you avoid the white hot core, improve it with your peripherals, easier moves to the cloud portals, broker, portals, um, beneficiary portals, uh, simple, you know, AIX frames, moving to the cloud and making them cloud native new builds. Right? So we started with all those peripheral pieces of the architecture and we avoided the white hot core because that's where you start to get those very difficult conversations about, I don't know if I'm ready to move. And I don't see the obvious benefit of moving a dividend generating policy admin system to the cloud. Like why, when you prove it in the pudding and you put the other things out there and prove you can be successful the conversation and move your core and your white hot core out to the platform out to leverage the cloud and to leverage new admin platforms, it becomes a much easier conversation because you've kind of cut your teeth on something much less detrimental to the business. Should it be >>What's the other expression, put water through the pipes, get some reps in and get the team ready to bring training, whatever metaphor you. 
That's what you're essentially saying. There, get, get some, get some, get your sea legs, get, get practice >>Exactly. Then go for the hard stuff, right? >>It's such a valid point. John is, you know, we see a lot of different approaches across a lot of different companies and, and the biggest challenges, the core is the biggest part. And if you start with that, it can be the scariest part. And I've seen companies trip up big time and you know, it becomes such a bubble spend, which really knocks you on for years, lose confidence in your strategy and everything else. And you're only as strong as your weakest link. So whether you do the outside first or the inside first from a weakest link until it's, the journey is complete, you're never going to maximize. So it was a, it was a very, uh, different and new and great approach that they took by doing a learning curve around the easiest stuff. And then, >>Yeah. Well, that's a great point. One quick, quick followup on that is that the talk about the impact of the personnel, Kim and Nick, because you know, there's a morale issue going on too. There's a, there's a, there's a training. I won't say training, but there's not re-skilling, but there's the rigor. If you're refactoring, you are, re-skilling, you're doing new things, the impact on morale and confidence. If you're not, you get the white, you don't wanna be in the white core unconfident. >>Maybe I should get first. Cause it's Nick's stuff. So he probably might want to say a lot, but yeah. Um, what we see with a lot of insurance companies, uh, they grow through acquisition. Okay. They're very large companies grown over time, uh, buying companies with businesses and systems and bringing it in. They usually bring a ten-year staff. So getting the staff to the next generation, uh, those staff is extremely important because they know everything that you've got today, and they're not so, uh, fair with what's coming up in the future. And there is a transition and people shouldn't feel threatened, but there is change and people do need to adopt and evolve and it should be fun and interesting, but it is a challenge at that turnover point on who controlling what, and then you get the concerns and get paranoid. So it is a true HR issue that you need to manage through >>The final word here. Go for it. >>Yeah. John, I'll give you a story that I think will sum the whole thing up about the excitement versus contention. We see here at guardian. I have a 50 year veteran on my legacy platform team and this person is so excited, got themselves certified in Amazon and is now leading the charge to bring our mainframes onto a lip and is one of the most essential. And I've actually had Accenture tell me if I had a person like this on every one of my engagements who is not only knowledgeable of the legacy, but is so excited to move to the new. I don't think I'd have a failed implementation. So that's the kind of guardian, the kind of backing guardians putting behind this, right? We are absolutely focusing on rescaling. We are not going to the market. We're giving everyone the opportunity and we have an amazing take-up rate. And again, like I said, 50 year veteran who probably could have retired 10 years ago is so excited, reeducated themselves, and is now a key part of this implementation, >>Hey, who wouldn't want to drive a Ferrari when you see it come in, right? I mean Barston magnet trailer. Great story, Nick. Thank you for coming on. 
Great insight, Kim, great stuff for the century as always a great story here, right? At the heart of the real focus where all companies are feeling right now, we're surviving and thriving and coming out of the pandemic with a growth strategy and a business model with powered by technology. So thanks for sharing the story. Appreciate it. Thanks John. Appreciate it. Okay. So cube coverage of 80 of us executive summit at re-invent 2021. I'm John furrier, your host of the cube. Thanks for watching.

Published Date : Nov 9 2021



Nick Volpe, Guardian Life & Kym Gully, Accenture | AWS Executive Summit 2021


 

(upbeat music) >> Hello and welcome back to theCUBE's coverage of AWS Executive Summit at re:Invent 2021. I'm John Furrier, your host of theCUBE. This segment is about surviving and thriving in the digital revolution that's happening, the digital transformation that's changing businesses. We've got two great guests here with Guardian Life, Nick Volpe, CIO of Individual Markets at Guardian Life and Kim Gully, CTO of Life and Annuities at Accenture. Accenture obviously doing a lot of cutting-edge work, Guardian changing the game. Nick, thanks for coming on. Kim, thanks for coming on. >> Thanks John, good to be here. >> So, well, before I get into the questions, I want to just set the table a little bit. The pandemic has given everyone a mandate. The good projects are exposed. The bad projects are exposed. Everyone can see what's happening because the pandemic forced everyone to identify what's working, what's not working, what to double down on. Innovation for customers is a big focus, but now with the pandemic easing and coming out of it, the world's changed. This is an opportunity for businesses. Nick, this is something that you guys are focused on. Can you take us through what Guardian Life's doing in this post-pandemic changeover as cloud goes next level? >> Yeah, thanks John. So the immediate need in the pandemic situation was about the new business capability. So for those familiar with insurance, traditionally life insurance underwriting and disability underwriting are very in-person: fluids, labs, attending physician statements. And when March of 2020 broke, that all came to an abrupt halt. Doctors' offices were closed. Testing centers were either closed or inundated with COVID testing. So we had to come up with some creative ways to digitize our new business, adopt the application, and adopt our medical questionnaires, and also get creative on some of our underwriting standards that put us at certain limits and certain levels and when we needed the fluids. So we were pretty quick, we were agile about decisions there. And we moved from about a 40 to 50% adoption rate of our electronic applications to north of 98% across the board. In addition, we saw some opportunities for products and more capabilities beyond new business. So after we weathered the storm, we started to take a step back and, like you said, looked at what we were doing, had a start, stop, continue conversation internally to say, this digitization is the new norm. How do we meet it from every angle, not just new business? And that's where we started to look at our policy administration systems, moving more to the cloud and leveraging the cloud to its fullest extent versus just a lift and shift. >> Kim, I want to get your perspective at Accenture. I've done a lot of interviews over the past, I think, 18 months. A lot of use cases with Accenture, almost in every vertical, where you guys are almost like the firefighters who get called in to help out, 'cause the cloud actually now is an enabler. How do you see the impact of the pandemic reverberating through? I mean, obviously you guys come to the table, you guys bring in, I mean, what's your perspective on this? >> So, yeah, it's really interesting. I think the most interesting fact is, we talk about how Nick raised such a strong area in our business, underwriting, and how we can expedite that. It's been on the table for a number of years, but the industry has been very slow or reluctant to embrace it.
And the pandemic became an enforcer, to be honest. And a lot of the companies were thinking about it prior, but that was it, they'd just think about it. I mean, even Accenture, we launched a huge three-year investment to get clients into cloud and digital transformation, but the pandemic just expedited everything. Now the upside is clients that were in a well-advanced stage of planning were easily able to adopt, but clients that weren't were really left behind. So we became very, very busy just supporting the clients that didn't have as much forethought as the likes of Guardian, et cetera. >> Nick, that brings up a good point. I want to get your reaction to see if you agree. I mean, people who didn't put their toe in the cloud or just jump in the deep end really got flat-footed when the pandemic hit, because they weren't prepared. People who were either ingratiated in with the cloud or had active projects or even full deployments in there did well. What's your take on that? >> Yeah, the enablement we had and the gift we were given by starting our cloud journey in, I want to say, 2016, '17, was that we really started moving to the cloud. And I think we were the only insurer that had moved production load to the cloud at that point. Most insurers were putting their development environments, maybe even their SIT environments, out there, but Guardian had the strategy of getting out of the data center and moving to a much more flexible, scalable environment architecture using the AWS cloud. So we completed our journey into the cloud by 2018, '19, and we were at the point of really capitalizing versus moving. So we were able to move very quickly, very nimbly, when the pandemic hit or in any digital situation. We have that flexibility and capacity that AWS provides us to really respond to our customers, our customers' needs. So we were one of the more fortunate insurers that were well into our cloud journey and at the point of optimization versus the point of moving. >> Let's talk about the connection with Accenture's life insurance and annuity platform, also known as ALIP, I think the acronym is. What was that? Why was that relevant? What was that all about? >> Yeah, so I'll go first and then Kim, you can jump in and see if you agree with me. >> He essentially helped with that, love it. (laughs) >> Yeah, you would suspect you would, right John? >> Yeah. (laughs) >> Like I said, our new business focus was the original, like the emergency situation when the pandemic hit. But as we went further into it and realized the mortality and morbidity and the needs and wants of our customers, which is a major focus of Guardian, really having the client at the center of every conversation we have, we realized that there was a real opportunity for product. And as product continues to change, and you had regulations like 7702 coming out where you had to reprice the entire portfolio to be able to sell it by January 1, 2022, we realized our current systems for policy admin were not matching the digital capabilities that we had moved to the cloud. So we embarked on a very extensive RFP with Accenture and a few other vendors that would come to the table and work with us. And we just really got to a place where the combination of our desire to be on the cloud, be flexible, and be capable for our customers married really well with the knowledge, the industry knowledge, and the capabilities that Accenture brought to the table with the ALIP platform.
Their book of business, their current infrastructure, their configuration versus development, really all aligned with our need for flexible, fast time to market. We're looking to cut development times significantly. We're looking to cut testing times significantly. And as of right now, it's all proving true between the cloud capability and the ALIP capability. We are reaping the benefits of having this new platform coming up live very soon here. >> Before I get to Accenture's perspective, I want to just ask you a quick follow-up on that, Nick, if you don't mind. You basically talked us through, okay, I can see what's happening here. You get with Accenture, take advantage of what they got going on. You get into the cloud, you start getting the efficiencies, get the cultural change. What refactoring have you seen? What's your vision, I should say. What's your vision around what's next? Because clearly there's a playbook. You get in the cloud, re-platform, you get the cultural fit, you understand the personnel issues, how to tap the resources, then you've got to look for innovation where you can start changing how you do things to refactor the business model. >> Yeah, so I think that, specifically to this conversation, that's around the product capability. So for all too long, the insurance companies have had three specific sleeves of insurance products. We've had individual life, we've had individual disability, and we've had individual annuities, each of them serving a specific purpose in the customers' lives. What this platform and this cloud platform allow us to do is start to think about, can we create the concept of a single wrapper? Can we bring some of these products together? Can we centralize the buying process? And with ALIP behind the scenes, you don't have that. I kind of equate it to building a Ferrari and attaching a trailer to it, and that's what we were doing today. Our digital front-ends, our new business capabilities are all being anchored down or slowed down by our traditional mainframe back-ends. By introducing Accenture on the cloud in AWS, we now have our Ferrari fully free to run as fast as it can versus anchoring this massive trailer to it. So it really was a matter of bringing our product innovation to our digital front-end innovation that we've been working on for two or three years prior. >> I mean, this is kind of the Amazon way. You decouple things, you decompose, you don't want to have a drag. And with containers, we're seeing companies look at existing legacy in a way that's different. Could you talk about how you guys look at that internally, Nick? Because a lot of CIOs are saying, hey, you know what? I can have the best of both worlds. I don't have to kill the old to bring in the new, but I can certainly modernize everything. What's your reaction to that? >> Yeah. And I think that's our exact path forward. We don't feel like we need to boil the ocean. We're going after this surgically for the things that we think are going to be most impactful to our customers. So legacy blocks of business that are sitting out there, that are completely closed, they're not our concern. It's really hitching this new ALIP capability to the next generation of products, the next generation of customer needs, understanding data. Data capture is very important. So if you look at the mainframes and what we're living on now, it's all about the owner of the policy. You lose connection with the beneficiary or the insured.
What these new platforms allowed us to do is really understand the household around the products that they're buying. I know it sounds simple, but that data architecture, that data infrastructure on these newer platforms and in the cloud, you can churn it faster, you have scale to do more analysis, but you're also able to capture it in a much cleaner way. On the traditional systems, you're talking about what we intimately call the blob on the mainframe that has your name, your first name, your last name, your address, all in one free-form field sitting in some database. It's very hard to discern. On these new platforms, given our need and our desire to be deeper into the clients' lives, understanding their needs, ALIP coupled with AWS, with our new business capabilities on the front-end, really puts together that true customer value chain. That's going to differentiate us. >> Kim, okay, CTO of ALIP, as he calls it, the acronym for the service you have. This is a great example. I hate to use the word on-ramp 'cause that sounds so old. But in a way, in vertical markets, you're seeing the power of the cloud because the data and the AI can be freed up, and you can take advantage of all the heavy lifting by providing some platform or some support with Amazon, your expertise. This is a great use case of that, I think. And this is, I think, a future trend where the development can be faster, the value can be faster, and your customers don't have to build all the lower-level abstractions, if you will. Can you describe the Accenture relationship with your customers? Because this is a really great use case. >> Yeah, it is. Our philosophy is simple. Let's not reinvent the wheel. And with the cloud and native services that AWS provides, we want to focus on the business of what the system needs to do and not all the little side bits. We can get a great service that's fully managed, that has security patches and updates. We want to focus on the real deal. Like Nick wants to focus on the business and not so much what's underneath it. That's my problem, I'm focusing on that. And we will work together in a nice little gel. You've heard the relatively new term, no code/low code. It's strange — a modern system like ALIP has been that way for a number of years. Basically it means, I don't want to make code changes. I just want to be able to configure it. So now more people can have access to make changes, and we can even get it to the point where it's the people that are sitting there, dealing with the clients. That would be the ultimate, where they can innovate and come up with ideas and try things because we've got it so simple. We're not there yet, let's be realistic, but that's the ultimate goal. So ALIP, the no code/low code, has been around for quite some time, and maybe we should take advantage of that, but I think we're missing one thing. So as good as the platform is — moving into the cloud, the native services, using the built-in security that comes with all that, extending the function, and then being able to tap into the InsurTech, FinTech, internet of things, and quickly adapt — I think the partnership is big. Okay, it's a very strong part of the exercise. So you can have the product, but without the people that work well together, I think it's also a big challenge. All programs have their idiosyncrasies and there's a lot of challenges along the way. There's one really small, simple example I can use. I'd say Guardian is one of our industry's market leaders in how they approach security.
They really do lead the way out there. They're very strict, very responsible, which is such a pleasure to say, but at the end of the day, you still need to run a business. So, because we're a partnership and we all have the same challenges, we want to get to success. We were able to work together quite quickly. We planned out the right approach that maximized the security but also progressed the business, and we applied that into the overall program. So I think it is the product, definitely. I think it is everything Nick said and you actually elaborated on, but I'd like to point out there's a big part of the partnership in making it a success as well. >> Yeah, great, great call-out there. Nick, let's get your reaction on that, because I want to get to the customer side of it. This enablement platform is the new thing. I mean, platforms have been around for a while, but the notion of buying tools and having platforms is now interesting, 'cause you have to take this low code/no code capability. I mean, you still got to code — there's some coding going on — but what it means is ease of use, composing, and being fast. Platforms are super important. That requires real architecture and partnership. What's your reaction? >> Yeah, so I think I'll tie it all together between AWS and ALIP, and here's the beauty of it. So we have something called LaunchPad where we're able to quickly stand up an ALIP instance for development capabilities because of our Amazon relationship. And then to Kim's point, we have been successful — 85% or more of all the work we've done with ALIP is configuration versus code, and actually I'd venture to say 90%. So that's extremely powerful when you think about the speed to market and our need to be product-innovative. So our developers, and even our analysts that sit on the business side, can come in and quickly stand up a development environment, start to play with actuarial calculations, new product features and functions, and then spin that to a higher-end development environment. You now have the perfect coupling of a new policy administration system that has the flexibility and configuration with a cloud provider like Amazon and AWS that allows us to move quickly with environments, whereas in days past, you'd have to have an architecture team come in and stand up the servers. And I'm going way back, but like buy the boxes, put the boxes in place and wire them down. This combination of ALIP and AWS has really brought a new capability to Guardian that we're really excited about. >> I love that little comparison. Let me just quickly ask you, compared to the old way, give us an order of magnitude of the pain and timing involved versus what you just described — standing up something very quickly, getting value, and having people shift their intellectual capital into value activities versus undifferentiated heavy lifting. >> Yes, I'll give you real dates. So we really engaged with Accenture on the ALIP program right before Thanksgiving of last year. We had our environment stood up and running, all of our DEV, SIT, UAT, by the February, March timeframe on AWS, and we are about to launch our first product configuration onto the ALIP platform come November. So within a year, we've taken arguably decades of product innovation from our mainframes and built it onto the ALIP platform on the Amazon cloud. So I don't know that you can do that in any other type of environment or partnership. >> That's amazing.
That's just a great example to me of where cloud scale and real refactoring and business agility play out. So congratulations. I've got to ask you now, on the customer side you mentioned, you guys love providing value to the customers. What is the impact to the customer? Okay, now you're a customer, Guardian Life's customer. What's the impact to them? Can you share how you see that rendering itself in the marketplace? >> Yeah, so clearly AWS has rendered tons of value to the customer across the value stream, whether it be our new business capability, our underwriting capability, our ability to process data and use their scale. I mean, it just goes on and on with AWS. But specifically around ALIP, the new API environment that we have, the connectivity that we can now make with the new back-end policy admin systems, has really brought us to a new level, whether it be repricing, product innovation, responding to claims capabilities, responding to servicing capabilities that the customer might need. We're able to introduce more self-service. So if you think about it, from the back-end policy admin going forward to our client portal, we're able to expose more transactions to self-serve. So minimize calls to the call center, minimize the frustration of hold times, and allow them to come onto the portal and do more and interact more with their policies, because we're on this new, more modern cloud environment and a new, more modern policy admin. So we're delivering new capabilities to the customer from beginning to end, being on the cloud with ALIP. >> Okay, final question. What's next for Guardian Life's journey here with Accenture? What are your plans? What do you want to knock down for the next year? What's on your mind? What's next? >> So that's an easy question. We've had this roadmap planned since we first started talking to Accenture — at least I've had it in my head. We want off all of our policy admin systems for new business come the end of 2025. So we've got about four policy admin systems maintaining our different lines of business — our individual disability, our life insurance, and our annuities — four systems that are weighing us down a little bit. We have a glide path and a roadmap with Accenture as a partner to get off of all of these for new business capability by the end of 2024. And I'm being gracious to my teams when I say that; I'd like to go a little bit sooner. And then we begin to migrate the most important blocks of business, the ones that cause the most angst and most concern with the executive leadership team, and then complete the product. But along the way, given regulation, given new customer needs, meeting the needs of the customer's changing life, we're going to have parallel tracks. So I envision we continue to have this flywheel turning of moving, but then we begin another flywheel right next to it that says we're going to innovate now on the new platform as well. So ultimately, John, next year, if I could have my entire whole life block, as it stands today, on the new admin platform, and one or two new product innovations on the platform as well, by the 3rd quarter, 4th quarter of next year, that would be a success as far as I'm concerned. >> Awesome, you guys have it all planned out. I love it, and I have such a passion for how technology powers business. And this is such a great story for next gen, where the modernization trend is today and where it's going. So Nick, appreciate it. Kim, thanks for coming on with Accenture. Nick, just an easy question for you — I have to ask you another one.
While I've got you here — you guys are doing a lot of great work. For other CIOs out there that are going through this right now, wherever they are on the spectrum — missed the cloud wave, getting in now — this notion of re-platforming and then refactoring the business is a playbook we're seeing emerge. People can get the benefits of going to the cloud, certainly for efficiency, but now it opens up the aperture for different kinds of business models, with more data access, with machine learning. This refactoring seems to be the new hot thing, where the best minds are saying, wow, we could do more, even more. What's your vision? What would you share with those folks out there, the CIOs? What should they be thinking? What's their approach? What advice would you give? >> Yeah, so a lot of the mistakes we make as CIOs, we go for the white hot core first. We went the other way. We went for the newer digital assets. We went for the stuff that wasn't as concerning to the business should it fall over, should there be an outage, should there be anything. So if you avoid the white hot core, improve it with your peripherals — easier moves to the cloud: portals, broker portals, beneficiary portals, simple AIX frames moving to the cloud and being made cloud native, new builds. So we started with all those peripheral pieces of the architecture and we avoided the white hot core, because that's where you start to get those very difficult conversations about, I don't know if I'm ready to move, and I don't see the obvious benefit of moving a dividend-generating policy admin system to the cloud — like, why? When you prove it in the pudding, and you put the other things out there and prove you can be successful, the conversation to move your core, your white hot core, out to the platform, out to leverage the cloud and to leverage new admin platforms, becomes a much easier conversation, because you've kind of cut your teeth on something much less detrimental to the business should it go wrong. >> What's the old expression — put water through the pipes, get some reps in, get the team ready through training, whatever metaphor you use. That's what you're essentially saying there: get your sea legs, get practice. >> Exactly. >> Then go for the hard stuff. >> It's such a valid point, John. We see a lot of different approaches across a lot of different companies, and the biggest challenge is that the core is the biggest part. And if you start with that, it can be the scariest part. And I've seen companies trip up big time, and it becomes such a bubble of spend, which really knocks you back for years — you lose confidence in your strategy and everything else. And you're only as strong as your weakest link. So whether you do the outside first or the inside first, from a weakest link until the journey is complete, you're never going to maximize. So it was a very different and new and great approach that they took by doing a learning curve around the easiest stuff and then hitting the core. >> Yeah, well, that's a great point. One quick, quick follow-up on that: talk about the impact on the personnel, Kim and Nick, because there's a morale issue going on too. There's training — I won't say training, not quite re-skilling, but there's the rigor. If you're refactoring, you are re-skilling, you're doing new things. What's the impact on morale and confidence? You certainly don't want to be in the white hot core unconfident. >> Maybe I should go first, 'cause it's Nick's staff.
So he probably might want to say a lot, yeah. What we see with a lot of insurance companies is that they grow through acquisition. Okay, they're very large companies, grown over time, buying companies with businesses and systems and bringing them in. They usually bring a tenured staff. So getting the staff to the next generation — that staff is extremely important, because they know everything that you've got today, but they're not so aware of what's coming up in the future. And there is a transition, and people shouldn't feel threatened, but there is change, and people do need to adapt and evolve, and it should be fun and interesting. But it is a challenge at that turnover point, on who's controlling what, and then you get the concerns and people get paranoid. So it is a true HR issue that you need to manage through. >> Nick, you're the final word here. Go for it. >> Yeah, John, I'll give you a story that I think will sum the whole thing up about the excitement versus contention we see here at Guardian. I have a 50-year veteran on my legacy platform team, and this person is so excited — got themselves certified in Amazon and is now leading the charge to bring our mainframes onto ALIP, and is one of the most essential people on this effort. And I've actually had Accenture tell me, if I had a person like this on every one of my engagements, someone who is not only knowledgeable of the legacy but is so excited to move to the new, I don't think I'd have a failed implementation. So that's the kind of backing Guardian's putting behind this. We are absolutely focusing on re-skilling. We are not going out to the market. We're giving everyone the opportunity, and we have an amazing take-up rate. And again, like I said, a 50-year veteran who probably could have retired 10 years ago is so excited, re-educated themselves, and is now a key part of this implementation. >> And who wouldn't want to drive a Ferrari when you see it come in — minus the trailer, right? Great story, Nick. Thank you for coming on, great insight. Kim, great stuff from Accenture, as always a great story here. It's right at the heart of what all companies are feeling right now: surviving and thriving and coming out of the pandemic with a growth strategy and a business model powered by technology. So thanks for sharing the story, appreciate it. >> Thanks John, appreciate it. >> Okay, it's CUBE coverage of AWS Executive Summit at re:Invent 2021. I'm John Furrier, your host of theCUBE. Thanks for watching. (bright music)
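To make the environment story above a bit more concrete: purely as an illustration of the pattern Nick describes — standing up a development environment programmatically instead of buying and wiring boxes — a minimal sketch using boto3 and a CloudFormation template might look like the following. The stack name, template location, parameters, and tags are hypothetical placeholders; this is not Guardian's actual LaunchPad tooling or Accenture's ALIP installer.

```python
# Illustrative only: stand up a short-lived development environment on AWS
# from a CloudFormation template, then wait for it to finish building.
# Every name and parameter below is a hypothetical placeholder.
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")
stack_name = "alip-dev-sandbox"  # hypothetical stack name

cfn.create_stack(
    StackName=stack_name,
    TemplateURL="https://example-bucket.s3.amazonaws.com/dev-env.yaml",  # hypothetical template
    Parameters=[
        {"ParameterKey": "EnvironmentTier", "ParameterValue": "dev"},
        {"ParameterKey": "InstanceType", "ParameterValue": "t3.large"},
    ],
    Capabilities=["CAPABILITY_NAMED_IAM"],
    Tags=[{"Key": "owner", "Value": "product-team"}],
)

# Block until the environment is ready before handing it to the team.
cfn.get_waiter("stack_create_complete").wait(StackName=stack_name)
print(f"{stack_name} is ready")
```

The same call can be repeated per tier (DEV, SIT, UAT) and the stack torn down with delete_stack when it is no longer needed — the kind of repeatable, on-demand standup the conversation above contrasts with racking physical servers.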

Published Date : Oct 27 2021



Tom Anderson, Red Hat | AnsibleFest 2021


 

(bright music) >> Well, hi everybody. John Walls here on theCUBE, continuing our coverage of AnsibleFest 2021 with Tom Anderson, the Vice President of Product Management at Red Hat. And Tom, you've been the answer man for theCUBE here over the last week, 10 days or so. Third CUBE appearance — I hope we haven't worn you out. >> No, you haven't John, I love it, I love doing it. So it's great to have you at the event. >> Thank you for letting us be a part of that. It's been a lot of fun. Let's go and look at the event now. As far as the big picture here, what are the major takeaways that have been talked about, that you'd like people, customers, to go home with? A lot of this has been virtual, obviously, so when I say go home, I mean that figuratively, but what do you want people to remember and then apply to their businesses? >> Right. So being a product guy, I want to talk about products usually, right? So the big product announcements from this year's event have been the rollout of, really, the next generation of the Ansible Automation Platform, which is really a re-architecture turning it into a cloud-native application, an automation application itself that scales to our customers' needs. So a lot of big announcements around that. And so what does that do for customers? That's really bringing them the automation platform that they can scale from the data center, to the cloud, to the Edge and everywhere in between, across a single platform with a single, easy-to-use automation language. And then secondly, on that, as automation starts to shift left — we always talk about technology shifting left towards the developer, and automation is also shifting left towards the developer and other personas in an organization — we're really happy about the developer tools and the tooling that we're providing to customers with the new automation platform too, which makes the development of automation content — the creation, the testing, the deployment and the management of that content across an enterprise — far easier than it's ever been. So it's really a little bit about the democratization of automation. We see that shifting left, if you will. And I know I've said that already, but we see that shifting left of automation into other parts of the organization, beyond the domain experts, the network engineers or the storage experts, et cetera. Pushing that automation out into the hands of other personas in the organization has been a big trend that we've seen, and there are a lot of product announcements around that. So really excited about the product announcements in particular, but also the involvement and the engagement of our ecosystem, our upstream community. So important to our product and our success: our ecosystem partners, and obviously, last but not least, our customers and our users. >> So you hit a lot of big topics there. So let's talk about the Edge. That seems to be a fairly significant trend at this point, right? 'Cause you're trying to get the automation out there where the data resides, and that's where the apps are, right? So where the data is, that's where things are happening out there on the Edge. So maybe just dive into that a little bit, and how you're trying to facilitate that need. >> Yeah. So a couple of trends around the Edge. Obviously there's the architecture itself, with lower capacity or lower capability devices and compute infrastructure at the Edge.
And whether that's at the far edge with very low capacity devices, or even in near-edge scenarios where you don't have, you know, data center IT people out there to support those environments. So being able to get at those low capability, low capacity environments remotely, Ansible is a really good fit for that because of our agentless architecture. The agentless architecture of Ansible itself allows you to drive automation out into the devices and into the environments where there isn't a high capacity infrastructure. And the other theme that we've seen is one of the commonalities: no matter where the compute is taking place and where the users are, there always has to be network. So we see a lot of network automation use cases out at the Edge, and Ansible is, you know, the de facto network automation solution in the market. So we see a lot of our customers driving Ansible use cases out into their Edge devices. >> You know, you talk about development too, and just kind of this changing relationship between Ansible and DevOps, and how that has certainly been maturing and seems to be really taking off right now. >> Yeah. So, you know, what we've seen a lot of is automation becoming frictionless, right? How do we take the friction out of the system? That frees developers up to be more productive, for organizations to be more agile, to roll out applications faster. How do we do that? We need to get access to the infrastructure and the resources that developers need. We need to get that access into their hands when they need it, and in a frictionless sort of way, right? So, you know, all of the old school, traditional ways of developers having to get infrastructure by opening a help desk ticket to get servers built for them, and waiting for IT ops to build the servers and to deploy them and to send them back a message, all that is gone now. These, you know, subsystem owners, whether that's compute or cloud or network or storage, their ability to use Ansible to expose their resources for consumption by other personas, developers in this case, makes developers happy and more efficient, because they can just use those automation playbooks, those Ansible playbooks, to deploy the infrastructure that they need to develop, test and deploy their applications on. And the actual subsystem owners themselves can be assured that the usage of those environments is compliant with their standards, because they've built and shared the automation with those developers to be able to consume when they want. So we're making both sides happy: agile, efficient developers, and happy infrastructure owners, because they know that the governance and compliance around that system usage is on point with what they need and what they want. >> Yeah, it's a big win-win, and a very good point. I always like it when we kind of get down to the nitty-gritty and talk about what a customer is really doing. Because we could talk about hypotheticals and trends and development and maturity rates and all those kinds of things, but in terms of actual customers, you know, what people really are doing, what do you think have been a few of the plums that you'd like to make sure people were paying attention to? >> Yeah. I think from this year's event, I was really taken by the JP Morgan Chase presentation. And it really kind of fits into my idea of shifting left and the democratization of automation.
They talked about, I think the number was around 7,000 people, associates inside that organization, across 22 countries. So kind of global consumption of this, building automation playbooks and sharing those across the organization. I mean, gone are the days of, you know, very small teams of people just automating the things that they do; it's grown so big, and so pervasive now. I think JP Morgan Chase really kind of brings that out, teases that out, the kind of cultural impact it's had on their organization, the efficiencies they've been able to draw from that, their ability to bring the developers and their operations teams together to be working as one. I think their story is really fantastic. And I think this is the second year that JP Morgan Chase has been presenting at Fest, and this year's session was fantastic. I really, really enjoyed that. So I would encourage anybody to go back and look at the recording of that session. And then there's the Gamesis group, the total other end of the spectrum, right? Financial services, JP Morgan Chase, a global company, to Gamesis, right? These are people who are rolling out new games and need to be able to manage capacity really well when a new game hits, right? Think about a new game hitting, and the type of demand and consumption there is for that game, and then the underlying infrastructure to support it. And Gamesis did a really great presentation around being able to scale out automation, to scale up and down automation, to be able to spin up clusters and deploy infrastructure to run their games on an as-needed basis. So kind of that business agility and how automation is driving that, or business agility is driving the need for automation, in these organizations. So those are just a couple of examples, but there was a good one from another financial services company that talked about the cultural impacts of automation, their idea of extreme automation. In fact, in one of the sessions I interviewed Joe Mills, a gentleman from a card services, financial services company, and he talked about extreme automation there and how they're using automation guilds and communities of practice in their organization to get over the cultural hurdles of adopting automation and sharing automation across an organization. >> Hm. So a wide array, obviously, of customer uses, and all very effective, I guess, and, you know, all telling their own story. Somewhat related to that, and as you put it out there too, if you want to go back and look, these are really great case studies to take a look at. For those who maybe couldn't attend, or haven't had a chance to look at any of the sessions yet, what are some of the kinds of things that were discussed in terms of sessions, to give somebody a flavor of what was discussed, and maybe to tease them a little bit for next year, right? Just in case you weren't able to participate and can't right now, there's always next year. So maybe you could give us a little bit of flavor of that, too. >> Yeah. So we kind of break down the sessions a little bit into the more technical sessions and then the less technical sessions, let's put it that way. And on the technical session front, certainly a couple of sessions were really about getting started. Those are always popular with people new to Ansible. So there's the session that aired on the 29th, which has been recorded and you can rewatch it.
That's the Getting Started Q&A with the technical Ansible experts. That's a really great session, because you see the types of questions that are being asked, so you know you're not alone; if you're new to Ansible, the types of questions asked are probably the questions that you have as well. And then obviously there's the value of the technical Ansible experts who are answering those questions. So that was a great session. And then for a lot of folks who may want to get involved in the community, the upstream community, there's a great session that was also on the 29th, and it was recorded for rewatching, around getting started with participation in the Ansible community, with a live Q&A there. The Ansible community, for those who don't know, is a large, robust, vibrant upstream community of users, of software companies, of all manner of people that are contributing upstream to the code and making Ansible a better solution for them and for everybody. So that's a great session. And then last but not least, almost always the most popular sessions are the roadmap sessions, and Massimo Ferrari, a gentleman on my team, did a great session on the Ansible roadmap. So do a search on roadmap in the session catalog, and you can see the recording of that. So that's always a big deal. >> Yeah, roadmaps are great, right? Because especially for newcomers, they want to know: I'm down here at 0.0 and I've got a destination in mind, I want to go way out there, so how do I get there? So, to that point, for somebody who is beginning their journey, and maybe they've automated with the ability to manually intervene, right? And now you've got to take the hands off the wheel and you're going to allow for full automation. So what's the message you want to get across to those people who maybe are going to lose that security blanket they've been hanging on to, you know, for a long time, when you take the wheels off and go? >> No, John, that's a great question. And that's usually a big apprehension of kind of full automation, which is, you know, that kind of turning over the reins, if you will, to somebody else. If I'm the person who's responsible for this storage system, if I'm the person responsible for these network elements, these routers, these firewalls, whatever it might be, I'm really kind of freaked out about giving control or access to those things, from a configuration standpoint, to people outside of my organization who don't have the same level of expertise that I do. But here's the deal: in a well implemented, well architected Ansible Automation Platform environment, you can control the type of automation that people do, and who does it against what, managing that automation as code. So checking in, checking out, version control, deployment access. So there's a lot of controls that can be put in place. So it isn't just a free-for-all, everybody automating everything. Organizations can roll out automation, can have access to different kinds of automation, can control and manage what their organizations can use and see and do with Ansible. So there are lots of controls built in for organizations to put in place, and to give those subsystem owners confidence that how people are accessing their subsystems using Ansible automation can be controlled in a way that makes them comfortable and assures compliance and governance around those resources. >> Well, Tom, we appreciate the time.
Once again, I know you've been a regular here on theCUBE over the course of the event. We'll give you a little bit of time off and let you get back to your day job, but we do appreciate that and I wish you success down the road. >> Thank you very much. And we'll see you again next year. >> You bet. Thank you, Tom Anderson, joining us Vice President of Product Management at Red Hat, talking about AnsibleFest, 2021. I'm John Walls, and you're watching theCUBE. (lively instrumental music)
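Tom's point about subsystem owners publishing playbooks that developers can consume on demand lends itself to a small illustration. The sketch below is not from the interview or from Red Hat; it assumes the open source ansible-runner Python library, and the playbook name, inventory path, and variables are hypothetical placeholders for whatever content an infrastructure team would actually share.

```python
# A minimal sketch of how a developer persona might consume automation content
# that a subsystem owner has published, using the ansible-runner Python library.
# The playbook name, inventory path, and extra variables are hypothetical.
import ansible_runner

result = ansible_runner.run(
    private_data_dir="/tmp/dev-selfservice",   # working directory for runner artifacts
    playbook="provision_dev_environment.yml",  # playbook shared by the infra team
    inventory="inventories/dev",               # inventory scoped to dev resources
    extravars={"app_name": "demo-app", "instance_count": 2},
)

# ansible-runner reports overall status and per-host stats,
# which a self-service portal could surface back to the requester.
print(result.status)   # e.g. "successful" or "failed"
print(result.rc)       # process return code
print(result.stats)    # per-host ok/changed/failed counts
```

In practice a self-service portal or CI pipeline would wrap a call like this, so developers get infrastructure on demand while the playbook itself stays under the version control and access rules the subsystem owners described.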

Published Date : Oct 1 2021

SUMMARY :

the Vice President of Product So that's great to have that you think you'd like people, and really, the next generation You know, that seems to be a, you know, and into the environments where and seems to be really and the resources that developers need. been a few of the plums I mean, so gone are the days of, you know, and maybe to tease them that aired on the 29th, and you take the wheels off and go. and have access to different and let you get back to your day job, And we'll see you again next year. I'm John Walls, and

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Tom Anderson | PERSON | 0.99+
Massimo Ferrari | PERSON | 0.99+
John Walls | PERSON | 0.99+
Tom | PERSON | 0.99+
John | PERSON | 0.99+
Joe Mills | PERSON | 0.99+
JP Morgan Chase | ORGANIZATION | 0.99+
Gamesis | ORGANIZATION | 0.99+
next year | DATE | 0.99+
Red Hat | ORGANIZATION | 0.99+
one | QUANTITY | 0.99+
Ansible | ORGANIZATION | 0.99+
second year | QUANTITY | 0.99+
second year | QUANTITY | 0.99+
2021 | DATE | 0.98+
22 countries | QUANTITY | 0.98+
around 7,000 people | QUANTITY | 0.98+
Third | QUANTITY | 0.97+
both sides | QUANTITY | 0.96+
Edge | TITLE | 0.96+
this year | DATE | 0.96+
10 days | QUANTITY | 0.96+
single platform | QUANTITY | 0.95+
JP Morgan Chase | ORGANIZATION | 0.94+
single | QUANTITY | 0.93+
secondly | QUANTITY | 0.93+
six groups | QUANTITY | 0.92+
29th | DATE | 0.9+
Vice President | PERSON | 0.84+
29th | QUANTITY | 0.83+
AnsibleFest | TITLE | 0.83+
last a week | DATE | 0.8+
DevOps | ORGANIZATION | 0.79+
AnsibleFest 2021 | EVENT | 0.68+
game | QUANTITY | 0.68+
AnsibleFest | ORGANIZATION | 0.67+
Edge | ORGANIZATION | 0.59+
theCUBE | ORGANIZATION | 0.58+
Edge | COMMERCIAL_ITEM | 0.56+
couple of sessions | QUANTITY | 0.55+

Roberto Giordano, Borsa Italiana | Postgres Vision 2021


 

(upbeat music) >> From around the globe, it's theCUBE! With digital coverage of Postgres Vision 2021, brought to you by EDB. >> Welcome back to Postgres Vision 21, where theCUBE is covering the innovations in open source trends in this new age of application development, and how to leverage open source database technologies to create world-class platforms that are cost-effective and also scale. My name is Dave Vellante, and with me is Roberto Giordano, who is the End User Computing, Corporate, and Database Services Manager at Borsa Italiana, the Italian Stock Exchange. Roberto, great to have you. Thanks for coming on. >> Thanks Dave, and thanks to you for the invitation. >> Okay, and we're going to dig in to the great customer story here. First, Roberto, tell us a little bit more about Borsa Italiana and your role at the organization. >> Absolutely. Well, as you mentioned, Borsa is the Italian Stock Exchange. We used to be part of the London Stock Exchange, but last month we left that group, and we joined another group called Euronext, so we are now part of another group, I would say. And right now, Euronext provides the biggest liquidity pool in Europe, just to mention something. And basically we provide the market infrastructure to our customers across Europe and the whole world. So if it happens for you to buy a little of, I don't know, Ferrari, for instance, you probably use our infrastructure. >> So I wonder if you could talk about the key drivers in the exchange business in Italy. I don't know how closely you follow what's going on in the United States, but it's crypto madness, there's the Reddit army driving up stocks that have big short positions, and of course the regulators have to look at that, and there's a big debate going on. Well, I don't know what it's like in Italy, but what are the key drivers that are really informing the priorities for your technology strategy? >> Well, you mentioned, for instance, cases that are a little bit lateral to the global markets and also to our markets. As IT professionals running market infrastructure, our first goal is to provide an infrastructure that is reliable and has the lowest possible latency. So we are very focused on performance and reliability, just to mention the two main drivers within our systems. >> Well, and you have end-user computing in your title, and we're going to get into the database discussion, but presumably with COVID you had to pivot, and that piece of your job was escalated in 2020, I would imagine. And you mentioned latency, which is obviously a key factor in database access, but that must've been a big challenge last year. >> Well, it was really a challenge, but basically we moved, just within a weekend, the whole organization to working remotely. And it has been like this since February 2020. Think about the challenge of moving almost 1,000 people that used to come to the office every day to start to work remotely. And within my team of end user computing, this was really a challenge, but it was a good one at the end. We succeeded, and everything worked fine. From our perspective, no news is good news, you know, because normally when something doesn't work, we are in the newspapers. So if you didn't hear about us, it means that everything worked out just fine. >> Yeah. It's amazing, Roberto.
We're both in the technology business, you as a practitioner and observer, but I mean, if you're in the tech business, most companies actually pivoted quite well. You've always been a digital business, which is different. I mean, if you're Ferrari, making cars and you can't get semiconductors, that's one thing, but most technology companies actually made the transition, you know, quite amazingly. Let's get into the case study a bit. I wonder if you could paint a picture of your organization's infrastructure and applications, what it looks like, and particularly your database infrastructure, what does that look like? >> Well, we are a multi-vendor shop, so we like to pick the right technology for the right service. This means that my database services teams currently manage several different technologies, and Postgres plays a big role in our portfolio. We currently support both the fully open source version of Postgres and also the EDB distribution. In particular, we prefer to use the EDB distribution where we need specific functionalities that just EDB provides, and where we need a first-class level of support that EDB in recent years has been able to provide to us. >> When you say full functionality, are you talking about things like ACID compliance, two-phase commits? I mean, all these enterprise capabilities, is that right? Or maybe you could be more specific. >> Just to mention one, for instance, we recently migrated our intra-site availability solution to use the EDB Failover Manager, which is an additional component that just EDB provides. >> Yeah. Okay. So failover and recovery, obviously, that's a solution that you get from the EDB distro as opposed to having to build it yourself with open source tooling. >> Yeah, correct. Well, basically, historically we used to rely on OS clustering from that perspective. But over the years we found that even if it's a technology that works fine, it has been around for four decades and so on, we faced some challenges internally, because within my team we don't own the operating system layers as well. So we wanted a solution that was 100% within our control and perimeter. So just a few months ago we asked the EDB folks if they could provide something, and after a couple of meetings, also with their pre-sales engineers, we found the right solution for us. So, long story short, we launched a quick proof of concept to test the solution together, again using the EDB consultancy. And then, at the beginning of this year, we went live with the first mission critical service using this brand new technology, well, brand new technology for us; EDB created it a few years ago. >> And I do have some follow-up questions, but I want to understand what catalyzed the, you know, what was the motivation for going with an open source database? I mean, you're a great example because you're multi-vendor, so you have experience with all of it, the full spectrum. What was it about open source databases generally, EDB specifically, that triggered the choice? >> Well, thanks for the question. This is one of the questions that I always like. I think what really drove us was the right combination between ease of use, so simplicity, and also good value for money. So we like to pick the right database technology for the right kind of service and budget that the service requires, and the open source solution, for a specific service,
is, you know, our first choice. So we are not, let's say, a company that uses just one technology. We like to take the best of breed that the market can offer. In some cases, open source, and Postgres in particular, is our choice. >> How involved was the line of business in this, both the decision and the implementation? Was it kind of invisible to them, or was this really more of a technology decision based on your interpretation of the requirements? I'm interested in who was involved and how you actually got it done. >> Well, I think this decision was transparent for the business; at the end of the day they don't really have that kind of visibility. You know, they just provide requirements, in particular in terms of performance and reliability. And so this is something they are not really involved in. And obviously, if we are in a position to save a little bit of money, everybody is happy, even the business. >> No doubt. So what did you have to do? That makes sense to me, I figured that was the case. Who were the stakeholders on your team? I mean, what kind of technical resources did you require, and implementation resources? Take us through what the project looked like, how did you do it? >> Well, it's a combination of database expertise. I have the pleasure to run a team that is made up of very, very senior, very, very skilled database services professionals who are able to support more than one technology, and who are also very open to innovation and change. Plus, obviously, we needed the relevant development teams on board when you run this kind of transformation, and it looks like they also liked the idea to use PostgreSQL for this specific service I have in mind. So it was quite easy, not a big discussion, you know. >> What was the elapsed time from when you said, okay, we're in, you know, signed the agreement, we're going here, you made the decision, to actually getting into production? >> Well, as I mentioned, we run services and applications that are really focused on high availability and performance. So generally speaking, we are not a quick organization. Also, we run a business that is highly regulated. So as you can imagine, we are an organization that doesn't have a lot of appetite for risk, you know. So generally speaking, in order to run this kind of transformation, it's a matter of several months, I would say six to nine months, to have something delivered in that space. >> Okay. Well, I mean, that's reasonable. I mean, if you can do it inside of a year, that's I think quite good, especially in a highly regulated industry. And then you mentioned kind of the failover, the high availability capabilities. Were there other specific EDB tools that you utilized to sort of address the objectives? >> Yeah, absolutely. In particular, we use Postgres Enterprise Manager, aka PEM. And very recently we were involved with EDB about specifically developing one functionality that we needed back in the day. I think, together with BART, these are the three EDB-specific tools that we use right now. >> And I'm interested in, I want to get to the business impact, and I know it's early days for you, but the real motivation was to save money and simplify.
I would actually imagine your developers were happy, because they get to use modern tooling and open source. But really, though, in your industry it's about the bottom line, right? I mean, that's really what the business case was all about. But I wonder if you could add some color there in terms of the business impact that you expect. And then, I mean, I don't know how much visibility you have now, but anything you can share with us. >> Well, thinking about the EFM implementation, the business impact was that in case of a failure, the DBA team, the database services team, is able to provide a solution that is 100% within our perimeter. So this means that we are fully accountable for it. So in a nutshell, when you run a service, the fewer people, the fewer teams you have to involve, the more control you can deliver. And for some, again, very critical services, that is a great value. >> Okay. So, and where do you want to take this? I mean, how do you see, what's your, if you're thinking about your Postgres and generally EDB, you know, roadmap, where do you want it to go? >> Well, I see two trends within the organization. The first one is about migrating more existing services to an open source database solution, which is going to be Postgres. And the other trend that I see within my organization is about designing applications from the start to use PostgreSQL as the database layer. I think both trends are more or less around the same state right now. >> Yeah. A lot of the audience members at Postgres Vision 21 are just like you: they're managing day-to-day infrastructure, they're expert practitioners. What advice would you give to somebody that is thinking about, you know, taking this journey? Maybe if you had to do something over again, what would you do differently? How can you help your peers here? >> Well, I think in particular, if you are, let's say, a big organization that runs a highly regulated business, in some cases you are a little bit afraid of open source, because there is this, I would say, general consideration about the lack of enterprise-level support. I would like to say that that is just the past, because there are a bunch of companies around, like EDB, that are a hundred percent capable of providing enterprise-level support, even on the open source distribution of Postgres. Obviously, then, if you go with their specific distribution, the level of support is going to be even more accurate, but as we know, EDB currently is, let's say, the main contributor to the Postgres community, and I think that is an insurance for every organization. >> Your advice is don't be afraid. >> Yeah. My advice is, absolutely, don't be afraid. And if I can, we can mention also, you know, the cloud technologies. This is also another topic where, if possible, I would like to suggest not being afraid. EDB, as every organization within the IT industry, is really pushing for it. And I think for a lot of cases, not all of them, but a lot of cases, there is great value in designing services and applications to be cloud native, or migrating existing applications into the cloud. >> Okay.
But being a highly regulated industry, and being, you know, very much aware of the narrative around open source, et cetera, you must've had just a little piece of your mind saying, okay, I have to manage this risk. So is there anything specific you did to manage the risks that you would advise? Or is it really just about good change management? >> I think it was mainly about good change management. When you've got, you know, the relevant stakeholders that you need on board, and everybody's going in the same direction, then basically it is about executing. >> Excellent. Well, Roberto, I really appreciate your time and the knowledge that you shared with the audience. So thanks so much for coming on theCUBE. >> Thank you, Dave. It was a great pleasure. >> And thank you for watching theCUBE's continuous coverage of Postgres Vision 21. We'll be right back. (upbeat music)
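Roberto's point about keeping failover entirely within the database team's perimeter can be illustrated with a small health check. This is a minimal sketch, not EDB Failover Manager itself and not Borsa Italiana's setup; it only uses standard PostgreSQL views through the psycopg2 driver, and the hostnames and credentials are hypothetical.

```python
# A minimal sketch of the kind of health check a database services team might run
# against a Postgres/EDB cluster to confirm that streaming replication is healthy
# before or after a failover. pg_is_in_recovery() and pg_stat_replication are
# standard PostgreSQL features; connection details here are placeholders.
import psycopg2

conn = psycopg2.connect(host="primary.example.internal", dbname="postgres",
                        user="monitor", password="***")
conn.autocommit = True

with conn.cursor() as cur:
    # On the primary this returns False; on a standby it returns True.
    cur.execute("SELECT pg_is_in_recovery();")
    in_recovery = cur.fetchone()[0]
    print(f"in recovery (standby): {in_recovery}")

    # List connected standbys and their replication state as seen from the primary.
    cur.execute("""
        SELECT application_name, client_addr, state, sync_state
        FROM pg_stat_replication;
    """)
    for app, addr, state, sync_state in cur.fetchall():
        print(f"standby {app} at {addr}: state={state}, sync={sync_state}")

conn.close()
```

A check like this is the kind of building block a DBA team can run before and after a switchover to confirm that standbys are attached and streaming, without involving the teams that own the operating system layer.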

Published Date : Jun 21 2021

SUMMARY :

brought to you by EDB. the Italian Stock Exchange. for the invitation. role at the organization. Europe and the whole world. and of course the regulators the goal to provide an Well, and you have end-user computing So if you didn't heard about us I wonder if you could paint a picture of Postgres, but also the EDB distribution in particular that just it'll be provide. and so that's a solution that you to get the right solution for us. all of it, the full spectrum. breed that the market can offer. at the end of the day No. So what did you have to do? I got the pleasure to signed the agreement we're going here of appetite for risk, you that you utilize to sort that we needed back in the day. impact that you expect. the less teams you have to involve I mean, how do you see w the same state right now. maybe what would you do differently? of the pollsters community. about also about, you know, that you would advise? the relevant stakeholders that you need So thanks so much for coming on the cube. It was a great pleasure. And thank you for watching the cubes

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Roberto | PERSON | 0.99+
Euronext | ORGANIZATION | 0.99+
Dave Vellante | PERSON | 0.99+
Dave | PERSON | 0.99+
Europe | LOCATION | 0.99+
Borsa Italiana | ORGANIZATION | 0.99+
Italy | LOCATION | 0.99+
Ferrari | ORGANIZATION | 0.99+
Roberto Giordano | PERSON | 0.99+
100% | QUANTITY | 0.99+
Paul | PERSON | 0.99+
February, 2020 | DATE | 0.99+
Borsa | ORGANIZATION | 0.99+
2020 | DATE | 0.99+
United States | LOCATION | 0.99+
one | QUANTITY | 0.99+
last year | DATE | 0.99+
London Stock Exchange | ORGANIZATION | 0.99+
Reddit | ORGANIZATION | 0.99+
first | QUANTITY | 0.99+
First | QUANTITY | 0.99+
last month | DATE | 0.99+
Pam | PERSON | 0.99+
Dan | PERSON | 0.99+
both | QUANTITY | 0.99+
Postgres | ORGANIZATION | 0.99+
EDB | ORGANIZATION | 0.98+
two main drivers | QUANTITY | 0.98+
six nine months | QUANTITY | 0.98+
few months ago | DATE | 0.98+
four decades | QUANTITY | 0.98+
Bart | PERSON | 0.98+
Italian Stock Exchange | ORGANIZATION | 0.97+
almost 1000 people | QUANTITY | 0.97+
PostgreSQL | TITLE | 0.96+
more than one | QUANTITY | 0.95+
first class | QUANTITY | 0.95+
first one | QUANTITY | 0.94+
two phase | QUANTITY | 0.93+
few years ago | DATE | 0.9+
Cape Cape | LOCATION | 0.9+
EDB | TITLE | 0.88+
Postgres Vision | ORGANIZATION | 0.88+
one technology | QUANTITY | 0.88+
this year | DATE | 0.88+
a year | QUANTITY | 0.87+
one of | QUANTITY | 0.84+
first mission | QUANTITY | 0.81+
hundred percent | QUANTITY | 0.8+
one functionality | QUANTITY | 0.79+
recent year | DATE | 0.78+
Postgres vision 21 | ORGANIZATION | 0.75+
questions | QUANTITY | 0.74+
theCUBE | ORGANIZATION | 0.71+
2021 | DATE | 0.71+
both trends | QUANTITY | 0.7+
first choice | QUANTITY | 0.7+
Postgres Vision 21 | ORGANIZATION | 0.69+
ADB | TITLE | 0.68+
ADB | ORGANIZATION | 0.63+
Postgres | TITLE | 0.53+
COVID | ORGANIZATION | 0.51+
Vision 2021 | EVENT | 0.41+

Glenn Grossman and Yusef Khan | Io-Tahoe ActiveDQ Intelligent Automation


 

>> From around the globe, it's theCUBE, presenting ActiveDQ intelligent automation for data quality, brought to you by Io Tahoe. >> Welcome to the sixth episode of the Io Tahoe data automation series on theCUBE. We're going to start off with a segment on how to accelerate the adoption of Snowflake with Glenn Grossman, who is an enterprise account executive from Snowflake, and Yusef Khan, the head of data services from Io Tahoe. Gentlemen, welcome. >> Good afternoon, good morning, good evening, Dave. >> Good to see you, Dave. Good to see you. >> Okay, Glenn, let's start with you. I mean, theCUBE hosted the Snowflake Data Cloud Summit in November, and we heard from customers, and, I love the tagline, going from zero to Snowflake in 90 minutes, very quickly. And of course you want to make it simple and attractive for enterprises to move data and analytics into the Snowflake platform, but help us understand: once the data is there, how is Snowflake helping to achieve savings compared to the data lake? >> Absolutely, Dave. It's a great question. You know, it starts off first with the notion of what we coined in the industry as t-shirt size pricing. You know, you don't necessarily always need the performance of a high-end sports car when you're just trying to go get some groceries and drive down the street at 20 mph. The t-shirt size pricing really aligns to whatever your operational workload is, to support the business and the value that you need from that business. Not every day do you need data every second of the moment; it might be once a day, once a week. Through that t-shirt size price, we can align the performance according to the environmental needs of the business, what those drivers are, the key performance indicators to drive that insight to make better decisions. It allows us to control that cost. So to my point, you don't always need the performance of a Ferrari. Maybe you need the performance and gas mileage of the Honda Civic, if that will deliver the value to the business, but knowing that you have that entire performance landscape at a moment's notice. And that's really what allows us to get away from: how much is it going to cost me in a data lake type of environment? >> Got it. Thank you for that. Yusef, where does Io Tahoe fit into this equation? I mean, what's unique about the approach that you're taking towards this notion of mobilizing data on Snowflake? >> Well, Dave, in the first instance we profile the data itself at the data level, so not just at the level of metadata, and we do that wherever that data lives. So it could be structured data, it could be semi-structured data, it could be unstructured data, and that data could be on premise, it could be in the cloud, or it could be on some kind of SaaS platform. And so we profile this data at the source systems that are feeding Snowflake, within Snowflake itself, and within the end applications and the reports that the Snowflake environment is serving. So what we've done here is take our machine learning discovery technology and make Snowflake itself the repository for knowledge and insights on data. And this is pretty unique. Automation in the form of RPA is being applied to the data before, after and within Snowflake. And so the ultimate outcome is that business users can have a much greater degree of confidence that the data they're using can be trusted. The other thing we do, which is unique, is employ data RPA
to proactively detect and recommend fixes to data quality, so that removes the manual time and effort and cost it takes to fix those data quality issues if they're left unchecked and untouched. >> So that's key, two things there: trust, because nobody's going to use the data if it's not trusted, but also context. If you think about it, we've contextualized our operational systems but not our analytics systems, so there's a big step forward. Glenn, I wonder if you can tell us how customers are managing data quality when they migrate to Snowflake, because there's a lot of baggage in traditional data warehouses and data lakes and data hubs. Maybe you can talk about why this is a challenge for customers, and, like, for instance, can you proactively address some of those challenges that customers face? >> We certainly can, and they have. You know, data quality: legacy data sources always come with inherent DQ issues. Whether it's been master data management and data stewardship programs over the last, really, almost two decades now, you do have systemic data issues. You have siloed data, you have informational and operational data stores, data marts. It became a hodgepodge. When organizations are starting their journey to migrate to the cloud, one of the things we're first doing is that inspection of data, you know, first and foremost even looking to retire legacy data sources that aren't even used across the enterprise, but because they were part of the systemic, long-running operational on-premise technology, they stayed there. When we start to look at data pipelines as we onboard a customer, you know, we want to do that early, we want to do QA and quality assurance, so that we can, and this is our ultimate goal, eliminate the garbage-in, garbage-out scenarios that we've been plagued with really over the last 40, 50 years of just data in general. So we have to take an inspection where traditionally it was ETL. Now, in the world of Snowflake, it's really ELT: we're extracting, we're loading, we're inspecting, and then we're transforming out to the business, so that these routines can be done once, and again give great business value back to making decisions around the data, instead of spending all this time always re-architecting the data pipeline to serve the business. >> Got it. Thank you. Yusef, of course Snowflake is renowned for, customers tell me all the time, it's so easy, it's so easy to spin up a data warehouse, it helps with my security, again it simplifies everything. But, so, you know, getting started is one thing, but then adoption is also key. So I'm interested in the role that Io Tahoe plays in accelerating adoption for new customers. >> Absolutely, David. I mean, as Glenn said, you know, every migration to Snowflake is going to have a business case, and that is going to be partly about reducing spend on legacy IT: servers, storage, licenses, support, all those good things that CIOs want to be able to turn off entirely, ultimately. And what Io Tahoe does is help discover all the legacy undocumented silos that have been built up, as Glenn says, on the data estate across a period of time, build intelligence around those silos, and help reduce those legacy costs sooner by accelerating that whole process. Because obviously, the quicker that IT and CDOs can turn off legacy data sources, the more funding and resources are going to be available to them to manage the new Snowflake-based data estate on the cloud.
And so turning off the old and building the new go hand in hand, to make sure those numbers stack up, the program is delivered, and the benefits are delivered. And so what we're doing here with Io Tahoe is improving the customer's ROI by accelerating their ability to adopt Snowflake. >> Great. And I mean, we're talking a lot about data quality here, but in a lot of ways that's table stakes. Like I said, if you don't trust the data, nobody's going to use it. And Glenn, I mean, I look at Snowflake and I see obviously the ease of use, the simplicity, you guys are nailing that. The data sharing capabilities I think are really exciting, because, you know, everybody talks about sharing data, but then we talk about data as an asset, and everyone wants to hold onto it. And so sharing is something that I see as a paradigm shift, and you guys are enabling that. So what are some of the things beyond data quality that are notable, that customers are excited about, that maybe you're excited about?
So where you have inherent data quality issues — whether that's with data that was on premise and is being brought into Snowflake, or on Snowflake itself — we're able to show the customer results and help them understand their data quality better within day one, which is a fantastic acceleration. Related to that, there's the cost and effort to get that insight: it's a massive productivity gain versus what you see with customers who've been struggling, sometimes, to remediate legacy data and legacy decisions made over the past couple of decades, so that cost and effort is much lower than it would otherwise have been. Thirdly, there's confidence and trust: CDOs and CIOs have demonstrable results showing they've been able to improve data quality across a whole bunch of use cases — for business users in marketing and customer services, for commercial teams, for financial teams — so there's that very quick growth in confidence and credibility as the projects get moving. And then finally, really all the use cases for Snowflake depend on data quality — whether it's data science or the kind of Snowpark applications that Glenn has talked about, all those use cases work better when we're able to accelerate the ROI for our joint customers by very quickly pushing out these data quality insights. And I think one of the things Snowflake has recognized is that, in order for CIOs to really adopt it enterprise-wide, as well as the great technology that Snowflake offers, it's about cleaning up that legacy data estate and freeing up the budget for the CIO to spend on the new, modern data estate that lets them mobilize their data with Snowflake. >> So you're seeing this sort of natural progression: we're simplifying the analytics from a tech perspective, you bring in federated governance, which brings more trust, then you bring in the automation of the data quality piece, which is fundamental, and now you can really start to, as you guys are saying, democratize, scale and share data. Very powerful. Guys, thanks so much for coming on the program — really appreciate your time. >> Thank you, I appreciate it as well.
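[Aside: a generic illustration of the kind of column-level quality profiling discussed in this segment — completeness and uniqueness per column. This is plain pandas, not ActiveDQ; it only shows the shape of a "day one" data quality report.]

```python
# Minimal column-level data quality profile: completeness and uniqueness.
import pandas as pd

def profile(df: pd.DataFrame) -> pd.DataFrame:
    rows = len(df)
    report = []
    for col in df.columns:
        non_null = int(df[col].notna().sum())
        report.append({
            "column": col,
            "completeness": round(non_null / rows, 3) if rows else 0.0,
            "uniqueness": round(df[col].nunique(dropna=True) / rows, 3) if rows else 0.0,
        })
    return pd.DataFrame(report)

if __name__ == "__main__":
    sample = pd.DataFrame({
        "customer_id": [1, 2, 2, None],
        "email": ["a@x.com", None, "b@x.com", "b@x.com"],
    })
    print(profile(sample))
```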

Published Date : Apr 29 2021


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
David | PERSON | 0.99+
Glenn Grossman | PERSON | 0.99+
Ben | PERSON | 0.99+
Io Tahoe | ORGANIZATION | 0.99+
Yusef Khan | PERSON | 0.99+
Dave | PERSON | 0.99+
20 mph | QUANTITY | 0.99+
Glenn | PERSON | 0.99+
CIA | ORGANIZATION | 0.99+
IOS | TITLE | 0.99+
Glenda | PERSON | 0.99+
90 minutes | QUANTITY | 0.99+
100 vendors | QUANTITY | 0.99+
Ferrari | ORGANIZATION | 0.99+
last year | DATE | 0.99+
One | QUANTITY | 0.99+
first | QUANTITY | 0.99+
first instance | QUANTITY | 0.99+
November | DATE | 0.99+
sixth episode | QUANTITY | 0.99+
once a day | QUANTITY | 0.99+
once a week | QUANTITY | 0.98+
Senate | ORGANIZATION | 0.98+
today | DATE | 0.98+
both | QUANTITY | 0.98+
eight years ago | DATE | 0.97+
yusef khan | PERSON | 0.97+
over | QUANTITY | 0.96+
one | QUANTITY | 0.95+
R. P. A. Automation | ORGANIZATION | 0.95+
python | TITLE | 0.95+
Tahoe | ORGANIZATION | 0.94+
I. O. Tahoe | TITLE | 0.93+
Honda | ORGANIZATION | 0.93+
Io-Tahoe | ORGANIZATION | 0.93+
one thing | QUANTITY | 0.91+
Io Tahoe | PERSON | 0.87+
firstly | QUANTITY | 0.87+
Civic | COMMERCIAL_ITEM | 0.87+
Snowflake | TITLE | 0.86+
Tahoe | PERSON | 0.85+
Ayatollah | PERSON | 0.84+
Snowflake | EVENT | 0.83+
past couple of decades | DATE | 0.82+
about 100 people | QUANTITY | 0.81+
two decades | QUANTITY | 0.8+
over 450 | QUANTITY | 0.79+
40, 50 years | QUANTITY | 0.76+
Day one | QUANTITY | 0.75+
glenn | PERSON | 0.74+
java | TITLE | 0.72+
snowflake | EVENT | 0.7+
Iota Ho | ORGANIZATION | 0.68+
P. | ORGANIZATION | 0.62+
ActiveDQ Intelligent Automation | ORGANIZATION | 0.61+
snowflake data cloud summit | EVENT | 0.6+
Iota | LOCATION | 0.58+
FTp | TITLE | 0.56+
Snowflake | ORGANIZATION | 0.54+
zero | QUANTITY | 0.53+
R | TITLE | 0.52+
O. | EVENT | 0.41+
C. | EVENT | 0.34+

December 8th Keynote Analysis | AWS re:Invent 2020


 

>> Announcer: From around the globe, it's theCUBE, with digital coverage of AWS re:Invent 2020, sponsored by Intel, AWS, and our community partners. >> Hi everyone, welcome back to theCUBE's virtual coverage of AWS re:Invent 2020. We are theCUBE Virtual. I'm John Furrier, your host, with my co-host Dave Vellante, for keynote analysis of Swami's machine learning keynote — all things data, a huge set of announcements, the first-ever machine learning keynote at a re:Invent. Dave, great to see you. Thanks for joining — you're in Boston, I'm here in Palo Alto, we're doing theCUBE remote, theCUBE Virtual. Great to see you. >> Yeah, good to be here, John, as always — wall-to-wall, love it. So, John, how about I give you my key highlights from the keynote today. I had four curated takeaways. The first is that AWS is really trying to simplify machine learning and infuse machine intelligence into all applications. If you think about it, that's good news for organizations, because they don't have to become machine learning experts or invent machine learning — they can buy it from Amazon. I think the second is they're trying to simplify the data pipeline. The data pipeline today is characterized by a series of hyper-specialized individuals — IT engineers, data scientists, quality engineers, analysts, developers — folks that largely live in their own swim lanes, and while they collaborate, there's still a fairly linear and complicated data pipeline that a business person or a data product builder has to go through; Amazon is making some moves to simplify that. Third, they're expanding data access to the line of business, and I think that's a key point: increasingly, as people build data products and data services that they can monetize for their business — either to cut costs or to generate revenue — they can expand that into the line of business where there's domain context. And I think the last thing is this theme we talked about the other day, John, of extending Amazon, AWS, to the edge — we saw that as well in a number of the machine learning tools Swami talked about.

>> Yeah, it was great. By the way, we're live here in Palo Alto and Boston covering the analysis — tons of content on theCUBE, check out theCUBE.net and also check out the re:Invent section; there's a CUBE section with links to the on-demand videos with all the content we've had. Dave, I've got to say, one of the things that's apparent to me — and this came out of my one-on-one with Andy Jassy, and Andy Jassy talked about it in his keynote — is he kind of teased out this idea of training versus more value-add machine learning, and you saw that today in today's announcements. To me the big revelation was that the training aspect of machine learning is what can be automated away. And there's a lot of controversy around it: recently a Google paper came out and the person was essentially kind of let go over it, the idea being that some say these training algorithms cause more harm to the environment than they do good because of all the compute power they take. So you start to see the positioning of training — which can be automated away and served up with high-powered chips — as what they consider undifferentiated heavy lifting. In my opinion; they didn't say that, but that's clearly what I see coming out of this announcement. The other thing that I saw, Dave, that's notable is you saw them clearly taking a three-lane approach to this machine learning: the advanced builders, the advanced coders and developers, and then database and data analysts — three swim lanes of personas, of target audience. Clearly that is in line with SageMaker and the embedded stuff. So two big revelations: more horsepower required to process training and modeling, and the expansion of the personas that are going to be using machine learning. So clearly this is, to me, a big trend wave that validates some of the startups, and obviously SageMaker and some of their products. >> Well, as I was saying at the top, I think Amazon's working really hard on simplifying the whole process. You mentioned training — a lot of times people are starting from scratch when they have to train and retrain models, so what they're doing is trying to create reusable components and allow people, as you pointed out, to automate and streamline some of that heavy lifting, and as well they talked a lot about doing AI inferencing at the edge. Swami talked about several foundational premises, the first being a foundation of frameworks, and you think about that at the lowest level of their ML stack: they've got GPUs, different processors, Inferentia, all these alternative processors, not just x86. These are very expensive resources, and Swami and his colleagues talked a lot about how, a lot of the time, the alternative processor is sitting there waiting, waiting, waiting. So they're really trying to drive efficiency and speed — they talked a lot about compressing the time it takes to run these models, sometimes from weeks down to days, and from days down to hours and minutes.

>> Yeah. Let's unpack these four areas, and let's stay on the firm foundation, because that's their core competency, infrastructure as a service — clearly they're laying that down. You've got the processors, but what's interesting is the TensorFlow number: 92% of TensorFlow runs on Amazon. The other thing is that PyTorch, surprisingly, is up there with massive adoption, and the numbers on PyTorch are literally on fire — I was joking about it on Twitter. The PyTorch number is telling, because it means TensorFlow, which originally came out of Google, is getting a little bit diluted by other frameworks — and then you've got MXNet and some other things out there. So the fact that you've got PyTorch at 91% and TensorFlow at 92% on AWS is a huge validation: it means the majority of machine learning and deep learning development is happening on AWS. >> Yeah — cloud-based, by the way, just to clarify: that's 92% of cloud-based TensorFlow and 91% of cloud-based PyTorch running on AWS. Amazingly massive numbers. >> Yeah. And I think the processor side shows that it's not trivial to do machine learning — that's where the Inferentia chip came in, and that's where they want to lay down that foundation: they had Inferentia, they had Trainium, the new training chip, and then distributed training on SageMaker. So you've got the chips, and then you've got SageMaker as the middleware layer — almost like a machine learning stack. That's what they're putting out there. >> And Habana Gaudi, which is also for training — an Intel-based chip — so that was kind of interesting. A lot of new, specialized chips. We've been talking about this for a while: particularly as you get to the edge and do AI inferencing, you need a different approach than we're used to with general-purpose microprocessors.

>> So what's your take on tenet number two? Tenet number one was clearly infrastructure — a lot of announcements, we'll go through those and review them at the end — but tenet number two that Swami put out there was creating the shortest path to success for builders, for machine learning builders. And I think here he lays out the complexity, Dave, mostly around methodology and the value activities required to execute. Again, this points to the complexity problem they have. What's your take? >> Yeah. Well, think about the pipeline again: you collect data, you ingest data, you prepare that data, you analyze that data, you make sure it's high quality, then you start the training, and then you're iterating. So they're really trying to automate and simplify as much as possible. What I really liked about that segment — foundation number two, if you will — is the customer example from the speaker from the NFL. She talked about the AWS stats we see in the commercials, Next Gen Stats, and the ways in which, as we all know, they've re-architected helmets — it's really very much data-based. It was interesting to see they had the spectrum of helmets from the safest to the least safe and how they've migrated everybody in the NFL toward the safest ones. She cited a 24% reduction in reported concussions, which was interesting. You've got to give the benefit of the doubt and assume some of that comes through the data — some of it could be Julian Edelman popping up off the ground when we know he had a concussion, because he doesn't want to come out of the game under the new protocol — but no doubt they're collecting more data on this stuff, and it's not just head injuries: she talked about ankle injuries, knee injuries. So all of this comes from training models and reducing the time it takes to go from raw data to insights. >> Yeah, I think the NFL is a great example. You and I both know how hard it is to get the NFL to come on and do an interview — they're very coy, they don't put their name on much because of the value of the NFL — so this is a meaningful partnership. You had their person on stage, virtually, going into real detail about the depth of the partnership, so to me it's real. First of all, I love Next Gen Stats — anything to do with what they do with the stats is phenomenal at this point. So the real-world examples, Dave — you're starting to see sports as one metaphor, and healthcare and others are going to see those coming — are to me totally a telltale sign that Amazon continues to lead. The thing that got my attention is that it's an IoT problem, and there's no reason why they shouldn't get after it. Some say the NFL is just covering its butt on concussions — they don't have to, this is actually really working. So you've got the tech, why not use it? And they are. That, to me, is impressive, and I think it's, again, a digital-transformation sign that the NFL is doing it. It's real, because it's just easier. >> I think, look, it's easy to criticize the NFL, but the reality is, in the old days it was like, hey, you get your bell rung and you get back out there — that's just the way it was for football players. Ted Johnson was one of the first to speak out, and Bill Belichick was the guy who sent him back out there with a concussion; Johnson was very outspoken about it. You've got to give the NFL credit: it didn't just ignore the problem. Maybe it took a little while — these things take some time, because back in the day it was generally accepted that, okay, hey, you get right back out there — but the NFL has made big investments, and you've got to give them props for that, especially given that they're collecting all this data. That, to me, is the most interesting angle here: letting the data inform the actions.

>> And the next step after the NFL: they had this Data Wrangler data-prep news, that they're now integrating Snowflake, Databricks and MongoDB into SageMaker — alongside Redshift, S3 and Lake Formation — pulling the data in, not the other way around. So, you've been following this pretty closely, specifically the recent Snowflake IPO and their success — this is an ecosystem play for Amazon. What does it mean? >> Well, a couple of things. As you well know, John, when you first called me up I was in Dallas, and I flew into New York in an ice storm to get to one of the early Hadoop Worlds. Back then it was all batch — big data was this big batch job. Today you want to combine that batch — there's still a lot of need for batch — with real-time inferencing, and AWS is bringing that together. They're bringing in multiple data sources — you mentioned Databricks, Snowflake, Mongo, three platforms that are doing very well in the market and holding a lot of data in AWS — and saying, okay, hey, we want to be the brain in the middle: you can import data from any of those sources, and I'm sure they'll add more over time. They talked about 300 pre-configured data transformations that now come with Data Wrangler in SageMaker Studio. I've talked about this a lot: it's essentially abstracting away the IT complexity, the whole IT operations piece. It's the same old theme — AWS pointing its platform and its cloud at undifferentiated heavy lifting — and it's moving up the stack now into the data lifecycle and the data pipeline, which is one of the biggest blockers to monetizing data. >> Expand on that — what does that actually mean? I'm an IT person; translate that into IT-speak. >> So today, if you're a business person and you want answers, and you want, say, to ingest a new data source — let's say you want to build a new product. Let me give an example, let's say you're like a Spotify, to make it up: you do music today, but you want to add movies or podcasts and you want to start monetizing that, you want to identify who's watching what, you want to create new metadata. Well, you need new data sources. So what you do as the business person who wants to create that new data product, say for podcasts, is you knock on the door, get in line at the front of the data pipeline and say, okay, hey, can you please add this data source? And then everybody else down the line has to get in line too — hey, this becomes a new data source — and it's this linear process where very specialized individuals have to do their part, and then at the other end comes a self-serve capability that somebody can use to build dashboards or build a data product. A lot of that middle part is operational detail around deploying infrastructure and training machine learning models — there's a lot of Python coding, there are SQL queries that have to be done — a lot of very highly specialized activities. What Amazon is doing, my takeaway, is really streamlining a lot of those activities, removing what they always call the undifferentiated heavy lifting, abstracting away that IT complexity. To me this is a real positive sign, because it's all about the technology serving the business, as opposed to, historically, the business begging the technology department to please help me — the technology department evolving from the glass house, if you will, to this new data pipeline, data lifecycle.

>> Yeah, it's classic agility to take down those steps. It's undifferentiated, I guess, although if it actually works it can create a differentiated product — you can debate that aspect of it — but I hear what you're saying: just get rid of it and make it simpler. On the impact of machine learning, Dave, one thing that came out clearly is this SageMaker Clarify announcement, which is about bias detection. They had an expert, Nashlie Sephus, present essentially how they're dealing with the bias piece of it. I thought that was very interesting — what did you think? >> Well, humans are biased, and humans build models, so models are inherently biased. And I thought it addressed two big problems in artificial intelligence: one is the inherent bias in the models, and the second is the lack of transparency — they call it the black-box problem: okay, I know there was an answer there, but how did it get to that answer and how do I trace it back? Amazon is really trying to attack those with Clarify. I wasn't sure if it was Clarity or Clarify — I think it's Clarify — and I'm not entirely certain how it works, so we really have to dig more into that, but it's essentially identifying situations where there is bias, flagging those, and then, I believe, making recommendations as to how it can be stamped out. >> Yeah. And also some other news: deep profiling for Debugger — SageMaker Debugger now does deep profiling of neural network training, which is very cool, again on that same theme of profiling. The other thing that I found— >> That reminds me, John, if I may interrupt — it reminded me of grammar corrections when you're typing: bug corrections in code, automated debugging, "try this." >> It's like a better debugger, come on — first of all it should be bug-free code — but, you know, the biases in the data, that's critical. The other news I thought was interesting: Amazon's claiming SageMaker Pipelines is the first purpose-built CI/CD service for machine learning, bringing machine learning into a developer construct. And I think this starts bringing in the idea of Edge Manager — and the SageMaker model and feature stores — this idea of managing and monitoring machine learning models effectively out at the edge and through the development process. It's interesting, and it's really targeting that developer, Dave. >> Yeah, applying CI/CD to machine learning and machine intelligence has always been very challenging, because, again, there are so many piece parts. And as I said the other day, a lot of the innovations Amazon comes out with address problems that have come up given the pace of innovation they're putting forth — it's like the customers are drinking from a fire hose. We've talked about this at previous re:Invents: can the customers keep up with the pace of Amazon? So I see this as Amazon trying to reduce friction across its entire stack.

>> Let me lay it out — they had a slide: the machine learning gurus and builders, the developers, and then database and data analysts. Clearly database developers and data analysts are on their radar. This is not the first time we've heard that, but it is the first time we're starting to see products materialize where you have machine learning for databases, data warehouses and data lakes, and then BI tools. So again, three different segments — the databases, the data warehouses and data lakes, and then the BI tools — three areas of machine learning innovation where you're seeing product news. Your take on this natural evolution? >> Well, it's what I was saying up front: the good news for customers is that you don't have to be a Google or an Amazon or a Facebook to be a super expert at AI. Companies like Amazon are going to provide products that you can apply to your business, and that allows you to infuse AI across your entire application portfolio. Amazon Redshift ML was another example of them abstracting complexity: they're taking S3, Redshift and SageMaker complexity, abstracting it, and presenting it to the data analyst, so that individual can worry about getting to the insights. It's injecting ML into the database, much in the same way, frankly, that BigQuery has done it. And that's a huge positive — when you talk to customers, they love it when ML can be embedded into the database and all that complexity is simplified, because they can focus on more important things. >> Clearly on this tenet — and this is the part of the keynote where they laid out all their announcements — QuickSight with ML insights out of the box, QuickSight Q available in preview, all the announcements. And then they moved on to the next one, the fourth tenet: solving real problems end to end, which kind of reminds me of the theme we heard at Dell Technologies World last year, end-to-end IT. So we're starting to see the land grab, in my opinion — Amazon really going beyond IaaS and PaaS: they talked about contact centers, Kendra, Lookout for Metrics, Monitron for maintenance. Then Matt Wood came on and talked about the massive disruption across industries — he said machine learning will literally disrupt every industry, and they spent a lot of time on that — and they went into computer vision at the edge, which I'm a big fan of; I just love that product. Clearly every vertical, Dave, is up for grabs. That was Dr. Matt Wood's key message. >> Yeah, I totally agree. I see machine intelligence as a top layer of the stack, and as I said, it's going to be infused into all areas. It's not some kind of separate thing — like Kubernetes, we think of it as some separate thing; it's not, it's going to be embedded everywhere. And I really like Amazon's edge strategy. You were the first to sort of write about it in your keynote preview: Andy Jassy said, we want to bring AWS to the edge, and we see the data center as just another edge node. So what they're doing is bringing SDKs, a package of sensors, appliances — and I've said many times the developers are going to be the linchpin to the edge — so Amazon is bringing its entire data plane, its control plane, its APIs to the edge and giving builders, slash developers, the ability to innovate. I really like that strategy versus, hey, here's a box with an x86 processor inside — throw it over to the edge, give it a cool name that has "edge" in it, and here you go.

>> Yeah — call it the hyper-edge. You know, the thing that's real is the data aspect at the edge. Everything involves a database, data warehouses and data lakes are involved in everything, and then some BI or tools to get at the data and work with it — the data analysts, the data feeds, machine learning — it's a critical piece of all this, Dave. Databases used to be boring — a boring field. I have a degree in database design, one of my computer science degrees back then, and no one really cared if you were a database person. Now it's like, man, data is everything. This is a whole new field, this is an opportunity — but also, are there enough people out there to do all this? >> Well, it's a great point, and I think this is why Amazon is trying to abstract some of the complexity. I sat in on a private session around databases today and listened to a number of customers, and I will say this — some of it I think was under NDA, so I can't say too much — but I will say this: Amazon's philosophy on databases, and you addressed this in your conversation with Andy Jassy, across its entire portfolio, is to have really fine-grained access to the deep-level APIs across all their services. He said this to you: we don't necessarily want to be the abstraction layer per se, because when the market changes that's harder for us to change; we want that fine-grained access. And you're seeing that with databases, whether it's NoSQL or SQL — the different flavors of Aurora, DynamoDB, Redshift, RDS, on and on and on; there are just a number of data stores. And you're seeing, for instance, Oracle take a completely different approach — yes, they have MySQL because they got that with the Sun acquisition, but they're really about putting as much capability into a single database as possible: you only need one database. A very different philosophy. >> Yeah. And then obviously HealthLake — that was pretty much the end of the announcements — a big impact on healthcare. Again, the theme of horizontal data and vertical specialization with data science and software, playing out in real time. >> Yeah. Well, I've asked this question many times on theCUBE: when is it that machines will be able to make better diagnoses than doctors? That day is coming, if it's not here. I think HealthLake is really interesting — I've got an interview later on with one of the practitioners in that space. Healthcare is an industry that's ripe for disruption and really hasn't been disrupted; it's obviously a very high-risk industry, but as we all know, healthcare is too expensive, too slow, too cumbersome — it takes too long sometimes to get a diagnosis or to be seen — and Amazon, with its partners, is trying to attack all of those problems.

>> Well, Dave, let's summarize our take on the Amazon machine learning keynote. I'll say it was pretty historic in the sense that there was so much content — in the first keynote last year Andy Jassy spent, he told me, something like 75 minutes on machine learning; they had to kind of create its own category. Swami, who we've interviewed many times on theCUBE, was awesome, but there's still a lot more: 215 announcements this year, more machine learning capabilities than ever before. Moving faster, solving real problems, targeting the builders — a broad platform set of things is the Amazon cadence. What's your analysis of the keynote? >> Well, a couple of things. One is, we've said for a while now that the new innovation cocktail is cloud plus data plus AI — it's really machine intelligence, or AI, applied to that data at cloud scale. Amazon has obviously nailed the cloud infrastructure, it's got the data, and it has to be a leader in machine intelligence. You're seeing this in the spending data with our partner ETR: AI and ML, in terms of spending momentum, is at or near the highest, along with automation and containers. Why is that? Because everybody is trying to infuse AI into their application portfolios, they're trying to automate as much as possible, they're trying to get insights that the systems can take action on. Actually, it's really augmented intelligence in a big way — driving insights, speeding that time to insight — and Amazon has to be a leader there. It's Amazon, it's Google, it's the Facebooks, it's obviously Microsoft; IBM is trying to get in there — they were kind of first with Watson, but I think they're far behind the hyperscale guys. But I guess the key point is: you're going to be buying this. Most companies are going to be buying this, not building it, and that's good news for organizations. >> Yeah, you get 80% of the way there with the product — why not go that way? The alternative is trying to find machine learning people to build it, and they're hard to find. So seeing the scale of replicating machine learning expertise with SageMaker, then into databases and tools, and then ultimately built into applications — my opinion is that Amazon continues to move up the stack with its capabilities. And I think machine learning is interesting because it's kind of its own building block — it's not just one thing — and it's going to be super important. I think it's going to have an impact on the startup scene and on innovation, and it's going to have an impact on incumbent companies that are currently leaders and are under threat from new entrants coming into the business. So I think it's going to be a very entrepreneurial opportunity, and it's going to be interesting to see how machine learning plays that role: is it a defining feature that's core to the intellectual property, or is it enabling new intellectual property? To me, I just don't see how that's going to fall yet. I would bet that, today, intellectual property will be built on top of Amazon's machine learning, while the new algorithms and the new things will be built separately — if you compete head to head with that scale, you could be on the wrong side of history. Again, this is a bet that the startups and the venture capitalists will have to make: who's going to end up on the right wave here? Because if you make the wrong design choice, you can end up with a very complex environment, with IoT or whatever your app is serving; if you can narrow it down and get a wedge in the marketplace as a company, I think that's going to be an advantage. It'll be great to see what the impact on the ecosystem will be. >> Well, I think something you said just now gives a clue. You talked about the difficulty of finding the skills, and I think a big part of what Amazon and others innovating in machine learning are trying to do is close the gap between those who are qualified to actually do this stuff — the data scientists, the quality engineers, the data engineers, et cetera — and everyone else. Companies spent the last 10 years going out and trying to hire these people; they couldn't find them, they tried to train them, and it's taking too long. Now I think they're looking toward machine intelligence to really solve that problem, because that scales — and as we know, outsourcing to services companies, just hardcore heavy lifting, doesn't scale that well. >> Well, you know what — give me some machine learning, and give it to me faster. I want to take the 80% and build on it, certainly on the media cloud and theCUBE Virtual that we're doing. Again, every vertical is going to be impacted, Dave. Great to see you — great stuff so far in week two. We're theCUBE, live, covering the keynotes; tomorrow we'll be covering the keynotes for the public sector day, which should be chock-full of action — that's the environment impacted most by COVID, so a lot of innovation, a lot of coverage. I'm John Furrier, and with Dave Vellante, thanks for watching.

Published Date : Dec 9 2020


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Ted Johnson | PERSON | 0.99+
Dave Alante | PERSON | 0.99+
Julian Edelman | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
Andy Jassy | PERSON | 0.99+
New York | LOCATION | 0.99+
Johnny | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
Dallas | LOCATION | 0.99+
John | PERSON | 0.99+
Palo Alto | LOCATION | 0.99+
Microsoft | ORGANIZATION | 0.99+
Swami | PERSON | 0.99+
Dave | PERSON | 0.99+
John Ferrari | PERSON | 0.99+
Facebook | ORGANIZATION | 0.99+
80% | QUANTITY | 0.99+
24% | QUANTITY | 0.99+
90% | QUANTITY | 0.99+
Google | ORGANIZATION | 0.99+
Boston | LOCATION | 0.99+
December 8th | DATE | 0.99+
IBM | ORGANIZATION | 0.99+
Matt | PERSON | 0.99+
NFL | ORGANIZATION | 0.99+
80 bucks | QUANTITY | 0.99+
Python | TITLE | 0.99+
91% | QUANTITY | 0.99+
92% | QUANTITY | 0.99+
75 minutes | QUANTITY | 0.99+
Oracle | ORGANIZATION | 0.99+
today | DATE | 0.99+
last year | DATE | 0.99+
cube.net | OTHER | 0.99+
Intel | ORGANIZATION | 0.99+

Sasha Kipervarg, LiveRamp | Cloud Native Insights


 

>> Narrator: From theCUBE studios in Palo Alto in Boston, connecting with thought leaders around the globe, these are Cloud Native Insights. >> Hi, and welcome to another episode of Cloud Native Insights. I'm your host, Stu Miniman. And when we talk about Cloud Native of course, it's not just moving to the cloud as a location, but how do we take advantage of what's happened in the cloud of the changes that need to happen. And this is not only from a technology standpoint, it's an organizational standpoint. And we're also going to touch on the financial implications and something you've probably heard about FinOps, relatively new last couple of years as a term. Of course, the financial engineering cloud has been around for many years and how that ties into DevOps and to help us understand this movement, what's going on really thrilled that we have a practitioner in this space. I want to welcome Sasha Kipervarg. He's a head, the head of Global Cloud Operations in special projects with LiveRamp. Sasha, thanks so much for joining us. >> Thanks very much too, happy to be here. >> All right, so why don't we start off first for those that don't know LiveRamp, I'm sorry, you're in the ad tech space. Maybe just give us a little bit about, you know, the organization and what your team does there? >> Sure, so LiveRamp is in the advertising technology space, and we help connect companies to their customers and send targeted advertising to them. We're based in San Francisco and have engineering teams across the globe, primarily New York, London, China, all over the map, really. And we're a fast growing company, we've gone from perhaps 400 to maybe 12, 1300 employees over the last year and a half. >> Well, you know that whole space is a whole separate discussion. I like when I looked up a little bit about LiveRamp the discussion point is, you know, cookies for eating not for following you, in looking where are you going all over the company. So your role inside LiveRamp, though. Tell us a little bit... You know, we're cloud bits in New York? >> Sure, so I'm responsible for the engineering teams that help other development teams operate in the cloud. So whereas on premise, it would have been a traditional operations team in the cloud. It's basically an engineering team that are experts in all the different areas that other engineering teams need us to be in so that we can express good practices and help them deliver products. >> Great, you actually had a real forcing function for cloud. You know, right now during the global pandemic we've seen lots of acceleration of people looking at cloud, if you could briefly just bring us back as to one of the things that helped push LiveRamp, you know, to go much heavier into cloud. >> Yeah, so we had some initial plans and we were exploring. But what really pushed us over the edge was we had a three to four day outage at our data center here in San Francisco during a heatwave. And during that time, the data center couldn't control their temperature. We had unusually warm temperatures in San Francisco, they weren't that warm. It was like maybe in the, you know, mid 90s. But for the Bay Area in the summertime, you know, where it's usually 70, it was a big deal. And so we had racks of servers going down because it was too hot. And so if we weren't quite convinced before that we certainly were after that, and that made us realize that there were lots of good reasons to be in the cloud. And so we did it. 
We put together a migration and over the course of a year, we not only containerized but we migrated our environment into GCP. >> I wonder if you could just bring us inside a little bit that move to the cloud, you talk about adopting containerization. You know, your applications, you know, how much of it did you just kind of move there? How much did you build new? Where there some things that you just said, hey, I can kind of, you know, adopt a SAS equivalent, you know, how did your application portfolio look? >> Yeah, so it's probably good to think of them in terms of the infrastructure services that we use in the cloud, and then the customer facing applications themselves. And what we try to do is essentially containerize all of our infrastructure applications. Actually, let me rephrase that. We took the customer facing applications, and we containerize those. Now the applications themselves, did not change but they swapped out their underlying infrastructure for containers, running on the GCP native container service. On the back end of things we use the native services in GCP up as much as possible. So if we were using a database on premise, we tried to use the native database service in the Cloud with Google. I think the one interesting exception to that which we're changing now, in fact, was we decided to run our hundred petabyte Hadoop cluster in the Cloud using our own native service because of some price concerns. Those price concerns have gotten better since time and we're now migrating to Dataproc, which is Google's native Hadoop service. >> Yeah, it's fascinating when you think about just how fast things change in the cloud, new services can become available and as you're alluding to the finances can change significantly over you know, a couple of months or a quarter. Overall, how's the experience been? You know, moving to cloud, though? >> Well, it's been fantastic in some ways, painful in others because, you know, you discover and maybe this is begin to touch on the FinOp stuff like, you discover that you've gone from quarterly planning cycles where you opt to purchase a whole rack of servers, and you implement them over the next quarter or something like that, to making by the second decisions, to spin up resources via command line by developer and spend unlimitless operating expenses. So, it's quite a big shift. And I think a lot of companies are caught, you know, flat footed by it. We certainly work for a little bit. And there's some financial pain that gets expressed. And you know, the question that I would pose to the audience when they think about the cloud is, you know, we think of the migrations and we only think about their technical success, but if you migrate to the cloud and you do it technically and you containerize and it's on schedule, but then you blow your budget, was it really a success? Because ultimately, you know the business needs to be profitable in order for things to work. >> Yeah, absolutely Sasha. So what I've heard you talk about this before is in the pre-cloud model, you met with the budget team quarterly, and it was mostly a look back function. And of course, when you think about leveraging the cloud, things are changing on a fairly regular basis. And are you able to understand what decisions you're making and what the impact will be on you know, next month and next quarters, billing? 
So bring us inside a little bit as to, you know, that interaction and what that meant to your teams and how they had to think about you know, engineering and finance together? >> Yeah, it's a fantastic question. So, I guess the first thing is, let me let me zoom out for a moment and just make sure that the audience understands that you know, typically it's just engineering leadership, and a fairly small number of maybe high level developers, maybe an architect that get together with finance once a quarter and have a conversation about what they want to spend and how much they want to spend, and where it should be implemented. And that is a fairly regular thing that's been going on for many years. When you move to the cloud, all of a sudden that decision needs to happen on a real time basis. And typically, companies are not set up for that kind of a conversation. There's usually like a large wall between finance and engineering. And it's because you want the engineering teams to be engineers and the finance folks to be doing finance related things. And the two don't really mix all that often. But when you give a developer an API to spend money essentially right, that's what you've done. They don't just spend up resources, they spend money by API. You need to have a real time conversation where they can make trade offs, where you can track the budget, and those expenses shift from something called CapEx to OpEx. And that's treated in a very different way, on the books. Where we are today is we've created what a team, we call it a FinOps practice. But it's a team that's cross functional by nature that sits within engineering that's made up of a FinOps practitioner, person dedicated to the role. And then members of the finance team. And then many other members of engineer and they work together to first, express the cost by helping developers understand what they're actually spending and where they're spending it. And then the system also makes, recommendations about how to optimize and then the developers absorb that information and figure out what they should optimize, do that work. And then the system re-represents the information for them, and lets them know that their optimizations make sense or not from a financial perspective. The way that we've talked to developers, we've discovered that they care about efficiency. They care about efficiency in different ways. They care about CPU efficiency, they care about RAM efficiency. And it turns out, they care about how efficient their application is from a cost perspective to, right? And you can either tell them directly to care about it, or help them become aware. Or you can use proxies, like what I just mentioned about CPU, RAM, disk, network. If they understand how efficient their application is. They have a natural instinct to want to make it better on a daily and weekly basis. It's just sort of baked into their deep engineering persona. And we try to harness that. We try to position things in such a way that they can do the right thing, because most developers want to do the right. >> Yeah, it's really interesting to me Sasha I remember back, you know you go back seven, eight years ago and I looked at cloud models, and how cloud providers were trying to give more visibility and even give guidance to customers as to how they could adjust things to make them more financially reasonable. I've come from the infrastructure side, when I think about you know, deployments in a data center. 
It was very well understood you had systems engineer work with a customer, they deploy something, they understand what the growth of is expected to be, and if you needed more, more computer, more storage, what the cost of that would be, you understand the you know, how many years you will be writing that off for, but everything's well understood, and as you said, like developers often they've got, n minus one technology, okay, here's some gear you could work on. But finances were clearly written, they were put into some spreadsheet or understood as opposed to the cloud. There is much more burden on the user to understand what they're doing. Because you have that limitless capability as opposed to some fixed asset that you're writing it off. We're huge proponents of ledger than the cloud. And often there are, cost savings by going to the cloud. But it feels like they're also some of this overhead of having to do the financial engineering is an overhead cost that might not be considered in the overall movement to the cloud. >> Yeah, and maybe now is a good time to swing back to the concept of DevOps, right? Because I want to frame FinOps in this concept of having the budget overhead and I want to link it to the Agile, okay. So, part of the reason we moved to DevOps which is an Agile movement that essentially, puts the responsibility of owning infrastructure and deploying it into the hands of the engineers themselves. The reason that it existed was because we had a problem deploying, we had two different teams typically operations and engineering. And one of them would write the code, and they would throw it over the wall to the operations team that will deploy the code. And because they were two different teams, and they didn't necessarily sit together or sometimes even report into the same leadership, they had different goals, right. And when there was a problem, the problem had to cross both of the team boundaries. And so it was slower to resolve issues. And so, people had the bright idea to essentially put the teams together, right. And allow the developers themselves to deploy the code. And of course, depending on the size of the company was structured--or it is structured slightly differently this idea of DevOps. And, essentially what you had was a situation that worked beautifully because if you had two separate teams that all of a sudden became one team that was fully responsible for writing the code, writing the tests and deploying the code, they saw each other's pain, they understood the problem really well. And it was an opportunity for them to go faster, and they could see the powerful thing. And I think that's essentially what made the DevOps movement incredibly successful. It was the opportunity to be able to control their own destiny, and move faster that made it successful. I view FinOps in a similar fashion. It is an opportunity for developers to understand their cost efficiency and deploy in the cloud by API, and do it in a fully responsible way. Everything that we've been talking about related to DevOps, there is a higher goal here. And that is the goal of unit economics, which is figuring out precisely what your application actually costs being deployed and used by the consumer on a unit basis, right. And that is the thing we're all trying to get to. And this FinOps gets us one step closer to that sort of financial nirvana. Now if you can achieve it, or even if you can achieve the basics of it. 
You can structure your contracts in a different way, you can create products that take better advantage of your financial model. You can destroy certain products that you have, that don't really make sense to operate in the cloud. You can fire customers. You can do a whole variety of things, if you know what your full costs are, and FinOps allows us to do that. And FinOps allows developers to think of their applications in a way that perhaps they never have in a fully transparent, holistic way. Like there's no sense to build a Ferrari, if it costs too much to operate, right. And FinOps helps you get there. >> It's such an important point Sasha. I'm so glad you brought that up, back in the traditional infrastructure data center world, we spent decades talking about Showback and Chargeback and what visibility you had? And of course for the most part, it was, oh well you know, that sunk costs or something that facilities takes care of. I'm not going to work at it and therefore, we did not have a clear picture of IT and how it really impacted the bottom line of business. So FinOps as you said, help move us towards that ultimate goal that we know we've had for years. I want to tease on that thing that you mentioned there, speed. We understand that, absolutely speed is one of the most important things, how do we react to the business? How to react to the customer, as close to real time as possible? How do you make sure that FinOps doesn't slow things down? If I'm an engineer, and I need to think about oh, wait. I've been told that, the best code to write is no code. But, I have to constantly think about, am I being financially sound? Am I doing that? How do we make sure that this movement doesn't slow me down, but actually enables me to move forward faster? >> Yeah I mean, let me mention a couple of things there. The first is that, what I alluded to before, which is that if you don't think about this as a developer, it's possible that the finance folks in the company could decide well hey, operating the cloud doesn't make financial sense for us. And so we're not going to do it and we're going to go back to data center and you maybe that's the right business move for some businesses who aren't growing rapidly, for whom speed and flexibility isn't as important. Maybe they stay in the data center or they go back to a data center. And so like, I would think a developer has stakes in the game, if they want to be flexible, if they want to continue to be flexible. And from a company perspective, like we... You know, this idea still being sort of fleshed out and even within the FinOps movement, like there is a question of how much time should a developer spend thinking about costs stuff? I'll tell you what my answer is, and perhaps I can touch on what other people think about it as well. My answer is that it's best to be transparent with developers as much as possible and share with them as much data as we possibly can, the right kind of data, right? Not overwhelm them with statistics, that help them understand their applications and applications efficiency. And if when you are implementing a FinOps practice within your org, if you get the sense that people are very touchy, and they're not used to this idea of talking about cost directly, you can talk about it in terms of proxies, right. And as I mentioned before, CPU, RAM, disk, network. Those are all good proxies for cost. 
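[Aside: a rough sketch of the unit-economics math Sasha describes, using GCP's billing export in BigQuery. The project, dataset and export table names, the "team" label key, and the unit counts are all assumptions for illustration — real figures would come from your own billing export and product analytics.]

```python
# Allocate 30 days of labeled cloud spend per team and divide by a business
# unit (requests, records, customers) to get a cost-per-unit view.
from google.cloud import bigquery

client = bigquery.Client()

query = """
    SELECT
      (SELECT value FROM UNNEST(labels) WHERE key = 'team') AS team,
      SUM(cost)                                             AS cost_usd
    FROM `my-project.billing.gcp_billing_export_v1_XXXXXX`   -- assumed export table
    WHERE usage_start_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
    GROUP BY team
"""
cost_by_team = {row.team: row.cost_usd for row in client.query(query).result()}

# Units delivered over the same window -- in practice this comes from
# product analytics; hard-coded placeholders here.
units_by_team = {"ingestion": 1_200_000_000, "activation": 450_000_000}

for team, cost in cost_by_team.items():
    units = units_by_team.get(team)
    if units:
        print(f"{team}: ${cost:,.0f} / {units:,} units = ${cost / units:.6f} per unit")
```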
So if you tell them, hey, your application is efficient or inefficient on these different dimensions, go do something about it, right. When you build the next architecture for your application, incorporate efficiencies across these particular dimensions. That will resonate, and it will ensure that developers don't feel like it's hampering their speed. I think the cultural shift that FinOps emphasizes is key: helping developers get a high-level understanding of why we're doing what we're doing and why it's important, and embedding it into not only their architectural design but their daily operations. That is the key. FinOps has multiple pieces to it, and I think it's successful because it emphasizes a system made up of governance practices, rules that tell you how you should behave within the system; tools like a CMP, and we can talk about that in a bit, but essentially a cost management platform, a tool designed to figure out what you're spending and express it back to you, to surface anomalies, and there's a whole segment of the marketplace for these kinds of tools; and then, of course, the cultural shift. If you can do all three at your organization, whether you want to call it FinOps or not, you're going to be set up for success and it will solve that problem for you. >> So Sasha, one of the things I've really enjoyed over the last decade or so: it used to be that IT organizations thought what they were doing was the differentiator, and therefore they were a bit guarded about what they would share. These days, leveraging cloud and leveraging open source, there is much more collaboration out there. And LiveRamp not only is using FinOps, but you're a member of the FinOps Foundation, which has over 1500 individual members participating and is overseen by the Linux Foundation. Maybe bring us in a little bit as to why LiveRamp decided to join this group, and, for the final word, the mission of the FinOps Foundation. >> Yeah, as members of the audience might know, the FinOps Foundation recently moved to the Linux Foundation, and I think part of that move was to express the independence of the FinOps Foundation; it was connected to a company in the CMP space before, and I think J.R. and the team made a wonderful decision in doing so, and I wanted to give a shout-out to them. I'm very excited about the shift, and we look forward to contributing to the codebase and all the conversations. In terms of how we discovered it: I was feeling the pain of all these different problems of being over my budget in the cloud, and I had arrived at this idea that I needed a dedicated person, a dedicated cross-functional team, in order to solve the problem. But on a whim, I attended a FinOps course at a conference, and Mike Fuller, one of the authors of the FinOps book along with J.R., was teaching it, and I spent eight hours in literal wonder, thinking, holy crap, this guy and whoever came up with this concept put together and synthesized all of the pain that I had felt and all the different things I had thought about in order to solve the problem, in a beautiful, holistic manner. And they were just presenting it back to me on a platter, back to everyone on a platter, and I thought that was beautiful.
And the week that I got back to work from the conference, I put together a presentation for the executives to position a FinOps practice as the solution for LiveRamp's budgetary cloud pain. We went for it, and it's helped us; it's helped lots of other companies. And I'm here today partly because I want to give back, because there's so much that I learned from being in the Slack channel and so much that I learned by reading the book, things that I hadn't thought of because I hadn't experienced them yet, so I didn't have the pain. But you know, J.R. and Mike had interviewed hundreds of different folks for the book, got lots of input, and they were talking about things that I hadn't experienced yet, but that I was going to. And so I want to give back, and they clearly want to give back. I think it's a wonderful practice, a wonderful book, a wonderful Slack channel. I would recommend that anyone facing the budgetary challenge in the cloud join the organization. There is a monthly conversation where someone presents, and you learn a lot from it; you learn problems and solutions that you perhaps wouldn't have thought of, so I would highly recommend it. >> All right, well Sasha, thank you so much for sharing your story with our community and everything that you've learned, and best of luck going forward. >> Thanks very much, Stu. It's great to talk. >> Alright, and if you want to learn more about what Sasha was talking about, finops.org is the FinOps Foundation's website under the Linux Foundation. Linux Foundation, of course, and theCUBE: Cloud Native is a big piece of what happens and what we're doing, and we'll be at the KubeCon and CloudNativeCon shows this year. Look for more interviews in this space. I'm Stu Miniman, and I look forward to hearing more about your Cloud Native Insights. (upbeat music)

Published Date : Jul 9 2020


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Sasha | PERSON | 0.99+
Sasha Kipervarg | PERSON | 0.99+
Mike Fuller | PERSON | 0.99+
J.R. | PERSON | 0.99+
Stu Miniman | PERSON | 0.99+
San Francisco | LOCATION | 0.99+
FinOps Foundation | ORGANIZATION | 0.99+
Palo Alto | LOCATION | 0.99+
New York | LOCATION | 0.99+
London | LOCATION | 0.99+
Mike | PERSON | 0.99+
Google | ORGANIZATION | 0.99+
Linux Foundation | ORGANIZATION | 0.99+
J.R | PERSON | 0.99+
one | QUANTITY | 0.99+
one team | QUANTITY | 0.99+
LiveRamp | ORGANIZATION | 0.99+
Ferrari | ORGANIZATION | 0.99+
eight hours | QUANTITY | 0.99+
San Francisco | LOCATION | 0.99+
China | LOCATION | 0.99+
three | QUANTITY | 0.99+
two separate teams | QUANTITY | 0.99+
Cloud Native Insights | TITLE | 0.99+
400 | QUANTITY | 0.99+
Bay Area | LOCATION | 0.99+
first | QUANTITY | 0.99+
both | QUANTITY | 0.99+
four day | QUANTITY | 0.99+
two different teams | QUANTITY | 0.99+
Boston | LOCATION | 0.99+
two | QUANTITY | 0.98+
today | DATE | 0.98+
70 | QUANTITY | 0.98+
Cloud Native | TITLE | 0.98+
seven | DATE | 0.98+
hundreds | QUANTITY | 0.98+
DevOps | TITLE | 0.98+
mid 90s | DATE | 0.98+
hundred petabyte | QUANTITY | 0.98+
over 1500 | QUANTITY | 0.98+
CloudNativeCon | EVENT | 0.98+
second decisions | QUANTITY | 0.97+
next month | DATE | 0.97+
FinOps | TITLE | 0.97+
Dataproc | ORGANIZATION | 0.96+
eight years ago | DATE | 0.96+
first thing | QUANTITY | 0.95+
Global Cloud Operations | ORGANIZATION | 0.95+
once a quarter | QUANTITY | 0.94+
theCUBE | ORGANIZATION | 0.94+
LiveRamp | TITLE | 0.93+
last year and a half | DATE | 0.93+
Slack | ORGANIZATION | 0.92+
Stu | PERSON | 0.92+
CapEx | TITLE | 0.91+
next quarters | DATE | 0.9+
one step | QUANTITY | 0.9+
this year | DATE | 0.89+
theCUBEcon | EVENT | 0.89+
FinOps | ORGANIZATION | 0.89+

Jim Whitehurst, IBM | IBM Think 2020


 

[Music] >> Announcer: From theCUBE studios in Palo Alto and Boston, it's theCUBE, covering IBM Think, brought to you by IBM. >> Hi, I'm Stu Miniman, and this is theCUBE's coverage of IBM Think 2020, the digital experience. We talk to IBM executives, their partners, and their customers, and we're really thrilled to welcome back one of our CUBE alumni. He has a new role since the last time he was on theCUBE at an event: Jim Whitehurst is now the president of IBM, of course former CEO of Red Hat. Jim, pleasure to see you, thanks so much for joining us. >> Hey, it's great to be back, hope you're doing well. >> We are all trying to stay safe. We miss seeing you and the team in person; we had a great digital event with the Red Hat team last week for Summit. Of course, I'd have loved, you know, either going to San Francisco or my backyard here in Boston for it, but the thing we've been saying is we are now together even when we're apart. So, so many changes going on, of course the global pandemic is impacting everyone, and in the keynote you and the other IBM executives talked about, you know, really how it's helping IBM solidify what they believe their decisions are and the technology direction. So, you know, not a big pivot or change, but Jim, I really want to get your feedback as to what advice you have for your customers. Where should they be investing, where should they be slowing down, and how should they be thinking about their IT spend in today's world? >> Yeah, so first off, you know, our hybrid cloud strategy, which IBM and, you know, Red Hat, now combined, have been on for quite a long time, has been all about flexibility and resilience in an unknown future. If there were ever a time where having flexibility is important, it's now. So, you know, we have had clients saying, hey, I can use the cloud, because all of a sudden, with work at home, I have huge increases in demand. We find others that say, wow, I was using the cloud, but I have a reduction in absolute demand, so I want to pull those workloads back; I'm going to run on premises and save the marginal dollars. So you have people kind of doing very different things than we thought we would be doing this month and going forward through the year, and so having an architecture that's built for change, which hybrid cloud architectures certainly are a part of, is I think being borne out here as people are trying to understand new ways of working. And certainly with IBM, you know, with some of the technologies we have around AI, we're helping various industries as their call volumes increase, as people are, you know, changing tickets or have more questions, with our ability to help people scale up AI to address those so they're not trying to add people in a very difficult time. You know, just broadly, our platforms run some of the most mission-critical systems, so keeping those systems up and running and being resilient, and, with thousands of things CEOs and CIOs have to worry about, knowing that you have a partner that's going to keep your most important systems up and running, are all things that we do every day, and I think that value shows through even more right now. >> Yeah, absolutely, we've been hearing plenty of reports; customers, as they, you know, might have been thinking about how fast they move or how they leverage cloud, as an important piece of what they need to be doing. How does the combination of IBM and Red Hat differentiate from some of the other cloud offerings, both cloud and AI, across the industry today? >> Yeah, sure, well, let me start off with cloud and then I'll talk about how AI complements and accelerates that strategy. So what's different about what IBM is doing is we have a vision that the best architecture is a horizontal architecture, full of choice, where you can run your application anywhere. But it's not just about running it; now, you know, clouds are becoming, in and of themselves, a source of innovation via various APIs with functionality behind those. So in order to consume innovation wherever it might come from, you have to have the flexibility to be able to move your work. And so IBM is unique in saying, hey, we're not just a cloud provider; we're actually providing a platform that runs across any of the major cloud providers, and we make that real by having the Red Hat platform; OpenShift is a core part of what we do. I think secondly, having the platform is great, but it's all about having the platform so you can consume innovation to deliver business value, and IBM has injected that with a whole series of capabilities, whether that's being able to pull data and information out of, you know, existing workloads, to the whole AI portfolio to help people really build a cognitive enterprise and inject intelligence and AI into business processes, so they can build, you know, a different, intelligent, kind of AI-infused set of business processes, or even new businesses. So the combination of a horizontal platform that's going to run anywhere, with the ability, whether it's with software or with services capability, to add on top, means we can now help you leverage that; we can help you take that Ferrari we built out for a drive, to help you build new sources of value. >> Right, one of the big discussion points this week has been edge computing, a lot of discussion; it's, you know, much earlier in the adoption and maturation of the ecosystems compared to what we were talking about for cloud. So what's important with edge, and how are IBM and Red Hat going to extend what they've been doing to edge types of deployment? >> Well, edge becomes an extension of the data center. You know, I think there was a period of time when we thought about computers as individual things, and now we've had this idea of a data center as where computing happens, and then there are, you know, thin devices like phones or whatever, kind of out in the ether, that tether back. But, you know, as the Internet of Things continues to expand, as the ability to push computing technology towards the edge continues to grow with technology advances, as 5G continues to expand out and, you know, broaden the ability to have use cases of computing at the edge, it just increases and increases. So whether that's autonomous driving, an obvious major use case where there will be massive amounts of data and you can't handle the latency of taking all that compute back to the data center, to, you know, how you're making sure the paint finish that a factory is putting on a, you know, a piece of metal is being done correctly and optimally and environmentally efficiently, all those things require sensing at the edge and computing at the edge to be economic. But here's the issue: you don't want to have to develop a whole new infrastructure of software, and have to do that with a whole different set of developers with different skill sets and different rules on different infrastructure. So what we're doing with this platform I talked about, when I said this platform runs everywhere, it's not just that it runs on the major public clouds or in your data center, on bare metal or virtualized; it runs all the way out to the edge. Now, as soon as you get out to the edge, you have a whole new set of management challenges, because the types of applications are different, and how they tether back is different. So we are working with large enterprises and with telcos, not only on 5G rollout but also on edge infrastructure and the management tooling, to be able to have an application run in the factory in an effective, efficient, safe way, but then be able to be tethered all the way back, bringing data back for analytics in the data center. So we've made some really exciting announcements on what we're doing with both industrial enterprise customers on edge computing, and on how we're working with telcos to bring that to life, because a lot of that obviously gets integrated back into the core telco infrastructure. So this idea of edge computing and mobile edge computing is critical to the future of, you know, of computing, but importantly, it's critical to the future of how enterprises are going to operate and deliver value going forward. And so, you know, we've taken a real leadership position around that, given that we have the core infrastructure but we also understand, you know, our clients and, you know, industry verticals and business processes, so we can kind of come at it from both angles and really bring that value quickly to our clients. >> All right, and Jim, what's the role of open source there? You know, one of the bigger points that was talked about at Summit last week was, I believe, the advanced cluster management for cloud, and it was some IBM people and some IBM technology that came into Red Hat, and they've open sourced it. We're just talking about edge computing and telecommunications service providers; I remember talking with you and the team, you know, back at OpenStack Summits with network functions virtualization, and open source was a big piece of it. So where does open play in these ecosystem discussions? >> Well, I should say this is one of the really exciting things about the marriage of Red Hat and IBM: Red Hat has deep capability in open source and delivering open source platforms, and has been doing that for two decades now. IBM has always been a large participant in open source but has never really delivered platforms, right; it's always infused open source components in other kinds of solutions. And so by bringing the two together, we can truly leverage the power of open source to help enterprises and telcos consume open source at scale, to really be able to take advantage of this massive innovation that is happening. And so in particular, you know, we're seeing in telco exactly what we saw happen in the data center, which is that people did have these vertical stacks; in the data center it was the Unixes, you know, of the past, where applications were tied to the operating systems, tied to the hardware. The same thing exists in telco infrastructure now, and the telcos understand this idea, the value of a horizontal platform. So how do you have a commodity infrastructure underneath, so hardware with an open source infrastructure, so people can feel confident they're not locked into one vendor, and also feel confident that they can drive the feature set that they need into these platforms? And so the idea that open, kind of, almost think of it as, oh, Linux, but for data centers, is now Linux for 5G, which is a combination of OpenStack on the virtualized side, and OpenShift, Kubernetes, and containers from a container perspective. Being able to bring that to telcos and 5G rollouts allows them to separate the network functionality, which sits in an application, whether that's a virtualized application or a container, and be able to confidently run that on open infrastructure. That's something that open source is doing today in telco, in the same way it disrupted, you know, traditional data center infrastructure over the last couple of decades. And then IBM can both bring that with services capability as well as a whole set of value-added services kind of further up the stack, which makes the open source infrastructure usable, you know, in a manageable, cost-effective way today. And so that's why we're so excited, especially about what we can do with edge, because we're bringing the same disruption we brought to the data center 20 years ago, and we can do it in a safe, secure, reliable, and manageable way. >> All right, well, Jim, thank you so much for the updates. Congratulations on all the accomplishments of the Red Hat team last week and the IBM team this week. >> Great, thank you, it's great to be back, and I look forward to seeing you again live in the not-too-distant future. >> Absolutely. Until we're back in person, theCUBE is bringing you IBM Think, the digital experience. I'm Stu Miniman, and as always, thank you for watching theCUBE. [Music]

Published Date : May 5 2020

**Summary and Sentiment Analysis are not shown because of an improper transcript**

ENTITIES

Entity | Category | Confidence
San Francisco | LOCATION | 0.99+
Jim Whitehurst | PERSON | 0.99+
Boston | LOCATION | 0.99+
IBM | ORGANIZATION | 0.99+
Jim white Hersey | PERSON | 0.99+
Jim | PERSON | 0.99+
Palo Alto | LOCATION | 0.99+
Red Hat | ORGANIZATION | 0.99+
two | QUANTITY | 0.99+
last week | DATE | 0.99+
two decades | QUANTITY | 0.99+
Stu Minuteman | PERSON | 0.99+
telcos | ORGANIZATION | 0.99+
telco | ORGANIZATION | 0.99+
last week | DATE | 0.99+
20 years ago | DATE | 0.98+
Red Hat | TITLE | 0.98+
Ferrari | ORGANIZATION | 0.97+
both angles | QUANTITY | 0.97+
thousands | QUANTITY | 0.97+
one | QUANTITY | 0.96+
both | QUANTITY | 0.96+
secondly | QUANTITY | 0.96+
today | DATE | 0.95+
first | QUANTITY | 0.95+
OpenShift | TITLE | 0.95+
iBM | ORGANIZATION | 0.95+
this week | DATE | 0.94+
2020 | DATE | 0.94+
last couple decades | DATE | 0.93+
one vendor | QUANTITY | 0.92+
Linux | TITLE | 0.91+
this month | DATE | 0.88+
OpenStack | TITLE | 0.88+
Think 2020 | COMMERCIAL_ITEM | 0.86+
things | QUANTITY | 0.79+
unix | TITLE | 0.79+
RedHat | ORGANIZATION | 0.71+
CEO | PERSON | 0.64+
of reports | QUANTITY | 0.57+
Bhaiji | TITLE | 0.56+
5g | QUANTITY | 0.54+
day | QUANTITY | 0.54+
OpenStack | EVENT | 0.53+
BM | ORGANIZATION | 0.47+
pandemic | EVENT | 0.46+
5g | ORGANIZATION | 0.41+
Stu minimun | PERSON | 0.36+
brunetti | COMMERCIAL_ITEM | 0.29+

Rich Gaston, Micro Focus | Virtual Vertica BDC 2020


 

(upbeat music) >> Announcer: It's theCUBE covering the virtual Vertica Big Data Conference 2020 brought to you by Vertica. >> Welcome back to the Vertica Virtual Big Data Conference, BDC 2020. You know, it was supposed to be a physical event in Boston at the Encore. Vertica pivoted to a digital event, and we're pleased that The Cube could participate because we've participated in every BDC since the inception. Rich Gaston this year is the global solutions architect for security risk and governance at Micro Focus. Rich, thanks for coming on, good to see you. >> Hey, thank you very much for having me. >> So you got a chewy title, man. You got a lot of stuff, a lot of hairy things in there. But maybe you can talk about your role as an architect in those spaces. >> Sure, absolutely. We handle a lot of different requests from the global 2000 type of organization that will try to move various business processes, various application systems, databases, into new realms. Whether they're looking at opening up new business opportunities, whether they're looking at sharing data with partners securely, they might be migrating it to cloud applications, and doing migration into a Hybrid IT architecture. So we will take those large organizations and their existing installed base of technical platforms and data, users, and try to chart a course to the future, using Micro Focus technologies, but also partnering with other third parties out there in the ecosystem. So we have large, solid relationships with the big cloud vendors, with also a lot of the big database spenders. Vertica's our in-house solution for big data and analytics, and we are one of the first integrated data security solutions with Vertica. We've had great success out in the customer base with Vertica as organizations have tried to add another layer of security around their data. So what we will try to emphasize is an enterprise wide data security approach, where you're taking a look at data as it flows throughout the enterprise from its inception, where it's created, where it's ingested, all the way through the utilization of that data. And then to the other uses where we might be doing shared analytics with third parties. How do we do that in a secure way that maintains regulatory compliance, and that also keeps our company safe against data breach. >> A lot has changed since the early days of big data, certainly since the inception of Vertica. You know, it used to be big data, everyone was rushing to figure it out. You had a lot of skunkworks going on, and it was just like, figure out data. And then as organizations began to figure it out, they realized, wow, who's governing this stuff? A lot of shadow IT was going on, and then the CIO was called to sort of reign that back in. As well, you know, with all kinds of whatever, fake news, the hacking of elections, and so forth, the sense of heightened security has gone up dramatically. So I wonder if you can talk about the changes that have occurred in the last several years, and how you guys are responding. >> You know, it's a great question, and it's been an amazing journey because I was walking down the street here in my hometown of San Francisco at Christmastime years ago and I got a call from my bank, and they said, we want to inform you your card has been breached by Target, a hack at Target Corporation and they got your card, and they also got your pin. And so you're going to need to get a new card, we're going to cancel this. Do you need some cash? 
I said, yeah, it's Christmastime so I need to do some shopping. And so they worked with me to make sure that I could get that cash, and then get the new card and the new pin. And being a professional in the inside of the industry, I really questioned, how did they get the pin? Tell me more about this. And they said, well, we don't know the details, but you know, I'm sure you'll find out. And in fact, we did find out a lot about that breach and what it did to Target. The impact that $250 million immediate impact, CIO gone, CEO gone. This was a big one in the industry, and it really woke a lot of people up to the different types of threats on the data that we're facing with our largest organizations. Not just financial data; medical data, personal data of all kinds. Flash forward to the Cambridge Analytica scandal that occurred where Facebook is handing off data, they're making a partnership agreement --think they can trust, and then that is misused. And who's going to end up paying the cost of that? Well, it's going to be Facebook at a tune of about five billion on that, plus some other finds that'll come along, and other costs that they're facing. So what we've seen over the course of the past several years has been an evolution from data breach making the headlines, and how do my customers come to us and say, help us neutralize the threat of this breach. Help us mitigate this risk, and manage this risk. What do we need to be doing, what are the best practices in the industry? Clearly what we're doing on the perimeter security, the application security and the platform security is not enough. We continue to have breaches, and we are the experts at that answer. The follow on fascinating piece has been the regulators jumping in now. First in Europe, but now we see California enacting a law just this year. They came into a place that is very stringent, and has a lot of deep protections that are really far-reaching around personal data of consumers. Look at jurisdictions like Australia, where fiduciary responsibility now goes to the Board of Directors. That's getting attention. For a regulated entity in Australia, if you're on the Board of Directors, you better have a plan for data security. And if there is a breach, you need to follow protocols, or you personally will be liable. And that is a sea change that we're seeing out in the industry. So we're getting a lot of attention on both, how do we neutralize the risk of breach, but also how can we use software tools to maintain and support our regulatory compliance efforts as we work with, say, the largest money center bank out of New York. I've watched their audit year after year, and it's gotten more and more stringent, more and more specific, tell me more about this aspect of data security, tell me more about encryption, tell me more about money management. The auditors are getting better. And we're supporting our customers in that journey to provide better security for the data, to provide a better operational environment for them to be able to roll new services out with confidence that they're not going to get breached. With that confidence, they're not going to have a regulatory compliance fine or a nightmare in the press. And these are the major drivers that help us with Vertica sell together into large organizations to say, let's add some defense in depth to your data. And that's really a key concept in the security field, this concept of defense in depth. 
We apply that to the data itself by changing the actual data element of Rich Gaston, I will change that name into Ciphertext, and that then yields a whole bunch of benefits throughout the organization as we deal with the lifecycle of that data. >> Okay, so a couple things I want to mention there. So first of all, totally board level topic, every board of directors should really have cyber and security as part of its agenda, and it does for the reasons that you mentioned. The other is, GDPR got it all started. I guess it was May 2018 that the penalties went into effect, and that just created a whole Domino effect. You mentioned California enacting its own laws, which, you know, in some cases are even more stringent. And you're seeing this all over the world. So I think one of the questions I have is, how do you approach all this variability? It seems to me, you can't just take a narrow approach. You have to have an end to end perspective on governance and risk and security, and the like. So are you able to do that? And if so, how so? >> Absolutely, I think one of the key areas in big data in particular, has been the concern that we have a schema, we have database tables, we have CALMS, and we have data, but we're not exactly sure what's in there. We have application developers that have been given sandbox space in our clusters, and what are they putting in there? So can we discover that data? We have those tools within Micro Focus to discover sensitive data within in your data stores, but we can also protect that data, and then we'll track it. And what we really find is that when you protect, let's say, five billion rows of a customer database, we can now know what is being done with that data on a very fine grain and granular basis, to say that this business process has a justified need to see the data in the clear, we're going to give them that authorization, they can decrypt the data. Secure data, my product, knows about that and tracks that, and can report on that and say at this date and time, Rich Gaston did the following thing to be able to pull data in the clear. And that could be then used to support the regulatory compliance responses and then audit to say, who really has access to this, and what really is that data? Then in GDPR, we're getting down into much more fine grained decisions around who can get access to the data, and who cannot. And organizations are scrambling. One of the funny conversations that I had a couple years ago as GDPR came into place was, it seemed a couple of customers were taking these sort of brute force approach of, we're going to move our analytics and all of our data to Europe, to European data centers because we believe that if we do this in the U.S., we're going to violate their law. But if we do it all in Europe, we'll be okay. And that simply was a short-term way of thinking about it. You really can't be moving your data around the globe to try to satisfy a particular jurisdiction. You have to apply the controls and the policies and put the software layers in place to make sure that anywhere that someone wants to get that data, that we have the ability to look at that transaction and say it is or is not authorized, and that we have a rock solid way of approaching that for audit and for compliance and risk management. And once you do that, then you really open up the organization to go back and use those tools the way they were meant to be used. 
We can use Vertica for AI, we can use Vertica for machine learning, and for all kinds of really cool use cases that are being done with IOT, with other kinds of cases that we're seeing that require data being managed at scale, but with security. And that's the challenge, I think, in the current era, is how do we do this in an elegant way? How do we do it in a way that's future proof when CCPA comes in? How can I lay this on as another layer of audit responsibility and control around my data so that I can satisfy those regulators as well as the folks over in Europe and Singapore and China and Turkey and Australia. It goes on and on. Each jurisdiction out there is now requiring audit. And like I mentioned, the audits are getting tougher. And if you read the news, the GDPR example I think is classic. They told us in 2016, it's coming. They told us in 2018, it's here. They're telling us in 2020, we're serious about this, and here's the finds, and you better be aware that we're coming to audit you. And when we audit you, we're going to be asking some tough questions. If you can't answer those in a timely manner, then you're going to be facing some serious consequences, and I think that's what's getting attention. >> Yeah, so the whole big data thing started with Hadoop, and Hadoop is open, it's distributed, and it just created a real governance challenge. I want to talk about your solutions in this space. Can you tell us more about Micro Focus voltage? I want to understand what it is, and then get into sort of how it works, and then I really want to understand how it's applied to Vertica. >> Yeah, absolutely, that's a great question. First of all, we were the originators of format preserving encryption, we developed some of the core basic research out of Stanford University that then became the company of Voltage; that build-a-brand name that we apply even though we're part of Micro Focus. So the lineage still goes back to Dr. Benet down at Stanford, one of my buddies there, and he's still at it doing amazing work in cryptography and keeping moving the industry forward, and the science forward of cryptography. It's a very deep science, and we all want to have it peer-reviewed, we all want to be attacked, we all want it to be proved secure, that we're not selling something to a major money center bank that is potentially risky because it's obscure and we're private. So we have an open standard. For six years, we worked with the Department of Commerce to get our standard approved by NIST; The National Institute of Science and Technology. They initially said, well, AES256 is going to be fine. And we said, well, it's fine for certain use cases, but for your database, you don't want to change your schema, you don't want to have this increase in storage costs. What we want is format preserving encryption. And what that does is turns my name, Rich, into a four-letter ciphertext. It can be reversed. The mathematics of that are fascinating, and really deep and amazing. But we really make that very simple for the end customer because we produce APIs. So these application programming interfaces can be accessed by applications in C or Java, C sharp, other languages. But they can also be accessed in Microservice Manor via rest and web service APIs. And that's the core of our technical platform. 
We have an appliance-based approach, so we take a secure data appliance, we'll put it on Prim, we'll make 50 of them if you're a big company like Verizon and you need to have these co-located around the globe, no problem; we can scale to the largest enterprise needs. But our typical customer will install several appliances and get going with a couple of environments like QA and Prod to be able to start getting encryption going inside their organization. Once the appliances are set up and installed, it takes just a couple of days of work for a typical technical staff to get done. Then you're up and running to be able to plug in the clients. Now what are the clients? Vertica's a huge one. Vertica's one of our most powerful client endpoints because you're able to now take that API, put it inside Vertica, it's all open on the internet. We can go and look at Vertica.com/secure data. You get all of our documentation on it. You understand how to use it very quickly. The APIs are super simple; they require three parameter inputs. It's a really basic approach to being able to protect and access data. And then it gets very deep from there because you have data like credit card numbers. Very different from a street address and we want to take a different approach to that. We have data like birthdate, and we want to be able to do analytics on dates. We have deep approaches on managing analytics on protected data like Date without having to put it in the clear. So we've maintained a lead in the industry in terms of being an innovator of the FF1 standard, what we call FF1 is format preserving encryption. We license that to others in the industry, per our NIST agreement. So we're the owner, we're the operator of it, and others use our technology. And we're the original founders of that, and so we continue to sort of lead the industry by adding additional capabilities on top of FF1 that really differentiate us from our competitors. Then you look at our API presence. We can definitely run as a dup, but we also run in open systems. We run on main frame, we run on mobile. So anywhere in the enterprise or one in the cloud, anywhere you want to be able to put secure data, and be able to access the protect data, we're going to be there and be able to support you there. >> Okay so, let's say I've talked to a lot of customers this week, and let's say I'm running in Eon mode. And I got some workload running in AWS, I've got some on Prim. I'm going to take an appliance or multiple appliances, I'm going to put it on Prim, but that will also secure my cloud workloads as part of a sort of shared responsibility model, for example? Or how does that work? >> No, that's absolutely correct. We're really flexible that we can run on Prim or in the cloud as far as our crypto engine, the key management is really hard stuff. Cryptography is really hard stuff, and we take care of all that, so we've all baked that in, and we can run that for you as a service either in the cloud or on Prim on your small Vms. So really the lightweight footprint for me running my infrastructure. When I look at the organization like you just described, it's a classic example of where we fit because we will be able to protect that data. Let's say you're ingesting it from a third party, or from an operational system, you have a website that collects customer data. Someone has now registered as a new customer, and they're going to do E-commerce with you. We'll take that data, and we'll protect it right at the point of capture. 
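The appliance model Rich describes boils down to a small protect/access API that applications call, including over REST, right at the point of capture. The sketch below is a hypothetical illustration of what such a call might look like from Python; the endpoint path, payload field names, response field, and bearer-token auth are all assumptions made up for the example rather than the actual SecureData interface, so treat it as a shape, not a reference.

```python
# Hypothetical sketch of a format-preserving "protect" call at ingest time.
# Endpoint, field names, and auth scheme are invented for illustration;
# the real SecureData API will differ.
import requests

APPLIANCE_URL = "https://securedata.example.com/api/protect"  # assumed URL

def protect_value(value: str, data_format: str, auth_token: str) -> str:
    """Send a plaintext value plus a named format, get ciphertext back."""
    resp = requests.post(
        APPLIANCE_URL,
        json={"data": value, "format": data_format},            # assumed fields
        headers={"Authorization": f"Bearer {auth_token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["protected"]                              # assumed field

if __name__ == "__main__":
    # e.g. protect a first name as it arrives from a registration form
    ciphertext = protect_value("Rich", "first_name", auth_token="...")
    print(ciphertext)  # format-preserving: still looks like a short name
```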
And we can now flow that through the organization and decrypt it at will on any platform that you have that you need us to be able to operate on. So let's say you wanted to pick that customer data from the operational transaction system, let's throw it into Eon, let's throw it into the cloud, let's do analytics there on that data, and we may need some decryption. We can place secure data wherever you want to be able to service that use case. In most cases, what you're doing is a simple, tiny little atomic efetch across a protected tunnel, your typical TLS pipe tunnel. And once that key is then cashed within our client, we maintain all that technology for you. You don't have to know about key management or dashing. We're good at that; that's our job. And then you'll be able to make those API calls to access or protect the data, and apply the authorization authentication controls that you need to be able to service your security requirements. So you might have third parties having access to your Vertica clusters. That is a special need, and we can have that ability to say employees can get X, and the third party can get Y, and that's a really interesting use case we're seeing for shared analytics in the internet now. >> Yeah for sure, so you can set the policy how we want. You know, I have to ask you, in a perfect world, I would encrypt everything. But part of the reason why people don't is because of performance concerns. Can you talk about, and you touched upon it I think recently with your sort of atomic access, but can you talk about, and I know it's Vertica, it's Ferrari, etc, but anything that slows it down, I'm going to be a concern. Are customers concerned about that? What are the performance implications of running encryption on Vertica? >> Great question there as well, and what we see is that we want to be able to apply scale where it's needed. And so if you look at ingest platforms that we find, Vertica is commonly connected up to something like Kafka. Maybe streamsets, maybe NiFi, there are a variety of different technologies that can route that data, pipe that data into Vertica at scale. Secured data is architected to go along with that architecture at the node or at the executor or at the lowest level operator level. And what I mean by that is that we don't have a bottleneck that everything has to go through one process or one box or one channel to be able to operate. We don't put an interceptor in between your data and coming and going. That's not our approach because those approaches are fragile and they're slow. So we typically want to focus on integrating our APIs natively within those pipeline processes that come into Vertica within the Vertica ingestion process itself, you can simply apply our protection when you do the copy command in Vertica. So really basic simple use case that everybody is typically familiar with in Vertica land; be able to copy the data and put it into Vertica, and you simply say protect as part of the data. So my first name is coming in as part of this ingestion. I'll simply put the protect keyword in the Syntax right in SQL; it's nothing other than just an extension SQL. Very very simple, the developer, easy to read, easy to write. And then you're going to provide the parameters that you need to say, oh the name is protected with this kind of a format. To differentiate it between a credit card number and an alphanumeric stream, for example. So once you do that, you then have the ability to decrypt. 
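Here is a rough sketch of how that protect-on-ingest and access-on-read flow could look from a Python client driving Vertica with the vertica_python driver. The function names VoltageSecureProtect and VoltageSecureAccess, the parameter syntax, and the table and format names are stand-ins based on the description in this conversation, not verbatim SecureData syntax, so check the integration documentation before relying on anything like this.

```python
# Rough sketch: protect a column during COPY, decrypt it on an authorized read.
# VoltageSecureProtect / VoltageSecureAccess and their parameters are stand-ins
# inferred from the interview; the real UDF names and options may differ.
import vertica_python

conn_info = {"host": "vertica.example.com", "port": 5433,
             "user": "dbadmin", "password": "...", "database": "analytics"}

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()

    # Ingest: apply protection as part of the COPY expression itself.
    cur.execute("""
        COPY customers (ssn_raw FILLER VARCHAR(11),
                        ssn AS VoltageSecureProtect(ssn_raw USING PARAMETERS format='ssn'),
                        first_name, last_name)
        FROM LOCAL '/data/customers.csv' DELIMITER ','
    """)

    # Read: only an authorized session gets cleartext; others see ciphertext.
    cur.execute("""
        SELECT VoltageSecureAccess(ssn USING PARAMETERS format='ssn') AS ssn_clear
        FROM customers
        LIMIT 10
    """)
    for (ssn_clear,) in cur.fetchall():
        print(ssn_clear)
```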
Now, on decrypt, let's look at a couple different use cases. First within Vertica, we might be doing select statements within Vertica, we might be doing all kinds of jobs within Vertica that just operate at the SQL layer. Again, just insert the word "access" into the Vertica select string and provide us with the data that you want to access, that's our word for decryption, that's our lingo. And we will then, at the Vertica level, harness the power of its CPU, its RAM, its horsepower at the node to be able to operate on that operator, the decryption request, if you will. So that gives us the speed and the ability to scale out. So if you start with two nodes of Vertica, we're going to operate at X number of hundreds of thousands of transactions a second, depending on what you're doing. Long strings are a little bit more intensive in terms of performance, but short strings like social security number are our sweet spot. So we operate very very high speed on that, and you won't notice the overhead with Vertica, perse, at the node level. When you scale Vertica up and you have 50 nodes, and you have large clusters of Vertica resources, then we scale with you. And we're not a bottleneck and at any particular point. Everybody's operating independently, but they're all copies of each other, all doing the same operation. Fetch a key, do the work, go to sleep. >> Yeah, you know, I think this is, a lot of the customers have said to us this week that one of the reasons why they like Vertica is it's very mature, it's been around, it's got a lot of functionality, and of course, you know, look, security, I understand is it's kind of table sticks, but it's also can be a differentiator. You know, big enterprises that you sell to, they're asking for security assessments, SOC 2 reports, penetration testing, and I think I'm hearing, with the partnership here, you're sort of passing those with flying colors. Are you able to make security a differentiator, or is it just sort of everybody's kind of got to have good security? What are your thoughts on that? >> Well, there's good security, and then there's great security. And what I found with one of my money center bank customers here in San Francisco was based here, was the concern around the insider access, when they had a large data store. And the concern that a DBA, a database administrator who has privilege to everything, could potentially exfil data out of the organization, and in one fell swoop, create havoc for them because of the amount of data that was present in that data store, and the sensitivity of that data in the data store. So when you put voltage encryption on top of Vertica, what you're doing now is that you're putting a layer in place that would prevent that kind of a breach. So you're looking at insider threats, you're looking at external threats, you're looking at also being able to pass your audit with flying colors. The audits are getting tougher. And when they say, tell me about your encryption, tell me about your authentication scheme, show me the access control list that says that this person can or cannot get access to something. They're asking tougher questions. That's where secure data can come in and give you that quick answer of it's encrypted at rest. It's encrypted and protected while it's in use, and we can show you exactly who's had access to that data because it's tracked via a different layer, a different appliance. And I would even draw the analogy, many of our customers use a device called a hardware security module, an HSM. 
Now, these are fairly expensive devices that were invented for military applications and adopted by banks, and now they're really spreading out, and people say, do I need an HSM? Well, with secure data we certainly protect your crypto very, very well; we have very, very solid engineering, and I'll stand on that any day of the week. But your auditor is going to want to ask a checkbox question: do you have an HSM, yes or no? Because the auditor understands it's another layer of protection, and it provides another tamper-evident layer of protection around your key management and your crypto. And we, as professionals in the industry, nod and say, that is worth it. It's an expensive option that you're going to add on, but your auditor's going to want it. If you're in financial services, you're dealing with PCI data, you're going to enjoy the checkbox that says, yes, I have HSMs, and not get into some arcane conversation around, well, no, but it's good enough. That's kind of the argument and conversation we get into when folks want to say, Vertica has great security, Vertica's fantastic on security, why would I want secure data as well? It's another layer of protection, and it's defense in depth for your data. When you believe in that, when you take security really seriously, and you're really paranoid, like a person like myself, then you're going to invest in those kinds of solutions that get you best-in-class results. >> So I'm hearing a data-centric approach to security. Security experts will tell you, you've got to layer it. I often say, we live in a new world: the king used to just build a moat around the queen, but the queen, she's leaving her castle in this world of distributed data. Rich, an incredibly knowledgeable guest; we really appreciate you being on the front lines and sharing your knowledge about this important topic. So thanks for coming on theCUBE. >> Hey, thank you very much. >> You're welcome, and thanks for watching, everybody. This is Dave Vellante for theCUBE, with wall-to-wall coverage of the Virtual Vertica BDC, the Big Data Conference, remotely and digitally. Thanks for watching, keep it right there, we'll be right back after this short break. (intense music)

Published Date : Mar 31 2020


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Australia | LOCATION | 0.99+
Europe | LOCATION | 0.99+
Target | ORGANIZATION | 0.99+
Verizon | ORGANIZATION | 0.99+
Vertica | ORGANIZATION | 0.99+
Facebook | ORGANIZATION | 0.99+
Dave Vellante | PERSON | 0.99+
May 2018 | DATE | 0.99+
NIST | ORGANIZATION | 0.99+
2016 | DATE | 0.99+
Boston | LOCATION | 0.99+
2018 | DATE | 0.99+
San Francisco | LOCATION | 0.99+
New York | LOCATION | 0.99+
Target Corporation | ORGANIZATION | 0.99+
$250 million | QUANTITY | 0.99+
50 | QUANTITY | 0.99+
Rich Gaston | PERSON | 0.99+
Singapore | LOCATION | 0.99+
Turkey | LOCATION | 0.99+
Ferrari | ORGANIZATION | 0.99+
six years | QUANTITY | 0.99+
2020 | DATE | 0.99+
one box | QUANTITY | 0.99+
China | LOCATION | 0.99+
C | TITLE | 0.99+
Stanford University | ORGANIZATION | 0.99+
Java | TITLE | 0.99+
First | QUANTITY | 0.99+
one | QUANTITY | 0.99+
AWS | ORGANIZATION | 0.99+
U.S. | LOCATION | 0.99+
this week | DATE | 0.99+
National Institute of Science and Technology | ORGANIZATION | 0.99+
Each jurisdiction | QUANTITY | 0.99+
both | QUANTITY | 0.99+
Vertica | TITLE | 0.99+
Rich | PERSON | 0.99+
this year | DATE | 0.98+
Vertica Virtual Big Data Conference | EVENT | 0.98+
one channel | QUANTITY | 0.98+
one process | QUANTITY | 0.98+
GDPR | TITLE | 0.98+
SQL | TITLE | 0.98+
five billion rows | QUANTITY | 0.98+
about five billion | QUANTITY | 0.97+
One | QUANTITY | 0.97+
C sharp | TITLE | 0.97+
Benet | PERSON | 0.97+
first | QUANTITY | 0.96+
four-letter | QUANTITY | 0.96+
Vertica Big Data Conference 2020 | EVENT | 0.95+
Hadoop | TITLE | 0.94+
Kafka | TITLE | 0.94+
Micro Focus | ORGANIZATION | 0.94+

Colin Mahony, Vertica at Micro Focus | Virtual Vertica BDC 2020


 

>>It's the queue covering the virtual vertical Big Data Conference 2020. Brought to you by vertical. >>Hello, everybody. Welcome to the new Normal. You're watching the Cube, and it's remote coverage of the vertical big data event on digital or gone Virtual. My name is Dave Volante, and I'm here with Colin Mahoney, who's a senior vice president at Micro Focus and the GM of Vertical Colin. Well, strange times, but the show goes on. Great to see you again. >>Good to see you too, Dave. Yeah, strange times indeed. Obviously, Safety first of everyone that we made >>a >>decision to go Virtual. I think it was absolutely the right all made it in advance of how things have transpired, but we're making the best of it and appreciate your time here, going virtual with us. >>Well, Joe and we're super excited to be here. As you know, the Cube has been at every single BDC since its inception. It's a great event. You just you just presented the key note to your to your audience, You know, it was remote. You didn't have that that live vibe. And you have a lot of fans in the vertical community But could you feel the love? >>Yeah, you know, it's >>it's hard to >>feel the love virtually, but I'll tell you what. The silver lining in all this is the reach that we have for this event now is much broader than it would have been a Z you know, you know, we brought this event back. It's been a few years since we've done it. We're super excited to do it, obviously, you know, in Boston, where it was supposed to be on location, but there wouldn't have been as many people that could participate. So the silver lining in all of this is that I think there's there's a lot of love out there we're getting, too. I have a lot of participants who otherwise would not have been able to participate in this. Both live as well. It's a lot of these assets that we're gonna have available. So, um, you know, it's out there. We've got an amazing customers and of practitioners with vertical. We've got so many have been with us for a long time. We've of course, have a lot of new customers as well that we're welcoming, so it's exciting. >>Well, it's been a while. Since you've had the BDC event, a lot of transpired. You're now part of micro focus, but I know you and I know the vertical team you guys have have not stopped. You've kept the innovation going. We've been following the announcements, but but bridge the gap between the last time. You know, we had coverage of this event and where we are today. A lot has changed. >>Oh, yeah, a lot. A lot has changed. I mean, you know, it's it's the software industry, right? So nothing stays the same. We constantly have Teoh keep going. Probably the only thing that stays the same is the name Vertical. Um and, uh, you know, you're not spending 10 which is just a phenomenal released for us. So, you know, overall, the the organization continues to grow. The dedication and commitment to this great form of vertical continues every single release we do as you know, and this hasn't changed. It's always about performance and scale and adding a whole bunch of new capabilities on that front. But it's also about are our main road map and direction that we're going towards. And I think one of the things have been great about it is that we've stayed true that from day one we haven't tried to deviate too much and get into things that are barred to outside your box. But we've really done, I think, a great job of extending vertical into places where people need a lot of help. 
And with vertical 10 we know we're going to talk more about that. But we've done a lot of that. It's super exciting for our customers, and all of this, of course, is driven by our customers. But back to the big data conference. You know, everybody has been saying this for years. It was one of the best conferences we've been to just so really it's. It's developers giving tech talks, its customers giving talks. And we have more customers that wanted to give talks than we had slots to fill this year at the event, which is another benefit, a little bit of going virtually accommodate a little bit more about obviously still a tight schedule. But it really was an opportunity for our community to come together and talk about not just America, but how to deal with data, you know, we know the volumes are slowing down. We know the complexity isn't slowing down. The things that people want to do with AI and machine learning are moving forward in a rapid pace as well. There's a lot talk about and share, and that's really huge part of what we try to do with it. >>Well, let's get into some of that. Um, your customers are making bets. Micro focus is actually making a bet on one vertical. I wanna get your perspective on one of the waves that you're riding and where are you placing your bets? >>Yeah, No, it's great. So, you know, I think that one of the waves that we've been writing for a long time, obviously Vertical started out as a sequel platform for analytics as a sequel, database engine, relational engine. But we always knew that was just sort of takes that we wanted to do. People were going to trust us to put enormous amounts of data in our platform and what we owe everyone else's lots of analytics to take advantage of that data in the lots of tools and capabilities to shape that data to get into the right format. The operational reporting but also in this day and age for machine learning and from some pretty advanced regressions and other techniques of things. So a huge part of vertical 10 is just doubling down on that commitment to what we call in database machine learning and ai. Um, And to do that, you know, we know that we're not going to come up with the world's best algorithms. Nor is that our focus to do. Our advantage is we have this massively parallel platform to ingest store, manage and analyze the data. So we made some announcements about incorporating PM ML models into the product. We continue to deepen our python integration. Building off of a new open source project we started with uber has been a great customer and partner on This is one of our great talks here at the event. So you know, we're continuing to do that, and it turns out that when it comes to anything analytics machine learning, certainly so much of what you have to do is actually prepare the big shape the data get the data in the right format, apply the model, fit the model test a model operationalized model and is a great platform to do that. So that's a huge bet that were, um, continuing to ride on, taking advantage of and then some of the other things that we've just been seeing. You continue. I'll take object. Storage is an example on, I think Hadoop and what would you point through ultimately was a huge part of this, but there's just a massive disruption going on in the world around object storage. You know, we've made several bets on S three early we created America Yang mode, which separates computing story. 
And so for us that separation is not just about being able to take care of your take advantage of cloud economics as we do, or the economics of object storage. It's also about being able to truly isolate workloads and start to set the sort of platform to be able to do very autonomous things in the databases in the database could actually start self analysing without impacting many operational workloads, and so that continues with our partnership with pure storage. On premise, we just announced that we're supporting beyond Google Cloud now. In addition to Amazon, we supported on we've got a CFS now being supported by are you on mode. So we continue to ride on that mega trend as well. Just the clouds in general. Whether it's a public cloud, it's a private cloud on premise. Giving our customers the flexibility and choice to run wherever it makes sense for them is something that we are very committed to. From a flexibility standpoint. There's a lot of lock in products out there. There's a lot of cloud only products now more than ever. We're hearing our customers that they want that flexibility to be able to run anywhere. They want the ease of use and simplicity of native cloud experiences, which we're giving them as well. >>I want to stay in that architectural component for a minute. Talk about separating compute from storage is not just about economics. I mean apart Is that you, you know, green, really scale compute separate from storage as opposed to in chunks. It's more efficient, but you're saying there's other advantages to operational and workload. Specificity. Um, what is unique about vertical In this regard, however, many others separate compute from storage? What's different about vertical? >>Yeah, I think you know, there's a lot of differences about how we do it. It's one thing if you're a cloud native company, you do it and you have a shared catalog. That's key value store that all of your customers are using and are on the same one. Frankly, it's probably more of a security concern than anything. But it's another thing. When you give that capability to each customer on their own, they're fully protected. They're not sharing it with any other customers. And that's something that we hear a lot of insights from our customers. They want to be able to separate compute and storage. But they want to be able to do this in their own environment so that they know that in their data catalog there's no one else is. You share in that catalog, there's no single point of failure. So, um, that's one huge advantage that we have. And frankly, I think it just comes from being a company that's operating on premise and, uh, up in the cloud. I think another huge advantages for us is we don't know what object storage platform is gonna win, nor do we necessarily have. We designed the young vote so that it's an sdk. We started with us three, but it could be anything. It's DFS. That's three. Who knows what what object storage formats were going to be there and then finally, beyond just the object storage. We're really one of the only database companies that actually allows our customers to natively operate on data in very different formats, like parquet and or if you're familiar with those in the Hadoop community. So we not only embrace this kind of object storage disruption, but we really embrace the different data formats. And what that means is our customers that have data pipelines that you know, fully automated, putting this information in different places. 
Yeah, I think, you know, there are a lot of differences in how we do it. It's one thing if you're a cloud-native company and you do it with a shared catalog, a key-value store that all of your customers are using, and they're all on the same one. Frankly, that's probably more of a security concern than anything. But it's another thing when you give that capability to each customer on their own, so they're fully protected, they're not sharing it with any other customers. And that's something where we hear a lot of insights from our customers. They want to be able to separate compute and storage, but they want to be able to do it in their own environment, so that they know no one else is sharing their data catalog and there's no single point of failure. So that's one huge advantage that we have, and frankly, I think it just comes from being a company that's operating on premise and up in the cloud. I think another huge advantage for us is that we don't know which object storage platform is going to win, nor do we necessarily have to. We designed Eon mode so that it's an SDK. We started with S3, but it could be anything, HDFS, S3, who knows what object storage formats are going to be out there. And then finally, beyond just the object storage, we're really one of the only database companies that actually allows our customers to natively operate on data in very different formats, like Parquet and ORC, if you're familiar with those from the Hadoop community. So we not only embrace this kind of object storage disruption, but we really embrace the different data formats. And what that means is that our customers who have data pipelines, you know, fully automated, putting this information in different places, they don't have to completely reload everything to take advantage of the Vertica analytics. We can go where the data is, connect into it, and we offer them a lot of different ways to take advantage of those analytics. So there are a couple of unique differences with Vertica, and again, I think our real advantage, in many ways, by not being a cloud-native-only platform, is that we're very good at operating in different environments, with different formats, and with formats that change over time. And I don't think a lot of the other companies out there can say that. I think many, particularly many of the SaaS companies, are scrambling. They even have challenges moving from, say, an Amazon environment to a Microsoft Azure environment with their offerings, because they've got so much unique Band-Aid, excuse me, in the background just holding the system up that is native to one of those. >>Good. I'm going to summarize what I'm hearing from you: you're the Ferrari of databases, as we've always known, you're object store agnostic, and it's the cloud experience that you can bring on-prem and to virtually any cloud, all the popular clouds, hybrid, you know, AWS, Azure, now Google, or on-prem, and in a variety of different data formats. And that combination, I think, is unique in the marketplace. Before we get into the news, I want to ask you about data silos. You mentioned HDFS, where you and I met back in the early days of big data. In some respects, you know, Hadoop helped break down the silos by distributing the data and leaving it in place, and in other respects it created data lakes, which became silos. And so we still have all these silos that people are trying to get past, with digital transformation meaning putting data at their core, virtually obviously, and leaving it in place. What are your thoughts on that, in terms of Vertica being a silo buster? How does Vertica play there? >>Yeah, and you're absolutely right. I think if you look at Hadoop, for all the new data that got into Hadoop, in many ways it created yet another large island of data that many organizations are struggling with, because it's separate from their core traditional data warehouse, it's separate from some of the operational systems that they have. And so there might be a lot of data in there, but they're still struggling with, how do I break it out of that large silo, or combine it again? I think some of the things that Vertica has done as part of the announcements, like the Vertica 10 migration tools, make it really easy if you do want to move it from one platform to another, into Vertica. But you don't have to move it, you can actually take advantage of a lot of the data where it resides with Vertica, especially in the Hadoop realm, with our external table support and our built-in ORC and Parquet readers, natively. So we're very pragmatic about how our customers go about this. Many of them tried it with Hadoop and realized that didn't work, but very few customers want to do it wholesale, to just say we're going to throw everything out, we're going to get rid of our data warehouse, we're going to hit the pause button and go from there. It's just not possible to do that. So we've spent a lot of time investing in the product to really work with them, to go where the data is, and then seamlessly migrate when it makes sense to migrate. You mentioned the performance of Vertica, and you talked about it as the Ferrari. It definitely is.
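For readers who want to see what the external table capability described above looks like in practice, here is a minimal sketch using the open source vertica-python client. The connection details, table definition, and S3 path are illustrative placeholders, and the CREATE EXTERNAL TABLE ... AS COPY ... PARQUET statement follows Vertica's documented support for querying Parquet in place; verify the exact options against your Vertica release.

import vertica_python

# Illustrative connection settings; replace with real values.
conn_info = {
    "host": "vertica.example.com",
    "port": 5433,
    "user": "dbadmin",
    "password": "secret",
    "database": "analytics",
}

conn = vertica_python.connect(**conn_info)
cur = conn.cursor()

# Define an external table over Parquet files that already live in S3.
# No data is loaded into Vertica-managed storage; queries read it in place.
cur.execute("""
    CREATE EXTERNAL TABLE web_events (
        event_time TIMESTAMP,
        user_id    INT,
        page       VARCHAR(256)
    )
    AS COPY FROM 's3://my-data-lake/web_events/*.parquet' PARQUET
""")

# Query it like any other table.
cur.execute("""
    SELECT page, COUNT(*) AS views
    FROM web_events
    GROUP BY page
    ORDER BY views DESC
    LIMIT 10
""")
for page, views in cur.fetchall():
    print(page, views)

conn.close()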
And one other thing that we're really proud of is that it actually is not a gas guzzler, either. One of the things that we're seeing with a lot of the other cloud databases is that, pound for pound, you can get Vertica running on a tenth of the hardware and get over 10x the performance. We're seeing that a lot, so it's not just about the performance, but it's about the efficiency as well. And I think that efficiency is really important when it comes to silos, because there's just only so much horsepower out there, and it's easy for companies to throw lots of servers at the environment when they start out. But for so many organizations in the cloud, frankly, looking at the bills they're getting from these cloud workloads that are running, they're really conscious of that. >>Yeah, the big energy companies love the gas guzzlers, a lot of cloud compute. But let's get into the news. Vertica 10, you shared it with the audience in your keynote, one of the highlights of the day. What do we need to know? >>Yeah, so, you know, again, doubling down on these mega trends, I'll start with machine learning and AI. We've done a lot of work to integrate so that you can take native PMML models, bring them into Vertica, run them massively parallel, and help shape, you know, your data and prepare it, do all the work that we know is required for true machine learning. And for all the hype that there is around it, this is real, you know, people want to do a lot of unsupervised machine learning, whether it's for healthcare, fraud detection, financial services. So we've doubled down on that. We now also support things like TensorFlow, and, you know, as I mentioned, we're not going to come up with the best algorithms. Our job is really to ensure that the algorithms people are coming up with can be incorporated, and that we can run them against massive data sets super efficiently. So that's number one. Number two, on object storage, we continue to support more object storage platforms for Eon mode. In the cloud, we're expanding to GCP, Google's cloud, beyond just Amazon, and on premise or in the cloud, we're now also supporting HDFS with Eon. Of course, we continue to have a great relationship with our partner Pure Storage on premise. We continue to invest in Eon mode especially. I'm not going to go through all the different things here, but it's not just sort of, hey, you support this and then you move on. There are so many different things that we learn about API calls and how to save our customers money, and tricks on performance, and so on. And then the third area, we definitely continue to build on that flexibility of deployment, which is related to Eon mode, as I described, but it's also about simplicity, and it's also about some of the migration tools that we've announced to make it easy to go from one platform to another. We have a great roadmap on ease of use, on security, on performance and scale. I mean, for us, those are the things that we're working on every single release. We probably don't talk about them as much as we need to, but obviously they're critically important. And so we constantly look at every component in this product. You know, Version 10 is a huge release for any product, especially an analytic database platform, and so we're just constantly revisiting, you know, some of the code base and figuring out how we can do it in new and better ways. And that's a big part of 10 as well.
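As a concrete illustration of the in-database machine learning and PMML support Colin describes, the sketch below registers an externally trained PMML model and scores a table in place. The IMPORT_MODELS and PREDICT_PMML calls follow the function names used in Vertica 10's documentation for PMML support, but treat the exact signatures, the model path, and the column names as assumptions to verify against your own environment.

import vertica_python

conn = vertica_python.connect(host="vertica.example.com", port=5433,
                              user="dbadmin", password="secret",
                              database="analytics")
cur = conn.cursor()

# Register a PMML model file that sits on a node's filesystem (placeholder path).
cur.execute(
    "SELECT IMPORT_MODELS('/models/churn_model.pmml' "
    "USING PARAMETERS category='PMML')"
)

# Score rows in place, in parallel, without moving the data out of the database.
cur.execute("""
    SELECT customer_id,
           PREDICT_PMML(tenure, monthly_spend, support_calls
                        USING PARAMETERS model_name='churn_model') AS churn_score
    FROM customers
    LIMIT 5
""")
print(cur.fetchall())

conn.close()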
>>I'm glad you brought up the machine intelligence, the machine learning and AI piece, because we would agree, it's really one of the things we've noticed, you know, the new innovation cocktail. It's not being driven by Moore's Law anymore, it's really a combination of: you've collected all this data over the last 10 years through Hadoop and other data stores, object stores, etcetera, and now you're applying machine intelligence to that, and then you've got the cloud for scale. And of course, we talked about you bringing the cloud experience, whether it's on-prem or hybrid, etcetera. The reason why I think this is important, and I wanted to get your take on this, is because you do see a lot of emerging analytic databases, cloud native. Yes, they do suck up, you know, a lot of compute, but they also add a lot of value. And I really wanted to understand how you guys play in that new trend, that sort of cloud database, high performance, bringing in machine learning and AI and ML tools, and then driving, you know, turning data into insights. And from what I'm hearing, you play directly in that, and your differentiation is a lot of the things that we talked about, including the ability to do that on-prem and in the cloud and across clouds. >>Yeah, I mean, I think that's a great point. We are a great cloud database. We run very well on the three major clouds, and you could argue some of the other clouds as well in other parts of the world. If you talk to our customers, and we have hundreds of customers who are running Vertica in the cloud, the experience is very good. It could always be better. We've invested a lot in taking advantage of the native cloud ecosystem, so that provisioning and managing Vertica is seamless when you're in that environment, and we'll continue to do that. But Vertica, excuse me, as a cloud platform is phenomenal. And, you know, there's a lot of confusion out there. I think there's a lot of marketing dollars spent, and I won't name many of the companies here, you know who they are, you know, the cloud-native data warehouse, and it's true, you know, they're software as a service. But if you talk to a lot of our customers, they're getting very good and very similar experiences with Vertica. We stop short of saying we're software as a service, because ultimately our customers have that control and flexibility, they're putting Vertica on whichever cloud they want to run it on and managing it. Stay tuned on that, I think you'll hear more from us about, you know, that going even further. But, you know, we do really well in the cloud, and I think Eon, so much of Eon, and, you know, this has really been a sort of two-and-a-half-year endeavor for us, but so much of Eon was designed around the cloud, was designed around cloud data lakes, S3, separation of compute and storage. And if you look at the work that we're doing around containerization and a lot of these other elements, it just takes that to the next level. And there's a lot of great work there, so I think we're going to continue to get better at cloud, but I would argue that we're already, and have been for some time, very good at being a cloud analytic data platform. >>Well, since you opened the door, I've got to ask you. I hear you from a performance and architectural perspective, but you're also alluding to, I think, something else. I don't know what you can share with us. You said stay tuned on that.
But I think you're talking about optionality, maybe different consumption models. Am I getting that right, and what can you share? >>You're definitely in the right area, and actually, I'm glad you brought that up. I think a huge part of cloud also has nothing to do with the technology, I think it's how you consume the product. Some companies want to rent the product, and they want to rent it for a certain period of time, and so we allow our customers to do that. We have incredibly flexible models of how you provision and purchase our product, and I think that helps a lot. You know, I am opening the door a little bit, but look, we have customers that ask us whether we can offer them more, you know, offer them platforms, all in. We've had customers come to us and say, please take over our systems and offer something as a distribution. As I said, though, I think one thing that we've been really good at is focusing on what is our core and where we really offer value. But I can tell you that we introduced something called the Vertica Advisor Tool this year. One of the things that the Advisor Tool does is collect information from our customer environments, on premise or in the cloud, and we run it through our own machine learning. We analyze the customer's environment and we make some recommendations automatically. And a lot of our customers have said to us, you know, it's funny, we've tried managed services, we've tried SaaS offerings, and you guys blow them away in terms of your ability to help us, like, automatically manage the Vertica environment and the system. Why don't you guys just take this product and convert it into a SaaS offering? So I won't go much further than that, but you can imagine that there's a lot of innovation and a lot of thought going into how we can do that. But there's no reason that we have to wait to do that, and being able to offer our on-premise customers that same sort of experience from a managed capability is something that we spend a lot of time thinking about as well. So again, it's just back to the automation, the ease of use, the going above and beyond. It's really exciting to have an analytic platform, because we can do so much automation off of it ourselves, and just like we're doing with the Advisor Tool, we're drinking our own Kool-Aid, or our own champagne, however you want to say it, to, in fact, tune up and solve some optimization for our customers automatically. And I think you're going to see that continue, and I think that could work really well in a bunch of different ways. >>Well, just on a personal note, I've always enjoyed our conversations, I've learned a lot from you over the years. I'm bummed that we can't hang out in Boston, but hopefully soon this will blow over. I loved last summer when we got together, we had the Vertica throwback, we had Stonebraker, Palmer, Lynch, and Mahony, we did a great series, and that was a lot of fun. So it's really been a pleasure, and thanks so much. Stay safe out there, and we'll talk to you soon. >>Yeah, you too, Dave, stay safe. I really appreciate the opportunity, and, you know, this is what it's all about, it's a lot of fun. I know we're going to see each other in person soon, and it's the people in the community that really make this happen, so I'm looking forward to that, but I really appreciate it. >>All right, and thank you, everybody, for watching. This is theCUBE's coverage of the Vertica Big Data Conference, gone virtual, going digital. I'm Dave Volante.
We'll be right back right after this short break. >>Yeah.

Published Date : Mar 31 2020

Ben White, Domo | Virtual Vertica BDC 2020


 

>> Announcer: It's theCUBE covering the Virtual Vertica Big Data Conference 2020, brought to you by Vertica. >> Hi, everybody. Welcome to this digital coverage of the Vertica Big Data Conference. You're watching theCUBE and my name is Dave Volante. It's my pleasure to invite in Ben White, who's the Senior Database Engineer at Domo. Ben, great to see you, man. Thanks for coming on. >> Great to be here and here. >> You know, as I said, you know, earlier when we were off-camera, I really was hoping I could meet you face-to-face in Boston this year, but hey, I'll take it, and, you know, our community really wants to hear from experts like yourself. But let's start with Domo as the company. Share with us what Domo does and what your role is there. >> Well, if I can go straight to the official what Domo does is we provide, we process data at BI scale, we-we-we provide BI leverage at cloud scale in record time. And so what that means is, you know, we are a business-operating system where we provide a number of analytical abilities to companies of all sizes. But we do that at cloud scale and so I think that differentiates us quite a bit. >> So a lot of your work, if I understand it, and just in terms of understanding what Domo does, there's a lot of pressure in terms of being real-time. It's not, like, you sometimes don't know what's coming at you, so it's ad-hoc. I wonder if you could sort of talk about that, confirm that, maybe add a little color to it. >> Yeah, absolutely, absolutely. That's probably the biggest challenge it is to being, to operating Domo is that it is an ad hoc environment. And certainly what that means, is that you've got analysts and executives that are able to submit their own queries with out very... With very few limitations. So from an engineering standpoint, that challenge in that of course is that you don't have this predictable dashboard to plan for, when it comes to performance planning. So it definitely presents some challenges for us that we've done some pretty unique things, I think, to address those. >> So it sounds like your background fits well with that. I understand your people have called you a database whisperer and an envelope pusher. What does that mean to a DBA in this day and age? >> The whisperer part is probably a lost art, in the sense that it's not really sustainable, right? The idea that, you know, whatever it is I'm able to do with the database, it has to be repeatable. And so that's really where analytics comes in, right? That's where pushing the envelope comes in. And in a lot of ways that's where Vertica comes in with this open architecture. And so as a person who has a reputation for saying, "I understand this is what our limitations should be, but I think we can do more." Having a platform like Vertica, with such an open architecture, kind of lets you push those limits quite a bit. >> I mean I've always felt like, you know, Vertica, when I first saw the stone breaker architecture and talked to some of the early founders, I always felt like it was the Ferrari of databases, certainly at the time. And it sounds like you guys use it in that regard. But talk a little bit more about how you use Vertica, why, you know, why MPP, why Vertica? You know, why-why can't you do this with RDBMS? Educate us, a little bit, on, sort of, the basics. >> For us it was, part of what I mentioned when we started, when we talked about the very nature of the Domo platform, where there's an incredible amount of resiliency required. 
And so Vertica, the MPP platform, of course, allows us to build individual database clusters that can perform best for the workload that might be assigned to them. So the open, the expandable, the... The-the ability to grow Vertica, right, as your base grows, those are all important factors when you're choosing early on, right? Without a real idea of how growth would be or what it will look like. If you were kind of throwing something out into the dark, you look at the Vertica platform and you can see, well, as I grow, I can, kind of, build with this, right? I can do some unique things with the platform in terms of this open architecture that will allow me to not have to make all my decisions today, right? (mutters) >> So, you're using Vertica, I know, at least in part, you're working with AWS as well, can you describe sort of your environment? Do you keep anything on-prem, is everything in cloud? What's your set up look like? >> Sure, we have a hybrid cloud environment where we have a significant presence in the public cloud and in our own private cloud. And so, yeah, having said that, we certainly have a really extensive presence, I would say, in AWS. So, they're definitely the partner of ours when it comes to providing the databases and the server power that we need to operate. >> From a standpoint of engineering and architecting a database, what were some of the challenges that you faced when you had to create that hybrid architecture? What did you face and how did you overcome that? >> Well, you know, some of the... There were some things we faced in terms of, one, it made it easy that Vertica and AWS have their own... They play well together, we'll say that. And so, Vertica was designed to work on AWS. So that part of it took care of itself. Now, our own private cloud and being able to connect that to our public cloud has been a part of our own engineering abilities. And again, I don't want to make light of it, it's certainly not impossible. And so we... Some of the challenges that pertain to the database really were in the early days, that you mentioned, when we talked a little bit earlier about Vertica's most recent Eon mode. And I'm sure you'll get to that. But when I think of early challenges, some of the early challenges were the architecture of enterprise mode. When I talk about all of these, this idea that we can have unique databases or database clusters of different sizes, or this elasticity, because really, if you know the enterprise architecture, that's not necessarily what enterprise mode gives you. So we had to do some unique things, I think, to overcome that, right, early. To get around the rigidness of enterprise. >> Yeah, I mean, I hear you. Right? Enterprise is complex and you like when things are hardened and fossilized but, in your ad hoc environment, that's not what you needed. So talk more about Eon mode. What is Eon mode for you and how do you apply it? What are some of the challenges and opportunities there, that you've found? >> So, the opportunities were certainly in this elastic architecture and the ability to separate the storage, which immediately meant that for some of the unique data paths that we wanted to take, right? We could do that fairly quickly. Certainly we could expand databases, right, quickly. More importantly, now you can reduce. Because previously, in the past, right, when I mentioned the enterprise architecture, the idea of growing a database in itself has its pain. As far as the time it takes to (mumbles) the data, and that.
Then think about taking that database back down and (telephone interference). All of a sudden, with Eon, right, we had this elasticity, where you could, kind of, start to think about auto scaling, where you can go up and down and maybe you could save some money, or maybe you could improve performance, or maybe you could meet demand at a time when customers need it most, in a real way, right? So it's definitely a game changer in that regard. >> I always love to talk to the customers because I get to, you know, I hear from the vendor what they say, and then I like to, sort of, validate it. So, you know, Vertica talks a lot about separating compute and storage, and they're not the only one, from an architectural standpoint, who does that. But Vertica stresses it. They're the only one that does that with a hybrid architecture. They can do it on-prem, they can do it in the cloud. From your experience, well, first of all, is that true? You may or may not know, but is that advantageous to you, and if so, why? >> Well, first of all, it's certainly true. Earlier, in some of the original beta testing for the on-prem Eon mode that we... I was able to participate in it and be aware of it. So it's certainly a reality, it's actually supported on Pure Storage with FlashBlade and it's quite impressive. You know, as for who that will be for, tough one. It's probably a question that Vertica is still answering, but I think, obviously, some enterprise users that probably have some hybrid cloud, right? They have some architecture, they have some hardware, that they themselves want to make use of. We certainly would probably fit into one of their, you know, their market segments. That they would say that we might be the ones to look at on-prem Eon mode. Again, the beauty of it is, the elasticity, right? The idea that you could have this... So a lot of times... So I want to go back real quick to separating compute. >> Sure. Great.
And you can do that, when you're so easily able to decouple the compute and put it where you want, right? And likewise, if you have a down period where customers aren't using it, you'd like to be able to not use that, if you no longer require it, you're not going to get it back. 'Cause it-it opened the door to a lot of those things that allowed performance and process department to meet up. >> I wonder if I can ask you a question, you mentioned Pure a couple of times, are you using Pure FlashBlade on-prem, is that correct? >> That is the solution that is supported, that is supported by Vertica for the on-prem. (cuts out) So at this point, we have been discussing with them about some our own POCs for that. Before, again, we're back to the idea of how do we see ourselves using it? And so we certainly discuss the feasibility of bringing it in and giving it the (mumbles). But that's not something we're... Heavily on right now. >> And what is Domo for Domo? Tell us about that. >> Well it really started as this idea, even in the company, where we say, we should be using Domo in our everyday business. From the sales folk to the marketing folk, right. Everybody is going to use Domo, it's a business platform. For us in engineering team, it was kind of like, well if we use Domo, say for instance, to be better at the database engineers, now we've pointed Domo at itself, right? Vertica's running Domo in the background to some degree and then we turn around and say, "Hey Domo, how can we better at running you?" So it became this kind of cool thing we'd play with. We're now able to put some, some methods together where we can actually do that, right. Where we can monitor using our platform, that's really good at processing large amounts of data and spitting out useful analytics, right. We take those analytics down, make recommendation changes at the-- For now, you've got Domo for Domo happening and it allows us to sit at home and work. Now, even when we have to, even before we had to. >> Well, you know, look. Look at us here. Right? We couldn't meet in Boston physically, we're now meeting remote. You're on a hot spot because you've got some weather in your satellite internet in Atlanta and we're having a great conversation. So-so, we're here with Ben White, who's a senior database engineer at Domo. I want to ask you about some of the envelope pushing that you've done around autonomous. You hear that word thrown around a lot. Means a lot of things to a lot of different people. How do you look at autonomous? And how does it fit with eon and some of the other things you're doing? >> You know, I... Autonomous and the idea idea of autonomy is something that I don't even know if that I have already, ready to define. And so, even in my discussion, I often mention it as a road to it. Because exactly where it is, it's hard to pin down, because there's always this idea of how much trust do you give, right, to the system or how much, how much is truly autonomous? How much already is being intervened by us, the engineers. So I do hedge on using that. But on this road towards autonomy, when we look at, what we're, how we're using Domo. And even what that really means for Vertica, because in a lot of my examples and a lot of the things that we've engineered at Domo, were designed to maybe overcome something that I thought was a limitation thing. And so many times as we've done that, Vertica has kind of met us. 
Like right after we've kind of engineered our architecture stuff, that we thought that could help on our side, Vertica has a release that kind of addresses it. So, the autonomy idea and the idea that we could analyze metadata, make recommendations, and then execute those recommendations without innervation, is that road to autonomy. Once the database is properly able to do that, you could see in our ad hoc environment how that would be pretty useful, where with literally millions of queries every hour, trying to figure out what's the best, you know, profile. >> You know for- >> (overlapping) probably do a better job in that, than we could. >> For years I felt like IT folks sometimes were really, did not want that automation, they wanted the knobs to turn. But I wonder if you can comment. I feel as though the level of complexity now, with cloud, with on-prem, with, you know, hybrid, multicloud, the scale, the speed, the real time, it just gets, the pace is just too much for humans. And so, it's almost like the industry is going to have to capitulate to the machine. And then, really trust the machine. But I'm still sensing, from you, a little bit of hesitation there, but light at the end of the tunnel. I wonder if you can comment? >> Sure. I think the light at the end of the tunnel is even in the recent months and recent... We've really begin to incorporate more machine learning and artificial intelligence into the model, right. And back to what we're saying. So I do feel that we're getting closer to finding conditions that we don't know about. Because right now our system is kind of a rule, rules based system, where we've said, "Well these are the things we should be looking for, these are the things that we think are a problem." To mature to the point where the database is recognizing anomalies and taking on pattern (mutters). These are problems you didn't know happen. And that's kind of the next step, right. Identifying the things you didn't know. And that's the path we're on now. And it's probably more exciting even than, kind of, nailing down all the things you think you know. We figure out what we don't know yet. >> So I want to close with, I know you're a prominent member of the, a respected member of the Vertica Customer Advisory Board, and you know, without divulging anything confidential, what are the kinds of things that you want Vertica to do going forward? >> Oh, I think, some of the in dated base for autonomy. The ability to take some of the recommendations that we know can derive from the metadata that already exists in the platform and start to execute some of the recommendations. And another thing we've talked about, and I've been pretty open about talking to it, talking about it, is the, a new version of the database designer, I think, is something that I'm sure they're working on. Lightweight, something that can give us that database design without the overhead. Those are two things, I think, as they nail or basically the database designer, as they respect that, they'll really have all the components in play to do in based autonomy. And I think that's, to some degree, where they're heading. >> Nice. Well Ben, listen, I really appreciate you coming on. You're a thought leader, you're very open, open minded, Vertica is, you know, a really open community. I mean, they've always been quite transparent in terms of where they're going. It's just awesome to have guys like you on theCUBE to-to share with our community. 
So thank you so much and hopefully we can meet face-to-face shortly. >> Absolutely. Well you stay safe in Boston, one of my favorite towns and so no doubt, when the doors get back open, I'll be coming down. Or coming up as it were. >> Take care. All right, and thank you for watching everybody. Dave Volante with theCUBE, we're here covering the Virtual Vertica Big Data Conference. (electronic music)
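Ben's "Domo for Domo" idea, pointing analytics back at the database that runs them, can be approximated with Vertica's own monitoring schema. The sketch below is our illustration rather than Domo's actual implementation: it applies one simple rule, in the spirit of the rules based system Ben describes, by pulling the slowest statements of the last hour from v_monitor.query_requests. The connection details and the one-minute threshold are placeholders, and the column names should be checked against your Vertica version's documentation.

import vertica_python

SLOW_MS = 60000  # arbitrary rule: flag statements that ran for more than a minute

conn = vertica_python.connect(host="vertica.example.com", port=5433,
                              user="dbadmin", password="secret",
                              database="domo_meta")
cur = conn.cursor()

# Pull the heaviest recent statements from Vertica's monitoring schema.
cur.execute(f"""
    SELECT user_name,
           request_duration_ms,
           LEFT(request, 120) AS request_snippet
    FROM v_monitor.query_requests
    WHERE start_timestamp > NOW() - INTERVAL '1 hour'
      AND request_duration_ms > {SLOW_MS}
    ORDER BY request_duration_ms DESC
    LIMIT 20
""")

for user, duration_ms, snippet in cur.fetchall():
    # A fuller system would turn these into recommendations
    # (projections, resource pools, and so on) instead of just printing them.
    print(f"{user}: {duration_ms} ms :: {snippet}")

conn.close()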

Published Date : Mar 31 2020




Mike Miller, AWS | AWS re:Invent 2019


 

>> Announcer: Live from Las Vegas, it's theCUBE! Covering AWS re:Invent 2019. Brought to you by Amazon Web Services and Intel, along with its ecosystem partners. >> Hey welcome back, everyone, it's theCUBE's coverage here live in Las Vegas for re:Invent 2019, this is theCUBE's seventh year covering re:Invent, the event's only been going for eight years, it feels like a decade, so much growth, so much action, I'm John Furrier with my co-host Dave Vellante, here extracting the signal from the noise in the Intel AWS studio of theCUBE, thank you for that sponsorship. Mike Miller is our next guest, he's director of AI devices at AWS, super excited for this segment, because DeepRacer's here, and we got some music, AI is the front and center, great to see you again, thanks for coming on. >> Absolutely, thank you for having me on again, I appreciate it. >> All right, let's just jump right in, the toys. Developers are geeking out over DeepRacer and the toys you guys are putting out there as a fun way to play and learn. >> Absolutely, getting hands-on with these new broadly applicable machine learning technologies. >> Let's jump into DeepRacer, so first of all, give us a quick update on what's happened between last year and this year in the DeepRacer community, there's been a lot of froth, competitiveness, street battles, and then we'll get an update, give us a quick update on the community. >> So we launched DeepRacer last year as a 1/18 scale race car designed to teach reinforcement learning, so this thing drives by itself around the tracks. We've got an online experience where customers can train models, so we launched a DeepRacer league where we plan to visit 22 sites around the world at AWS summits, where developers can come visit us and race a car physically around a track, and we had online contests, so every month we had a new track for developers to be challenged by and race their cars around the track. We've seen tremendous engagement and excitement, a little bit of competition really gets developers' juices going. >> It's been a lot of fun, congratulations, by the way. >> Absolutely, thank you. >> All right, let's get into the new toy, so DeepRacer 2.0, whatever you're calling it, just DeepRacer-- >> DeepRacer Evo. >> Evo, okay. >> New generation, so we've basically provided more opportunities to race for developers, more challenges for them to learn, and more ways for them to win. So we integrated some new sensors on this car, so on top there's a LIDAR, which is a laser range finding device that can detect other cars or obstacles in the rear of the car and to the sides, and in the front of the car we have stereo cameras that we added so that the car can sense depth in front of it, so with those new sensors, developers can now be challenged by integrating depth sensing and object avoidance and head to head racing into their machine learning models. >> So currently it's not an obstacle course, correct, it's a race track, right? >> So we call it a time trial, so it's a single car on the track at a time, how fast can you make a lap, our world record actually is 7.44 seconds, set by a young lady from Tokyo this past year, really exciting. >> And she was holding up the trophy and said this is basically a dream come true. And so, what are they trying to optimize, is it just the speed at the turn, what are they sort of focused on? 
>> Yeah, it's a little bit of art and a little bit of science, so there's the reinforcement learning model that learns through what's called a reward function, so you give the car rewards for achieving specific objectives, or certain behaviors, and so it's really up to the developer to decide what kind of behaviors they want to reward the car for, whether it's stay close to the center line, reduce the amount of turns; they can also determine its position on the track, and so they can reward it for cutting corners close, speeding up or slowing down, so it's really a little bit of art and science through some experimentation and deciding. >> So we had Intel on yesterday, talking about some of their AI, Naveen Rao, great guy, but they were introducing this concept called GANs, Generative Adversarial Networks, which is kind of like neural network technology, a lot of computer science in some of the tech here, this is not a kiddie scripting kind of thing, this is the real deal. >> Yeah, so GANs actually formed the basis of the product that we just announced this year called DeepComposer, so DeepComposer is a keyboard and a cloud service designed to work together to teach developers about generative AI, and GANs are the technique that we teach developers. So what's interesting about generative AI is that machine learning moves from a predictions-based technology to something that can actually create new content, so create new music, new stories, new art, but also companies are using generative AI to do more practical things, like take a sketch and turn it into a 3D model, or automatically colorize black and white photos; Autodesk even has a generative design product, where an industrial designer can give a product some constraints and it'll generate hundreds of ideas for the design. >> Now this is interesting to me, because I think this takes it from what I call basic machine learning to really some more advanced practical examples, which is super exciting for people learning AI and machine learning. Can you talk about the composer and how it works, because pretend I'm just a musician, I'm 16 years old, I'm composing music, I got a keyboard, how can I get involved, what would be a path, do I buy a composer device, do I link it to Ableton Live and these tools that are out there, there's a variety of different techniques, can you take us through the use case? >> Yeah, so really our target customer for this is an aspiring machine learning developer, maybe not necessarily a musician. So any developer, whether they have musical experience or a machine learning background, can use the DeepComposer system to learn about the generative AI techniques. So GANs are comprised of these two networks that have to be trained in coordination, and what we do with DeepComposer is we walk developers through exactly how to set up that structure, how these two things train, and how it's different from traditional machine learning, where you've got a large data set and you're training a single model to make a prediction. How do these multiple networks actually work against each other, and how do you make sure that they're generating new content that's actually of the quality that you want? And so that's really the essence of the Generative Adversarial Networks and these two networks that work against each other. >> So a young musician who happens to like machine learning. >> So if I give this to my kid, he'll get hooked on machine learning? That's good for the college apps.
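To make the "two networks trained in coordination" idea concrete, here is a minimal GAN sketch in PyTorch. It trains a toy generator and discriminator on synthetic one-dimensional data; this is not DeepComposer's actual model or training code, and the layer sizes, learning rates, and data distribution are illustrative assumptions.

```python
# Toy GAN: a generator learns to mimic a 1-D Gaussian while a discriminator
# learns to tell real samples from generated ones. Illustrative only; this is
# not the DeepComposer architecture, just the adversarial training pattern.
import torch
import torch.nn as nn

latent_dim = 8

generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, 1),                      # produces a fake "sample"
)
discriminator = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),        # probability the input is real
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data drawn from N(3, 0.5)
    noise = torch.randn(64, latent_dim)
    fake = generator(noise)

    # 1) Train the discriminator to separate real from fake.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

# After training, generated samples should drift toward the real mean (~3.0).
print(generator(torch.randn(5, latent_dim)).detach())
```

The same adversarial pattern scales up to the music case: swap the toy vectors for piano-roll style tensors and the linear layers for whatever networks the learning experience walks developers through.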
>> Plug in his Looper and set two systems working together or against each other. >> When we start getting to visualization, that's going to be very interesting when you start getting the data at the fundamental level, now this is early days. Some would say day zero, because this is really early. How do you explain that to developers, and people you're trying to get attention to, because this is certainly exciting stuff, it's fun, playful, but it's got some nerd action in it, it's got some tech, what are some of the conversations you're having with folks when they say "Hey, how do I get involved, why should I get involved," and what's really going to be the impact, what's the result of all this? >> Yeah, well it's fascinating because through Amazon's 20 years of artificial intelligence investments, we've learned a lot, and we've got thousands of engineers working on artificial intelligence and machine learning, and what we want to do is try to take a lot of that knowledge and the experiences that those folks have learned through these years, and figure out how we can bring them to developers of all skill levels, so developers who don't know machine learning, through developers who might be data scientists and have some experience, we want to build tools that are engaging and tactile and actually tangible for them to learn and see the results of what machine learning can do, so in the DeepComposer case it's how do these generative networks actually create net new content, in this case music. For DeepRacer, how does reinforcement learning actually translate from a simulated environment to the real world, and how might that be applicable for, let's say, robotics applications? So it's really about reducing the learning curve and making it easy for developers to get started. >> But there is a bridge to real world applications in all this, it's a machine learning linchpin. >> Absolutely, and you can just look at all of the innovations that are being done from Amazon and from our customers, whether they're based on improving product recommendations, forecasting, streamlining supply chains, generating training data, all of these things are really practical applications. >> So what's happening at the device, and what's happening in the cloud, can you help us understand that? >> Sure, so in DeepComposer, the device is really just a way to input a signal, and in this case it's a MIDI signal, so MIDI is a digital audio format that allows machines to kind of understand music. So the keyboard allows you to input MIDI into the generative network, and then in the cloud, we've got the generative network takes that input, processes it, and then generates four-part accompaniments for the input that you provide, so say you play a little melody on the keyboard, we're going to generate a drum track, a guitar track, a keyboard track, maybe a synthesizer track, and let you play those back to hear how your input inspired the generation of this music. >> So GANs is a big deal with this. >> Absolutely, it forms the basis of the first technique that we're teaching using DeepComposer. >> All right, so I got to ask you the question that's on everyone's mind, including mine, what are some of the wackiest and/or coolest things you've seen this year with DeepComposer and DeepRacer because I can imagine developers' creativity straying off the reservation a little bit, any cool and wacky things you've seen? 
>> Well we've got some great stories of competitors in the DeepRacer league, so we've got father-son teams that come in and race at the New York summit, a 10 year old learning how to code with his dad. We had one competitor in the US who was at our Santa Clara summit, tried again at our Atlanta summit, and then at the Chicago summit finally won a position to come back to re:Invent and race. Last year, we did the race here at re:Invent, and the winning time, the lap time, a single lap was 51 seconds; the current world record is 7.44 seconds, and it's been just insane how these developers have been able to really optimize and generate models that drive this thing at incredible speeds around the track. >> I don't know if you've seen the movie Ford v Ferrari yet. You got to see that movie, because with this DeepRacer, you're going to need a stadium soon, with eSports booming, this has got its own legs for its own business. >> Well we've got six tracks set up down at the MGM Grand Arena, so we've already got the arena set up, and that's where we're doing all the knock-out rounds and competitors. >> And you mentioned father-son, you remember when we were kids, Cub Scouts, I think it was, or Boy Scouts, whatever it was, you had the pinewood derby, right, you'd make a car and file down the nails that you used for the axles, and they're taking it to a whole new level here. >> It's a modern-day version. >> All right, Mike, thanks for coming on, appreciate it, let's keep in touch. If you can get us some of that B-roll for any video, I'd love to get some B-roll of some DeepRacer photos, send 'em our way, super excited, love what you're doing, I think this is a great way to make it fun, instructive, and certainly very relevant. >> Absolutely, that's what we're after. Thank you for having me. >> All right, theCUBE's coverage here in Las Vegas for our seventh, Amazon's eighth re:Invent, we're documenting history as the ecosystem evolves, as the industry wave is coming, IoT edge, lot of cool things happening, we're bringing it to you, we're back with more coverage after this short break. (techno music)
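As a sidebar to the reward-function discussion earlier in the segment, here is a minimal sketch of the shape a DeepRacer reward function takes: a Python function that receives a params dictionary and returns a number. The parameter names follow AWS's published DeepRacer input parameters, but the distance markers and the small speed bonus are illustrative choices, not a recommended racing strategy.

```python
# Minimal DeepRacer-style reward function: reward staying near the center line,
# penalize leaving the track, and nudge the model toward carrying speed.
def reward_function(params):
    track_width = params['track_width']
    distance_from_center = params['distance_from_center']
    all_wheels_on_track = params['all_wheels_on_track']
    speed = params['speed']

    # Heavy penalty for leaving the track entirely.
    if not all_wheels_on_track:
        return 1e-3

    # Grade how close the car is to the center line (thresholds are arbitrary).
    marker_1 = 0.1 * track_width
    marker_2 = 0.25 * track_width
    marker_3 = 0.5 * track_width

    if distance_from_center <= marker_1:
        reward = 1.0
    elif distance_from_center <= marker_2:
        reward = 0.5
    elif distance_from_center <= marker_3:
        reward = 0.1
    else:
        reward = 1e-3  # likely about to go off track

    # Small bonus for speed: this is the "art" part developers experiment with.
    reward += 0.1 * speed

    return float(reward)
```

A function along these lines is what gets plugged into the training setup before the model runs laps in the simulator, and tuning those weights is where the experimentation the conversation describes happens.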

Published Date : Dec 4 2019


Ben Di Qual, Microsoft | Commvault GO 2019


 

>> Announcer: Live from Denver, Colorado, it's theCUBE, covering Commvault GO 2019, brought to you by Commvault. >> Hey, welcome back to theCUBE, I'm Lisa Martin with Stu Miniman, and we are coming to you live from Commvault GO 19. Pleased to welcome to theCUBE a guest from Microsoft Azure, we've got Ben Di Qual, principal program manager. Ben, welcome. Thank you, thanks for having me on. Thanks for coming on. So, Microsoft and Commvault, what's going on with the partnership? >> Commvault have been a great partner in the storage and data management space. We've been working with Commvault for 20 years now at Microsoft, and they've been working with us on Azure for about as long as I can remember; I've been on the Azure business for about seven years now, so that's just a long time in cloud terms, like dog years. They've been doing a huge amount there around getting customer data into the cloud, reducing costs, getting more resiliency, and then also letting customers do more with the data. So they're a pretty good partner to have, and they make it much easier for their customers to go and leverage cloud. >> So Ben, you know, in my career I've had lots of interactions with the Microsoft storage team. Things have changed a little bit when you're now talking about Azure, compared to when it was more the interaction with the operating system or the business suite. >> So maybe bring us up to date, for those people that might not have followed, on where the storage positioning sits inside of Microsoft now that we talk about Azure, and your title. >> Yeah, just briefly, we work very heavily with our on-premises brethren; they're actually inside, the OS team is inside of the Azure engineering org, which is kind of funny, but we do a load of things there. If you start looking at, firstly, on that hybrid side, we have things like Azure Files. It's a highly resilient, as-a-service SMB and NFS file share, up to a hundred terabytes, but it interacts directly with Windows Server to give you Azure File Sync, so there are sort of synergies there as well. What I'm doing personally, my team, we work on at-scale storage. The big thing we have in there is our Blob storage technology, which really is the underpinning technology for pretty much all storage in Azure, including our SaaS offerings, which are hosted on Azure too. So Disk is on Blob storage, our Files are on Blob storage, you look at Xbox Live, all these kinds of things are customers to us. So we build that out, and the work we're doing there, and how we do it, is really, really interesting. And that's not looking at going, we're going to buy some compute, we're going to buy some storage, we're going to build it out, we're going to run Windows or Hyper-V, or maybe VMware with Windows running on the VMware, whatever else. This is more a story about, we're going to provide you storage as a service. You get a minimum of, like, 11 nines of durability, and be able to have that scale to petabytes of capacity in one logical namespace, and give you multiple gigabytes, double-digit gigabytes, of throughput to that storage. >> And now we're even taking that out to multiple protocols. It's REST API-centric today, we've got the Azure Blob Storage API you can go and use, but we give you that consistency of the actual back-end storage, with the objects and the data available via more than just one protocol. You can go and access that via an HDFS API; we talk about data lakes all the time, and for us, our Blob storage is a data lake.
You turn on hierarchical namespace and you can go and access that via other protocols, like, as I mentioned, HDFS, as well. So that is the big story about what we want to do: we want to make that data available at crazy scale, have no limits in the end on capacity or throughput or performance, and over any protocol. That's kind of our line in the sand about what we want to get to. >> And we've been talking to the Commvault team about some of the solutions that they are putting in the cloud, the new offering, Metallic, that came out. They said if my customer has Azure storage, or storage from that other cloud provider, you could just go ahead and use that. Maybe tell us how familiar you are and how much you've been doing around Metallic. >> We work pretty tightly with the product team over at Commvault around this, my team as well, around how do we design it and how do we make it work the best, and we're going to continue working to optimize as they get beyond initial launch and go, wow, we've got data sets we can analyze, we know how we want to tune it. Now, really, we love the solution, particularly because, you know, the default, if you don't select the storage type where you want to go, is that you will run on Azure. So really, kudos to the relationship there. They chose us as the first place they'll go to, but they've also left the choice to customers. Some customers may want to take it to another cloud. That's fine, it's reasonable, I mean, we totally understand it's going to be a multi-cloud world, and that's a reality for any large company. Our goal is to make sure we're growing faster than the competitors, not to knock out the competitors altogether, because that just won't happen. So they've got that ability to go and, yeah, hey, we'll use Azure as the default, because they feel we're offering the best support and the best solution there. But then if that same customer wants to turn around and use a competitor of ours, that's fine as well. And I see people talking about that today, where they may want to mitigate risk and say, I'm doing Office 365, and taking an Office 365 backup. It's cool, you use Metallic, and it'll take it maybe to a different region in Asia, and they're backing up, and they're still going, well, I'm still all in on Microsoft. They may want to take it to another cloud, or even take it back to on-premises. So that does happen too, because we can get that data back in a different location just in case something happens. >> So, Metallic, talking about that, this new venture, right, it's a Commvault Venture, and we saw that the other day and thought, that's interesting. So we dug into it a little bit yesterday, and it's like a startup operating within a 20-year-old company, which is very interesting, not just from an incumbent customer perspective, but an incumbent partner perspective. How have you seen, over the last few years, and particularly in the last nine months, with big leadership and GTM changes for Commvault, how has the partnership with Microsoft evolved as a result of those changes? >> Um, it's always been interesting. I guess when you start looking at a venture and everything, things change a little bit, priorities may change, just to be fair, but we've had that tight relationship for a long time, and at a relationship level and an exec leadership level, nothing's really changed.
But in the way they're building this platform, we sit down, out of my team in the Azure engineering group, and we'll do things like ideations: here's where we see gaps in the market, here's what we believe could happen. And look, back in July we had Inspire, which is our partner conference in Las Vegas, and we sat down with their team, our team, in a room, talking about these kinds of things. And this is, I think, about two months after they may have started the initial development of Metallic, from what I understand, but we were talking about exactly what they're doing with Metallic, offered as a service in Azure, as, hey, how about we do this? So we think it's really cool. It opens up a new market to Commvault, I think, too. I mean, they're so strong in the enterprise, but they don't do much in the smaller businesses, because the full-featured product also has inherent complexity around it. So by doing Metallic as a click, click, next, done thing, they're really opening, I think, new markets for them and also for us as a partner. >> I want to click on that, because they developed this very quickly. This is something that, as we heard here yesterday, Metallic was kind of conceived, designed, and built in about six months. So in terms of, like, acceleration, that's kind of a new area for Commvault. >> Yeah, and I think they're really embracing the idea of, let's release our code in production for products which are getting to the point of, hey, the product is at the viable stage now, not minimum viable, viable, let's release in production, let's find out how customers are using it, and then let's keep optimizing and doing that constant iteration, taking that DevOps approach of, let's get it out there, let's get it launched, and then let's do these small batches of changes based on customer need, based on telemetry we can actually get in. We can't get the telemetry without having customers. So that's how it's going to keep working. So I think this initial product we see today is just going to keep evolving and improving as they get more data, as they get more information, more feedback, which is exactly what we want to see. >> Well, welcome to the cloud era, or something you've been living in for a number of years. Ben, I'd love to hear, you've been meeting with customers, they've been asking you questions, give us some of the things that are top of mind for some of the customers. What kinds of things do they come to Microsoft on, and how does that all fit together? >> There are many different conversations, and they'll interrelate. We'll go from talking about, you know, Python machine learning, or how AI fits in PowerPoint... >> Yeah. >> ...to things like, you know, when are we going to do incremental snapshots for managed disks, getting into the weeds on very infrastructure-centric stuff. We're seeing a range of conversations there. The big thing I keep seeing people call out and make assumptions about is that they're not going to be relevant because, cloud, I don't know cloud yet, I don't know this whole Kubernetes thing, containers, I don't really understand that as well as I think I need to, and AI, oh my gosh, what do we even do there, because everyone's throwing the words and terms around. But to be honest, I think what's still really evident is that cloud is still a tiny fraction of enterprise workloads.
So let's be honest, it's growing at a huge rate because it is that small fraction. So again, there's plenty of time for people to learn, but they shouldn't go and try to learn everything at once. It's not like you go and learn everything in the technology stack, from networking to development to database management to running a data center's power and cooling. You learn the things that are applicable to what you're trying to do, and the same thing goes for cloud. With any of these technologies, go and look at what you need to build for your business, take it to that step, and then go and find out the details at the level you want to know. And as someone who's been on Azure for, like I said, almost seven years, which is crazy long, that was literally like being in a startup inside of Microsoft when I joined, and I wasn't sure if I wanted to join a licensing company. It's been very evident to me that I will not say I'm an Azure expert, and I've been seven years on the platform. >> There are too many things for me to be an expert in everything on, and I think people sort of just have to realize that; anyone saying that, it's bravado, nothing else. The goal is, Microsoft as a platform provider, hopefully you've got the software and the solutions that make a lot of this easier for the customer, so hopefully they shouldn't need to become a Kubernetes expert, because it's baked into your platform. They shouldn't have to worry about some of these offerings, because it's SaaS. Most customers are there. There are some things you need to learn going from Exchange to Office 365, absolutely, there are some nuances and things like that, but once you get over that initial hurdle, it should be a little easier. I think that's right, and I think, going back to first principles, what is the highest level of abstraction that's viable for your business, or that application, or this workload, is a question that has to always be asked of everything. If it's like, well, cloud's not even viable, run it on premises; you don't need to apologize for not running in cloud. If IaaS is what's happening for you, because of security, because of application architecture, run it that way. Don't feel the need and the pressure to have to push it the other way. I think too many people get caught up in the shiny stuff up here, which is what, you know, 1% of people are doing, versus the other 99%, which is still happening in a lot of the areas we work in and have challenges in today. >> That's a great point that you bring up, because there are all the buzzwords, right? AI, machine learning, cloud, you've got to be cloud-ready, you've got to be data-driven. To your point, the customer is going, I just need to make sure that what we have set up for our business is going to allow our business, one, to remain relevant, but two, also be able to harness the power of the data that we have to extract new opportunities, new insights, and not get caught up with, shoot, should we be using automation, should we be using AI, everybody's talking about it. I liked that you brought up, and I find it very respectable, that you said, hey, I'm not an Azure expert, you've been there seven, seven dog years like you said. And I think that's what customers probably gain confidence in, is hearing the folks like you that they look to for that guidance and that leadership saying, no, I don't know everything there is to know.
But it gives them the confidence to trust you with that data, and also to trust you to help them make the right decisions for their business. >> Yeah, and we've got to do that. I mean, as a tech guy, I've loved seeing the changes. When I joined Microsoft, I wasn't even sure I really wanted to join this company, I was going to go join a startup instead, and I got asked at one stage in an interview, why do you want to join Microsoft, we see you've never applied. I'd never wanted to, a friend told me to come in, and it's just been amazing to see those changes, and I'm pretty proud of that. So when we talk about those things we're doing, I mean, I think there is no shame in going, I'm just going to lift and shift machines, because cloud's about flexibility. If you're doing it just on cost, you're probably doing it for the wrong reason. It's about that flexibility to go and do something, then change within months, and slowly make steps to make things better and better as you find a need, as you find the ability, whatever it may be. And one of the big things that we focus on right now with customers is we've got a product called Azure Advisor. It'll go and tell people when, one, you don't build things in a resilient manner, hey, do you know this is not HA because of this, and you can do this? It's like, great. It'll also tell you about security vulnerabilities, maybe you should have a gateway here for security, maybe you should do this, or this is not patched. But the big thing is that it also goes and tells you, hey, you're overspending, you don't need this much, you've provisioned, like, a Ferrari when you just need a Prius. Go and run a Prius, because it's going to do what you need, and you'll pay a lot less. And that's part of that trust, getting that understanding. And it's counterintuitive, but it's coming out of my team a lot too, which is great. Seeing how we've gone from dropping contracts and licenses, and basically, you know, once every three years I may call the customer, hey, how about a renewal, to now being focused on the customer's actual success, focused on their growth on Azure as a platform, our services' growth, like utilization, not sales, has been a huge change. It scared some people away, but it's brought a lot more people in, and that sort of counterintuitive spend-less-money thing actually leads, in the long term, to people using more. >> Absolutely. That's definitely not the shrink-wrap software company of Microsoft that I remember from the 90s, yeah, much like the Commvault of 2019 is not the same Commvault that many of us know from 15 years ago. >> A good mutual friend of ours, Simon, and myself, before I took this job, he and I sat down, we were having a beer and discussing the merits of whether or not to go and do it, things like that. Same with Commvault there. They're changing such a great deal with, you know, what they're putting in the cloud, what they're doing with the data, what they're trying to achieve with things like Hedvig for data management across on-premises and cloud, with microservices applications and stuff, going, hey, this won't work like this anymore when you're now doing it on premises and with containers. It's pretty good to see. I'm interested to see how they take that even further to their current audience, which is predominantly, you know, the IT pro, the data center admin, the storage manager.
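As a sidebar to the Azure Advisor point above, here is a rough sketch of pulling those recommendations programmatically, assuming the azure-identity and azure-mgmt-advisor Python packages. The subscription ID is a placeholder, and the exact model fields (category, impact, short_description) should be verified against the installed SDK version.

```python
# Sketch: list Azure Advisor recommendations and surface the cost-related ones,
# the "you provisioned a Ferrari, you just need a Prius" category. Assumes the
# azure-identity and azure-mgmt-advisor packages; field names may vary by version.
from azure.identity import DefaultAzureCredential
from azure.mgmt.advisor import AdvisorManagementClient

subscription_id = "<your-subscription-id>"  # placeholder
client = AdvisorManagementClient(DefaultAzureCredential(), subscription_id)

for rec in client.recommendations.list():
    if rec.category == "Cost":
        desc = rec.short_description.problem if rec.short_description else "(no description)"
        print(f"[{rec.impact}] {rec.impacted_value}: {desc}")
```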
>> It's funny, when you talk about the choice that customers have, there are those saying, we shouldn't be following the trends just because they're the trends. We actually interviewed, a couple of hours ago, one of Commvault's customers, an all-on-prem healthcare company, and he said, I want to make a sticker that says no cloud and proud, and that's just not something we normally hear. We always talk about cloud, but for a company to sit down and look at what's best for our business, whether it's, you know, FedRAMP certification challenges or HIPAA or GDPR or other compelling requirements to keep it on prem, it was just refreshing to hear this customer say that. >> Yeah, I mean, it's just appropriate for them. You do what's right for you, and, yeah, there's no shame in any of it. I mean, you definitely don't get fans by shaming people about not doing something, right? And I'm personally very happy to, you know, see the sort of hype around things like blockchain die down a little bit. It's a slow database unless you're using it for that specific case of a shared ledger. You know, things like that, where people went from, I have to know blockchain, to, now I have to know IoT. That hype gets people there, but it also causes a lot of anxiety, and it's good to see someone actually not be ashamed of it. And the ones who do take a step and use cloud, maybe it's in the business already, they're probably going to do it appropriately, because they have a reason, not just because they think it would be cool, right? >> Well, and how much inherent complexity does that bring in if somebody is really feeling pressured to follow those trends? And maybe that's when you end up with this hodgepodge of technologies that don't work well together, and you're spending way more. And as business IT folks are consumers, you know, consumers in their personal lives, they expect things to be accessible, visible, but also cost-efficient, because they have so much choice. >> Yeah, choice is hard. Just a conversation I was having recently, for example, we'll take storage, because of where we are, right? It's like, I'm running something on Azure, I'm using SUSE, I want an NFS mount point, which is available to me. Great, perfect, what do I use? It's like, well, you can use any one of these seven options, but what's the right choice? And that's the thing about being a platform company: we give you a lot of choices, but it's still up to you, or up to partners, who can really help the customers as well, to make the most appropriate choice. And I push back really hard on terms like best practices and things. I hate it, because again, it's making the assumption that this is the best thing to do. It's not. It's always about, you know, what are the patterns that have worked for other people, what are the anti-patterns, and what's the appropriate path for me to take? And that's actually how we're building our docs now too. So we keep focusing on our Azure technology, and one of the biggest things we've done is how we manage our documentation. It's all open sourced, it's all in markdown on GitHub. So you can go in and read a document from someone like myself who's doing product management, going, this is how to use this product, and say, actually, this bit's wrong, this bit needs to be like this, and you can go in yourself, even now, make a change, and we can go, oh yeah, and take that commit in, and do all this kind of stuff in that way. So we're constantly taking those documents in that way and getting real-time feedback from customers who are using it, not just ourselves in an echo chamber.
So you can go in and read a document from someone like myself is doing product management going, this is how to use this product and you're actually, this bit's wrong, this bit needs to be like this and you can go in yourself even now, make a change and we can go, Oh yeah and take that committed in and dual this kind of stuff in that way. So we're constantly taking those documents in that way and getting realtime feedback from customers who are using it, not just ourself in an echo chamber. >>So you get this great insight and visibility that you never had before. Well, Ben, thank you, Georgie stew and me on the queue this afternoon. Excited to hear what's coming up next for Azure. Makes appreciate your time. Thank you for steam and event. I, Lisa Martin, you're watching the cue from Convault go 19.

Published Date : Oct 16 2019

SUMMARY :

com vault go 2019 brought to you by Combolt. Hey, welcome back to the cube at Lisa Martin with Steve men and men and we are coming to you alive So they're a pretty good partner to have and they make it much easy for their So Ben, you know, in my career I've had lots of interactions but that interacts directly with windows server to give you Azure file sync. And and be able to have that scale to petabytes of capacity in one logical no limits in the end to the capacity or throughput or performance and over any you could just go ahead and use that. you know the default if you don't select the storage type where you want to go, you will run on Azure. So really sort of be cued off to the relationship there. How have you seen over the last few years and I guess when you start looking at adventure and everything seems to, I was going to add, you know, kind of click on that because they developed this very quickly. So that's how it's going to keep working. been meeting with customers, they've been asking you questions, gives us some of the, you know, some of the things that, we'll go from talking about, you know, Python machine learning or AI fits in PowerPoint. of is that they're not going to be relevant because cloud, You learn the things that are applicable to what you're trying to I think too many people get caught up in this shiny stuff up here, which is what you know 1% I liked that you brought up and I find asked to one stage in an interview going, why do you want to join Microsoft? Go and run a Prius because it's going to do what you need. from that to now being focused on the customer's actual success. might be similar to, you know, just as to get Convolt to 2019 is not the same combo that many of us you know, what they're putting in the cloud, what they're doing with the data, where they're trying to achieve with things like It's funny when you talked about just the choice that customers have and those saying, they're probably going to do it appropriately because have a reason, not just because we think this would be cool, And how much inherit and complexity does that bring in if somebody is really feeling pressured to And that's the thing about being a platform can be, we give you a lot of choices, So you can go in and read a document from someone like myself is doing product management going, So you get this great insight and visibility that you never had before.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
MicrosoftORGANIZATION

0.99+

Lisa MartinPERSON

0.99+

Ben Di QualPERSON

0.99+

JulyDATE

0.99+

Las VegasLOCATION

0.99+

AsiaLOCATION

0.99+

seven yearsQUANTITY

0.99+

StevePERSON

0.99+

BenPERSON

0.99+

sevenQUANTITY

0.99+

20 yearsQUANTITY

0.99+

ConvoltORGANIZATION

0.99+

ComboltORGANIZATION

0.99+

99%QUANTITY

0.99+

PowerPointTITLE

0.99+

1%QUANTITY

0.99+

seven optionsQUANTITY

0.99+

2019DATE

0.99+

PythonTITLE

0.99+

Georgie stewPERSON

0.99+

oneQUANTITY

0.99+

SimonPERSON

0.99+

todayDATE

0.99+

FerrariORGANIZATION

0.99+

Denver, ColoradoLOCATION

0.99+

GDPRTITLE

0.98+

yesterdayDATE

0.98+

TodayDATE

0.98+

about seven yearsQUANTITY

0.98+

90sDATE

0.98+

15 years agoDATE

0.98+

HIPAATITLE

0.98+

AzureTITLE

0.98+

ConvultORGANIZATION

0.97+

a hundred terabytesQUANTITY

0.97+

about six monthsQUANTITY

0.96+

one protocolQUANTITY

0.96+

SASORGANIZATION

0.96+

seven dog yearsQUANTITY

0.95+

AzureORGANIZATION

0.93+

one logical namespaceQUANTITY

0.92+

20 year oldQUANTITY

0.91+

sixQUANTITY

0.9+

this afternoonDATE

0.89+

DawnORGANIZATION

0.88+

steamPERSON

0.87+

one stageQUANTITY

0.86+

last nine monthsDATE

0.85+

almost seven yearsQUANTITY

0.85+

three yearsQUANTITY

0.84+

FedRAMPORGANIZATION

0.84+

couple of hours agoDATE

0.83+

windowsTITLE

0.82+

firstlyQUANTITY

0.81+

more thanQUANTITY

0.81+

Xbox liveCOMMERCIAL_ITEM

0.81+

first placeQUANTITY

0.81+

last few yearsDATE

0.8+

about two monthsQUANTITY

0.79+

SouzaORGANIZATION

0.77+

fiveQUANTITY

0.77+

odyPERSON

0.73+

Ben Di Qual, Microsoft | Commvault GO 2019


 

>>Live from Denver, Colorado. It's the cube covering com vault go 2019 brought to you by Combolt. >>Hey, welcome back to the Q but Lisa Martin with men and men and we are coming to you alive from Conn logo 19 please to welcome to the cube, a gent from Microsoft Azure. We've got Ben Nichol, principal program manager. Ben, welcome. Thank you. Thanks for having me on. Thanks for coming on. So Microsoft combo, what's going on with the partnership? >>They wouldn't have have great storage pond is in data management space. We've been working with Convolt for 20 years now in Microsoft and and they've been working with us on Azure for about as long as I can remember not being on that the Azure business RET seven years now. So just a long time in cloud terms like doggies and it sort of, they'd been doing a huge amount of their around getting customer data into the cloud, reducing costs, getting more resiliency and then also letting them do more with the data. So they were a pretty good partner to have and they make it much easy for their customers to to go and leverage cloud. So Ben, you know, in my career I've had lots of interactions with the Microsoft storage team. Things have changed a little bit when you're now talking about Azure compared to, you know, more. >>It was the interaction with the operating system or the business suite had. So maybe bring us up to date as those people that might not have followed. You know, we're kind of the storage positioning inside of Microsoft is now that when we talk about Azure and your title. Yeah, we, we sort of look and just just briefly, we worked very heavily with our on premises brethren. They actually inside the O S team is inside of the Azure engineering old male, which is kind of funny, but we do a lot of things there. If he started looking at, firstly on that hybrid side, we have things like Azure files. It's a highly resilient as a service SMB NFS file share up to a hundred terabytes but that interacts directly with windows server to give you Azure file sync. So there is sort of synergies there as well. When I'm doing personally my team, we work on scale storage. >>The big thing we have in there is Al is out blood storage technology, which really is the underpinning technology, full Priya tool storage and Azure which is including our SAS offerings which are hosted on Azure too. So disc is on blood storage, our files on blood storage, you look at Xbox live, all these kinds of stuff is a customer to us. So we build that out and we, we are doing work there and that's really, really interesting and how we do it and that's not looking at going we're going to buy some compute, we're going to buy some storage, we're going to build it out, we're going to run windows or hyper V or maybe VMware with windows running on the VMware, whatever else. This is more a story about wigging to provide you storage as a service. You didn't get a minimum of like 11 nines at your ability and and be able to have that scale to petabytes of capacity in one logical namespace and give you multiple gigabytes, double digit gigabytes of throughput to that storage. >>And now we're even moving about to model multiple protocols. So rest API century today we've got Azure stack storage, you pay API, she can go and use, but we give me that consistency of the actual back end storage and the objects and the data available via more than just one protocol. You can go and access that via HDFS API. As we talk about data lakes all the time. For us, our blood storage is a data Lake. 
We turn on hierarchal namespace and you can go and access that via our other protocols like as I mentioned HDFS as well. So that is a big story about what we want to do. We want to make that data available at crazy scale, have no limits in the end to the capacity or throughput or performance and over any protocol. That's kind of our line in the Hill about what we want to get to. >>And we've been talking to vault team about some of the solutions that they are putting in the cloud. The new offering metallic that came out. They said if my customer has Azure storage or storage from that other cloud provider, you could just go ahead and use that. Maybe how familiar and how much, I know you've been having a run metallic. We were working, we were pretty tightly with the product team over Convolt around this and my team as well around how do we design and how do we make it work the best and we're going to continue working to optimize as they get beyond initial launch to go, wow, we've got data sets we can analyze, we know how to, we wanted out of tune it. Now really we love the solution particularly more because the default, if you don't select the storage type where you want to go, you will run on Azure. >>So really sort of be kudos to the relationship there. They chose us as a first place we'll go to, but they've also done the choice for customers. Say some customers may want to take it to another cloud. That's fine. It's reasonable. I mean, we totally understand it's going to be a multi-cloud world and that's a reality for any large company. Our goal is to make sure we're growing faster than the competitors, not to knock out the competitors all together because that just won't happen. So they've got that ability to go and yet, Hey, we'll use Azure as default because they feel that way, offering the best support and the best solution there. But then if they have that customer, same customer wants to turn around and use a competitor, Val's fine as well. And I see people talking about that today where they may want to mitigate risks and say, I'm going to do, I'm doing all of office three, six, five on a taken office, three, six, five backup. It's cool. Use metallic, it'll take it maybe to a different region in Asia and they're backing up and they still going, well I'm still all in on Microsoft. They may want to take it to another cloud or even take it back to on premises. So that does happen too because just in case of that moment we can get that data back in a different location. Something happens. >>So metallic talking about that is this new venture is right. It's a Combolt venture and saw that the other day and thought that's interesting. So we dug into it a little bit yesterday and it's like a startup operating within a 20 year old company, which is very interesting. Not just from an incumbent customer perspective, but an incumbent partner perspective. How have you seen over the last few years and particularly bad in the last nine months with big leadership and GTM changes for combo? How has the partnership with Microsoft evolved as a result of those changes? >>Um, it's always been interesting. I guess when you start looking at adventure and everything, since things change a little bit, priorities may change just to be fair, but we've had that tight relationship for a long time. At a relationship level and an exec leadership level, nothing's really changed. 
But in the way they're building this platform, we sit down out of my team, out of the Azure engineering group and we'll sit down and do things like ideations, like here's where we see gaps in the markets, here's what we believe could happen. And look back in July, we had inspire, which is our partner conference in Las Vegas. When we sat down with their OT, our OT in a room, we'll talking about these kinds of things and this is I think about two months after they may have started the initial development metallic from what I understand, but we will talking about exactly what they're doing with metallic offered as a service in Azure is, Hey, how bout we do this? So we think it's really cool. It opens up a new market to Convolt I think too. I mean they're so strong in the enterprise, but they don't do much in smaller businesses because with a full feature product, it also has inherent complexibility complexity around it. So by doing metallic, is it click, click, next done thing. They're really opening, I think, new markets to them and also to us as a partner. >>I was going to ask, you know, kind of click on that because they developed this very quickly. This is something that I think what student were here yesterday, metallic was kind of conceived design built in about six months. So in terms of like acceleration, that's kind of a new area for Combalt. >>Yeah, and I think, I think they're really embracing the fact about um, let's release our code in production for products, which are sort of getting, getting to the, Hey that product is at the viable stage now, not minimum viable, viable, let's release in production, let's find out how customers are using Atlin, let's keep optimizing and doing that constant iteration, taking that dev ops approach to let's get it out there, let's get it launched. And then let's do these small batches of changes based on customer need, based on tele telemetry. We can actually get in. We can't get the telemetry without having customers. So that's how it's going to keep working. So I think this initial product we see today, it's just going to keep evolving and improving as they get more data, as they get more information, more feedback. Which is exactly what we want to see. >>Well, what will come to the cloud air or something you've been living in for a number of years. Ben, I'd love to hear you've been meeting with customers. They've been asking you questions, gives us some of the, you know, some of the things that, what's top of mind for some of the customers? What kinds of things did they come into Microsoft, Dawn, and how's that all fit together? >>There's many different conferences of interrelate, many different conversations and they'll, we will go from talking about, you know, Python machine learning or AI PowerPoint. >>Yeah. >>It's a things like, you know, when are we going to do incremental snapshots from a manage disks? Get into the weeds on very infrastructure century staff. We're seeing range of conversations there. The big thing I think I see, keep seeing people call out and make assumptions of is that they're not going to be relevant because cloud, I don't know cloud yet. I don't know this whole coup cube thing. Containers. I don't, I don't really understand that as well as I think I need to. And an AI, Oh my gosh, what do I even do there? Because everyone's throwing the words and terms around. But to be honest, I think what's still really evident is cloud is still is tiny fraction of enterprise workloads. 
Let's be honest, it's growing at a huge rate because it is that small fraction. So again, there's plenty of time for people to learn, but they shouldn't go and try and slip. >>It's not like you're going to learn everything in a technology stack, from networking to development to database management to, to running a data set of power and cooling. You learn the things that are applicable to what you're trying to do. And the same thing goes to cloud. Any of these technologies, go and look at what you need to build for your business. Take it to that step and then go and find out the details and levels you want to know. And as someone who's been on Azure for like a cinema seven years, which is crazy long. That was a, that was literally like being in a startup instead of Microsoft when I joined and I wasn't sure if I wanted to join a licensing company. It's been very evident to me. I will not say I'm an Azure expert and I've been seven years in the platform. >>There are too many things throughout my for me to be an expert in everything on and I think people sort of just have to realize that anyone saying that it's bravado, nothing else. The goal is Microsoft as a platform provider. Hopefully you've got the software and the solution to make a lot of this easier for the customer, so hopefully they shouldn't need to become a Kubernetes expert because it's baked into your platform. They shouldn't have to worry about some of these offerings because it's SAS. Most customers are there some things you need to learn between going from, you know, exchange to go into oath bricks, these five. Absolutely. There are some nuances and things like that, but once you get over that initial hurdle, it should be a little easier. I think it's right and I think going back to that, sort of going back to bare principles going, what is the highest level of distraction that's viable for your business or that application or this workload has to always be done with everything. >>If it's like, well, class, not even viable, run it on premises. Don't, don't need to apologize for not running in cloud. If I as is what's happening for you because of security, because of application architecture, run it that way. Don't feel the need and the pressure to have to push it that way. I think too many people get caught up in the shiny stuff up here, which is what you know 1% of people are doing versus the other 99% which is still happening in a lot of the areas we work and have challenges in today. >>That's a great point that you bring up because there is all the buzz words, right? AI, machine learning cloud. You've got to be cloud ready. You've gotta be data-driven to customer, to your point going, I just need to make sure that what we have set up for our business is going to allow our business one to remain relevant, but to also be able to harness the power of the data that they have to extract new opportunities, new insights, and not get caught up with, shoot, should we be using automation? Should we be using AI? Everybody's talking about it. I liked that you brought up and I find it very respectfully, he said, Hey, I'm not an Azure expert. You'd been there seven, seven dog years like you said. And I think that's what customers probably gained confidence in is hearing the folks like you that they look to for that guidance and that leadership saying, no, I don't know everything. 
To know that giving them the confidence that they're true, they're trusting you with that data and also helping trusting you to help them make the right decisions for their business. >>Yeah. And that that's, we've got to do that. I mean, I, as a tech guy, it's like I've, I've loved seeing the changes. When I joined Microsoft, I, I wasn't lying. I was almost there go inf I really want to join this company. I was going to go join a startup instead. And I got asked to one stage in an interview going, why do you want to join Microsoft? We see you've never applied to that. I never wanted to, a friend told me to come in and it's just been amazing to see those changes and I'm pretty proud on that. Um, so when we talk about, you know, those, the things we're doing, I mean I think there is no shame going, I'm just going to lift and shift machines because cloud is about flexibility. If you're doing it just on cost, probably doing it for the wrong reason, it's about that flexibility to go and do something. >>Then change within months of slowly make steps to make things better and better as you find a need as you find the ability, whatever it may be. And some of the big things that we focus on right now with customers is we've got a product called Azure advisor. It'll go until people want one. You know, you don't build things in a resilient manner. Hey, do you know this is not ha because of this and you can do this. It's like great. Also will tell you about security vulnerabilities that maybe she had a gateway here for security. Maybe you should do this or this is not patched. But the big thing is that it also goes and tells you, Hey, you're overspending. You don't need this much. It provisions, you provision like a Ferrari, you need a, you just need a Prius, go and run a Prius because it's going to do what you need and need to pay a lot less. >>And that's part of that trust. Getting that understanding. And it's counterintuitive that we're now like it's coming out of my team a lot too, which is great. But seeing these guys were dropping contracts and licenses and basically, you know, once every three years I may call the customer, Hey, how bout a renewal now go from that to now being focused on the customer's actual success and focused on their growth in Azure as a platform of our vast services growth like utilization not in sales has been a huge change. It scared some people away but it's brought a lot more people in and and that sort of counterintuitive spin less money thing actually leads in the longterm to people using more. >>Absolutely. That's definitely not the shrink wrap software company of Microsoft that I remember from the 90s yeah, very might be similar to you know, just as volt to 2019 is not the same combo, but many of us know from with 15 >>years and a good mutual friend of ours, sort of Simon and myself before I took this job, he and I sat down, we're having a beer and discussing the merits, all the not evacuate and things like that. Same with. They are changing such, such a great deal with, you know, what they're putting in the cloud, what they're doing with the data, where they're trying to achieve with things like Hedvig for data management across on premises and cloud with microservices applications and stuff going, Hey, this won't work like this anymore. When you now are doing an on premises and we containers, it's pretty good to see. 
I'm interested to see how they take that even further to their current audience, which is product predominantly, you know, the it pro, the data center admin, storage manager. >>It's funny when you talked about, um, just the choice that customers have and those saying I, we shouldn't be following the trends because they're the trends. We actually interviewed a couple of hours ago, one of Combolt's customers that is all on prime healthcare company and said, he's like, I want to make a secret that says no cloud and proud and it just, what that was, we don't normally hear from them. We always talk about cloud, but for a company to sit down and look at what's best for our business, whether it's, you know, FedRAMP certification challenges or HIPAA or GDPR or other compelling requirements to keep it on prem, it was just refreshing to hear this customer say, >>yeah, I mean it's, it's appropriate for the do what's right for you. I, yeah, it's no shame in any of them. It's, I mean, you don't, you definitely don't get fans by, by shaming people and not doing something right. And I mean, I, I'm personally very happy with the feet, you know, see sort of hype around things like blockchain died down a little bit. So it's a slow database unless you're using for the specific case of that shared ledger, you know, things like that where people don't have to know blockchain. Now I have to know IOT. It's like, yeah. And that hype gets people there, but it also causes a lot of anxiety and it's good to see someone actually not be ashamed of and like, and they grade the ones when they do take a step and use cloud citizen may be in the business already. They're probably going to do it appropriately because have a reason, not just because we think this would be cool. >>Well not and how much inherent and complexity does that bring in if somebody is really feeling pressured to follow those trends and maybe that's when you end up with this hodgepodge of technologies that don't work well together, you're spending way more in as as business it folks are consumers, you know, consumers in their personal lives, they expect things to be accessible, visible, but also cost efficient because they have so much choice. >>Yeah, the choice choice is hard. It's just a, just the conversation is having recently, for example, just we'll take the storage cause of where we are, right? It's like I'm running something on Azure. I'm a, I'm using Souza. I want an office Mount point, which is available to me in Fs. Great. Perfect. what do I use? It's like, well you use any one of these seven options, like what's the right choice? And that's the thing about being a platform company. We give you a lot of choices but it's still up to you or up to harness. It can really help the customers as well to make the most appropriate choice. And I pushed back really hard on terms like best practices and things. I hate it because again, it's making the assumption this is the best thing to do. It's not. It's always about, you know, what are the patterns that have worked for other people, what are the anti-patterns and the appropriate path for me to take. >>And that's actually how we're building our docs now too. So we keep, we keep focusing at our Azure technology and we're bringing out some of the biggest things we've done is how we manage our documentation. It's all open sourced. It's all in markdown on get hub. 
So you can go and read a document from someone like myself, doing product management, that says, "This is how to use this product," and say, "Actually, this bit's wrong; this bit needs to be like this." You can go in yourself, even now, make a change, and we can go, "Oh yeah," take that commit in, and do all this kind of stuff that way. So we're constantly improving those documents, getting real-time feedback from the customers who are using them, not just ourselves in an echo chamber. >> So you get this great insight and visibility that you never had before. Well, Ben, thank you for joining Stu and me on theCUBE this afternoon. We're excited to hear what's coming up next for Azure, and we appreciate your time. And thank you for watching. For Stu Miniman, I'm Lisa Martin; you're watching theCUBE from Commvault GO 2019.

Published Date : Oct 16 2019

SUMMARY :

Live from Commvault GO 2019, brought to you by Commvault, theCUBE hosts Lisa Martin and Stu Miniman talk with Microsoft's Ben Di Qual about the Azure and Commvault relationship: Azure Files and Azure File Sync integration with Windows Server, scaling to petabytes of capacity, and Metallic, Commvault's new SaaS venture that runs on Azure storage by default. The conversation covers Azure Advisor's recommendations on resiliency, security, and overspending; Microsoft's shift from chasing license renewals to being measured on customer success and utilization; respecting customers who stay on premises for FedRAMP, HIPAA, or GDPR reasons rather than following hype; and Azure documentation that is open sourced in Markdown on GitHub so customers can contribute fixes directly.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Microsoft | ORGANIZATION | 0.99+
Ben Nichol | PERSON | 0.99+
Lisa Martin | PERSON | 0.99+
July | DATE | 0.99+
Ben Di Qual | PERSON | 0.99+
Asia | LOCATION | 0.99+
Convolt | ORGANIZATION | 0.99+
Las Vegas | LOCATION | 0.99+
Ben | PERSON | 0.99+
20 years | QUANTITY | 0.99+
seven years | QUANTITY | 0.99+
seven | QUANTITY | 0.99+
2019 | DATE | 0.99+
Ferrari | ORGANIZATION | 0.99+
1% | QUANTITY | 0.99+
99% | QUANTITY | 0.99+
seven options | QUANTITY | 0.99+
Georgie stew | PERSON | 0.99+
GDPR | TITLE | 0.99+
Python | TITLE | 0.99+
Simon | PERSON | 0.99+
today | DATE | 0.99+
HIPAA | TITLE | 0.99+
FedRAMP | ORGANIZATION | 0.99+
yesterday | DATE | 0.99+
Denver, Colorado | LOCATION | 0.99+
first | QUANTITY | 0.98+
five | QUANTITY | 0.98+
Combolt | ORGANIZATION | 0.98+
90s | DATE | 0.97+
20 year old | QUANTITY | 0.97+
one | QUANTITY | 0.97+
Azure | TITLE | 0.97+
one protocol | QUANTITY | 0.97+
one stage | QUANTITY | 0.96+
up to a hundred terabytes | QUANTITY | 0.96+
about six months | QUANTITY | 0.94+
one logical namespace | QUANTITY | 0.94+
Xbox live | COMMERCIAL_ITEM | 0.94+
seven dog years | QUANTITY | 0.94+
three | QUANTITY | 0.92+
Souza | ORGANIZATION | 0.91+
six | QUANTITY | 0.9+
this afternoon | DATE | 0.9+
Combalt | ORGANIZATION | 0.9+
Conn | LOCATION | 0.89+
Hedvig | ORGANIZATION | 0.88+
Atlin | TITLE | 0.88+
Azure | ORGANIZATION | 0.88+
15 years | QUANTITY | 0.87+
couple of hours ago | DATE | 0.86+
PowerPoint | TITLE | 0.86+
last nine months | DATE | 0.85+
Dawn | ORGANIZATION | 0.83+
last few years | DATE | 0.81+
more than | QUANTITY | 0.81+
GO | EVENT | 0.81+