

IBM, The Next 3 Years of Life Sciences Innovation


 

>>Welcome to this exclusive discussion: IBM, the next three years of life sciences innovation, precision medicine, advanced clinical data management and beyond. My name is Dave Vellante from theCUBE, and today we're going to take a deep dive into some of the most important trends impacting the life sciences industry in the next 60 minutes. Of course, we're going to hear how IBM is utilizing Watson in some really important, life-impacting ways, but we'll also bring in real-world perspectives from industry and the independent analyst view to better understand how technology and data are changing the nature of precision medicine. Now, the pandemic has created a new reality for everyone, but especially for life sciences companies, one where digital transformation is no longer an option but a necessity. The upside is that the events of the past 22 months have presented an accelerated opportunity for innovation. Technology and real-world data are coming together and being applied to support life sciences industry trends and improve drug discovery, clinical development, and treatment commercialization throughout the product life cycle. Now I'd like to introduce our esteemed panel. Let me first introduce Lorraine Marchand, who is general manager of life sciences at IBM Watson Health. Lorraine leads the organization dedicated to improving clinical development research, showing greater treatment value, and getting treatments to patients faster with differentiated solutions. Welcome, Lorraine. Great to see you. >>Dr. Namita Limaye is the research vice president at IDC, where she leads the life sciences R&D strategy and technology program, which provides research-based advisory and consulting services as well as market analysis. Namita, thanks for joining us today. And our third panelist is Greg Cunningham, who's the director of the RWE center of excellence at Eli Lilly and Company. Welcome, Greg. You guys are doing some great work. Thanks for being here.
>>Thanks, Dave. >>Now, today's panelists are very passionate about their work. If you'd like to ask them a question, please add it to the chat box located near the bottom of your screen, and we'll do our best to answer them all at the end of the panel. Let's get started. Okay, Greg, and then Lorraine and Namita, feel free to chime in after: what are some of the game-changers that you're seeing that are advancing precision medicine, and how do you see this evolving in 2022 and into the next decade? >>I'll give my answer from a life science research perspective. The game-changer I see in advancing precision medicine is moving from doing research using kind of a single gene mutation, or kind of a single gene to look at, to doing this research using combinations of genes. And the potential that this brings is to bring better drug targets forward, but also to get the best product to a patient faster. I can give an example of how I see it playing out. In the last decade, in oncology real-world evidence, we've seen an evolution in precision medicine as we've built out the patient record. As we've done that, the marketplace has evolved rapidly, particularly for electronic medical record data and genomic data. We were pretty happy to get our hands on electronic medical record data in the early days, and then later the genetic test results were combined with this data, and we could do research looking at a single mutation leading to better patient outcomes. But I think where we're going to evolve in 2022 and beyond is that, with genetic testing growing in oncology and providing us more data about each patient, more genes to look at, researchers can look at groups of genes to analyze, to look at that complex combination of gene mutations. And I think it'll open the door for things like using artificial intelligence to help researchers plow through the complex number of permutations when you think about all those genes you can look at in combination. >>Right. Lorraine,
Data and machine intelligence coming together, anything you would add? >>Yeah, thank you very much. Well, I think that Greg's response really sets us up nicely, particularly when we think about the ability to utilize real-world data in the pharma industry across a number of use cases, from discovery to development to commercial. In particular, I think with real-world data and the comments that Greg just made about clinical EMR data linked with genetic or genomic data, a real area of interest, and one that Watson Health in particular is focused on, is the idea of being able to create a data exchange, so that we can bring together claims, clinical EMR data, genomics data, increasingly wearables, and data directly from patients in order to create a digital health record that we like to call an intelligent patient health record, one that basically gives us the digital equivalent of a real-life patient. And these can be used in use cases in randomized controlled clinical trials, for synthetic control arms or natural history. They can be used to track patients' response to drugs and look at outcomes after they've been on various therapies, as Greg is speaking to. And so I think that the promise of data and technology, and the AI that we can apply on top of that, is really helping us advance getting therapies to market faster, with better information, lower sample sizes, and just a much more efficient way to do drug development and to track and monitor outcomes in patients. >>Great, thank you for that. Now, Namita, when I joined IDC many, many years ago, I really didn't know much about the industry that I was covering, but it's great to see you as a former practitioner now bringing in your views. What do you see as the big game-changers? >>So I would agree with what both Lorraine and Greg said, but one thing that I'd just like to call out is that, you know, everyone's talking about big data; the volume of data is growing.
It's growing exponentially. Actually, I think about 30% of the data that exists today is healthcare data, and it's growing at a rate of 36%. That's huge. But then it's not just about the big, it's also about the broad. I think great points that Lorraine and Greg brought out are that it's not just specifically genomic data, it's multi-omic data, and it's also about things like medical history, social determinants of health, and behavioral data. And why? Because when you're talking about precision medicine, and we know that we moved away from the terminology of personalized to precision because you want to talk about disease stratification, it's really about convergence. >>If you look at a recent JAMA paper from 2021, only 1% of EHRs actually included genomic data. So you really need to have that ability to look at data holistically, and an IDC prediction sees that investments in AI to fuel in silico drug discovery will double by 2024. But how are you actually going to integrate all the different types of data? Just look at, for example, diabetes: in type 2 diabetes, 40 to 70% of it is genetically inherited, and you have over 500 different genetic loci which could be involved in causing diabetes. So the earlier strategy, when you were looking at genetic risk scoring, was really single-trait. Now it's transitioning to multi-trait, and when you say multi-trait, you really need that integrated view, that convergence, to be able to drive a precision medicine strategy. So to me, it's a very interesting contrast: on one side, you're really trying to make it specific and focused towards an individual, and on the other side, you really have to go wider and bigger as well. >>Great. I mean, the technology is enabling that convergence, and the conditions are almost mandating it.
Let's talk some more about data, the data exchange, and building an intelligent health record as it relates to precision medicine. How will the interoperability of real-world data create that more cohesive picture for the patient? Maybe Greg, you want to start, or anybody else wants to chime in? >>I think the exciting thing, from my perspective, is the potential to gain access to data you maybe weren't aware of. An exchange implies some kind of cataloging, so I can see maybe things that I just had no idea were out there, and bring my own data and maybe link data. These are concepts that I think are starting to take off in our field, and it really opens up those avenues. When you were talking about data, the robustness and richness, volume isn't the only thing, as Namita said. I think really getting to rich, high-quality data, and an exchange, offers a far bigger range for all of us to use to get our work done. >>Yeah, and just to chime in on that response from Greg: what we hear increasingly, and it's pretty pervasive across the industry right now because this ability to create an exchange or the intelligent patient health record, these are new ideas, still rather nascent, is that it is always the operating model that is the difficult challenge here. And certainly that is the case. So we do have data in various silos. It's in patient claims, it's in electronic medical records, it might be in labs, images, genetic files on your smartphone. And so one of the challenges with this interoperability is being able to tap into these various sources of data while trying to identify quality data, as Greg has said and as Namita is underscoring as well.
We've got to be able to get to the depth of data that's really meaningful to us, but then we have to have technology that allows us to pull this data together. >>First of all, it has to be de-identified because of security and patient-related needs. Then we've got to be able to link it so that you can create that likeness in terms of the record. It has to be what we call cleaned, or curated, so that you get the noise and all the missingness out of it; that's a big step. And then it needs to be enriched, which means that the various components that are going to be meaningful are, again, brought together so that you can create that cohort of patients, that individual patient record, that now is useful in so many instances across pharma, again from development all the way through commercial. So the idea of this exchange is to enable that exact process that I just described: to have a place, a platform, where various entities can bring their data in order to have it linked and integrated and cleaned and enriched, so that they get something that is a package, like a data package, that they can actually use. >>And it's easy to plug into their studies or into their use cases. And I think a really important component of this is that it's got to be a place where various third parties can feel comfortable bringing their data together in order to match it with other third parties' data. That is a real value that the industry is increasingly saying would be important to them: the ability to bring in those third-party data sets and be able to link them and create these various data products. So that's really the idea of the data exchange, that you can benefit from accessing data, as Greg mentioned, in catalogs that maybe sit across these various silos, so that you can do the kind of work that you need, and that we take a lot of the hard work out of it. I like to give an example.
>>We spoke with one of our clients at one of the large pharma companies, and I think he expressed it very well. He said, what I'd like to do is have a complete dataset of lupus (lupus is an autoimmune condition), the quintessential lupus dataset that I can use to run any number of use cases across it, whether it's looking at my phase one trial, whether it's selecting patients and enriching for later-stage trials, whether it's understanding patient responses to different therapies as I design my studies. And so this idea of adding in therapeutic-area, indication-specific data sets and being able to create that for the industry, and, as Namita mentioned, being able to do that, for example, in diabetes, that's how pharma clients need to have their needs met: by taking the hard work out, bringing the data together, and having it very therapeutically enriched so that they can use it very easily. >>Thank you for that detail. And Namita, I mean, you can't do this with humans at scale. All the things that Lorraine was talking about, the enrichment, the provenance, the quality, and of course it's got to be governed, you've got to protect privacy; humans just can't do all that at massive scale. That's where technology and automation come in, doesn't it? >>Absolutely, I couldn't agree more. I think, whether you talk about precision medicine or you talk about decentralized trials, there's been a lot of hype around these terms, but what is really important to remember is that technology is the game-changer, and bringing all that data together is really going to be the key enabler. So multimodal data integration, looking at things like security or federated learning, and also, when you're talking about leveraging AI, things like bias, are critical components that need to be addressed.
I think the industry is partly still trying to figure out the right use cases. So one part is getting together the data, but it's also about getting together the right data. I think data interoperability is going to be the absolute game-changer for enabling this. But yes, I really couldn't agree more with what Lorraine just said: it's bringing all those different aspects of data together to really drive that precision medicine strategy. >>Excellent. Hey Greg, let's talk about protocols, decentralized clinical trials. You know, they're not new to life sciences, but the adoption of DCTs has of course sped up due to the pandemic. We've had to make trade-offs, obviously, and the risk is clearly worth it, and it's going to continue to be a primary approach as we enter 2022. What are the opportunities that you see to improve how DCTs are designed and executed? >>I see a couple of opportunities to improve in this area. The first is back to technology. The infrastructure around clinical trials has evolved over the years, but now you're talking about moving away from kind of a site focus to a patient focus. So with that, you have to build out a new set of tools that would help. For example, one would be novel trial recruitment and screening: how do you find patients, and how do you screen them to see if they're really a fit for this protocol? Another example is a very important document that we have to get, the e-consent, where someone says, yes, I understand this study and I'm willing to do it; we have to do that in a more remote way than we've done in the past. >>The exciting area, I think, is the use of eCOA and ePRO, where we capture data from the patient using apps, devices, and sensors. And I think all of these capabilities will bring a new way of getting data faster in this kind of model.
But the exciting thing, from our perspective at Lilly, is that it's going to bring more data about the patient from the patient, not just from the healthcare provider side; it's going to bring real data from these apps, devices, and sensors. The second thing, I think, is using real-world data to identify patients and also to improve protocols. We run scenarios today looking at what the impact is if you change a cut point on a lab or a biomarker, to see how that would affect potential enrollment of patients. So the real-world data can definitely be used to make decisions on how you improve these protocols. >>But the challenge we've been after, where this probably offers the biggest opportunity, is using real-world data to identify patients as we move away from the large academic centers that we've used for years as our sites. You can maybe get more patients who are from the rural areas of our countries, or not near these large academic centers, and we think it'll bring a little more diversity to the population who's eligible. But also, we have their data, so we can see if they really fit the criteria, and the probability they are a fit for the trial is much higher. >>Right. Lorraine, I mean, your clients must be really pushing you to help them improve DCTs. What are you seeing in the field? >>Yes. In fact, we just attended the inaugural meeting of the Decentralized Trials & Research Alliance in Boston about two weeks ago, where all of the industry came together, pharma companies, consulting vendors, just everyone who's been in this industry, working to help define decentralized trials, think through what their potential is, and think through various models to enable them, because again, it's a nascent concept that I think COVID has spurred into action. But it is important to take a look at the definition of DCT.
I think there are those entities that describe it as accessing data directly from the patient. I think that is a component of it, but I think it's much broader than that. To me, it's about really looking at workflows and processes of bringing data in from various remote locations and enabling the whole ecosystem to work much more effectively along the data continuum. >>So a DCT is all about being able to make a site more effective, whether it's being able to administer a televisit or the way that they're getting data into the electronic data capture system. So I think we have to take a look at the workflows and the operating models for enabling decentralized trials, and that's a lot of what we're doing with our own technology. Greg mentioned the idea of electronic consent, of being able to do electronic patient-reported outcomes, and other collection of data directly from the patient: wearables, telehealth. These are all data acquisition methodologies and technologies that we are enabling in order to get the best of the data into the electronic data capture system, so it can be put together, processed, and submitted to the FDA for regulatory use, for clinical trial submission. So we're working on that. I think the other thing that's happening is the ability to be much more flexible: being able to have more cloud-based storage allows you to be much more interoperable, to allow APIs to bring in the various types of data. >>So we're really looking at technology that can make us much more fluid, flexible, and accommodating to all the ways that people live and work and manage their health, because we have to reflect that in the way we collect those data types. So that's a lot of what we're focused on. And in talking with our clients, we also spend a lot of time trying to understand, along the, let's say, decentralized clinical trials continuum, where they are.
And I know Namita is going to talk a little bit about research that they've done in terms of that adoption curve. But because COVID sort of forced us into collecting data in a more remote fashion, in order to allow some of these clinical trials to continue during COVID when a lot of them had to stop, what we want to make sure is that we understand and can codify some of those best practices, and that we can help our clients enable that. Because the worst thing that could happen would be to have made some of that progress in that direction, but then, when COVID is over, to go back to the old ways of doing things and not bring some of those best practices forward. And we actually hear from some of our clients in the pharma industry that they worry about that as well, because we don't yet have a system for operationalizing a decentralized trial. And so we really have to think about the protocol, its design, the indication, the types of patients, what makes sense to decentralize, and what makes sense to still collect data on in a more traditional fashion. So we're spending a lot of time advising and consulting with our clients, as well as CROs, on what the best model is in terms of their portfolio of studies. And I think that's a really important aspect of trying to accelerate the adoption: making sure that what we're doing is fit for purpose. Just because you can use technology doesn't mean you should; it really still does require human beings to think about the problems and solve them in a very practical way. >>Great, thank you for that, Lorraine. I want to pick up on some things that Lorraine was just saying, and then go back to what Greg was saying about DCTs becoming more patient-centric. You had a prediction at IDC, and I presume your fingerprints were on it, that by 2025, 75% of trials will be patient-centric decentralized clinical trials, and 90% will be hybrid.
So maybe you could help us understand that relationship, and what types of innovations are going to be needed to support that evolution of DCTs. >>Thanks, Dave. Yeah. You know, I certainly believe Lorraine was bringing up a very important point: it's about being able to continue what you have learned over the past two years. I feel this was not really a digital revolution; it was an attitude revolution that this industry underwent. The technology existed, just as clinical trials existed, as drugs existed, but there was a proof of concept that the technology works, that this model is working. So what, for example, telehealth did for healthcare, the transition to care anywhere, care anytime, and even becoming predictive, that's what the decentralized clinical trials model is doing for clinical trials today. Great points again that you have to really look at where it's being applied; you just can't randomly apply it across clinical trials. >>And this is where the industry is maturing in its handling of the complexity. Some people think decentralized trials are very simple, that you just go and implement them, but it's not that simple. It's being able to define which are the right technologies for that specific therapeutic area, for that specific phase of the study. Also, a very important point is bringing the patient's voice into the process. Hey, I had my first telehealth visit sometime last year, and I was absolutely thrilled about it. I said, no time wasted; I mean, everything's done in half an hour. But not all patients want that; some want to consider going back. And you, again, need to customize your decentralized trials model to the type of patient population, the demographics, that you're dealing with. So there are multiple factors.
Also, stepping back, you know, Lorraine mentioned they're consulting with their clients, advising them. >>And I think a lot of companies are still evolving in their maturity in DCTs. There's a lot of buzz about it, but not everyone is very mature in it. So one thing everyone's kind of agreeing on is: yes, we want to do it, but it's really about how we go about it. How do we make this a flexible and scalable modern model? How do we integrate the patient's voice into the process? What are the KPIs, the key performance indicators, that we define? Do we have a playbook to implement this model and make it scalable? And finally, I think what organizations really need to look at is developing a decentralized trials maturity scoring model, so that I can assess where I am today and use that playbook to define how I am going to move down the line to reach the next level of maturity. Those were some of my thoughts. >>Excellent. And remember, if you have any questions, use the chat box below to submit them. We have some questions coming in from the audience. >>One point to add to that: I think one common thread between the earlier discussion around precision medicine and around decentralized trials really is data interoperability. It is going to be a big game-changer to enable both of these pieces. Sorry; thanks, Dave. >>Yeah, thank you. So again, put your questions in the chat box. I'm actually going to go to one of the questions from the audience; I've got some other questions as well. When you think about all the new data types that are coming in, from social media, omics, wearables: the question is, with greater access to these new types of data, what trends are you seeing from pharma and device companies as far as developing capabilities to effectively manage and analyze these novel data types?
Is there anything that you guys are seeing that you can share in terms of best practice or advice? >>I'll offer up one thing: I think the interoperability isn't quite there today. So what does that mean? You can take some of those data sources you mentioned, say some omics data with some health claims data, and we spend too much time in our space putting data together behind the scenes. I think the stat is that 80% of the time is assembling the data and 20% analyzing it, and we've had conversations here at Lilly about how we get to 80% of the time being analysis. It really requires us to take a step back and think about how, when you create a health record, you really have to have the same plugins, so that data can be put together very easily, like Lorraine mentioned earlier. And that comes back to investing, as an industry, in standards, so that you have data standards we all can agree upon. Then those plugs get a lot easier, and we can spend our time figuring out how to make people's lives better with healthcare analysis, versus putting data together, which is not a lot of fun behind the scenes. >>Other thoughts on how to take advantage of novel data coming from things like devices, anything you guys are seeing? >>I could jump in there, or did you want to go ahead? Okay. So I think there's huge value being seen in leveraging those multiple data types. One area you're seeing is the growth of prescription digital therapeutics, and using those to support things like behavioral health issues and a lot of other critical conditions. It's really, again, interlinking real-world data, because it's really taking you to the patient's home.
And there's a lot of patient centricity there, because you can really monitor the patient in real time, without the patient having to come and do a site visit once in, say, four or six weeks. And, for example, take suicidal behavior: if you can predict well in advance, based on those behavioral parameters, that this is likely to be triggered, the value of that is enormous. Again, I think Greg made a valid point about the industry still trying to deal with resolving the data interoperability issue. There are so many players coming into the industry right now, and there are really few that have the maturity and the capability to address these challenges and provide intelligent solutions. >>Yeah, maybe I'll just go ahead and chime in on Namita's last comment there. I think that's what we're seeing as well. And it's very common, from an innovation standpoint, that you have a nascent industry, or a nascent innovation situation such as we have right now, where it's very fragmented. You have a lot of small players, and you have some larger entrenched players that have the capability to help solve the interoperability challenge, the standards challenge. I mean, I think IBM Watson Health is certainly one of the entities that has that ability and is taking a stand in the industry in order to help lead in that way; others are too. But with all of the small companies that are trying to find interesting and creative ways to gather that data, it does create a very fragmented type of environment and ecosystem that we're in. >>And I think as we mature, as we do come forward with the KPIs and the operating models (because, you know, the devil's in the details in terms of the operating models), it's really exciting to talk about these trends and think about the future state.
But as Greg pointed out, if you're spending 80% of your time just under the hood, trying to get the engine and all the spark plugs to line up, that's just hard grunt work that has to be done. So I think that's where we need to be focused. And bringing all the data in from these disparate tools, you know, that's fine; we need a platform or the APIs that can enable that. But I think as we progress, we'll see more consolidation, more standards coming into play, solving the interoperability types of challenges. >>And so I think that's where we should focus: on what it's going to take in three years to really codify this and make it a well-humming machine. And, you know, I do know, having also been in pharma, that there's a very pilot-oriented approach to this, which I think is really healthy. I think large pharma companies tend to place a lot of bets with different programs on different tools and technologies, to some extent to see what's going to stick, kind of with an innovation mindset. And I think that's good. I think that's part of the process of figuring out what is going to work, and it will help us when we get to that point of consolidating our models and the technologies going forward. So I think all of the efforts today are definitely driving us to something that feels much more codified in the next three to five years. >>Excellent. We have another question from the audience; it's sort of related to the theme of this discussion. Given the FDA's recent guidance on using claims and electronic health records data to support regulatory decision-making, what advancements do you think we can expect with regards to regulatory use of real-world data in the coming years? It's kind of a two-parter, so maybe you guys can collaborate on this one. And then, what role do you think industry plays in influencing innovation within the regulatory space?
>>All right. Well, it looks like you've stumped the panel there, Dave. >>It's okay to take some time to think about it. Do you want me to repeat it for you guys? >>You know, I'm sure the group is going to chime in on this. So, the FDA has issued a guidance, and it's exactly that: the FDA issues guidances and says that it's aware and supportive of the fact that we need to be using real-world data, that we need to create the interoperability, the standards, and the ways to make sure that we can include it in regulatory submissions and the like. And I sort of think about it as akin to the critical path initiative, probably 10 or 12 years ago in pharma, when the FDA also embraced this idea of the critical path and being able to allow more in silico modeling of clinical trial design and development. And it really took the industry a good 10 years before they were able to actually adopt and apply that sort of guidance, or openness, from the FDA in a way that started to influence the way clinical trials were designed, or the in silico modeling. >>So I think the second part of the question is really important, because while the FDA is saying, yes, we recognize it's important, we want to be able to encourage and support it, when you look, for example, at synthetic control arms, at the use of real-world data in regulatory submissions over the last five or six years, all of the use cases have been in oncology. I think there have been somewhere between eight and ten submissions, and I think only one was actually a successful submission. In all the other situations, the real-world data arm of the oncology trial, that synthetic control arm, was actually rejected by the FDA because of lack of completeness or, you know, equivalence in terms of the data. So the FDA is not going to tell us how to do this.
>>So I think the second part of the question, which is the role of industry, is key. It's absolutely on industry to figure out exactly what we're talking about: how do we figure out the interoperability, and how do we apply the standards? >>How do we ensure good-quality data? How do we enrich it and create a cohort that is equivalent to the real-world patient who would otherwise be in the clinical trial? And how do we create something that the FDA can agree with? We'll certainly want to work with the FDA to figure out this model, and I think companies are already doing that. But the onus is going to be on industry to figure out how you actually operationalize this and make it real. >>Excellent. Thank you. A question: what's the most common misconception that clinical research stakeholders, whether sites or participants, et cetera, might have about DCTs? >>I could jump in there. So, in terms of misconceptions, I think the most common one is that sites are going away forever, which I do not think is really happening today. The second part of it is the perspective that patients are potentially neglected because trials are moving away from in-person visits. Perhaps "neglected" is not the appropriate term; the question is whether patient engagement will continue and whether retention will be strong, since patients are not interacting in person with the investigator quite as much. So site retention and patient retention, or engagement from both perspectives, I think remain concerns. But actually, if you look at the assessments that have been done, I think patients are more than happy. >>The majority of patients have been really happy about the new model.
And in fact, sites seem to have increased investments in technology by 50% to support this kind of model. And the last misconception is that decentralized trials are a great model that can be applied to every possible clinical trial, and that in another couple of weeks the whole industry will be implementing only decentralized trials. I think we are far away from that; it's just not something you would implement across every trial, and we discussed that already. You have to find the right use cases for it. So I think those were some of the key misconceptions in the industry right now. >>Yeah, and I would add that the misconception I hear the most is similar to what Namita said, about sites and healthcare professionals not being involved at the level that they are today. When I mentioned earlier in our conversation being excited about capturing more data from the patient, that was always in the context of adding to the healthcare professional's opinion, because I think both of them bring enrichment and a broader perspective on the patient's experience with whatever disease they're faced with. So some people think it's just an all-internet trial, with someone just putting out their own perspective, when it's really a combination of both that delivers a robust data set. >>Yeah, maybe I'll just comment. It reminds me of probably 10 or 15 years ago, maybe even more, when remote monitoring was first enabled. You didn't have to have the study coordinator travel to the investigative site to check the temperature of the freezer and make sure that patient records were being completed appropriately, because they could have a remote visit: they could send the data in electronically and do the monitoring visit in real time, just the way we're having this kind of communication here.
And there was just so much fear that you were going to replace or supplant the personal relationship with the sites and the study coordinators, that you were going to supplant the role of the monitor, which was always a very important role in clinical trials. >>And I think the people who really did embrace the technology and the advantages it provided quickly saw that it allowed the monitor to do higher-value work. Instead of going in and checking the temperature on a freezer, when they did have their visit they were able to sit and have a quality discussion, for example about how patient recruitment was going or what was coming up in terms of the consent. So it created a much more high-touch, high-quality interaction between the monitor and the investigative site, and I think we should be looking for the same advantages from DCTs. We shouldn't fear it. We shouldn't think it's going to supplant the site or the investigator or the relationship. It's our job to figure out where the technology fits: clinical science has always got to be high touch combined with high tech, but the high touch has to lead, so it's about getting that balance right. And that's going to happen here as well: we will figure out other high-value, meaningful work for the site staff to do while they let the technology take care of the lower-value work, if you will. >>That's not an "or," it's an "and." And you're talking about the higher-value work, which leads me to something that Greg said earlier about the 80/20: 80% is assembly, 20% is actually doing the analysis. That's not unique to life sciences, but the question is an organizational one, in terms of how we think about data and how we approach data in the future.
So, historically, big data in life sciences, and in any industry really, has required highly centralized and specialized teams to do the things that Lorraine was talking about: the enrichment, the provenance, the data quality, the governance. Hyper-specialized teams do that, and they serve different constituencies, not necessarily with context; they're just kind of data people, but they have responsibility for doing all those things. Greg, for instance within Lilly, are you seeing a move to democratize data access? We've talked about data interoperability, and part of that is data sharing, which kind of breaks that centralized hold. Or is that just too far in the future, too risky in this industry? >>It's actually happening now; it's a great point. We try to classify what people can do. For example, you give someone who's less analytically qualified a dashboard: let them interact with the data and better understand what we're seeing out in the real world. Then there's a middle user, someone you can give a tool so they can do some analysis. The nice thing there is that you have some guardrails around it and you keep them in their lane, but it allows them to do some of their work without having to go ask those centralized experts you mentioned, those precious resources. And the third group is the highly analytical folks who can really deliver value beyond that; when they're doing all those other things, it really hinders them from doing what we've been talking about, the high-value stuff. So we look at people using data in one of those three lanes, and it has helped us not to try to make a one-size-fits-all solution for how we deliver data and analytic tools to people. Right. >>Okay.
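Greg's three-lane model can be sketched as a simple role-based access policy. Everything below (tier names, capabilities, and the policy check itself) is a hypothetical illustration for the sake of the idea, not Lilly's actual implementation:

```python
# Hypothetical sketch of the three-lane access model: dashboard
# consumers, guided analysts with guardrails, and expert analysts.
# Tier names and capabilities are illustrative only.

from dataclasses import dataclass


@dataclass(frozen=True)
class AccessTier:
    name: str
    capabilities: frozenset


TIERS = {
    "consumer": AccessTier("consumer", frozenset({"view_dashboard"})),
    "guided_analyst": AccessTier(
        "guided_analyst", frozenset({"view_dashboard", "run_curated_queries"})
    ),
    "expert": AccessTier(
        "expert",
        frozenset(
            {"view_dashboard", "run_curated_queries", "raw_data_access", "build_models"}
        ),
    ),
}


def is_allowed(tier_name: str, action: str) -> bool:
    """Return True if a user in the given tier may perform the action."""
    tier = TIERS.get(tier_name)
    return tier is not None and action in tier.capabilities


# A guided analyst stays "in their lane": curated queries yes, raw data no.
assert is_allowed("guided_analyst", "run_curated_queries")
assert not is_allowed("guided_analyst", "raw_data_access")
```

The point of the middle tier is exactly the guardrail Greg describes: useful self-service without pulling the scarce expert analysts off high-value work.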
I mean, DCTs are a hot topic with the audience here. Another question: what capabilities do sponsors and CROs need to develop in-house to pivot toward DCTs? >>Should I jump in here? Yeah. When we speak about DCTs, and when I speak with folks around the industry, it takes me back to the days of risk-based monitoring. When it was first being implemented, it was a huge organizational change from the conventional monitoring models to centralized monitoring and risk-based monitoring. It needs a mental reset; it needs, as Lorraine pointed out a little while ago, restructuring workflows and redefining processes. And I think that is one big piece, the first piece, when you're implementing a new model: organizational change management, because you are disturbing existing structures and existing methods. So you need buy-in across the organization toward the new model: seeing what the value-add is, and where you personally fit into that story. >>How do your workflows change, or how is your role impacted? Without that, this industry will struggle. So I see organizations first trying to work on that piece and build it in. And then I also want to step back for a second to the point you brought out about data democratization. I think Greg gave an excellent input about how it's happening in the industry, but I would also say that data democratization, the real empowerment of stakeholders, also includes the sites and the investigators: what level of access to data do they have now? And patients as well: we see increasingly more companies trying to provide access to patients. Finally, it's their data, so why shouldn't they have some insight into it? So, access for patients, and then the 80/20 part of it.
Yes, he's absolutely right that we want to see that flip, with only 20% of the effort focused on actually integrating the data and 80% on analytics. But the real future will come when even the 20 is gone and you actually have the insights handed to analysts on a silver platter. That's kind of wishful thinking; some of the industry is getting there in small pieces. >>Great points. >>And I really appreciate the point around democratizing the data and giving the patient access, ownership, and control over their own data. We see the health portals that are now available for patients to view their own records, images, labs, claims, and EMR data. We have blockchain technology, which is really critical here in terms of the patient being able to pull all of their own data together into an immutable record that they can own and control. If they want to use that to transact clinical trial types of opportunities based on their data, they can, or other real-world scenarios. But if they just want to manage their own data because they're traveling, or if they're in a risky health situation, they've got their own record of their health history, which can help avoid medical errors. So even going beyond life sciences, I think this idea of democratizing data is just good for health. It's just good for people, and we definitely have the technology that can make it a reality now. >>We have just about 10 minutes left, and now, of course, all the questions are rolling in like crazy from the crowd. I'd be curious to know if there are any comments from the panel on cost-comparison analysis between traditional clinical trials and DCTs, and how the outcome could affect the implementation of DCTs. Any sort of high-level framework you can share?
>>I would say these are still early days to drive that analysis, because many companies are still in the early stages of implementation; they've done a couple of trials. The other part that's important to keep in mind is that organizations are still on the learning curve. So when you're calculating the cost efficiencies, where ideally you should have had two stakeholders involved, you could potentially have 20 stakeholders involved, because everyone's trying to learn the process and see how it's going to be implemented. And the third part, I think, is that organizations are still defining their KPIs: how do you measure it, and what do you measure? They're even still plugging in the pieces of technology they need, and deciding who they're partnering with. >>What are the pieces of technology they're implementing? So I don't think there's a clear-cut answer at this stage. I think as you scale this model, the efficiencies will be seen. It's like any new technology or solution: in the first stages it's always a little more complex, and in fact it sometimes costs extra. But as you start scaling it, establishing your workflows, and streamlining it, the cost efficiencies will start becoming evident. That's why the industry is moving there, and I think that's how it will turn out in the long run. >>Yeah, maybe let me add a comment, if you don't mind. Clinical trials have traditionally been costed, or budgeted, on a per-patient basis. So, based on the difficulty of recruiting in the therapeutic area, say a rare oncology or neuromuscular disease, there's an average cost to find that patient and then execute the various procedures on that patient throughout the clinical trial.
And so the difficulty of reaching the patient and the complexity of the trial have led to what we might call a per-patient stipend, which is just the metric we use to figure out what the average cost of a trial will be. So I think, to your point, we're going to have to see where the ability to adjust workflows, get to patients faster, and collect data more easily makes the burden on the site less onerous. Once we start to see that work ease up because of technology, I think we'll start to see those cost equations change. But right now the system isn't designed to really measure the economic benefit of decentralized models, and we're going to have to figure out what that looks like as we go along. Since it's patient-oriented right now, we'll have to ask: how does that work ease up, and do those costs actually come down? >>And as it scales, it's going to become more clear, as Namita was saying. Next question from the audience; it's kind of a best-fit question. You've all touched on this, but let me just ask it: which examples, and which phases, suit DCTs in their current form, be it fully DCT or hybrid models? A bit of a horses-for-courses question. >>Well, I think it has its efficiencies obviously in the later phases rather than in the absolute early-phase trials; those are not the ideal models for DCTs, I would say. And again, the logic is that when you're going into the later-phase trials, the volume of patients increases considerably, to the point that Lorraine brought up about access to patients and patient selection. What one should really look at is the advantages DCTs bring in terms of patient access and patient diversity, which is a big piece that DCTs are enabling.
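The per-patient budgeting model Lorraine describes above can be illustrated with simple arithmetic: total trial cost scales with enrollment times a per-patient stipend. The figures below are invented for illustration only; real stipends vary widely by therapeutic area:

```python
# Illustrative arithmetic for per-patient trial budgeting: total cost
# is roughly enrollment times a per-patient stipend. All numbers are
# made up for illustration, not real trial costs.

def trial_cost(n_patients: int, per_patient_stipend: int) -> int:
    """Total trial cost under the simple per-patient budgeting model."""
    return n_patients * per_patient_stipend

# Hypothetical hard-to-recruit indication, traditional site-based model.
traditional = trial_cost(n_patients=300, per_patient_stipend=45_000)

# Hypothetical DCT scenario: easier reach and lighter site burden
# might lower the effective per-patient cost.
dct = trial_cost(n_patients=300, per_patient_stipend=36_000)

print(traditional)  # 13500000
print(dct)          # 10800000
```

As Lorraine notes, the open question is whether technology actually lowers the effective per-patient figure; until the industry can measure that, the comparison stays hypothetical.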
So if you look at the spectrum of these advantages, and just to step back for a moment on costs: things like remote site monitoring are a big, big plus, right? >>I mean, site monitoring alone accounts for around a third of trial costs. So there are many pieces that fall together. The challenge comes in defining DCTs; as Rick pointed out, there are multiple definitions of DCTs in the industry right now, whether you're talking about what DTRA is doing, or ACRO, or other groups. But the point is, it's a continuum, a continuum of different pieces woven together. So how do you decide which pieces you're plugging in, and how does that impact the total cost of the solution you're implementing? >>Great, thank you. Last question we have from the audience: what changes have you seen, or are there others you can share, from the FDA, EU, and APAC regulators in supporting DCTs and precision medicine approval processes? Anything you guys would highlight that we should be aware of? >>I could quickly add that I'm just publishing a report on decentralized clinical trials; it should be published shortly, with a perspective on that. I would say that while there was a plan in the FDA's agenda for a decentralized clinical trials guidance, as far as I'm aware one has not yet been published. There have been significant guidances published both by the EMA and by the FDA around the implementation of clinical trials during the COVID pandemic, which incorporate various technology pieces that support the DCT model. But again, I think one of the reasons why it's not easy to publish a well-defined guidance on this is that there are so many moving pieces in it.
I think it's the Danish regulatory agency that has actually published a guidance on decentralized clinical trials, and revised it as well. >>Right. Okay. We're pretty much out of time, but I wonder, Lorraine, if you could give us some final thoughts and bring us home: things that we should be watching, or how you see the future. >>Well, first of all, let me thank the panel. We really appreciate Greg from Lilly and Namita from IDC bringing their perspectives to this conversation, and I hope that the audience has enjoyed the discussion we've had around the future state of real-world data as well as DCTs. Some of the themes we've talked about: number one, I think we have a vision, and I think we have the right strategies, in terms of the future promise of real-world data in any number of different applications. We've certainly talked about the promise of DCTs to be more efficient and to get us closer to the patient. I think what we have to focus on is how we come together as an industry to really work through these very vexing operational issues, because those are always the things that hang us up, whether it's clinical research or later-stage applications of data. >>The healthcare system is still very fragmented, particularly in the US. It's still very state-based; different states can have different kinds of cultures and geographic delineations. So I think that figuring out a way to harmonize and bring all of the data together, and bring some of the models together, is what you need to look to us to do, both industry and consulting organizations such as IBM Watson Health. Through DTRA and other consortia and different bodies, I think we're all identifying what the challenges are in terms of making this a reality, and working systematically on those.
>>It's always a pleasure to work with such great panelists. Thank you, Lorraine Marshawn, Dr. Namita LeMay, and Greg Cunningham; we really appreciate your participation today and your insights. "The Next Three Years of Life Sciences Innovation: Precision Medicine, Advanced Clinical Data Management and Beyond" has been brought to you by IBM and theCUBE, a global leader in high-tech coverage. And while this discussion has concluded, the conversation continues, so please take a moment to answer a few questions about today's panel. On behalf of the entire IBM life sciences team and theCUBE, thanks for your time and your feedback. We'll see you next time.

Published Date : Dec 7 2021


Nick Volpe, Accenture and Kym Gully, Guardian Life | AWS Executive Summit 2021


 

>>And welcome back to theCUBE's coverage of the AWS Executive Summit at re:Invent 2021. I'm John Furrier, host of theCUBE. This segment is about surviving and thriving in the digital revolution that's happening, the digital transformation that's changing businesses. We've got two great guests here with Guardian Life: Nick Volpe, CIO of individual markets at Guardian Life, and Kym Gully, CTO of life and annuity at Accenture, which is obviously doing a lot of cutting-edge work, with Guardian changing the game. Nick, thanks for coming on. Kym, thanks for coming on. >>Thanks John. Good to be here. >>So before I get into the questions, I want to set the table a little bit. The pandemic has given everyone a mandate: the good projects are exposed, and the bad projects are exposed. Everyone can see what's happening, because the pandemic forced everyone to identify what's working, what's not working, and what to double down on. Innovation for customers is a big focus, but now, with the pandemic easing and us coming out of it, the world has changed. This is an opportunity for businesses, Nick, and it's something you guys are focused on. Can you take us through what Guardian Life is doing in this post-pandemic changeover as cloud goes to the next level? >>Yeah, thanks John. So the immediate need in the pandemic situation was about new business capability. For those familiar with insurance: traditionally, life insurance and disability underwriting is very in-person, with fluids, labs, and attending physician statements. And when March of 2020 broke, that all came to an abrupt halt. Doctors' offices were closed, and testing centers were either closed or inundated with COVID testing.
So we had to come up with some creative ways to digitize our new business: adapt the application and our new medical questionnaires, and also get creative on some of our underwriting standards, the limits and levels for when we needed fluids. We moved pretty quickly and were agile about decisions there, and we went from about a 40 to 50% adoption rate of our electronic applications to north of 98% across the board. >>In addition, we saw some opportunities for products and more capabilities beyond new business. So after we weathered the storm, we started taking a step back and, like you said, looking at what we were doing, having a start-stop-continue conversation internally to say: this digitization is the new norm, so how do we meet it from every angle, not just new business? And that's where we started to look at our policy administration systems, moving more to the cloud and leveraging the cloud to its fullest extent, versus just a lift and shift. >>Kym, I want to get your perspective. At Accenture, I've done a lot of interviews over the past, I think, 18 months, with lots of use cases in almost every vertical where you guys are almost like the firefighters who get called in to help out, because the cloud now is an enabler. How do you see the impact of the pandemic reverberating through? I mean, obviously you guys come to the table and bring things in; what's your perspective on this? >>So yeah, it's really interesting. I think the most interesting fact >>is that, you know, Nick raised such a strong area in our business, underwriting, and how we can expedite it. That's been on the table for a number of years, but the industry has been very slow, or reluctant, to embrace it.
And the pandemic became an enforcer of it, to be honest. A lot of companies were thinking about it prior, but that's it, they'd only think about it. I mean, even at Accenture we launched a huge three-year investment to get clients into cloud and digital transformation, but the pandemic just expedited everything. Now, the upside is that clients that were at a well-advanced stage of planning were easily able to adopt, but clients that weren't were really left behind. So we became very, very busy just supporting the clients that didn't have as much forethought as the likes of Guardian, et cetera. >>Nick, that brings up a good point; I want to get your reaction and see if you agree. I mean, people who didn't put a toe in the cloud, or just jump in the deep end, really got caught flat-footed when the pandemic hit, because they weren't prepared; people who were either ingratiated with the cloud, or had active projects or even full deployments in there, did well. What's your take on that? >>Yeah, the enablement we had, the gift we were given by starting our cloud journey in, I want to say, 2016 or '17, was that we had really moved to the cloud. I think we were the only insurer that had moved production load to the cloud at that point. Most insurers were putting their development environments, maybe even their test environments, there, but Guardian had a strategy of getting out of the data center and moving to a much more flexible, scalable architecture using the AWS cloud. So we completed our journey into the cloud by 2018 or '19, and we were at the point of really capitalizing versus moving. We were able to move very quickly and very nimbly when the pandemic hit, and in any digital situation we have that flexibility and capacity that AWS provides to really respond to our customers' needs.
So we were one of the more fortunate insurers: well into our cloud journey, and at the point of optimization versus the point of moving. >>So let's talk about the connection with Accenture's Life Insurance and Annuity Platform, ALIP I think the acronym is. Why was that relevant? What was that all about? >>Yeah, so I'll go first, and then Kym, you can jump in and see if you agree with me. >>I suspect I will. >>Right, John. Like I said, our new business focus was the original emergency when the pandemic hit. But as we went further into it and realized the mortality and morbidity and the needs and wants of our customers, which is a major focus for Guardian, having the client at the center of every conversation, we realized there was a real opportunity for product, and product continues to change. You had regulations like Section 7702 coming out, where you had to reprice the entire portfolio to be able to sell it by January 1st, 2022, and we realized our current policy admin systems were not matching the digital capabilities we had moved to the cloud. So we embarked on a very extensive RFP with Accenture and a few other vendors that came to the table to work with us.
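As a rough illustration of the kind of portfolio-wide repricing pass Nick alludes to, the sketch below applies a new guaranteed-rate assumption across a set of hypothetical products before a cutover date. Product names, rates, and the clamping rule are assumptions for illustration only; actual Section 7702 compliance is actuarial work, not a loop over a list:

```python
# Hypothetical sketch of a batch repricing pass: every in-market product
# gets a new rate assumption before a regulatory cutover date.
# Product names, rates, and the clamp rule are illustrative only.

from dataclasses import dataclass, replace


@dataclass(frozen=True)
class Product:
    name: str
    guaranteed_rate: float  # annual guaranteed interest assumption


def reprice(product: Product, new_floor: float) -> Product:
    """Return a copy of the product with its rate clamped to the new assumption."""
    return replace(product, guaranteed_rate=min(product.guaranteed_rate, new_floor))


portfolio = [
    Product("whole_life_a", 0.04),
    Product("universal_life_b", 0.03),
    Product("term_rider_c", 0.02),
]

# One pass over the whole book, the way a batch repricing job might run.
repriced = [reprice(p, new_floor=0.02) for p in portfolio]
rates = [p.guaranteed_rate for p in repriced]
print(rates)  # [0.02, 0.02, 0.02]
```

The operational point Nick is making is the scale of it: a hard regulatory date forces the entire portfolio through a pass like this, which is painful when the pricing logic lives on a legacy mainframe.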
We are reaping the benefits of having this new platform coming live very soon here. >>Well, before I get to Accenture's perspective, I want to ask you a quick follow-up on that, Nick, if you don't mind. Basically, talk us through it. I can see what's happening here: you get with Accenture, take advantage of what they've got going on, you get into the cloud, you start getting the efficiencies, you get the cultural change. What refactoring have you seen? What's your vision, I should say, around what's next? Because clearly there's a playbook: you get in the cloud, replatform, you get the cultural fit, you understand the personnel issues, how to tap the resources. Then you've got to look for innovation where you can start changing how you do things to refactor the business model.
Um, so it really was a matter of bringing our product innovation to our digital front end innovation that we've been working on for, you know, two or three years prior. >>I mean, this is the kind of the Amazon way, right? You decouple things, you decompose, you don't want to have a drag. And with containers, we're seeing companies look at existing legacy in a way that's different. Um, can you talk about how you guys look at that Nick and terminally? Because a lot of CEO's are saying, Hey, you know what? I can have the best of both worlds. I don't have to kill the old to bring in the new, but I can certainly modernize everything. What's your reaction to that? >>Yeah. And I think that's, that's our exact, that's our exact path forward, right? We don't, we don't feel like we need to boil the ocean. Right. We're going after the surgically for the things that we think are going to be most impactful to our customers, right? So legacy blocks of business that are sitting out there that are, you know, full, completely closed. They're not our concern. It's really hitching this new ALA capability to the next generation of products. The next generation of customer needs understanding data, data capture is very important. And right. So if you look at the mainframes and what we're living on now, it's all about the owner of the policy. You lose connection with the beneficiary or the insured, what these new platforms allowed us to do is really understand the household around the products that they're buying. Right. I know it sounds simple, but that data architecture, that data infrastructure on these newer platforms and in the cloud, you can turn it faster. >>You have scale to do more analysis, but you're also able to capture in a much cleaner way on the traditional systems. You're talking about what we call intimately the blob on the mainframe that has your name, your first name, your last name, your address, all in one free form field sitting in some database. 
That's very hard to work with. On these new platforms, given our need and our desire to be deeper in the client's lives, understanding their needs, ALIP coupled with AWS and our new business capabilities on the front end really puts together that true customer value chain. That's going to differentiate us. >>Okay. Kim, as CTO of ALIP, as you call it, the acronym for the service you have, this is a great example. I hate to use the word on-ramp because that sounds so old, right? But in a way, in vertical markets, you're seeing the power of the cloud because the data and the AI can be freed up, and customers can take advantage of all the heavy lifting being done by the platform, with Amazon's support and your expertise. This is a great use case of that, I think, and I think a future trend where development can be faster, value can come faster, and your customers don't have to build all the lower-level abstractions. Can you describe Accenture's relationship with your customers? Because this is a really great use case. >>Yeah, it is. You know, our philosophy is simple: let's not reinvent the wheel. With the cloud and the native services AWS provides, we want to focus on the business of what the system needs to do and not all the little side bits. We can get a great service that's fully managed, that has security patches and updates, so we can focus on the real deal. Like Nick said, focus on the business and not so much what's underneath it; that's my problem, and I'm focusing on that. And we work together in a nice gel. You've heard the relatively new terms no code and low code. It's funny, a modern system like ALIP has been that way for a number of years. Basically it means I don't want to make code changes; I just want to be able to configure it.
So now more people can have access to make changes, and we can even get to the point where the people sitting there dealing with the clients (that would be the ultimate) can innovate, come up with ideas, and try things because we've made it so simple. We're not there yet, but that's the ultimate goal. So, you know, no code and low code have been around for quite some time, and maybe we should take advantage of that, but I think we're missing one thing. As good as the platform is, with the cloud, moving to cloud-native services, using the built-in security that comes with all that, extending the function, and then being able to tap into the InsurTech, FinTech, and Internet of Things world and quickly adapt, I think the partnership is big. It's a very strong part of the exercise. You can have the product, but without the people working well together, it's a big challenge. >>You know, all programs have their idiosyncrasies, and there are a lot of challenges along the way. There's one really small, simple example I can use. I'd say Guardian is one of our industry's market leaders in how they approach security; they really do lead the way out there. They're very strict, very responsible, which is such a pleasure to see, but at the end of the day, you still need to run a business. Because we're a partnership, because we all have the same challenges and want to get to success, we were able to work together quite quickly. We planned out the right approach that maximized the security but also progressed the business. So I think it is the product, definitely, everything Nick said and elaborated on, but I'd like to point out that the partnership is a big part of making it a success. >>Yeah.
Great call out there, Nick. Let's get your reaction on that, because I want to get into the customer side of it. The enablement platform is kind of new; platforms have been around for a while, but the notion of buying tools and having platforms is now interesting because you have to take this low-code, no-code capability, and you've still got to code. I mean, there's some coding going on, but what it means is ease of use, composing, and being fast. Platforms are super important, and that requires real architecture and partnership. What's your reaction? >>Yeah, so I'll tie it all together between AWS and ALIP, right? Here's the beauty of it. We have something called Launchpad, where we're able to quickly stand up an ALIP instance for development capabilities because of our Amazon relationship. And to Kim's point, we have been successful: 85% or more of all the work we've done with ALIP is configuration versus code, and I'd actually venture to say 90%. That's extremely powerful when you think about speed to market and our need to be product innovative. So our developers, and even our analysts that sit on the business side, can come in and quickly stand up a development environment, start to play with actuarial calculations and new product features and functions, and then promote that to a higher-end development environment. You now have the perfect coupling of a new policy administration system that has the flexibility and configuration, with a cloud provider like Amazon and AWS that allows us to move quickly with environments. Whereas in days past, you'd have to have an architecture team come in and stand up the servers, and, I'm going way back, but buy the boxes, put the boxes in place, and wire them down. This combination available on AWS is really a new capability for Guardian that we're really excited about. >>I love that little comparison.
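The configuration-versus-code split Nick describes can be illustrated with a toy example. Assume a small quoting engine whose product rules live entirely in data, so an analyst can add a product variant without touching the engine; the product names, rates, and fields here are hypothetical, not ALIP's actual configuration model:

```python
# "Configuration over code": product rules are data, the engine is generic.
PRODUCT_CONFIG = {
    "term_life_20": {"base_rate_per_1000": 0.55, "min_face": 50_000},
    "whole_life":   {"base_rate_per_1000": 2.10, "min_face": 25_000},
}

def quote(product: str, face_amount: int) -> float:
    """Generic engine: looks up rules, never hard-codes a product."""
    cfg = PRODUCT_CONFIG[product]
    if face_amount < cfg["min_face"]:
        raise ValueError(f"{product} requires a face amount of at least {cfg['min_face']}")
    return round(face_amount / 1000 * cfg["base_rate_per_1000"], 2)

# Launching a new product variant is a data change only -- no engine change:
PRODUCT_CONFIG["term_life_10"] = {"base_rate_per_1000": 0.40, "min_face": 50_000}

print(quote("term_life_10", 100_000))  # 40.0
```

When 85 to 90 percent of the work takes this shape, adding or repricing a product becomes a configuration review rather than a development cycle, which is where the speed-to-market claim comes from.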
Let me quickly ask you, compared to the old way, give us an order of magnitude of the pain and timing involved versus what you just described: standing up something very quickly, getting value, and having people shift their intellectual capital into value activities versus undifferentiated heavy lifting. >>Yes, I'll give you real dates. We really engaged with Accenture on the ALIP program right before Thanksgiving of last year. We had all of our environments, dev, test, and UAT, stood up and running by the February-March timeframe on AWS. And we are about to launch our first product configuration onto the platform come November. So within a year, we've taken arguably decades of product innovation from our mainframes and built it onto the ALIP platform on the Amazon cloud. I don't know that you could do that in any other type of environment or partnership. >>It's amazing. That's just a great example to me of where cloud scale and real refactoring and business agility play out. So congratulations. I've got to ask you now on the customer side. You mentioned you guys love providing value to the customers. What is the impact on the customer, Guardian Life's customer? Can you share how you see that rendering itself in the marketplace? >>Yeah, so clearly AWS has rendered tons of value to the customer across the value stream, whether it be our new business capability, our underwriting capability, or our ability to process data and use their scale. It just goes on and on with AWS. But specifically around ALIP, the new API environment that we have, the connectivity we can now make with the new backend policy admin systems, has really brought us to a new level, whether it be repricing, product innovation, responding to claims capabilities, or responding to servicing capabilities that the customer may need.
You know, we're able to introduce more self-service. If you think about it, from the backend policy admin going forward to our client portal, we're able to expose more transactions to self-serve, so we minimize calls to the call center, minimize the frustration of hold times, and allow customers to come onto the portal and do more and interact more with their policies, because we're on this new, more modern cloud environment and a new, more modern policy admin. So we're delivering new capabilities to the customer from beginning to end by being on the cloud. >>Okay, final question. What's next for Guardian Life's journey with Accenture? What are your plans? What do you want to knock down in the next year? What's on your mind? What's next? >>So that's an easy question. We've had this roadmap planned since we first started talking to Accenture, at least I've had it in my head. We want off all of our policy admin systems for new business come end of 2025. We've got about four policy admin systems maintaining our different lines of business, individual disability, life insurance, and annuities, four systems that are kind of weighing us down a little bit. We have a glide path and a roadmap with Accenture as a partner to get off all of these for new business capability by end of 2024. And I'm being gracious to my teams when I say that; I'd like to go a little bit sooner. Then we begin to migrate the most important blocks of business, the ones that cause the most angst and concern with the executive leadership team, and then complete the product.
So I envision we continue to have this flywheel turning on migration, but then we begin another flywheel right next to it that says we're going to innovate on the new platform as well. Ultimately, John, if I could have my entire whole life block as it stands today on the new admin platform, and one or two new product innovations on the platform as well, by the third or fourth quarter of next year, that would be a success. >>Awesome. You guys have it all planned out. I love it, and I have such a passion for how technology powers business. This is such a great story for next gen, kind of where the modernization trend is today and where it's going. Thanks, Nick. Appreciate it, Kim. Thanks for coming on with Accenture. Nick, since I've got you here, I have to ask you another one. You guys are doing a lot of great work, and there are other CEOs out there going through this right now, wherever they are on the spectrum, including those who missed the cloud wave of getting in early. This notion of replatforming and then refactoring the business is a playbook we're seeing emerge. People can get the benefits of going to the cloud, certainly for efficiency, but now it opens up the aperture for different kinds of business models, with more data access and machine learning. This refactoring seems to be the new hot thing, where the best minds are saying, wow, we could do even more. What's your vision? What would you share with those folks out there, the CEOs? What should they be thinking? What's their approach? What advice would you give? >>Yeah, so a common mistake we make as CEOs is going for the white hot core first, right? We went the other way. We went for the newer digital assets. We went for the stuff that wasn't as concerning to the business should it fall over, should there be an outage, should there be anything, right?
So if you avoid the white hot core and start with your peripherals, the easier moves to the cloud, client portals, broker portals, beneficiary portals, simple AIX frames moving to the cloud and becoming cloud-native new builds, right? We started with all those peripheral pieces of the architecture, and we avoided the white hot core, because that's where you start to get those very difficult conversations: I don't know if I'm ready to move, and I don't see the obvious benefit of moving a dividend-generating policy admin system to the cloud. Why would I? When the proof is in the pudding, when you put the other things out there and prove you can be successful, the conversation to move your white hot core out to the platform, to leverage the cloud and new admin platforms, becomes much easier, because you've cut your teeth on something much less detrimental to the business should it fail. >>What's the other expression? Put water through the pipes, get some reps in, get the team ready through training, whatever metaphor you like. That's what you're essentially saying: get your sea legs, get practice. >>Exactly. Then go for the hard stuff, right? >>It's such a valid point, John. You know, we see a lot of different approaches across a lot of different companies, and the biggest challenge is the core; it's the biggest part. If you start with that, it can be the scariest part. I've seen companies trip up big time; it becomes such a bubble of spend, which really knocks you back for years, and you lose confidence in your strategy and everything else. And you're only as strong as your weakest link. So whether you do the outside first or the inside first, until the journey is complete, you're never going to maximize.
So it was a very different, new, and great approach they took, building a learning curve around the easiest stuff first. >>Yeah, well, that's a great point. One quick follow-up on that: talk about the impact on personnel, Kim and Nick, because there's a morale issue going on too. There's training, or I won't say training, re-skilling, and there's rigor. If you're refactoring, you are re-skilling and doing new things, and that has an impact on morale and confidence. You don't want to be working in the white hot core without confidence. >>Maybe I should go first, because it's Nick's story, so he probably wants to say a lot. What we see with a lot of insurance companies is that they grow through acquisition. They're very large companies grown over time by buying companies with businesses and systems and bringing them in, and they usually bring tenured staff. So getting that staff to the next generation is extremely important, because they know everything you've got today, but they're not as familiar with what's coming in the future. There is a transition, and people shouldn't feel threatened, but there is change, and people do need to adapt and evolve. It should be fun and interesting, but it is a challenge at that turnover point, who's controlling what, and then you get the concerns and the paranoia. So it is a true HR issue that you need to manage through. >>The final word here. Go for it. >>Yeah, John, I'll give you a story that I think will sum the whole thing up, about the excitement versus the contention we see here at Guardian. I have a 50-year veteran on my legacy platform team, and this person is so excited. They got themselves certified in Amazon and are now leading the charge to bring our mainframes onto ALIP, and they are one of the most essential people on the program.
And I've actually had Accenture tell me that if they had a person like this on every one of their engagements, someone not only knowledgeable about the legacy but so excited to move to the new, they don't think they'd ever have a failed implementation. So that's the kind of backing Guardian is putting behind this, right? We are absolutely focusing on re-skilling. We are not going to the market; we're giving everyone the opportunity, and we have an amazing take-up rate. And again, like I said, a 50-year veteran who probably could have retired 10 years ago is so excited, re-educated themselves, and is now a key part of this implementation. >>Hey, who wouldn't want to drive a Ferrari when you see it come in, right, versus dragging that trailer? Great story, Nick, thank you for coming on. Great insight. Kim, great stuff from Accenture, as always. A great story here, right at the heart of the real focus for all companies right now: surviving and thriving, coming out of the pandemic with a growth strategy and a business model powered by technology. So thanks for sharing the story. Appreciate it. >>Thanks John. Appreciate it. >>Okay, this is the Cube's coverage of the AWS Executive Summit at re:Invent 2021. I'm John Furrier, your host of the Cube. Thanks for watching.

Published Date : Nov 9 2021


The New Data Equation: Leveraging Cloud-Scale Data to Innovate in AI, CyberSecurity, & Life Sciences


 

>> Hi, I'm Natalie Ehrlich, and welcome to the AWS Startup Showcase presented by The Cube. We have an amazing lineup of great guests who will share their insights on the latest innovations and solutions for leveraging cloud-scale data in AI, security, and life sciences. And now we're joined by the co-founders and co-CEOs of The Cube, Dave Vellante and John Furrier. Thank you, gentlemen, for joining me. >> Hey Natalie. >> Hey Natalie. >> How are you doing? Hey John. >> Well, I'd love to get your insights here. Let's kick it off: what are you looking forward to? >> Dave, one of the things we've been doing on The Cube for 11 years is looking at the signal in the marketplace. I wanted to focus on this because AI is cutting across all industries, and we're seeing that with cybersecurity and life sciences. It's the first time we've had a life sciences track in the showcase, which is amazing because it shows the growth of cloud scale. So I'm super excited by that, and I think it's going to showcase some new business models. And of course there's the keynote with Ali Ghodsi, the CEO of Databricks, which is pushing a billion dollars in revenue, clear validation that startups can go from zero to a billion dollars in revenue. So that should be really interesting. And of course the top venture capitalists are coming in to talk about what the enterprise dynamics are all about. And what about you, Dave? >> You know, I thought it was an interesting mix and choice of startups when you think about AI, security, and healthcare, and I've been thinking about that. Healthcare is the perfect industry; it is ripe for disruption. If you think about healthcare, we all complain about how expensive it is and how opaque it is. There's a lot of discussion about whether everybody can have equal access, and certainly with COVID the staff is burned out.
There's a real divergence in the quality of healthcare, and it all results in patients not being happy. I mean, if you had to do an NPS score on patients, healthcare would be pretty low, John. So when I think about AI and security in the context of healthcare in the cloud, I ask questions like: when are machines going to be able to make better diagnoses than doctors? And that's starting; it's really being put into play today in an assistive role. But when you think about cheaper and more accurate image analysis, the overall patient experience and trust, personalized medicine, self-service, the remote medicine we've seen during the COVID pandemic, disease tracking, language translation, there are so many things where the cloud and data can help. And at the end of it, it's all about: how do I authenticate? How do I deal with privacy, personal information, and tamper resistance? That's where the security play comes in. So it's a very interesting mix of startups, and I'm really looking forward to hearing from them... >> You know, Natalie, Dave and I have talked with a lot of these companies, and to me the business model innovations are coming out of two factors. One is the pandemic: it's kind of coming to an end, and it accelerated things and really showed who had the right stuff, in my opinion. You were either on the wrong side or the right side of history when it comes to the pandemic, and as we come out of it we see clear growth in certain companies, the ones that adopted, let's say, cloud. The other factor is cloud scale. So the focus of these startup showcases is really on how startups can align with enterprise buyers and create newly refactored business models, going from a re-pivot or refactoring to more value.
And the other thing that's interesting is that the business model innovation isn't just for the good guys. If you look at ransomware, for instance, the business model of the hackers has become amazingly effective too. They're killing it in terms of revenue; they have their own well-funded machines for extorting cash from companies. So there are a lot of security issues around the business model as well. To me, with business model innovation, cloud-scale tech, and the pandemic as a forcing function, you've seen a lot of new kinds of decision-making in enterprises. You're seeing how enterprise buyers are changing their decision criteria, and frankly their existing suppliers. If you're an old-guard supplier, you're potentially out, because if you didn't deliver during the pandemic, that's the issue everyone's talking about. It's not publicized in the press very much, but it is actually happening. >> Well, thank you both very much for joining me to kick off our AWS Startup Showcase. Now we're going to go to our very special guest, Ali Ghodsi, and John Furrier will sit with him for a fireside chat, and Dave and I will see you on the other side. >> Okay, Ali, great to see you. Thanks for coming on our AWS Startup Showcase, our second edition, second batch, season two, whatever we want to call it; it's our second version of this new series where we feature the hottest startups coming out of the AWS ecosystem. And you're one of them; I've been there. But you're not a startup anymore; you're here pushing serious success on the revenue side of the company. Congratulations, and great to see you. >> Likewise. Thank you so much, good to see you again. >> You know, I remember the first time we chatted on The Cube: you weren't really doing much software revenue, you were really talking about the new revolution in data, and you were all in on cloud.
And I will say that from day one, you were always adamant that it was cloud scale before anyone was really talking about it, and at that time the industry was on premises with Hadoop and those kinds of things. You saw that early. I remember that conversation, and boy, that bet paid off great. So congratulations. >> Thank you so much. >> So I've got to ask you to jump right in. Enterprises are making decisions differently now, and you are an example of a company that has gone from literally zero software sales to pushing a billion dollars, as it's being reported. Certainly the success of Databricks has been written about, but what's not written about is how you aligned with the changing criteria of the enterprise customer. Take us through that, because these companies here are doing the same kind of alignment, and enterprises want to change; they want to be on the right side of history. What's the success formula? >> Yeah. I mean, basically what we always did was look a few years out: how can we help these enterprises future-proof what they're trying to achieve? They have 30 years of legacy software and, you know, baggage, and they have compliance and regulations. How do we help them move to the future? So we tried to identify the kinds of secular trends that you may see only a little bit right now; cloud was one of them, but it keeps growing. We identified those, and there were three or four of them that we latched onto. And then every year that passes, we're a little bit more right, because it's a secular trend in the market. Eventually, it becomes a force that you can't fight anymore.
It's a lonely job at the top, I know that, but you've made some good calls. What were some of the key moments that you can point to, where you were like, okay, the wave is coming in now, we'd better get on it? What were some of those key decisions? 'Cause a lot of these startups want to be in your position, and a lot of buyers want to take advantage of the technology that's coming. They've got to figure it out. What were some of those key inflection points for you? >> So if you're just listening to what everybody's saying, you're going to miss those trends. Then you're just going with the stream. So, John, you mentioned the cloud. Cloud was a thing at the time; we thought it was going to be the thing that takes over everything. Today it's actually multi-cloud. So multi-cloud is a thing. More and more people are thinking, wow, I'm paying a lot to the cloud vendors, do I want to buy more from them or do I want to have some optionality? So that's one. Two, open. They're worried about lock-in; you know, lock-in has happened for many, many decades. So they want open architectures, open source, open standards. So that's the second one that we bet on. The third one, which, you know, initially wasn't sort of super obvious, was AI and machine learning. Now it's super obvious, everybody's talking about it. But when we started, it was kind of called artificial intelligence and referred to robotics, and machine learning wasn't a term that people really knew about. Today, it's sort of, everybody's doing machine learning and AI. So betting on those future trends, those secular trends as we call them, is super critical. >> And one of the things that I want to get your thoughts on is this idea of re-platforming versus refactoring. You see a lot being talked about in some of these discussions. What does that even mean? People are trying to figure that out. Re-platforming, I get: the cloud scale.
But as you look at the cloud benefits, what do you say to customers out there and enterprises that are trying to use the benefits of the cloud, say data, for instance? In the middle of all this, how could they be thinking about refactoring? And how can they make a better selection of suppliers? I mean, you know, it used to be an RFP: you deliver these speeds and feeds and you get selected. Now I think there's a little bit different science and methodology behind it. What are your thoughts on this refactoring? As a buyer, what do I have to do? >> Well, I mean, let's start with what you said, RFPs and so on. Times have changed. Back in the day, you had to kind of sign up for something and then much later you'd get it. So you had to go through this arduous process. In the cloud, with pay-as-you-go models, elasticity, and so on, you can kind of try your way to it. You can try before you buy. And you can use more and more. You can grow gradually; you don't need to go all in and, you know, commit to $50 million only to find out six months later that, wow, this stuff is shelfware, it doesn't work. So that's one thing that has changed, and it's beneficial. But the second thing is, don't just mimic what you had on prem in the cloud. That's what this refactoring is about. If you had, you know, a Hadoop data lake, now you're just going to have an S3 data lake. If you had an on-prem data warehouse, now you're just going to have a cloud data warehouse. You're just repeating what you did on prem in the cloud. Architect for the future. And you know, for us, the most important thing that we say is that this lakehouse paradigm is a cloud-native way of organizing your data. That's different from how you would do things on premises. So think through what's the right way of doing it in the cloud. Don't just try to copy-paste what you had on premises into the cloud. >> It's interesting, one of the things that we're observing, and I'd love to get your reaction to this.
Dave Vellante and I have been reporting on it, is that two personas in the enterprise are changing their organization. One is what I call IT ops, where there's an SRE role developing. And the data teams are being dismantled and kind of sprinkled into other teams; it's this notion of data pipelining being part of workflows, not just the department. Are you seeing organizational shifts in how people are organizing their resources, their human resources, to take advantage of, say, the data problems that need to be solved with machine learning and whatnot at cloud scale? >> Yeah, absolutely. So you're right. SRE became a thing, lots of DevOps people. It was because when the cloud vendors launched their infrastructure as a service, to stitch all these things together and get it all working you needed a lot of DevOps people. But now things are maturing. So, you know, with vendors like Databricks and other multi-cloud vendors, you can actually get much higher level services, where you don't necessarily need lots and lots of DevOps people that are themselves trying to stitch together lots of services to make this work. So that's one trend. But secondly, you're seeing more data teams being sort of completely ubiquitous in these organizations. Before, it used to be, you have one data team, and then we'll have data and AI and we'll be done. It's a one-and-done. But that's not how it works. That's not how Google, Facebook, Twitter did it; they had data throughout the organization. Every BU was empowered. It's sales, it's marketing, it's finance, it's engineering. So how do you embed all those data teams and make them actually run fast? And you know, there's this concept of a data mesh, which is super important, where you can actually decentralize and enable all these teams to focus on their domains and run super fast. And that's really enabled by this lakehouse paradigm in the cloud that we're talking about. Where you're open, you're basing it on open standards.
You have flexibility in the data types and how they're going to store their data. So you kind of provide a lot of that flexibility, but at the same time, you have sort of centralized governance for it. So absolutely, things are changing in the market. >> Well, you're just the professor; the masterclass right here is amazing. Thanks for sharing that insight. You're always up to date, and that's why we have you on here. You're amazing, a great resource for the community. Ransomware is a huge problem; it's now the government's focus. We're being attacked and we don't know where it's coming from. These business models around cyber are expanding rapidly. There's real revenue behind it. There's a data problem. It's not just a security problem. So one of the themes in all of these startup showcases is that data is ubiquitous in the value propositions. One of them is ransomware. What are your thoughts on ransomware? Is it a data problem? Does cloud help? Some are saying that cloud's got better security with ransomware than, say, on premises. What's your vision of how this ransomware problem gets addressed, besides the government taking over? >> Yeah, that's a great question. Let me start by saying, you know, we're a data company, right? And if you say you're a data company, you might as well have said, we're a privacy company, right? It's like some people say, well, what do you think about privacy? Do you guys even do privacy? We're a data company. So yeah, we're a privacy company as well. You can't talk about data without talking about privacy, with every customer, with every enterprise. So that's obviously top of mind for us.
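The data mesh pattern Ali outlined a moment ago, domain teams owning and publishing their own datasets while a central catalog enforces shared governance rules, can be sketched in a few lines. This is a conceptual illustration with hypothetical names, not Databricks' actual API:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a data mesh: each domain team owns its datasets,
# while a central catalog applies governance rules and enables discovery.

@dataclass
class Dataset:
    name: str
    owner_domain: str
    pii: bool = False  # flagged so governance can gate publication

@dataclass
class MeshCatalog:
    datasets: dict = field(default_factory=dict)

    def publish(self, ds: Dataset):
        # Governance is applied centrally; ownership stays with the domain.
        if ds.pii and not ds.name.endswith("_masked"):
            raise ValueError(f"{ds.name}: PII datasets must be masked before publishing")
        self.datasets[ds.name] = ds

    def discover(self, domain: str):
        # Any team can discover what other domains have published.
        return [d.name for d in self.datasets.values() if d.owner_domain == domain]

catalog = MeshCatalog()
catalog.publish(Dataset("orders", "sales"))
catalog.publish(Dataset("customers_masked", "marketing", pii=True))
print(catalog.discover("sales"))  # ['orders']
```

The point of the sketch is the split of responsibilities: the domains decide what to publish and keep ownership, while the one shared rule (here, a toy PII check) lives in the catalog.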
I do think that in the cloud, security is much better because, you know, vendors like us are investing so many resources into security and making sure that we harden the infrastructure. And, you know, by actually having all of this infrastructure, we can monitor it, detect if an attack is happening, and immediately sort of stop it. That's different from when it's on prem, where you have kind of separated duties: the software vendor, which would have been us, doesn't really see what's happening in the data center. So, you know, there's an IT team that didn't develop the software that is responsible for the security. So I think things are much better now. I think we're much better set up. But of course, things like cryptocurrencies and so on are making it easier for people to sort of hide; they're decentralized networks. So, you know, the attackers are getting more and more sophisticated as well. So that's definitely something that's super important, super top of mind. We're all investing heavily in security and privacy because, you know, that's going to be super critical going forward. >> Yeah, we've got to move that red line, figure that out, and get more intelligence. The decentralized trend's not going away; it's going to be more of that, less of the centralized. But centralized does come into play with data. It's a mix; it's not mutually exclusive. And I'll get your thoughts on this. An architectural question: with, you know, 5G and the edge coming, Amazon's got Outposts and Wavelength, and you're seeing Mobile World Congress coming up this month. The focus on processing data at the edge is a huge issue, and enterprises are now going to be a commercial part of that. So architecture decisions are being made in enterprises right now, and this is a big issue. So you mentioned multi-cloud, so tools versus platforms. Now I'm an enterprise buyer and there's no more RFPs.
I've got all these new choices from startups and growing companies that are cloud native. I've got all kinds of new challenges and opportunities. How do I build my architecture so I don't foreclose a future opportunity? >> Yeah, as I said, look, you're actually right. Cloud is becoming more and more something that everybody's adopting, but at the same time, the edge is also more and more important, and the connectivity between those two, and making sure that you can really do that efficiently. My ask of enterprises, and I think this is top of mind for all the enterprise architects, is: choose open, because that way you can avoid locking yourself in. So that's one thing that's really, really important. In the past, you know, all these vendors that locked you in, and then you tried to move off of them, they were highly innovative back in the day. In the 80's and the 90's, they were the best companies. You gave them all your data and it was fantastic. But then, because you were locked in, they didn't need to innovate anymore. And you know, they focused on margins instead. And then over time, the innovation stopped, and now you were kind of locked in. So I think openness is really important. I think preserving optionality with multi-cloud, because we see the different clouds have different strengths and weaknesses, and it changes over time. Early on, AWS was the only game in town. Then Azure showed up with much better security, Active Directory, and so on. Now Google with AI capabilities. Which one's going to win, which one's going to be better? Actually, probably all three are going to be around. So having that optionality, that you can pick between the three. And then artificial intelligence. I think that's going to be the key to the future. You know, you asked about security earlier. That's how people detect zero-day attacks, right? You asked about the edge; same thing there, that's where the predictions are going to happen.
So make sure that you invest in AI and artificial intelligence very early on, because it's not something you can just bolt on later, and have a little data team somewhere and now you have AI and it's one and done. >> All right. Great insight. I've got to ask you, folks may or may not know, but you're a professor at Berkeley as well, and have done a lot of great work. That's where you kind of came out of when Databricks was formed. And Berkeley basically invented distributed computing back in the 80's. I remember, I was breaking in when Unix was proprietary, when software wasn't open; you actually had to deal under the table to get code. Now it's all open. The internet now is distributed computing, with interconnects happening everywhere. I mean, the internet didn't break during the pandemic, which proves the benefit of the internet. And that's a positive. But as you start seeing edge, it's essentially distributed computing. So I've got to ask you, from a computer science standpoint: what do you see as the key learnings, or connect the dots for how this distributed model will work? I see hybrid clearly; hybrid cloud is clearly the operating model. But if you take it to the next level of distributed computing, what are some of the key things that you look for in the next five years as this starts to be completely interoperable? Obviously, software is going to drive a lot of it. What's your vision on that? >> Yeah, I mean, you know, so Berkeley, you're right. For decades, you know, there was the NOW project 20, 30 years ago that basically became how we do things. There was a project very early on on how you do search, with Inktomi, that became how Google and everybody else does search today. So that work was super, super early, sometimes way too early. And that was actually the mistake: they were so early that people said that stuff doesn't work. And then 20 years later it was reinvented.
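Ali's earlier advice to choose open and preserve multi-cloud optionality amounts to coding against interfaces rather than vendors. A minimal sketch, with hypothetical names and an in-memory stand-in for any real backend (S3, GCS, Azure Blob, and so on):

```python
from abc import ABC, abstractmethod

# Sketch: application logic written against an open interface can swap
# cloud backends without rewriting, preserving optionality between clouds.

class ObjectStore(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes): ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    # Stand-in for any concrete backend; a real one would wrap a cloud SDK.
    def __init__(self):
        self._blobs = {}
    def put(self, key, data):
        self._blobs[key] = data
    def get(self, key):
        return self._blobs[key]

def archive_report(store: ObjectStore, name: str, body: bytes):
    # Depends only on the interface, not on any vendor's API.
    store.put(f"reports/{name}", body)

store = InMemoryStore()
archive_report(store, "q3.csv", b"revenue,42")
print(store.get("reports/q3.csv"))  # b'revenue,42'
```

Moving to another cloud then means writing one new `ObjectStore` subclass, not rewriting every caller; that is the "not a one-way door" property Ali keeps returning to.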
So I think in 2009, Berkeley published "Above the Clouds," saying the cloud is the future. At that time, most industry leaders said, that's just, you know, that doesn't work. Today, recently, they published a research paper called "Sky Computing." So sky computing is what you get above the clouds, right? We have the cloud as the future; the next level after that is the sky, the one on top of them. That's what multi-cloud is. So a lot of the research at Berkeley, you know, in the distributed systems lab, is about this. And we're excited about that. We're one of the sky computing vendors out there. So I think you're going to see much more innovation happening at the sky level than at the compute level, where you needed all those DevOps and SRE people to, like, you know, build everything manually themselves. >> I can just see the memes coming now, Ali. Skynet, Star Trek. You've got space too, by the way; space is another frontier that is seeing a lot of action going on, because now the surface area of data with satellites is huge. So again, I know you guys are doing a lot of business with folks in that vertical, where you're starting to see real-time data acquisition coming from these satellites. What's your take on the whole space as, not the final frontier, but certainly a new congested and contested space for data? >> Well, I mean, as a data vendor, we see a lot of, you know, alternative data sources coming in, and people are using machine learning and AI to get signal out of, you know, the massive amounts of imagery that's coming out of these satellites. That's actually pretty common in FinTech, which is a vertical for us. And also sort of in the public sector, lots and lots of satellite imagery data coming in. And these are massive volumes. I mean, it's like huge data sets, and it's super, super exciting what they can do.
Like, you know, extracting signal from the satellite imagery, and being able to handle that amount of data, is a challenge for all the companies that we work with. So we're excited about that too. I mean, definitely that's a trend that's going to continue. >> All right. I'm super excited for you. And thanks for coming on The Cube here for our keynote. I've got to ask you a final question. As you think about the future, I see your company has achieved great success in a very short time, and again, you guys have done the work. I've been following your company, as you know; we've been breaking the Databricks story for a long time. I've been excited by it, but now, what's changed? You've got to start thinking about the next 20 miles when you look at, you know, the sky computing; you're thinking about these new architectures. As the CEO, your job is, one, not to run out of money, which you don't have to worry about anymore, so hiring. And then, you've got to figure out that next 20 miles as a company. What's going on in your mind? Take us through your mindset of what's next. And what do you see out in that landscape? >> Yeah, so what I mentioned around sky computing, optionality around multi-cloud: you're going to see a lot of capabilities around that. Like, how do you get multi-cloud disaster recovery? How do you leverage the best of all the clouds while at the same time not having to just pick one? So there's a lot of innovation there that, you know, we haven't announced yet, but you're going to see a lot of it over the next many years. Things that you can do when you have optionality across the different clouds. And the second thing that's really exciting for us is bringing AI to the masses. Democratizing data and AI. So how can you actually apply machine learning to machine learning? How can you automate machine learning? Today machine learning is still quite complicated, and it's pretty advanced.
It's not going to be that way 10 years from now. It's going to be very simple. Everybody's going to have it at their fingertips. So how do we apply machine learning to machine learning? It's called AutoML: automatic, you know, machine learning. So that's an area, and that's not something that's done yet, right? But the goal is to eventually be able to automate away the whole machine learning engineer and the machine learning data scientist altogether. >> You know, what's really fun in talking with you is that, you know, for years we've been talking about this inside the ropes, inside the industry, around the future. Now people are starting to get some visibility; the pandemic forced that. You're seeing the bad projects being exposed. It's like the tide pulled out and you see all the scabs, and the bad projects that were justified by old guard technologies. If you got it right, you're on a good wave. And this is clearly what we're seeing, and you guys are an example of that. So as enterprises realize this, that they're going to have to double down on the right projects and probably trash the bad projects, new criteria: how should people be thinking about buying? Because again, we talked about the RFP before. I want to kind of circle back, because this is something that people are trying to figure out. You're seeing, you know, organic, freemium models as cloud scale becomes the advantage, and the lock-in, frankly, seems to be the value proposition. The more value you provide, the more lock-in you get. Which sounds like the way it should be, versus proprietary, you know, protocols. The protocol is value. How should enterprises organize their teams? Is it end-to-end workflows? And how should they evaluate the criteria for these technologies that they want to buy? >> Yeah, that's a great question. It's very simple: try to future-proof your decision-making. Make sure that whatever you're doing is not locking you in.
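The AutoML idea Ali describes above, automating the model-selection work a data scientist would otherwise do by hand, can be illustrated with a toy example: try several candidate models and keep whichever scores best on held-out data. This is a bare-bones sketch of the principle, not any particular AutoML library:

```python
# Toy AutoML sketch: automatically pick the best of several candidate
# models by held-out validation error, with no human in the loop.

def fit_mean(xs, ys):
    # Baseline model: always predict the training mean.
    m = sum(ys) / len(ys)
    return lambda x: m

def fit_linear(xs, ys):
    # Simple least-squares line fit.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return lambda x: a + b * x

def auto_select(candidates, train, valid):
    # Fit each candidate on the training split, score on the validation
    # split, and return the candidate with the lowest mean squared error.
    def mse(model, data):
        return sum((model(x) - y) ** 2 for x, y in data) / len(data)
    return min(candidates, key=lambda fit: mse(fit(*zip(*train)), valid))

train = [(0, 0.1), (1, 1.0), (2, 2.1), (3, 2.9)]
valid = [(4, 4.0), (5, 5.1)]
best = auto_select([fit_mean, fit_linear], train, valid)
print(best.__name__)  # fit_linear: the data is nearly linear
```

Real AutoML systems search over far larger spaces (model families, hyperparameters, feature pipelines), but the loop is the same shape: candidates in, validation score out, best one kept.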
So whatever decision you're making, what if the world changes in five years? Make sure that if you're making a mistake now, it's not going to come back to bite you five years later. So how do you do that? Well, open source is great. If you're leveraging open source, you can try it out already. You don't even need to talk to any vendor. Your teams can already download it, try it out, and get some value out of it. If you're in the cloud, with these pay-as-you-go models, you don't have to do a big RFP and commit big. You can try it, pay the vendor as you go, $10, $15. It doesn't need to be a million-dollar contract; slowly grow as it's providing value. And then make sure that you're not just locking yourself into one cloud or, you know, one particular vendor. As much as possible, preserve your optionality, because then it's not a one-way door. If it turns out later you want to do something else, you can, you know, pick other things as well. You're not locked in. So that's what I would say. Keep it top of mind that you're not locking yourself into a particular decision today that you might regret in five years. >> I really appreciate you coming on and sharing your insights with our community and The Cube. And as always, great to see you. I really enjoy your Clubhouse talks, and I really appreciate how you give back to the community. And I want to thank you for coming on and taking the time with us today. >> Thanks, John, always appreciate talking to you. >> Okay, Ali Ghodsi, CEO of Databricks, a success story that proves the validation of cloud scale, open, and creating value. Value is the new lock-in. So Natalie, back to you for continuing coverage. >> That was a terrific interview, John, but I'd love to get Dave's insights first. What were your takeaways, Dave?
>> Well, if we had more time I'd tell you how Databricks got to where they are today, but I'll say this: the most important thing to me that Ali said was that he conveyed a very clear understanding of what data companies are doing right and getting right. He talked about four things. There's not one data team, there's many data teams. And he talked about data being decentralized, and data has to have context, and that context lives in the business. He said, look, think about it: the way that the data companies get it right, they put data teams in sales and marketing and finance and engineering. They all have their own data and data teams. And he referred to that as a data mesh. That's a term that Zhamak Dehghani coined, and the data warehouse or the data lake is merely a node in that global mesh. The mesh is discoverable, he talked about federated governance, and Databricks, they're breaking the model of shoving everything into a single repository and trying to make that the so-called single version of the truth. Rather, what they're doing, which is right on, is putting data in the hands of the business owners. And that's what true data companies do. And the last thing he talked about was sky computing, which I loved. It's that future layer, we talked about multi-cloud a lot, that abstracts the underlying complexity of the technical details of the cloud and creates additional value on top. I always say that the cloud players like Amazon have given the world the gift of the 100 billion dollars a year they spend in CapEx. Thank you. Now we're going to innovate on top of it. Yeah. And I think the refactoring... >> Go ahead, John. >> That was great insight and I totally agree. The refactoring piece too was key; he brought that home. But to me, with Databricks and what Ali shared there, he's been open and has shared a lot of his insights with the community. But what he's not saying, 'cause he's humble and polite, is that they cracked the code on the enterprise, Dave.
And to Dave's point, that's exactly why they did it: they saw an opportunity to make it easier. At that time Hadoop was the rage, and they just made it easier. They were smart, they made good bets, they had a good formula, and they cracked the code with the enterprise. They brought it in and they brought value. And see, that's the key to the cloud, as Dave pointed out. You replatform with the cloud, then you refactor. And I think he pointed out the multi-cloud, and that really kind of teases out the whole future and landscape, which is essentially distributed computing. And I think, you know, companies are starting to figure that out with hybrid, and this on-premises, and now the super edge, I call it, with 5G coming. So it's just pretty incredible. >> Yeah. Databricks' IPO is coming, and people should know, I mean, they created Spark, as you know, John, and what everybody thought they were going to do is mimic Red Hat and sell subscriptions and support. They didn't; they developed a managed service, and they embedded AI tools to simplify data science. So to your point, enterprises could buy instead of build. We know this: enterprises will spend money to make things simpler. They don't have the resources, and so what they got right was really embedding that, building a managed service, not mimicking the Red Hat model, but actually creating a new value layer there. And that's a big part of their success. >> If I could just add one thing, Natalie: what Dave is saying is really right on. And if we go to the other side of the equation, the enterprise buyer, it used to be that you had to be a known company, get PR, fill out RFPs, and meet all the speeds and feeds. It's like going to the airport and getting a swab test, a COVID test, all kinds of mechanisms to block you and filter you. Most of the biggest success stories that have created the most value for enterprises have been the companies that nobody understood.
And Andy Jassy's famous quote that, you know, being misunderstood is actually a good thing. Databricks was very misunderstood at the beginning, and no one really knew who they were, but they did it right. And so for the enterprise buyers out there, don't be afraid to test the startups, because you know the next Databricks is out there. And I think that's where I see the psychology changing from the old IT buyers, Dave. It's like, okay, let's test this company. And there are plenty of ways to do that. He illuminated those: freemium, small pilots, you don't need to go in on these big things. So I think that is going to be a shift in how companies are going to evaluate startups. >> Yeah. Think about it this way. Why should the large banks and insurance companies and big manufacturers and pharma companies and governments burn resources managing containers and figuring out data science tools if they can just tap into solutions like Databricks, which is an AI platform in the cloud, and let the experts manage all that stuff? Think about how much money and time that saves enterprises. >> Yeah, I mean, we've got 15 companies we're showcasing in this batch, this season, if you want to call it that, or episode, whatever we're going to call it. They're awesome, right? And the next 15 will be the same. And these companies could be the next billion-dollar revenue generators, because the cloud enables that today. I think that's the exciting part. >> Well, thank you both so much for these insights. Really appreciate it. The AWS startup showcase highlights the innovation that helps startups succeed. And no one knows that better than our very next guest, Jeff Barr. Welcome to the show, and I will send this interview now to Dave and John and see you in just a bit. >> Okay, hey Jeff, great to see you. Thanks for coming on again. >> Great to be back. >> So this is a regular community segment with Jeff Barr, who's a legend in the industry. Everyone knows your name. Everyone knows that.
Congratulations on your recent blog posts, which we have been reading. Tons of news. I want to get your update, because 5G has been all over the news and Mobile World Congress is right around the corner. I know Bill Vass gave a keynote out there, a virtual keynote. There's a lot of Amazon discussion around the edge with Wavelength. Specifically, this is the Outposts piece. And I know there is news I want to get to, but top of mind is that there's massive Amazon expansion and the cloud is going to the edge; it's here. What's up with Wavelength? Take us through, I call it the power edge, the super edge. >> Well, I'm really excited about this, mostly because it gives a lot more choice and flexibility and options to our customers. This idea of Wavelength we announced quite some time ago, at least quite some time ago if we think in cloud years. We announced that we would be working with 5G providers all over the world to basically put AWS in the telecom providers' data centers or telecom centers, so that as their customers build apps, those apps can take advantage of the low latency, the high bandwidth, the reliability of 5G, and be able to get to compute and storage services that are incredibly close, geographically and latency-wise. That is just going to give customers this new power to say, well, what are the cool things we can build? >> Do you see any correlation between Wavelength and some of the early Amazon services? Because to me, my gut feels like there's so much headroom there. I mean, I was just riffing on the notion of low-latency packets. I mean, just think about the applications: gaming and VR, and metaverse kind of cool stuff like that, where having the edge gives you that much power. It just feels like a new, it feels like a new AWS. I mean, what's your take? You've seen the evolution and the growth of a lot of the key services, like EC2 and S3. >> So welcome to my life.
And so to me, the way I always think about this is, it's like when I go to a home improvement store and I wander through the aisles, often with no particular thing that I actually need, but I just go there and say, wow, they've got this, and they've got this, and they've got this other interesting thing. And I just let my creativity run wild. And instead of trying to solve a problem, I'm saying, well, if I had these different parts, what could I actually build with them? And I really think that, with this breadth of different services and locations and options and communication technologies, I suspect a lot of our customers and customers-to-be are in this same mode, where they're saying, I've got all this awesomeness at my fingertips, what might I be able to do with it? >> It reminds me of when Fry's was around in Palo Alto. That store is no longer here, but back in the day it was good. You'd go in and just kind of spend hours, and then the next thing you know, you've built a computer. Like, what? I just came in here to get some cables, and now I've got a motherboard. >> I clearly remember Fry's, and before that there was the Weird Stuff Warehouse, another really cool place to hang out, if you remember that. >> Yeah, I do. >> I wonder if I could jump in. You guys are talking about the edge, and Jeff, I wanted to ask you about something that I think people are starting to really understand and appreciate: what you did with the Annapurna acquisition, what you did with Nitro and Graviton, really driving costs down and driving performance up. I mean, there's like a compute renaissance. And I wonder if you could talk about the importance of that at the edge, because it's got to be low power, it has to be low cost. You've got to be doing processing at the edge. What's your take on how that's evolving?
>> Certainly. So you're totally right that we started working with, and then ultimately acquired, Annapurna Labs in Israel a couple of years ago. I've worked directly with those folks, and it's really awesome to see what they've been able to do. Just really saying, let's look at all of these different aspects of building the cloud that were once effectively somewhat software intensive, and say, where does it make sense to actually design, build, fabricate, and deploy custom silicon? So from booting up the system, to doing all kinds of additional security checks, to running local IO devices, running NVMe as fast as possible to support EBS. Each of those things has been a contributing factor to not just the power of the hardware itself, but what I'm seeing, and have seen for the last probably two or three years at this point, is that the pace of innovation on instance types just continues to get faster and faster. And it's not just cranking out new instance types because we can; it's because our awesomely diverse base of customers keeps coming to us and saying, well, we're happy with what we have so far, but here's this really interesting new use case. And we need a different ratio of memory to CPU, or we need more cores based on the amount of memory, or we need a lot of IO bandwidth. And having that Nitro as the base lets us really, I don't want to say plug and play, 'cause I haven't actually built this myself, but it seems like they can actually put the different elements together very, very quickly and then come up with new instance types where our customers just say, yeah, that's exactly what I asked for, and be able to do this entire range, from micro and nano sized all the way up to incredibly large. To me, when we talk about terabytes of memory that are actually just RAM, that's just an inconceivably large number by the standards of where I started out in my career.
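The memory-to-CPU ratio trade-off Jeff describes can be made concrete with a toy selection rule. The catalog below uses made-up numbers and names, purely for illustration, not actual AWS instance specs:

```python
# Illustrative sketch: pick an instance family whose memory-to-vCPU ratio
# best matches a workload's needs. Catalog values are hypothetical.

CATALOG = {
    "compute-opt": {"vcpus": 16, "mem_gib": 32},   # 2 GiB per vCPU
    "general":     {"vcpus": 16, "mem_gib": 64},   # 4 GiB per vCPU
    "memory-opt":  {"vcpus": 16, "mem_gib": 128},  # 8 GiB per vCPU
}

def pick_instance(target_gib_per_vcpu):
    # Choose the family whose ratio is closest to the workload's target.
    return min(
        CATALOG,
        key=lambda name: abs(
            CATALOG[name]["mem_gib"] / CATALOG[name]["vcpus"]
            - target_gib_per_vcpu))

print(pick_instance(2))  # compute-opt
print(pick_instance(7))  # memory-opt (8 GiB/vCPU is the closest ratio)
```

Real fleets expose dozens of such ratios, which is exactly the "different ratio of memory to CPU" request Jeff says customers keep bringing; the selection logic stays this simple even as the catalog grows.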
So it's all about putting this power in customers' hands. >> You used the term plug and play, but it does give you that, Nitro gives you that optionality. And the other thing that to me is really exciting is the way in which ISVs are writing to whatever's underneath. So you're making that transparent to the users, so I can choose as a customer the best price performance for my workload, and that's just going to grow that ISV portfolio. >> I think it's really important to be accurate and detailed and as thorough as possible as we launch each one of these new instance types: what kind of processor is in there, what clock speed does it run at, how much memory do we have, what are the ins and outs, and is it Intel or Arm or AMD based? It's such an interesting contrast to me. I can still remember back in the very, very early days, going back almost 15 years at this point, and effectively everybody said, well, not everybody. A few people looked and said, yeah, we kind of get the value here. Some people said, this just sounds like a bunch of generic hardware, just kind of generic hardware in a rack. And even back then it was something that we were very careful to design and optimize for use cases. But this idea that it is generic is so, so, so incredibly inaccurate that I think people are now getting this. And it's fine-tuned, not just for the cloud, but for very specific kinds of workloads and use cases. >> And you guys have announced, obviously, the performance improvements: Lambda is getting faster, you've got per-second billing on Windows and SQL Server on EC2. So I mean, obviously everyone kind of gets that; that's been your DNA, keep making it faster, cheaper, better, easier to use. But the other area I want to get your thoughts on, because this is also more on the footprint side, is the regions and local zones.
So you've got more region news. Take us through the update on the expansion of the AWS footprint, because a startup can come in, and these 15 companies that are here, they're global with AWS, right? So this is a major benefit for customers around the world. And you know, Ali from Databricks mentioned privacy; everyone's a privacy company now. So that's a huge issue. Take us through the news on the regions. >> Sure, so the two most recent regions that we announced are in the UAE and in Israel. And we generally like to pre-announce these anywhere from six months to two years ahead of time, because we do know that customers want to make longer-term plans for where they can do their computing and where they can store their data. I think at this point we now have seven regions under construction. And again, it's all about customer choice. Sometimes it's because they have very specific reasons where, based on local laws or national laws, they must compute and store within a particular geographic area. Other times we say, well, a lot of our customers are in this part of the world, why don't we pick a region that is as close to that part of the world as possible? And one really important thing that I always like to remind our customers and my audience of is: anything that you choose to put in a region stays in that region unless you very explicitly take an action that says, I'd like to replicate it somewhere else. So if someone says, I want to store data in the US, or I want to store it in Frankfurt, or in Sao Paulo, or in Tokyo or Osaka, they get to make that very specific choice. We give them a lot of tools to help copy and replicate and do cross-region operations of various sorts. But at the heart of it, the customer gets to choose those locations.
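The residency guarantee Jeff describes, that an object stays in the region it was written to unless the customer explicitly replicates it, can be modeled conceptually. This is not an AWS API, just a sketch of the behavior:

```python
# Conceptual illustration (not an AWS API): an object written to one region
# stays there; a cross-region copy happens only via an explicit replicate call.

class RegionStore:
    def __init__(self):
        # region name -> {object key -> object bytes}
        self._regions: dict[str, dict[str, bytes]] = {}

    def put(self, region: str, key: str, value: bytes) -> None:
        """Write an object into exactly one customer-chosen region."""
        self._regions.setdefault(region, {})[key] = value

    def regions_holding(self, key: str) -> set[str]:
        """Report every region that currently holds a copy of the object."""
        return {r for r, objs in self._regions.items() if key in objs}

    def replicate(self, key: str, src: str, dst: str) -> None:
        """Copy an object to another region, only on explicit request."""
        self._regions.setdefault(dst, {})[key] = self._regions[src][key]
```

The invariant the sketch captures is that `regions_holding` never grows on its own; only an explicit `put` or `replicate` adds a location.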
And in the early days, I think there was this weird sense that you'd put things in the cloud and they would just mysteriously propagate all over the world. That's never been true, and we're very, very clear on that. And I just always like to reinforce that point. >> That's great stuff, Jeff. Great to have you on again for a regular update here. Just for the folks watching who don't know Jeff: he's been blogging and sharing since Amazon's early days, the one-man media band for Amazon. Now he's got departments, he's got people doing videos. It's a media franchise in and of itself, but without your early days we wouldn't have gotten all the great news. We subscribe, we watch all the blog posts. It's essentially the flow coming out of AWS, which is just a tsunami of new announcements. Always great to read, a must-read. Jeff, thanks for coming on, really appreciate it. >> Thank you John, great to catch up as always. >> Jeff Barr with AWS again; follow his stuff. He's got a great audience and community. They talk back, they collaborate, and they're highly engaged. So check out Jeff's blog and his social presence. All right, Natalie, back to you for more coverage. >> Terrific. Well, did you guys know that Jeff took a three-week AWS road trip across 15 cities in America to meet with cloud computing enthusiasts? 5,500 miles he drove. Really incredible, I didn't realize that. Let's unpack that interview though. What stood out to you, John? >> I think Jeff Barr is an example of what I call a direct-to-audience business model. He's been doing it from the beginning, and I've been following his career. I remember back in the day when Amazon was started, he was always building stuff. He's a builder, he's classic. And he's been there from the beginning. At the beginning it was just the blog, and it became a huge audience. He was power-blogging so hard that it's now morphed into a team; he has support now, and he still does it.
It's basically the conduit for information coming out of Amazon. I think Jeff has single-handedly made Amazon so successful at the community developer level; that's where the startup action happened, and that got them going. And I think he deserves a lot of credit for the success of AWS. >> And Dave, how about you? What is your reaction? >> Well, I think, you know, everybody knows about the cloud and the CapEx-to-OpEx story and agility, and eliminating the undifferentiated heavy lifting and all that stuff. And one of the things that's often overlooked, which is why I'm excited to be part of this program, is the innovation. And the innovation comes from startups, and startups start in the cloud. And so I think that's part of the flywheel effect. You just don't see a lot of startups these days saying, okay, I'm going to do something that's outside of the cloud. There are some, but for the most part, if you're in software, you're starting in the cloud; it's so capital efficient. I think that's one thing. Throughout my career I've been obsessed with every part of the stack, whether it's close to the business process with the applications, and right now I'm really obsessed with the plumbing, which is why I was excited to talk about the Annapurna acquisition. Amazon bought Annapurna for a reported $350 million, maybe a little bit more, and that is an amazing acquisition. And the reason why that's so important is because Amazon is continuing to drive costs down and drive performance up, and in my opinion, leaving a lot of the traditional players in their dust, especially when it comes to power and cooling, the often overlooked things. And the other piece of the interview was that Amazon is actually getting ISVs to write to these new platforms so that you don't have to worry about whether the software runs on this chip or that chip, or x86 or Arm or whatever it is. It runs.
And so I can choose the best price performance. And that's where people misunderstand, you always say it, John: they're confused, they confuse the price of the cloud with the cost of the cloud. They ignore all the labor costs that are associated with it. And so there's a lot of discussion now about the cloud tax. I just think the pace is accelerating. The gap is not closing, it's widening. >> Look at the one question I asked him about Wavelength, and the follow-up where we riffed on it. You could see he lit up, he was beaming, because he said something interesting: it's not that there's a problem to solve, it's an opportunity. And he conveyed it, like I said, as walking through Fry's. You go into a store, and he's a builder, so he sees opportunity. And this comes back down to the Martin Casado paradox post he wrote about: do you optimize for CapEx or future revenue? And I think the tell sign is that the Wavelength edge piece is going to be so creative, and that's going to open up massive opportunities. I think that's the place to watch. That's the place I'm watching. And I think startups are going to come out of the woodwork, because that's where the action will be. And that's just Amazon at the edge, I mean, that's just cloud at the edge. I think that is going to be very effective. And that's a little tell sign; he revealed a lot there with that comment. >> Well, that's a to-be-continued conversation. >> Indeed. I would love to introduce our next guest. We actually have Soma on the line. He's the managing director at Madrona Venture Group. Thank you, Soma, very much for coming to our keynote program. >> Thank you, Natalie. It's great to be here and to have the opportunity to spend some time with you all. >> Well, you have a long, storied history in the enterprise.
How would you define the modern enterprise, also known as cloud scale? >> Yeah, so I would say, first of all, we've all heard this now for the last, say, 10 years or so: software is eating the world. Put it another way, we think about it like, hey, every enterprise is a software company first and foremost. And companies that truly internalize that, that truly think about that, and truly act that way, are going to continue running well, and those that don't internalize it are going to be left behind sooner rather than later. Right? And over the last few years you can take it to the next level and say: every enterprise is going through a digital transformation. So when you think about the world from that lens, a modern enterprise has to think, I am first and foremost a technology company. I may be in the business of making a car, or, you know, manufacturing paper, or manufacturing some healthcare products, or what have you. But technology and software is what is going to give me a unique, differentiated advantage that's going to let me do what I need to do for my customers in the best possible way [Indistinct]. So that level of focus, that level of execution, has to be there in a modern enterprise. The other thing is, every modern enterprise needs to think: I'm competing for talent, not with my peers in my industry anymore. I'm competing for technology talent and software talent with the top five technology companies in the world, whether it is Amazon or Facebook or Microsoft or Google, or what have you. Right? So you really have to have that mindset, and then everything flows from that. >> So I've got to ask you on the enterprise side again, you've seen many waves of innovation. You've been in the industry for many, many years.
The old way was, enterprises want the best proven product, and the startups want that lucrative contract, right? And to get that beachhead in. It used to be, and we addressed this in our earlier keynote with Ali, how it's changing: the buyers are changing, because the cloud has enabled this new kind of execution. I call it agile, call it what you want. Developers are driving modern applications, so for enterprises, the playbook is evolving. Right? We saw that with the pandemic: people had urgent needs, and they tried new stuff and it worked. The parachute opened, as they say. So how do you look at this as you look at the startups you're investing in and coaching? What's the playbook? What's the secret sauce of how to crack the enterprise code today? And if you're an enterprise buyer, what do I need to do? I want to be more agile. Is there a clear path? Is there a TSA PreCheck to let stuff go through faster? I mean, what is the modern playbook for buying and being a supplier? >> That's a fantastic question, John, because that playbook is changing even as we speak. A couple of key things to understand. First of all, decision-making inside an enterprise is getting more and more decentralized, particularly decisions around what technology and what solutions to use to be able to do what people need to do. That decision making is no longer all done in the CEO's office or the CTO's office. Developers are more and more, like you rightly said, at the center of the workflow and the decision-making process. So it behooves both the enterprises and the startups to really understand that. So what does it mean from a startup perspective? It means, in addition to thinking about, hey, do I go create an enterprise sales force, do I sell to the enterprise like I might have done in the past?
Is that the best way of moving forward, or should I be thinking about a product-led growth go-to-market initiative? You know, build a product that is easy to use, that is self-serve and really works, get the developers to start using it, see the value, and fall in love with the product, and then think about, hey, how do I translate that into a contract with the enterprise? Right? And more and more of what I'd call startups and technology companies focused on the developer audience are thinking about, how do I have a bottom-up go-to-market motion? And sometimes I may overlap that with the top-down enterprise sales motion that has been going on for many, many years or decades. But really, this product-led growth, bottom-up go-to-market motion is something that we are seeing on the rise. I would say more than half the startups that we come across today have that in some way, shape, or form. And so the enterprise also needs to understand this. The CIO or the CTO needs to know that decision-making is getting decentralized. I need to empower my engineers and my engineering managers and my engineering leaders to be able to make the right decision, and trust them. I'm going to give them some guardrails so that I don't find myself in a soup sometime down the road. But once I give them the guardrails, I'm going to enable the people who are closer to the problem to make the right decision. >> Well, Soma, what are some of the ways that startups can accelerate their enterprise penetration? >> I think that's another good question. First of all, you need to think about, hey, what are enterprises wanting to do? If you take two steps back and think about what the enterprise is really thinking: I'm going to be a software company, but I'm really manufacturing paper. What do I do?
Right? The core thing that most enterprises care about is: how do I better engage with my customers, how do I better serve my customers, and how do I do it in the most optimal way? At the end of the day, that's what most enterprises really care about. So startups need to understand: what are the problems that the enterprise is trying to solve? What kinds of tools, platform technologies, infrastructure support, and everything else do they need to be able to do what they need to do, and what only they can do, in the most optimal way? Right? So to the extent you are providing a tool or platform or some technology that is going to enable the enterprise to make progress on what they want to do, you're going to get more traction within the enterprise. In other words, stop thinking about technology and start thinking about the customer problem that they want to solve. And the more you anchor your company, and your conversation with the customer, around that, the more the enterprise is going to get excited about wanting to work with you. >> So I've got to ask you about the enterprise and developer equation, because CISOs and CXOs, depending on who you talk to, have that same answer: oh yeah, in the '90s and 2000s we throttled down, we were using the legacy developer tools, and cloud came, and then we had to rebuild, and we didn't really know what to do. So you're seeing a shift, and this has been going on for at least the past five to eight years: a lot more developers being hired. I mean, FinTech is clearly a vertical; they always had developers, and everyone had developers, but there's a fast ramp-up of developers now, and the role of open source has changed. Just look at the participation: they're not just consuming open source, open source is part of the business model for mainstream enterprises. How is this, first of all, do you agree?
And if so, how has this changed the course of enterprise human resource selection, and how enterprises are organized? What's your vision on that? >> Yeah. So as I mentioned earlier, John, in my mind the first thing is, and like you said, financial services has always been hiring technology people [Indistinct]. This is a five-year-old story, so bear with me; I'll tell you the story and then come back to the point. I was talking to the CIO of Goldman Sachs, and this was five years ago, when people were still asking, hey, is this cloud thing real? Is cloud going to take over the world? Am I really ready to put my data in the cloud? So there were a lot of questions and conversations going on. The CIO of Goldman Sachs told me two things that I remember to this day. One is, hey, we've got an internal edict, we made a decision that in the next five years everything in Goldman Sachs is going to be on the public cloud. And I literally jumped out of my chair and asked, how are you going to get there? And he laughed and said, it really doesn't matter whether we get there or not. We want to set the tone, set the direction for the organization: public cloud is here, public cloud is real, and we need to move as fast as we realistically can, thinking about all the financial regulations and security and privacy and all these things that we care about deeply. But given all of that, the world is going towards the public cloud, and we had better be on the leading edge as opposed to the lagging edge. And the second thing he said, when we were talking about, hey, how are you hiring engineers at Goldman Sachs: my team goes out to the top 20 schools in the US. And the people we really compete with, and he was saying this, hey, we don't compete with JP Morgan or Morgan Stanley, or pick any of your favorite financial institutions.
We really think about, hey, we want to get the best talent into Goldman Sachs out of these schools. And we really compete head to head with Google. We compete head to head with Microsoft. We compete head to head with Facebook. And we know that the caliber of people that we want to get is no different than what these companies want, if we want to continue being a successful, leading financial services player. That tells you what's going on. You also talked a little bit about, hey, open source is here to stay; what does that really mean? In my mind, and given my pedigree at Microsoft, I can tell you that we were not the first embracers of open source in this world. I'll say that right off the bat. But having said that, we did in turn come around and say, hey, this open source is real, this open source is going to be great; how can we embrace it and how can we participate? And if you fast forward to today, Microsoft is probably as good at open source as any other large company, I would say. Right? Including the work that the company has done in terms of acquiring GitHub and letting it stay true to its original promise of open source and community, I think. Right? I think Microsoft has come a long way. But the thing that all these enterprises need to think about is: you want your developers to have access to the latest and greatest tools, to the latest and greatest that software can provide, and you really don't want your engineers reinventing the wheel all the time. So if there is something available in the open source world, go ahead and think about whether it makes sense for you to use it. And likewise, if you think there is something you can contribute to the open source world, go ahead and do that.
So it's really a two-way, symbiotic relationship that enterprises need to have, and they need to enable their developers to want to have that symbiotic relationship. >> Soma, fantastic insights. Thank you so much for joining our keynote program. >> Thank you, Natalie, and thank you, John. It was always fun to chat with you guys. Thank you. >> Thank you. >> John, we would love to get your quick insight on that. >> Well, I think first of all, he's a prolific investor from Madrona Venture Group, which is well known in tech circles. They're in Seattle, which is the hub of what I call cloud city. You've got Amazon and Microsoft there. He'd been at Microsoft, and he knows the developer ecosystem. And the reason why I like his perspective is that he understands the value of having developers as a core competency. At Microsoft, that's their DNA. You look at Microsoft: their number one thing from day one, besides software, was developers. That was their army, the thousand centurions that won everything for them. That has shifted. And he brought up open source and .NET and how they've embraced Linux. Satya Nadella, before he became CEO, we interviewed him in theCUBE at an Accel Partners event at Stanford, and he was open before he was CEO. He was talking about opening up. They opened up a lot of their open source infrastructure projects to the Open Compute Foundation early. So they already had that going, and since that time the stock price of Microsoft has skyrocketed, because, as Ali said, open always wins. And I think that is what you see here. As an investor now, he's picking startups and investing in them. He's got to read the tea leaves. He's got to be on the right side of history. So he brings a great perspective, because he sees the old way and he understands the new way. That is the key for success we've seen in the enterprise and with the startups. The people who get the future and can create the value are going to win.
>> Yeah, really excellent point. And just really quickly, what do you think were some of our greatest hits in this hour of programming? >> Well, first of all, I'm really impressed that Ali took the time to come join us, because I know he's super busy. I think they're at a $28 billion valuation now, and they're pushing a billion dollars in revenue, GAAP revenue. And again, just a few short years ago they had zero software revenue. So of these 15 companies we're showcasing today, there's a next Databricks in there. They're all going to be successful; they already are successful, and they're all on this rocket-ship trajectory. Ali is smart. He's also got the advantage of being part of that Berkeley community, which is early on a lot of things. Being early means you're wrong a lot, but you're also right, and you're right big. So Berkeley, and Stanford obviously, are big research areas here in the Bay Area. He is smart, he's got a great team, and he's really open. So having him share his best practices, I thought that was a great highlight. Of course, Jeff Barr highlighting some of the insights that he brings, and honestly the perspective of a VC. And we're going to have Peter Wagner from Wing VC, who's a classic enterprise investor, super smart, so he'll add some insight. Of course, there's the community session with our influencers coming on at the end, as well as Katie Drucker, another Madrona person, who is going to talk about growth hacking and growth strategies. >> Terrific. Well, thank you so much for those insights, and thank you to everyone who is watching the first hour of our live coverage of the AWS Startup Showcase. For myself, Natalie Ehrlich, John Furrier, and Dave Vellante, we want to thank you very much for watching, and do stay tuned for more amazing content, as well as a special live segment that John Furrier is going to be hosting.
It takes place at 12:30 PM Pacific time, and it's called Cracking the Code: Lessons Learned on How Enterprise Buyers Evaluate New Startups. Don't go anywhere.

Published Date : Jun 24 2021



Rick Farnell, Protegrity | AWS Startup Showcase: The Next Big Thing in AI, Security, & Life Sciences


 

(gentle music) >> Welcome to today's session of the AWS Startup Showcase, The Next Big Thing in AI, Security, & Life Sciences. Today we're featuring Protegrity for the life sciences track. I'm your host for theCUBE, Natalie Ehrlich, and now we're joined by our guest, Rick Farnell, the CEO of Protegrity. Thank you so much for being with us. >> Great to be here. Thanks so much, Natalie, great to be on theCUBE. >> Yeah, great. So we're going to talk today about the ransomware game and how it has changed with kinetic data protection. The title of today's video segment makes a bold claim: how are kinetic data and ransomware connected?
So I think if you look at how data is used today, there's that kind of opposing forces where the business wants to use data at the speed of light to produce more machine learning, and more artificial intelligence, and predict where customers are going to be, and have wonderful services at their fingertips. But at the same time, they really want to protect their data, and sometimes those architectures can be at odds, and at Protegrity, we're really focusing on solving that problem. So free up your data to be used in artificial intelligence and machine learning, while making sure that it is absolutely bulletproof from some of these ransomware attacks. >> Yeah, I mean, you bring a really fascinating point that's really central to your business. Could you tell us more about how you're actually making that data worthless? I mean, that sounds really revolutionary. >> So, it sounds novel, right? To kind of make your data worthless in the wrong hands. And I think from a Protegrity perspective, our kind of policy and protection capability follows the individual piece of data no matter where it lives in the architecture. And we do a ton of work as the world does with Amazon Web Services, so kind of helping customers really blend their hybrid cloud strategies with their on-premise and their use of AWS, is something that we thrive at. So protecting that data, not just at rest or while it's in motion, but it's a continuous protection policy that we can basically preserve the privacy of the data but still keep it unique for use in downstream analytics and machine learning. >> Right, well, traditional security is rather stifling, so how can we fix this, and what are you doing to amend that? >> Well, I think if you look at cybersecurity, and we certainly play a big role in the cybersecurity world but like any industry, there are many layers. 
And traditional cybersecurity investment has been at the perimeter level, at the network level, keeping bad actors out, and once people do get through some of those fences, if your data is not protected at a fine-grained level, they have access to it. And from our standpoint, yes, we're the last line of defense, but at the same time, we partner with folks in the cybersecurity industry, with AWS, and with others in backup and recovery to give customers that level of protection, but still allow their kinetic data to be utilized in downstream analytics. >> Right, well, I'd love to hear more about the types of industries that you're helping, and specifically healthcare, obviously a really big subject for the year and probably now for years to come. How is this industry using kinetic protection at the moment? >> So certainly, as you mentioned, some of the most highly regulated industries are our sweet spot. So financial services, insurance, online retail, and healthcare, or any industry that has sensitive data and sensitive customer data. Think first name, last name, credit card information, national ID number, social security number, blood type, cancer type. That's all sensitive information that you as an organization want to protect. So in the healthcare space specifically, some of the largest healthcare organizations in the world rely on Protegrity to provide that level of protection, but at the same time give them the business flexibility to utilize that data. So one of our customers, one of the leaders in online prescriptions and an AWS customer, is able to deliver a wonderful service to all of their customers while maintaining protection. If you think about sharing data from your watch with your insurance provider, we have lots of customers that bridge that gap and have that personal data coming in to the insurance companies.
All the way to a use case in the future: looking at the pandemic, if you have to prove that you've been vaccinated, we're talking about some sensitive information, so you want to be able to show that information but still have the confidence that it's not going to be used for nefarious purposes. >> Right, and what is next for Protegrity? >> Well, continuing on our journey. We've been around for 17 years now, and in the last couple of years there's been an absolute renaissance in fine-grained data protection, or that kinetic data protection, and organizations are recognizing that continuing to protect your perimeter, your firewalls, your access points, your points of vulnerability to keep bad actors out, that's not going to go away anytime soon. But at the same time, they're recognizing that the data itself needs to be protected, with that balance of utilizing it downstream for analytic purposes, for machine learning, for artificial intelligence. Keeping the data of hundreds of millions, if not billions, of people safe: that's what we do. If you were to add up the customers of all of our customers (the largest banks, the largest insurance companies, the largest healthcare companies in the world), globally, we're protecting the private data of billions of human beings. And it doesn't just stop there. I think you asked a great question about the industry, and yes, insurance, healthcare, and retail, where there's a lot of sensitive data, can certainly be a focus point.
But in the IoT space, if you think about GPS location or geolocation, if you think about a device, and what it does, and the intelligence it has, and the decisions it makes on the fly, protecting data and keeping it safe is not just a personal thing. We're stepping into intellectual property and some of the most valuable assets that companies have, which is their decision-making on how they use data and how they deliver an experience, and I think that's why there's been such a renaissance, if you will, in the fine-grained data protection that we provide. >> Yeah, well, what is Protegrity's role now in future-proofing businesses against cyber attacks? I mean, you mentioned the ramifications and the impact they can have on businesses, but also on governments. I mean, obviously this is really critical. >> So there's a three-step approach, and this is something that we have certainly felt for a long, long time and work on with our customers. One is having that fine-grained data protection: tokenizing your data so that if someone were to get your data, it's worthless unless they have the ability to unlock every single individual piece of data. So that's number one, and that's what Protegrity provides. Number two is having a wonderful backup capability, rolling out an active-active setup (AWS being one of the major clouds in the world, where we deploy our software regularly and work with our customers) with multi-region, multi-capability active-active scenarios where, if something goes down or happens, you can bring that environment down and bring a new one up. And then third is malware detection, with the rest of the cyber world, to make sure that you rinse your architecture of some of those agents.
And when you look at it, ransomware actors take your data, they encrypt your data, and they force you to give them Bitcoin or whatnot, or they'll release some of your data. And if that data is rendered useless, that's one huge step in your discussions with these nefarious actors: you can release it, but there's nothing there, you're not going to see anything. And then second, if you have a wonderful backup capability, you wind down the environment that has been infiltrated, prove that the new environment is safe, get your production data rolling, and then wind it back up, and you're back in business. You don't have to notify your customers, you don't have to deal with the ransomware players. So it's really a three-step process, but ultimately it starts with protecting your data and tokenizing your data, and that's something that Protegrity does really, really well. >> So you're basically able to eliminate the financial impact of a breach? >> Honestly, we dramatically reduce the risk of customers being exposed to ransomware attacks, 100%. Now, tokenizing data and moving in that direction is not trivial: we are literally replacing production data with a token, then making sure that all downstream applications have the ability to utilize that, and that the analytic systems, machine learning systems, and artificial intelligence applications built downstream on that data have the ability to execute. But that is something that, from our patent portfolio and what we provide to our customers (again, some of the largest organizations in retail, financial services, banking, and healthcare), we've been doing for a long time. We're not just saying we can do this in version one of our product; we've been doing this for years, supporting the largest organizations with a 24-by-7 capability.
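The point about downstream applications still working on tokenized data rests on the tokens being deterministic and unique. A rough illustration with toy data, using a plain hash as a stand-in tokenizer (which Protegrity's is not):

```python
import hashlib
from collections import Counter

def tokenize(value: str) -> str:
    # Stand-in tokenizer: deterministic, so equal inputs get equal tokens.
    # A plain hash is NOT safe tokenization for low-entropy fields
    # (dictionary attacks); real schemes are keyed or vaulted.
    return hashlib.sha256(value.encode()).hexdigest()[:12]

raw = ["A+", "O-", "A+", "B+", "O-", "A+"]  # sensitive blood types
tokenized = [tokenize(v) for v in raw]

# Group-by analytics produce the same shape on tokenized data,
# with no plaintext values exposed to the analyst.
raw_counts = Counter(raw)
tok_counts = Counter(tokenized)
assert sorted(raw_counts.values()) == sorted(tok_counts.values())
```

Because equality is preserved, joins, group-bys, and model features keep working on the protected copy; only a detokenization step behind access control ever sees plaintext.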
>> Right, and tell us a bit about the competitive landscape. Where do you see your offering compared to your competitors? >> So, historically, call it an era ago, before cloud and hybrid cloud even became a thing, there were a handful of players that were acquired into much larger organizations; those organizations have been dusting off those acquired assets, and we're seeing them come back in. There are some new entrants into our space that have some protection mechanisms, whether it be encryption or anonymization, but unless you're doing fine-grained tokenization, you're not going to be able to let that data participate in the artificial intelligence world. So we see a range of competition there. And then I'd say probably the biggest competitor, Natalie, is customers not doing tokenization. They're saying, "No, we're okay, we'll continue protecting our firewall, we'll continue protecting our access points, we'll invest a little bit more in maybe some governance, but that fine-grained data protection, maybe it's not for us." And that is the big shift that's happening. You look at the beginning of this year with the SolarWinds attack and the vulnerability it exposed, where very large and important organizations found themselves affected, and the last few weeks, with all the ransomware attacks happening on meat processing plants and facilities, shutting down meat production, and on pipelines, stopping oil and gas. So we're seeing a complete shift in the types of organizations and the industries that need to protect their data. It's not just the healthcare organizations, or the banks, or the credit card companies; it is every single industry, every single size of company. >> Right, and I've got to ask you this question: what is your defining contribution to the future of cloud scale?
>> Well, ultimately we have a charge here at Protegrity: we feel like we protect the world's most sensitive data. And when we come into work every day, that's what every single employee thinks at Protegrity. We are standing behind billions of individuals who are customers of our customers, and that's a cultural thing for us, and we take it very seriously. We have maniacal customer support, supporting our biggest customers with a follow-the-sun, 24-by-7 global capability. So that's number one. So, I think our part in this is really helping to educate the world that there is a solution for this ransomware, and that some of these things don't have to happen. Now, naturally, with any solution there's going to be some investment, there are going to be some architecture changes, but with partnerships like AWS, and our partnership with pretty much every data provider, data storage provider, and data solution provider in the world, we want to provide fine-grained data protection: any data, in any system, on any platform. And that's our mission. >> Well, Rick Farnell, this has been a really fascinating conversation, thank you so much. The CEO of Protegrity, really great to have you on this program for the AWS Startup Showcase, talking about how the ransomware game has changed with kinetic data protection. Really appreciate it. Again, I'm your host Natalie Erlich, thank you again very much for watching. (light music)

Published Date : Jun 24 2021


Ariel Assaraf, Coralogix | AWS Startup Showcase: The Next Big Thing in AI, Security, & Life Sciences


 

(upbeat music) >> Hello and welcome to today's session of the AWS Startup Showcase, the next big thing in AI, Security and Life Sciences, featuring Coralogix for the AI track. I'm your host, John Furrier with theCUBE. We're joined by Ariel Assaraf, CEO of Coralogix. Ariel, great to see you, videoing in remotely from Tel Aviv. Thanks for coming on theCUBE. >> Thank you very much, John. Great to be here. >> So you guys are featured as a hot startup, the next big thing. And one of the things you guys do, which we've been covering for many years, is log analytics; from a data perspective, you decouple the analytics from the storage. This is a unique thing. Tell us about it. What's the story? >> Yeah. So what we've seen in the market is that, probably because of the great job that a lot of the earlier-generation products have done, more and more companies see the value in log data. What used to be a couple of rows that you add whenever you have something very important to say became a standard for documenting all communication between different components: infrastructure, network, monitoring, and the application layer, of course. And what happens is that data grows extremely fast. All data grows fast, but log data grows even faster. What we always say is that, for sure, data grows faster than revenue. So as fast as a company grows, its data is going to outpace that. And so we found ourselves thinking, how can we help companies still get the full coverage they want, without cherry-picking data or deciding exactly what they want to monitor and what they're taking a risk with, but still give them the real-time analysis they need to make sure they get the full insight suite for the entire data, wherever it comes from? And that's why we decided to decouple the analytics layer from storage.
So instead of ingesting the data, then indexing and storing it, and then analyzing the stored data, we analyze everything, and then we only store what matters. So we go from the insights backwards. That allowed us to reduce the amount of data, reduce the digital exhaust that it creates, and also provide better insights. So the idea is that as this world of data scales, the need for real-time streaming analytics is going to increase. >> So what's interesting is we've seen this decoupling of storage and compute be a great success formula at cloud scale, for instance; that's a known best practice. You're taking it a little bit different. I love how you're coming backwards from it: you're working backwards from the insights, almost doing some intelligence on the front end of the data, which probably saves a lot of storage costs. But I want to get specifically back to this real time. How do you do that? And how did you come up with this? What's the vision? How did you guys come up with the idea? What was the magic light bulb that went off for Coralogix? >> Yes, the Coralogix story is very interesting. Actually, there was no light bulb; it was a road of pain for years and years. We started by just, you know, doing the same, maybe faster, a couple more features. And it didn't work out too well. The first few years, the company was not very successful. And we've grown tremendously in the past three years, almost 100X since we launched this, and it came from a pain. So once we started scaling, we saw that the side effects of accessing the storage for analytics (the latency it creates, the dependency on schema, the price it poses on our customers) became unbearable. And then we started thinking, okay, how do we get the same level of insights? Because there's this perception in the world of storage, and now it has started to happen in analytics also, that talks about tiers.
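The "analyze everything, then store only what matters" flow Assaraf describes can be sketched as a real-time filter sitting in front of the index; the rule deciding what counts as an insight is invented here for illustration.

```python
def analyze_then_store(stream, hot_index):
    """Analyze every record in-stream; index only the ones that carry insight."""
    insights = 0
    for record in stream:
        # Hypothetical 'what matters' rule; a real system would derive this
        # from alerts, anomaly detection, and customer-defined policies.
        if record["level"] in ("ERROR", "CRITICAL"):
            insights += 1
            hot_index.append(record)  # only these reach the expensive hot index
        # INFO/DEBUG records were still analyzed in real time, never indexed
    return insights

hot_index = []
logs = [{"level": lvl} for lvl in ("INFO", "DEBUG", "ERROR", "INFO", "CRITICAL")]
found = analyze_then_store(logs, hot_index)
assert found == 2 and len(hot_index) == 2
```

The point of the inversion is that the filter runs before any storage write, so insight latency no longer depends on index performance, and storage cost scales with what matters rather than with raw volume.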
So you want to get a great experience, you pay a lot; you want a less-than-great experience, you pay less, it's a lower tier. And we decided that we were looking for a way to give the same level of real-time analytics and the same level of insights, only without the issue of dependencies, decoupling all the storage schema issues and latency. And we built our real-time pipeline; we call it Streama. Streama is the Coralogix real-time analysis platform that analyzes everything in real time, also the stateful things. Stateless analytics in real time is something that's been done in the past, and it always worked well. The issue is, how do you give a stateful insight on data that you analyze in real time, without storing it? And I'll explain. How can you tell that a certain issue happened that did not happen in the past three months if you did not store the past three months? Or how can you tell that behavior is abnormal if you did not store what's normal, if you did not store the state? So we created what we call the state store, which holds the state of the system, the state of data, or a snapshot of that state for the entire history. And then, instead of our state being the storage, well, you ask me, how does this compare to last week? Instead of me going to the storage and comparing to last week, I go to the state store and, like flipping through a record bag, I just scroll fast, I find one piece of state, and I say, okay, this is how it looked last week; compared to this week, it changed in ABC. And once we started doing that, we onboarded more and more services to that model. And our customers came in and said, hey, you're doing everything in real time; we don't need more than that. There's only a very small portion of data we actually need to store and frequently search. How about you guys fit into our use cases, and not just sell on quota?
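The state store can be pictured as a compact rolling snapshot that answers "is this normal?" without replaying months of raw logs. Everything below is a toy model, not the internals of Streama.

```python
from collections import defaultdict

class StateStore:
    """Keep per-key baselines instead of raw history."""
    def __init__(self):
        self.baseline = defaultdict(int)  # e.g. errors per service, last week

    def snapshot(self, counts: dict):
        self.baseline = defaultdict(int, counts)

    def is_anomalous(self, key: str, current: int, factor: float = 3.0) -> bool:
        # Stateful insight answered from the state alone,
        # with no query against stored raw logs.
        return current > max(1, self.baseline[key]) * factor

state = StateStore()
state.snapshot({"checkout": 10, "search": 2})
assert state.is_anomalous("checkout", 50)   # 5x last week's error count
assert not state.is_anomalous("search", 4)  # within 3x of its baseline
```

The design trade-off is that the snapshot is tiny compared with raw history, so it can stay in memory and answer "compared to last week" questions with no round trip to an index.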
And we decided to basically allow our customers to choose the use case that they have, and route the data through different use cases. And then each log record stops at the relevant stops in our data pipeline based on the use case. So just like you would walk into the supermarket, fill a bag, and go out, and they weigh it and say, you know, it's two kilograms, you pay this amount, because different products have different costs and different meaning to you. In exactly the same way, we analyze the data in real time, so we know the importance of the data, and we allow you to route it based on your use case and pay a different amount per use case. >> So this is really interesting. So essentially, you guys capture insights and store those (you call them states) and then you don't have to go through the data. So it's like you're eliminating the old problem of, you know, going back to the index and recovering the data to get the insights: did we have that? So anyway, it's a round-trip query, if you will; you guys are saving all that data-mining cost and time. >> We call it zero side effects. That round trip you described is exactly it: no side effects to an analysis that is done in real time. I don't need to take the latency from the storage, a bit of latency from the database that holds the model, a bit of latency from the cache; everything stays in memory, everything stays in stream. >> And so basically, it's like the definition of insanity: doing the same thing over and over again and expecting a different result. Here, that's kind of what that is. The old model of insight is go query the database and get something back; you're actually doing the real-time filtering on the front end, capturing the insights, if you will, storing those, and replicating that per use case. Is that right? >> Exactly. But then, you know, there's still the issue of customers saying, yeah, but I need that data.
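The supermarket analogy, where each record is weighed and priced by its use case, can be sketched as a routing function. Tier names and relative costs here are hypothetical, not Coralogix pricing.

```python
# Hypothetical tiers: (destination, relative cost per GB)
TIERS = {
    "frequent_search": ("hot index", 1.00),
    "monitoring":      ("real-time analysis only", 0.35),
    "compliance":      ("immutable archive bucket", 0.05),
}

def route(record: dict) -> str:
    if record.get("audit"):  # immutable record kept for auditors
        return "compliance"
    if record["level"] in ("ERROR", "CRITICAL"):
        return "frequent_search"  # worth indexing for interactive search
    return "monitoring"  # analyzed in stream, then dropped from hot storage

logs = [
    {"level": "INFO"},
    {"level": "ERROR"},
    {"level": "INFO", "audit": True},
]
assert [route(r) for r in logs] == ["monitoring", "frequent_search", "compliance"]
```

Because every record was already analyzed upstream, routing can be decided per record, so the bill reflects the mix of use cases rather than a flat quota on total volume.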
Some of the data I need to really frequently search (I don't know, you know, the unknown unknowns), and some of the data I need for compliance, and I need an immutable record that stays in my compliance bucket forever. So we allowed customers, in a screen we call the TCO Optimizer, to define those use cases. And they can always access the data, by querying their remote storage from Coralogix, or querying the hot data that is stored with Coralogix. So it's all about use cases, and it's all about how you consume the data, because it doesn't make sense for me to pay the same amount, or give the same amount of attention, to a record that is completely useless (it's just there for the record, or for a compliance audit that may or may not happen in the future) as I do for the most critical exception in my application log that has immediate business impact. >> What's really good too is you can actually set some policy up: for certain use cases, okay, store that data. So it's not to say you don't want to store it, but you might want to store it only for certain use cases. So I can see that. So I've got to ask the question: how does this differ from the competition? How do you guys compete? Take us through a use case of a customer. How do you guys go to the customer? Do you just say, hey, we've got so much scar tissue from this, we learned the hard way, take it from us? How does it go? Take us through an example. >> So an interesting example is actually a company that is not your typical early adopter, let's call it this way: a very advanced and smart technology company, but a huge one, one of the largest telecommunications companies in India. And they were actually cherry-picking about 100 gigs of data per day, and sending it to one of the legacy providers, which has a great solution that does give value.
But they weren't even thinking about sending their entire data set, because of cost, because of scale, because of, you know, just the clutter: whenever you search, you have to sift through millions of records, many of which are not that important. And we helped them actually analyze their data and worked with them to understand it. These guys had over a terabyte of data that held incredible insights, it was like a goldmine of insights, but you just needed to prioritize it by use case. And they went from 100 gigs with the other legacy solution to a terabyte, at almost the same cost, with more advanced insights, within one week, which at that scale of an organization is out of the ordinary; it took them four months to implement the other product. But now, when you go from the insights backwards, you understand your data before you have to store it; you understand the data before you have to analyze it, or before you have to manually sift through it. So if you ask about the difference, it's all about the architecture. We analyze and only then index, instead of indexing and then analyzing. It sounds simple, but of course, when you look at the stateful analytics, it's a lot more complex. >> Take me through your growth story, because I'll get back to the secret sauce in a second; first I want to get to how you guys got here. (indistinct) you had this problem, you kind of broke through, you hit the magic formula. Talk about the growth: where's the growth coming from? And what's the real impact? What's the situation relative to the company's growth? >> Yeah, so we had a first rough three years that I kind of mentioned, and I was not the CEO at the beginning. I'm one of the co-founders, more of the technical guy; I was the product manager. And I became CEO after the company was kind of on the verge of closing at the end of 2017.
And the CTO left, the CEO left, the VP of R&D became the CTO, and I became the CEO. We were five people with $200,000 in the bank, and you know that's not a long runway. And we kind of changed attitudes. So first we launched this product, and then we understood that we need to go bottoms-up: you can't go to enterprises and try to sell something that is out of the ordinary, or that changes how they're used to working, when you're five people with $200,000 in the bank. So we started going bottoms-up, with the earlier adopters. And it's still, until today, the more advanced companies, the more advanced teams: Coralogix became the preferred solution for advanced DevOps and platform teams. So they started adopting Coralogix, and then it grew to the larger organizations, with champions actually pushing it within their organizations. And ever since: so, until the beginning of 2018, we had raised about $2 million and sales were marginal. Today, we have over 1,500 paying accounts, and we've raised almost $100 million more. >> Wow, what a great pivot. That was a great example of catching the right wave here, the cloud wave. In terms of customers, you had the DevOps kind of (indistinct) initially, and now you've expanded out to a lot more traditional enterprise; can you take me through the customer profile? >> Yeah, so I'd say the core is still cloud-native and (indistinct) companies. These are the typical ones. We have very tight integration with AWS: all the services, all the integrations required; we know how to read and write back to the different services and analysis platforms in AWS. Also Azure and GCP, but mostly AWS. And then we do have quite a few big enterprise accounts; actually, five of the largest 50 companies in the world use Coralogix today.
And it grew from those DevOps and platform evangelists up to the level of IT execs and even (indistinct). So today, we have our security product that already sells to some of the biggest companies in the world; it's a different profile. And the idea for us is that, you know, once you solve that issue of too much data, too expensive, not proactive enough, too coupled with the storage, you can actually expand from observability (logging and metrics) into tracing, and then into security, and maybe even into other fields where cost and productivity are an issue for many companies. >> So let me ask you this question then, Ariel, if you don't mind. If a customer has a need for Coralogix, is it because of data volume? Or they've just got data sprawled all over the place? Or is it that storage costs are going up on S3? What's some of the signaling that you would see that would be telling you, okay, here's the opportunity to come in and either clean house or fix the mess or whatnot? Take us through what you see. What do you see as the trend? >> Yeah. So the typical customer (indistinct) for Coralogix would be someone using one of the legacy solutions and growing very fast. That's the easiest way for us to know. >> What grows fast? The storage, the storage is growing fast? >> The company is growing fast. >> Okay. >> And remember, the data grows faster than revenue. And we know that. So if I see a company that grew from, you know, 50 people to 500 in three years, specifically if it's a cloud-native or internet company, I know that their data grew not 10X but 100X. So I know that a company that might have started with a legacy solution at, like, you know, $1,000 a month, and they're happy with it (and for $1,000 a month, if you don't have a lot of data, those legacy solutions, you know, they'll do the trick), is now going to get asked to pay 50, 60, $70,000 a month. And this is exactly where we kick in.
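The arithmetic behind "data grows faster than revenue" can be made concrete with a deliberately crude model; the quadratic growth assumption and every dollar figure below are invented for illustration (real bills land lower than the linear extrapolation, since vendors discount at volume).

```python
# Toy model: headcount grows 10x, log data grows roughly with the
# square of that, and a per-GB legacy bill tracks the data, not revenue.
headcount_growth = 500 / 50          # 50 -> 500 people in three years
data_growth = headcount_growth ** 2  # ~100x more log data

legacy_bill_start = 1_000            # $/month at the original volume
legacy_bill_now = legacy_bill_start * data_growth

assert headcount_growth == 10
assert data_growth == 100
assert legacy_bill_now == 100_000    # two orders of magnitude jump
```

The takeaway matches the transcript: a bill that was rounding error at founding scale becomes a line item that distorts unit economics, purely because pricing follows data volume.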
Because now, when it doesn't fit the economic model, when it doesn't fit the unit economics, it starts damaging the margins of those companies. Because remember, for those internet and cloud companies, these are not the classic costs you'd see in an enterprise; they're actually damaging your unit economics and the valuation of the business, which is a bigger deal. So now, when I see that type of organization, we come in and say, hey: better coverage, more advanced analytics, easier integration within your organization (we support all the common open-source syntaxes and dashboards, so you can plug it into your entire environment), and the costs are going to be a quarter of whatever you're paying today. Once they see that, you know, the dev-friendliness of the product, the ease of scale, the stability of the product, it makes a lot more sense for them to engage in a PoC, because at the end of the day, if you don't prove value, you know, you can come with a 90% discount and it doesn't do anything; you have to prove the value to them. So it's a great door opener. But from then on, you know, it's a PoC like any other. >> Cloud is all about the PoC, or pilot, as they say. So take me through the product today, and what's next for the product; take us through the vision of the product and the product strategy. >> Yeah, so today, the product allows you to send any log data, metric data, or security information and analyze it a million ways. We have one of the most extensive alerting mechanisms in the market, automatic anomaly detection, data clustering, and all the real-time pipeline things that help companies make their data smarter and more readable: parsing, enriching, getting external sources to enrich the data, and so on and so forth.
Where we're stepping in now is actually to make the final step of decoupling the analytics from storage, what we call the dataless data platform, in which no data will sit or reside within the Coralogix cloud. Everything will be analyzed in real time and stored in a storage of our customers' choice, and then we'll allow our customers to query that storage remotely with incredible performance. That'll let our customers have the first ever true SaaS experience for observability. Think about it: no quota plans, no retention limits. You send whatever you want, you pay only for what you send, you retain it however long you want to retain it, and you get all the real-time insights much, much faster than any other product that keeps the data on hot storage. So that'll be our next step, to really make sure that, you know, we're not reselling cloud storage. Because a lot of the time, when you are dependent on storage, and we're a cloud company, like I mentioned, you've got to keep your unit economics, so what do you do? You sell storage to the customer, you add your markup, and then you charge for it. And this is exactly where we don't want to be. We want to sell the intelligence, the insights, and the real-time analysis that we know how to do, and let the customers enjoy, you know, the wealth of opportunities and choices their cloud providers offer for storage. >> That's great vision. In a way, the hyperscalers' early days showed that decoupling compute from storage, which I mentioned earlier, was a huge category creation. Here, you're doing it for data. Call it hyper data scale, or, like, maybe there's got to be a name for this. What do you see about five years from now? Take us through the trajectory of the next five years, because certainly observability is not going away. I mean, it's data management, monitoring, real time, asynchronous, synchronous, linear, all this stuff's happening. What's the five-year vision?
>> Now add security to observability, which is something we started preaching for, because no one can say, I have observability into my environment, when people, you know, come in and out and steal data. That's no observability. But the thing is, because data grows exponentially, because it grows faster than revenue, what we believe is that in five years there's not going to be a choice: everyone is going to have to analyze the data in real time, extract the insights, and then decide whether to store it in, you know, a long-term archive, or not store it at all. You still want to get the full coverage and insights. But you know, when you think about observability, unlike many other things, the more data you have, many times, the less observability you get. So think of log data, unlike statistics: if my system was only generating 10 records a day, I'd have full, incredible observability; I'd know everything that it's done. What happens is that you pay more, you get less observability, and more uncertainty. So I think that, you know, with time, we'll start seeing more and more real-time streaming analytics, and a lot fewer storage-based and index-based solutions.
>> So we're going to have new categories, like insight-first software. (indistinct) >> Go from insights backwards, that'll be my tagline if I have to, but I'm a terrible marketer. (indistinct) >> Yeah, well, I mean, everyone's like cloud-first, data-driven, insight-driven; what you're basically doing is you're moving into the world of insight-driven analytics, really, as a way to kind of bring that forward. So congratulations. Great story. I love the pivot, love how you guys entrepreneurially put it all together, had the problem as your own problem, and brought it out to the rest of the world. And certainly the DevOps and cloud-scale wave is just getting bigger and bigger and taking over the enterprise. So great stuff. Real quick, while you're here, give a quick plug for the company. What you guys are up to, stats, vitals, hiring, what's new. Give the commercial. >> Yeah, so like I mentioned, over 1,500 paying customers, growing incredibly in the past 24 months, hiring, almost doubling the company in the next few months, offices in Israel, the East and West US, and the UK and Mumbai. Looking for talented engineers to join the journey and build the next generation of dataless data platforms. >> Ariel Assaraf, CEO of Coralogix. Great to have you on theCUBE, and thank you for participating in the AI track for our next big thing in the Startup Showcase. Thanks for coming on. >> Thank you very much, John, really enjoyed it. >> Okay, I'm John Furrier with theCUBE. Thank you for watching the AWS Startup Showcase presented by theCUBE. (calm music)

Published Date : Jun 24 2021


Toni Manzano, Aizon | AWS Startup Showcase | The Next Big Thing in AI, Security, & Life Sciences


 

(up-tempo music) >> Welcome to today's session of theCUBE's presentation of the AWS Startup Showcase, the next big thing in AI, security, and life sciences. Today we'll be speaking with Aizon as part of our life sciences track, and I'm pleased to welcome the co-founder as well as the chief science officer of Aizon, Toni Manzano. We'll be discussing how artificial intelligence is driving key processes in pharma manufacturing. Welcome to the show. Thanks so much for being with us today. >> Thank you, Natalie, to you and to your introduction. >> Yeah. Well, as you know, Industry 4.0 is revolutionizing manufacturing across many industries. Let's talk about how it's impacting biotech and pharma, as well as Aizon's contributions to this revolution. >> Well, actually, Pharma 4.0 is totally introducing a new concept of how to manage processes. So nowadays the industry is considering that everything is practically static, nothing changes, and this is because they don't have the ability to manage the complexity and the variability around biotech and drug manufacturing processes. Nowadays, with technologies like cloud, power computing, IoT, AI, we can get all those data. We can understand the data and we can interact in real time with processes. This is how things are going on nowadays. >> Fascinating. Well, as you know, COVID-19 really threw a wrench in a lot of activity in the world, our economies, and also people's way of life. How did it impact manufacturing in terms of scale-up and scale-out? And what are your observations from this year? >> You know, the main problem when you want to do a scale-up process is not only the equipment, it is also the knowledge that you have around your process. When you're doing a vaccine on a smaller scale in your lab, the only parameters you're controlling are in your lab; they have to be escalated when you go from five liters to 2,500 liters. How do you manage this difference in scale?
Well, AI is helping nowadays to detect and to identify the most relevant factors involved in the process, the critical relationships between the variables, and the final control of the full process, following continued process verification. This is how we can help nowadays, using AI and cloud technologies in order to accelerate and to scale up vaccines like the COVID-19 one. >> And how do you anticipate pharma manufacturing to change in a post-COVID world? >> This is a very good question. Nowadays we have some assumptions that we are still trying to overcome with human effort. With the new situation, with the pandemic that we are living in, the next evolution is that humans will take care of the good practices and the new knowledge that we have to generate. So AI will manage the repetitive tasks, all the routine activity that we are doing. That will be done by AI, and humans will never again do repetitive tasks in this way. They will manage complex problems and supervise AI output. >> So you're driving more efficiencies in the manufacturing process with AI. You recently presented at the United Nations Industrial Development Organization about the challenges brought by COVID-19 and how AI is helping with the equitable distribution of vaccines and therapies. What are some of the ways that companies like Aizon can now help with that kind of response? >> Very good point. Could you imagine you're a big company, a top pharma company, that has the intellectual property of a COVID-19 vaccine based on the mRNA principle, and you would like to expand this vaccination, not only delivering the vaccine but also manufacturing it locally. What if you try to manufacture these vaccines in South Africa, or in Asia, in India? So the secret is to transport not only the raw material, not only the equipment, but also the knowledge.
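Toni's point about AI surfacing the most relevant factors can be illustrated with a toy calculation: rank process variables by how strongly they track a quality outcome across batches. The variables and numbers below are invented for illustration; real continued process verification uses far richer models than a single Pearson correlation.

```python
# Illustrative only: rank process variables by absolute Pearson correlation
# with a quality attribute, a toy stand-in for finding the "critical few"
# factors in a scale-up process. Variable names and data are invented.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Five hypothetical batches of the same recipe.
batches = {
    "temperature": [37.0, 36.8, 37.5, 36.9, 37.1],
    "stir_rate":   [120, 118, 121, 119, 122],
    "ph":          [7.0, 7.1, 6.5, 7.05, 6.9],
}
yield_pct = [92, 93, 78, 91, 88]  # quality outcome per batch

# Variables whose movement best explains the outcome come first.
ranked = sorted(
    batches,
    key=lambda v: abs(pearson(batches[v], yield_pct)),
    reverse=True,
)
```

In this toy data the off-spec pH batch drags the yield down, so pH ranks as the most relevant factor, the kind of signal that then guides where to focus control during scale-up.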
How to operate, how to control the full process, from the initial phase until the packaging and the vial filling. So this is how we are contributing: AI is packaging all this knowledge in just AI models. This is the secret. >> Interesting. Well, what are the benefits for pharma manufacturers when considering the implementation of AI and cloud technologies? And how can they progress in their digital transformation by utilizing them? >> One of the benefits is that you are able to manage the variability, the real complexity, in the world. So you cannot create processes to manufacture drugs just considering that the raw material that you're using never changes. You cannot consider that all the equipment works in the same way. You cannot consider that your recipe will work the same way in Brazil as in Singapore. So the complexity and the variability must be understood as part of the process. This is one of the benefits. The second benefit is that when you use cloud technologies, you don't have to worry much about computing licenses, software updates, antivirus, scaling up computing power. Everything is done in the cloud. So, well, these are two main benefits. There are more, but these are maybe the two main ones. >> Yeah. Well, that's really interesting how you highlight that there's a big shift in how you handle this in different parts of the world. So what role do compliance and regulation play here? And of course we see differences in the way that's handled around the world as well. >> Well, I think this is the first time in the pharma, let me say, experience, that the human race has a very strong commitment from the regulatory bodies, you know, to push forward using this kind of technologies. Actually, for example, the FDA, they are using cloud to manage their own systems. So why not use it in pharma? >> Yeah. Well, how do AWS and Aizon help manufacturers address these kinds of considerations?
>> Well, we have a very great partner. AWS, for us, is simplifying our life a lot. We are, let me say, a different kind of startup company, Aizon, because we have a lot of PhDs in the company. So we are not the classical geeky company with guys developing all day. We have a lot of science inside the company. This is our value. So everything that is provided by Amazon, why do we have to recreate it again? We can rely on SageMaker, we can rely on Cognito, we can rely on Lambda, we can rely on S3 to have encrypted data with automatic backup. So AWS is simplifying our life a lot, and we can dedicate all our knowledge and all our efforts to the things that we know: pharma compliance. >> And how do you anticipate that pharma manufacturing will change further in the 2021 year? >> Well, we are participating not only with business cases. We also participate with the community, because we are leading an international project in order to anticipate this kind of new breakthroughs. So we are working, let me say, with initiatives in the - association, we are collaborating in two different projects in order to apply AI in computer certification, in order to create more robust processes for the mRNA vaccine. We are collaborating with the - university, creating the standards for AI application in GXP. We are collaborating with different initiatives with the pharma community in order to create the foundation to move forward during this year. >> And how do you see the competitive landscape? What do you think Aizon provides compared to its competitors? >> Well, good question. Probably you can find a lot of AI services, platforms, programs, softwares that can run in the industrial environment. But I think it will be very difficult to find a full GXP-compliant platform working on cloud with AI, when the AI itself is already qualified. I think that no one is doing that nowadays.
And one of the demonstrations of that is that we are also writing scientific papers describing how to do it. So you will see that Aizon is the only company doing that nowadays. >> Yeah. And how do you anticipate that pharma manufacturing will change, or, excuse me, how do you see Aizon providing a defining contribution to the future of cloud scale? >> Well, there are no limits in cloud. So as far as you accept that everything is varied and complex, you will need power computing. So the only way to manage this complexity is running a lot of power computation, and cloud is the only system, let me say, that allows that. Well, the thing is that, you know, pharma will also have to be compliant with the cloud providers. And for that, we created a new layer around the platform that we call qualification as a service. We are creating this layer in order to qualify continuously any kind of cloud platform that wants to work in this environment. This is how we are doing that. >> And in what areas are you looking to improve? How are you constantly trying to develop the product and bring it to the next level? >> We always have, you know, the patient in mind. Aizon is a patient-centric company. Everything that we do is to improve processes in order, at the end, to deliver the right medicine at the right time to the right patient. So this is how we are focusing all our efforts, in order to bring this opportunity to everyone around the world. For this reason, for example, we want to work with projects where we are delivering value to create vaccines for COVID-19, for example, everywhere, just packaging the knowledge using AI. This is how we envision it and how we are acting. >> Yeah. Well, you mentioned the importance of science and compliance. What do you think are the key themes that are the foundation of your company? >> The first thing is that we enjoy the task that we are doing. This is the first thing.
The other thing is that we are learning every day with our customers and from real topics. So we are serving the patients. And everything that we do is enjoying science, enjoying how to achieve new breakthroughs in order to improve life in the factory, which we know at the end will be delivered to the final patient. So: enjoying making science and creating breakthroughs, being innovative. >> Right. And do you think, in the sense that we were lucky, in light of COVID, that we've already had these kinds of technologies moving in this direction for some time, that we were somehow able to mitigate the tragedy and the disaster of this situation because of these technologies? >> Sure. We are lucky because of this technology, because we are breaking the distance, the physical distance, and we are putting together people that it was so difficult to put together before, in all the different aspects. So nowadays we are able to be closer to the patients, to the people, to the customer, thanks to these technologies. Yes. >> So now that we're moving out of, I mean, hopefully out of this kind of COVID reality, what's next for Aizon? Do you see more collaboration? You know, what's next for the company? >> The next thing for the company is to deliver AI models that are able to be encapsulated in the drug manufacturing for vaccines, for example. And they will be delivered with the full process: not only materials, equipment, personnel, and recipes; the AI models will also go together as part of the recipe. >> Right. Well, we'd love to hear more about your partnership with AWS. How did you get involved with them? And why them, and not another partner? >> Well, let me explain to you a secret. Seven years ago, we started with another top cloud provider, but we saw very soon that this other cloud provider was not well aligned with the GXP requirements. For this reason, we met with AWS. We went together to some seminars and conferences with top pharma communities and pharma organizations.
We went there to give speeches and talks. We felt that we fit very well together, because AWS has a GxP white paper describing very well how to rely on AWS components, one by one. So for us, this is a very good credential when we go to our customers. Do you know that when customers are acquiring and establishing the Aizon platform in their systems, they are auditing us? They are auditing Aizon. Well, we then also have to audit AWS, because this is the normal chain in the pharma supply chain. That means that we need this documentation. We need all this transparency between AWS and our partners. This is the main reason. >> Well, this has been a really fascinating conversation, to hear how AI and cloud are revolutionizing pharma manufacturing at such a critical time for society all over the world. Really appreciate your insights, Toni Manzano, the chief science officer and co-founder of Aizon. I'm your host, Natalie Erlich, for theCUBE's presentation of the AWS Startup Showcase. Thanks very much for watching. (soft upbeat music)

Published Date : Jun 24 2021


Gil Geron, Orca Security | AWS Startup Showcase: The Next Big Thing in AI, Security, & Life Sciences


 

(upbeat electronic music) >> Hello, everyone. Welcome to theCUBE's presentation of the AWS Startup Showcase: The Next Big Thing in AI, Security, and Life Sciences. In this segment, we feature Orca Security as a notable trendsetter within, of course, the security track. I'm your host, Dave Vellante. And today we're joined by Gil Geron, who's the co-founder and Chief Product Officer at Orca Security, and we're going to discuss how to eliminate cloud security blind spots. Orca has a really novel approach to cybersecurity problems, without using agents. So welcome, Gil, to today's session. Thanks for coming on. >> Thank you for having me. >> You're very welcome. So Gil, you're a disruptor in security, and cloud security specifically, and you've created an agentless way of securing cloud assets. You call this side scanning. We're going to get into that, and probe a little bit into the how and the why agentless is the future of cloud security. But I want to start at the beginning. What were the main gaps that you saw in cloud security that spawned Orca Security? >> I think that the main gaps that we saw when we started Orca were pretty similar in nature to gaps that we saw in legacy infrastructures, in more traditional data centers. But when you look at the cloud, when you look at the nature of the cloud, the ephemeral nature, the technical possibilities and the disruptive way of working with a data center, we saw that the usage of traditional approaches like agents in these environments is lacking: not only is it not working as well as it did in the legacy world, it's also providing less value.
And in addition, we saw that the friction between the security team and the IT, the engineering, the DevOps in the cloud is much worse than it was, and we wanted to find a way, we wanted them to work together, to bridge that gap, and to actually allow them to leverage the cloud technology as it was intended, to gain superior security than what was possible in the on-prem world. >> Excellent. Let's talk a little bit more about agentless. I mean, maybe we could talk a little bit about why agentless is so compelling. I mean, it's kind of obvious it's less intrusive, you've got fewer processes to manage, but how did you create your agentless approach to cloud security? >> Yes, so I think the basis of it all is around our mission and what we try to provide. We want to provide seamless security, because we believe it will allow the business to grow faster. It will allow the business to adopt technology faster and to be more dynamic and achieve goals faster. And so we looked at what are the problems, what are the issues that slow you down. And one of them, of course, is the fact that you need to install agents, that they cause performance impact, that they are technically segregated from one another, meaning you need to install multiple agents and they need to somehow not interfere with one another. And we saw this friction cause organizations to slow down their move to the cloud, or slow down the adoption of technology. In the cloud, it's not only having servers, right? You have containers, you have managed services, you have so many different options and opportunities. And so you need a different approach to how you secure that.
And so when we understood that this is the challenge, we decided to attack it using three pillars: one, trying to provide complete security and complete coverage with no friction; two, trying to provide comprehensive security, which is taking a holistic approach, a platform approach, and combining the data in order to provide you visibility into all of your security assets; and last but not least, of course, context awareness, meaning being able to understand and find the 1% that matters in the environment, so you can actually improve your security posture and improve your security overall. And to do so, you have to have a technique that does not involve agents. And so what we've done, we've found a way that utilizes the cloud architecture in order to scan the cloud itself. Basically, when you integrate Orca, you are able within minutes to understand, to read, and to view all of the risks. We are leveraging a technique that we call side scanning, which uses the API, the infrastructure of the cloud itself, to read the block storage device of every compute instance in the environment, and from that we can deduce the actual risk of every asset. >> So that's a clever name, side scanning. Tell us a little bit more about that. Maybe you could double-click on how it works. You've mentioned it's looking into block storage and leveraging the API, which is very clever, actually quite innovative. But help us understand in more detail how it works and why it's better than traditional tools that we might find in this space. >> Yes, so the way that it works is that by reading the block storage device, we are able to actually deduce what is running on your computer, meaning what OS, packages, and applications are running. And then we combine the context, meaning understanding what kind of services you have connected to the internet, what is the attack surface for these services, what would be the business impact.
Will there be any access to PII, or any access to the crown jewels of the organization? You can not only understand the risks, you can also understand the impact, and then understand what should be your focus in terms of securing the environment. A differentiating factor is that we are doing it using the infrastructure itself: we are not installing any agents, and we are not running anything inside your environment. You do not need to change anything in your architecture or in the design of how you use the cloud in order to utilize Orca. Orca works in a pure SaaS way. And so it means that there is no impact, not on cost and not on performance of your environment, while using Orca. And so it reduces any friction that might happen with other parts of the organization when you improve your security in the cloud. >> Yeah, and no process management intrusion. Now, I presume, Gil, that you eat your own cooking, meaning you're using your own product. First of all, is that true? And if so, how has your use of Orca as chief product officer helped you scale Orca as a company? >> So it's a great question. I think that something we understood early on is that there is quite a significant difference between the way you architect your security in the cloud and the way that things reach production, meaning there's a gap between how you imagine things will be, like in everything in life, and how they are in real life, in production. And so, even though we have amazing customers that are extremely proficient in security and have thought of a lot of ways to secure the environment, and we, of course, try to secure our environment as much as possible, we are using Orca because we understand that no one is perfect. We are not perfect. My engineers might make mistakes, like every organization. And so we are using Orca because we want to have complete coverage.
We want to understand if we are making any mistake. And sometimes the gap between the architecture and a hole in your security could take years to appear, and you need a tool that will constantly monitor your environment. And so that's why we have been using Orca from day one, not to find bugs or to do QA; we're doing it because we need security for our cloud environment that provides these values. And so we've also passed compliance audits like SOC 2 and ISO using Orca, and it expedited these processes and allowed us to do them extremely fast, because of having all of these guardrails and metrics.
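The two analysis steps Gil walks through above, deducing what's installed by reading the block storage image and then weighing context to surface what matters, can be sketched roughly as below. This is illustrative only: the cloud-API step of snapshotting and reading the volume is elided, and the `KNOWN_BAD` map and scoring weights are hypothetical inventions for this sketch, not Orca's actual logic.

```python
# Illustrative sketch of side scanning's analysis steps. Step 1: deduce
# installed software by parsing a package database as read off a block
# storage image (the snapshot/read step via the cloud API is elided).
# Step 2: combine findings with context (internet exposure, data
# sensitivity) to prioritize. KNOWN_BAD and the weights are invented.

def parse_dpkg_status(text):
    """Extract {package: version} from a dpkg-style status file."""
    packages, name = {}, None
    for line in text.splitlines():
        if line.startswith("Package: "):
            name = line.split(": ", 1)[1]
        elif line.startswith("Version: ") and name:
            packages[name] = line.split(": ", 1)[1]
    return packages

KNOWN_BAD = {("openssl", "1.0.1f")}  # hypothetical vulnerable version

def risk_score(findings, internet_facing, has_pii):
    """Context-aware priority: severity amplified by exposure and impact."""
    score = 10.0 * len(findings)
    if internet_facing:
        score *= 2      # reachable attack surface
    if has_pii:
        score *= 1.5    # business impact if breached
    return score

# Text as it might be read off the image of a target volume.
status = """\
Package: openssl
Version: 1.0.1f

Package: curl
Version: 7.68.0
"""
pkgs = parse_dpkg_status(status)
findings = [(n, v) for n, v in pkgs.items() if (n, v) in KNOWN_BAD]
score = risk_score(findings, internet_facing=True, has_pii=True)
```

The point of the second step is that the same vulnerability scores very differently on an internet-facing asset holding sensitive data than on an isolated build box, which is how the "1% that matters" rises to the top.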
And then how have you been able to keep them as loyal partners? >> So I think that we are very fortunate, we have a lot of, we are blessed with our customers. Many of our customers are vocal customers about what they like about Orca. And I think that something that comes along a lot of times is that this is a solution they have been waiting for. I can't express how many times I hear that I could go on a call and a customer says, "I must say, I must share. "This is a solution I've been looking for." And I think that in that respect, Orca is creating a new standard of what is expected from a security solution because we are transforming the security all in the company from an inhibitor to an enabler. You can use the technology. You can use new tools. You can use the cloud as it was intended. And so (coughs) we have customers like one of these cases is a customer that they have a lot of data and they're all super scared about using S3 buckets. We call over all of these incidents of these three buckets being breached or people connecting to an s3 bucket and downloading the data. So they had a policy saying, "S3 bucket should not be used. "We do not allow any use of S3 bucket." And obviously you do need to use S3 bucket. It's a powerful technology. And so the engineering team in that customer environment, simply installed a VM, installed an FTP server, and very easy to use password to that FTP server. And obviously two years later, someone also put all of the customer databases on that FTP server, open to the internet, open to everyone. And so I think it was for him and for us as well. It was a hard moment. First of all, he planned that no data will be leaked but actually what happened is way worse. The data was open to the to do to the world in a technology that exists for a very long time. And it's probably being scanned by attackers all the time. But after that, he not only allowed them to use S3 bucket because he knew that now he can monitor. 
Now, you can understand that they are using the technology as intended, now that they are using it securely. It's not open to everyone it's open in the right way. And there was no PII on that S3 bucket. And so I think the way he described it is that, now when he's coming to a meeting about things that needs to be improved, people are waiting for this meeting because he actually knows more than what they know, what they know about the environment. And I see it really so many times where a simple mistake or something that looks benign when you look at the environment in a holistic way, when you are looking on the context, you understand that there is a huge gap. That should be the breech. And another cool example was a case where a customer allowed an access from a third party service that everyone trusts to the crown jewels of the environment. And he did it in a very traditional way. He allowed a certain IP to be open to that environment. So overall it sounds like the correct way to go. You allow only a specific IP to access the environment but what he failed to to notice is that everyone in the world can register for free for this third-party service and access the environment from this IP. And so, even though it looks like you have access from a trusted service, a trusted third party service, when it's a Saas service, it's actually, it can mean that everyone can use it in order to access the environment and using Orca, you saw immediately the access, you saw immediately the risk. And I see it time after time that people are simply using Orca to monitor, to guardrail, to make sure that the environment stays safe throughout time and to communicate better in the organization to explain the risk in a very easy way. 
And I would say the statistics show that within a few weeks, more than 85% of the different alerts and risks are being fixed, and I think it goes to show how effective it is in improving your posture, because people are taking action. >> Those are two great examples, and of course we have often said that the shared responsibility model is often misunderstood. And those two examples underscore thinking that, "Oh, I hear all this press about S3, but it's up to the customer to secure the endpoint, the components, et cetera." Configure it properly, is what I'm saying. So what an unintended consequence, but Orca plays a role in helping the customer with their portion of that shared responsibility. Obviously AWS is taking care of its side. Now, as part of this program, we ask a bit of a challenging question of everybody, because look, as a startup, you want to do well, you want to grow a company, you want to have your employees grow and help your customers, and that's great, and grow revenues, et cetera, but we feel like there's more. And so we're going to ask you, because the theme here is all about cloud scale: what is your defining contribution to the future of cloud at scale, Gil? >> So I think that the cloud has brought a revolution to the data center, okay? The way that you are building services, the way that you are allowing technology to be more adaptive, dynamic, ephemeral, accurate, and you see that it is being adopted across all vendors, all types of industries, across the world. I think that Orca is the first company that allows you to use this technology to secure your infrastructure in a way that was not possible in the on-prem world, meaning that when you're using cloud technology and you're using technologies like Orca, you're actually gaining security superior to what was possible in the pre-cloud world.
And I think that, in that respect, Orca is going hand in hand with that evolution, and actually revolutionizes the way that you expect to consume security, the way that you expect to get value from security solutions, across the world. >> Thank you for that, Gil. And so we're at the end of our time, but we'll give you a chance for a final wrap-up. Bring us home with your summary, please. >> So I think that Orca is building the cloud security solution that actually works, with its innovative agentless approach to cybersecurity, to gain complete coverage, a comprehensive solution, and to understand the complete context of the 1% that matters in your security challenges across your data centers in the cloud. We are bridging the gap between the security teams and the business's need to grow, and to do so at the pace of the cloud. I think the approach of being able to install a security solution within minutes and get a complete understanding of your risk goes hand in hand with the way you expect to adopt cloud technology. >> That's great, Gil. Thanks so much for coming on. You guys are doing awesome work. Really appreciate you participating in the program. >> Thank you very much. >> And thank you for watching this AWS Startup Showcase. We're covering the next big thing in AI, Security, and Life Sciences on theCUBE. Keep it right there for more great content. (upbeat music)

Published Date : Jun 24 2021



Rohan D'Souza, Olive | AWS Startup Showcase | The Next Big Thing in AI, Security, & Life Sciences.


 

(upbeat music) (music fades) >> Welcome to today's session of theCUBE's presentation of the AWS Startup Showcase, I'm your host Natalie Erlich. Today, we're going to feature Olive, in the life sciences track. And of course, this is part of the future of AI, security, and life sciences. Here we're joined by our very special guest Rohan D'Souza, the Chief Product Officer of Olive. Thank you very much for being with us. Of course, we're going to talk today about building the internet of healthcare. I do appreciate you joining the show. >> Thanks, Natalie. My pleasure to be here, I'm excited. >> Yeah, likewise. Well, tell us about AI and how it's revolutionizing health systems across America. >> Yeah, I mean, we're clearly living at a time of a lot of hype around AI, and there's a tremendous amount of excitement. Unfortunately for us, or fortunately, depending on whether you're an optimist or a pessimist, we had to wait for a global pandemic for people to realize that technology is here to really come to the aid of everybody in healthcare, not just on the consumer side, but on the industry side, and on the enterprise side of delivering better care. And it's truly an exciting time, but there's a lot of buzz, and we play an important role in trying to define that a little bit better, because you can't go too far today without hearing the term AI being used, or misused, in healthcare. >> Definitely. And also I'd love to hear about how Olive is fitting into this, and its contributions to AI in health systems. >> Yeah, so at its core, the industry thinks of us very much as an automation player. We've historically been in the trenches of healthcare, mostly on the provider side of the house, leveraging technology to automate a lot of the high-velocity, low-variability items. Our founding and our DNA is in this idea that we think it's unfair that healthcare relies on humans as routers.
And we have looked to solve the problem of technology not talking to each other, by using humans. And so we set out to really go in into the trenches of healthcare and bring about core automation technology. And you might be sitting there wondering, well why are we talking about automation under the umbrella of AI? And that's because we are challenging the very status quo of siloed-based automation, and we're building, what we say, is the internet of healthcare. And more importantly what we've done is, we've brought in a human, very empathetic approach to automation, and we're leveraging technology by saying when one Olive learns, all Olives learn, so that we take advantage of the network effect of a single Olive worker in the trenches of healthcare, sharing that knowledge and wisdom, both with her human counterparts, but also with her AI worker counterparts that are showing up to work every single day in some of the most complex health systems in this country. >> Right. Well, when you think about AI and, you know, computer technology, you don't exactly think of, you know, humanizing kind of potential. So how are you seeking to make AI really humanistic, and empathetic, potentially? >> Well, most importantly the way we're starting with that is where we are treating Olive just like we would any single human counterpart. We don't want to think of this as just purely a technology player. Most importantly, healthcare is deeply rooted in this idea of investing in outcomes, and not necessarily investing in core technology, right? So we have learned that from the early days of us doing some really robust integrated AI-based solutions, but we've humanized it, right? Take, for example, we treat Olive just like any other human worker would, she shows up to work, she's onboarded, she has an obligation to her customers and to her human worker counterparts. And we care very deeply about the cost of the false positive that exists in healthcare, right? 
So, and we do this in various different ways. Most importantly, we do it in an extremely transparent and interpretable way. By transparent I mean, Olive provides deep insights back to her human counterparts in the form of reporting and status reports. And we even have a term internally that we call a sick day. So when Olive calls in sick, we don't just tell our customers Olive's not working today, we tell our customers that Olive is taking a sick day, just like a human worker might need to stay home and recover. In our case, we just happen to have to rewire a certain portal integration because that portal went through a massive change, and Olive has to take a sick day in order to make that fix, right? So. And this is, you know, just helping our customers understand, or feel like they can achieve success with, AI-based deployments, and not have this robot hanging over them where we're waiting for Skynet to come into place, and truly humanizing the aspects of AI in healthcare. >> Right. Well, that's really interesting. How would you describe Olive's personality? I mean, could you attribute a personality? >> Yeah, she's unbiased, data-driven, extremely transparent in her approach, she's empathetic. There are certain days where she's direct, and there are certain ways where she could be quirky in the way she shares stuff. Most importantly, she's incredibly knowledgeable, and we really want to bring the knowledge that she has gained over the years of working in the trenches of healthcare to her customers. >> That sounds really fascinating, and I love hearing about the human side of Olive. Can you tell us, though, how this AI is actually improving efficiencies in healthcare systems right now? >> Yeah, not too many people know that about a third of every single US dollar is spent on the administrative burden of delivering care. It's really, really unfortunate.
In the capitalistic world of healthcare as a system in the United States, there is a lot of the tail wagging the dog. Most importantly, I don't know the last time you went through a process where you had to get an MRI or a CT scan, and your provider tells you that we first have to wait for the insurance company to give us permission to perform this particular test. And when you think about that, one, there's the tail-wagging-the-dog scenario, but two, there's the administrative burden of actually seeking approval for that test that your provider is telling you that you need. Right? And as humans, or as systems, we have just put humans in the supply chain of connecting the left side to the right side. So what we're doing is we're taking advantage of massive distributed cloud computing platforms, I mean, we're fully built on the AWS stack, we take advantage of things that we can very quickly stand up and spin up. And we're leveraging core capabilities in our computer vision and our natural language processing to do a lot of the tasks that, unfortunately, we have relegated humans to do. And our goal is: can we allow humans to function at the top of their license? Irrespective of what the license is, right? It could be a provider, it could be somebody working in the trenches of revenue cycle management, or it could be somebody in a call center talking to a very anxious patient who just learned that he or she might need to take a test to rule out something catastrophic, like a very adverse diagnosis.
You know, AI has been talked about for many, many, many years in the trenches of healthcare. It has found its place very much in challenging status quos in research, but it has struggled to find its way into practical application. And that's partly because, going back to the point that I raised earlier, the cost of the false positive in healthcare is really high. You know, it can't just be, I bought a pair of shoes online, and it recommended that I buy a pair of socks, and I happened to get the socks and I returned them because I realized that they're really ugly and hideous and I don't want them. In healthcare, you can't do that. Right? In healthcare you can't tell a patient or somebody else, oops, I really screwed up, I should not have told you that. So what that's meant for us, in the trenches of delivering AI-based applications, is we've been through a cycle of continuous pilots and proofs of concept. Now, though, with AI starting to take center stage, where a lot of what has been hardened in the research world can be applied practically to avoid the burnout and the sheer cost that the system is under, we're starting to see this real upward tick of people implementing AI-based solutions, whether it's for decision-making, whether it's for administrative tasks, drug discovery. It's just an amazing, amazing time to be at the intersection of the practical application of AI and really, really good healthcare delivery for all of us. >> Yeah, I mean, that's really, really fascinating, especially your point on practicality. Now, how do you foresee AI being able to be more commercial in its appeal? >> I think you have to have a couple of key wins under your belt, number one. Number two, you need the standard sort of outcomes-based publications that are required.
And I think we need real champions on the inside of systems to support the narrative that we as vendors are pushing heavily on the AI-driven, or AI-approachable, world, and we're starting to see that right now. You know, it took a really, really long time for providers, first here in the United States, but now internationally, to adopt and move away from paper-based records to electronic medical records. You know, you still hear a lot of pain from people about having to use an EMR, but try to take the EMR away from them for a day or two, and you'll very quickly realize that life without an EMR is extremely hard. AI is starting to get to that point. For us, we always say that Olive needs to pass the Turing test, right? So when you clearly get this feeling that I can trust my AI counterpart, my AI worker, to go and perform these tasks, because I realize that, you know, as long as it's unbiased, as long as it's data-driven, as long as it's interpretable and something that I can understand, I'm willing to try this out on a routine basis. But we really, really need those champions on the internal side to promote this kind of safe application. >> Yeah. Well, just another thought here: looking at your website, you really focus on some of the broken systems in healthcare, and how Olive is uniquely prepared to shine the light on that, where others aren't. Can you just give us an insight into that? >> Yeah. You know, the shine-the-light is a play on the fact that there's a tremendous amount of excitement in technology and AI in healthcare applied to the clinical side of the house. And it's the obvious place that most people would want to invest in, right? It's like, can I bring an AI-based technology to the clinical side of the house? Like decision support tools, drug discovery, clinical NLP, et cetera, et cetera.
But going back to what I said, 30% of what happens today in healthcare is on the administrative side. And so that's what we call the, sort of, dark side of healthcare, where it's not the most exciting place to do true innovation, because you're controlled very much by some big players in the house. And that's why we provide this insight, saying we can shine a light on a place that has typically been very dark in healthcare. It's around these mundane aspects of traditional operational and financial performance that don't get a lot of love from the tech community. >> Well, thank you, Rohan, for this fascinating conversation on how AI is revolutionizing health systems across the country, and also the unique role that Olive is now playing in driving those efficiencies that we really need. Really looking forward to our next conversation with you. That was Rohan D'Souza, the Chief Product Officer of Olive, and I'm Natalie Erlich, your host for the AWS Startup Showcase, on theCUBE. Thank you very much for joining us, and we look forward to you joining us for the next session. (gentle music)

Published Date : Jun 24 2021



Zach Booth, Explorium | AWS Startup Showcase | The Next Big Thing in AI, Security, & Life Sciences.


 

(gentle upbeat music) >> Everyone, welcome to the AWS Startup Showcase presented by theCUBE. I'm John Furrier, host of theCUBE. We are here talking about the next big thing in cloud, featuring Explorium. For the AI track, we've got AI, cybersecurity, and life sciences. Obviously AI is hot, with machine learning powering that. Today we're joined by Zach Booth, director of global partnerships and channels at Explorium. Zach, thank you for joining me today remotely. Soon we'll be in person, but thanks for coming on. We're going to talk about rethinking external data. Thanks for coming on theCUBE. >> Absolutely, thanks so much for having us, John. >> So you guys are a hot startup. Congratulations, we just wrote about you on SiliconANGLE: you have a new $75 million round of fresh funding. You're part of the Amazon partner network and growing like crazy. You guys have a unique value proposition, looking at external data and having a platform for advanced analytics and machine learning. Can you take a minute to explain what you guys do? What is this platform? What's the value proposition, and why do you exist? >> Bottom line, we're bringing context to decision-making. The premise of Explorium, and this is consistent with the framework of advanced analytics, is that we're helping customers to reach better, more relevant external data to feed into their predictive and analytical models. It's quite a challenge to actually integrate and effectively leverage data that's coming from beyond your organization's walls. It's manual, it's tedious, it's extremely time consuming, and that's a problem. It's really a problem that Explorium was built to solve. And our philosophy is it shouldn't take so long. It shouldn't be such an arduous process, but it is. So we built a company, a technology, that's capable, for any given analytical process, of connecting a customer to relevant sources beyond their organization's walls.
And this really impacts decision-making by bringing variety and context into their analytical processes. >> You know, one of the things I see a lot in my interviews with theCUBE and talking to people in the industry is that everyone talks a big game about having some machine learning and AI, they're like, "Okay, I got all this cool stuff". But at the end of the day, people are still using spreadsheets. They're wrangling data. And a lot of it's dominated by these still fenced-off data warehousing and you start to see the emergence of really companies built on the cloud. I saw the snowflake IPO, you're seeing a whole new shift of new brands emerging that are doing things differently, right? And because there's such a need for just move out of the archaic spreadsheet and data presentation layers, it's a slower antiquated, outdated. How do you guys solve that problem? You guys are on the other side of that equation, you're on the new wave of analytics. What are you guys solving? How do you make that work? How do you get on that way? >> So basically the way Explorium sees the world, and I think that most analytical practitioners these days see it in a similar way, but the key to any analytical problem is having the right data. And the challenge that we've talked about and that we're really focused on is helping companies reach that right data. Our focus is on the data part of data science. The science part is the algorithmic side. It's interesting. It was kind of the first frontier of machine learning as practitioners and experts were focused on it and cloud and compute really enabled that. The challenge today isn't so much "What's the right model for my problem?" But it's "What's the right data?" And that's the premise of what we do. Your model's only as strong as the data that it trains on. And going back to that concept of just bringing context to decision-making. 
Within that framework that we talked about, the key is bringing comprehensive, accurate and highly varied data into my model. But if my model is only being informed with internal data which is wonderful data, but only internal, then it's missing context. And we're helping companies to reach that external variety through a pretty elegant platform that can connect the right data for my analytical process. And this really has implications across several different industries and a multitude of use cases. We're working with companies across consumer packaged goods, insurance, financial services, retail, e-commerce, even software as a service. And the use cases can range between fraud and risk to marketing and lifetime value. Now, why is this such a challenge today with maybe some antiquated or analog means? With a spreadsheet or with a rule-based approach where we're pretty limited, it was an effective means of decision-making to generate and create actions, but it's highly limited in its ability to change, to be dynamic, to be flexible. And with modeling and using data, it's really a huge arsenal that we have at our fingertips. The trick is extracting value from within it. There's obviously latent value from within our org but every day there's more and more data that's being created outside of our org. And that is a challenge to go out and get to effectively filter and navigate and connect to. So we've basically built that tech to help us navigate and query for any given analytical question. Find me the right data rather than starting with what's the problem I'm looking for, now let me think about the right data. Which is kind of akin to going into a library and searching for a specific book. You know which book you're looking for. Instead of saying, there's a world, a universe of data outside there. I want to access it. I want to tap into what's right. 
Can I use a tool that can effectively query all that data, find what's relevant for me, connect it and match it with my own and distill signals or features from that data to provide more variety into my modeling efforts yielding a robust decision as an output. >> I love that paradigm of just having that searchable kind of paradigm. I got to ask you one of the big things that I've heard people talk about. I want to get your thoughts on this, is that how do I know if I even have the right data? Is the data addressable? Can I find it? Is it even, can I even be queried? How do you solve that problem for customers when they say, "I really want the best analytics but do I even have the data or is it the right data?" How do you guys look at that? >> So the way our technology was built is that it's quite relevant for a few different profile types of customers. Some of these customers, really the genesis of the company started with those cloud-based, model-driven since day one organizations, and they're working with machine learning and they have models in production. They're quite mature in fact. And the problem that they've been facing is, again, our models are only as strong as the data that they're training on. The only data that they're training on is internal data. And we're seeing diminishing returns from those decisions. So now suddenly we're looking for outside data and we're finding that to effectively use outside data, we have to spend a lot of time. 60% of our time spent thinking of data, going out and getting it, cleaning it, validating it, and only then can we actually train a model and assess if there's an ROI. That takes months. And if it doesn't push the needle from an ROI standpoint, then it's an enormous opportunity cost, which is very, very painful, which goes back to their decision-making. Is it even worth it if it doesn't push the needle? That's why there had to be a better way. 
And what we built is relevant for that audience as well as companies that are in the midst of their digital transformation. We're data rich, but data science poor. We have lots of data. A latent value to extract from within our own data and at the same time tons of valuable data outside of our org. Instead of waiting 18, 36 months to transform ourselves, get our infrastructure in place, our data collection in place, and really start having models in production based on our own data. You can now do this in tandem. And that's what we're seeing with a lot of our enterprise customers. By using their analysts, their data engineers, some of them in their innovation or kind of center of excellences have a data science group as well. And they're using the platform to inform a lot of their different models across lines of businesses. >> I love that expression, "data-rich". A lot of people becoming full of data too. They have a data problem. They have a lot of it. I think I want to get your thoughts but I think that connects to my next question which is as people look at the cloud, for instance, and again, all these old methods were internal, internal to the company, but now that you have this idea of cloud, more integration's happening. More people are connecting with APIs. There's more access to potentially more signals, more data. How does a company go to that next level to connect in and acquire the data and make it faster? Because I can almost imagine that the signals that come from that context of merging external data and that's the topic of this theme, re-imagining external data is extremely valuable signaling capability. And so it sounds like you guys make it go faster. So how does it work? Is it the cloud? Take us through that value proposition. 
>> Well, it's a real, it's amazing how fast the rate of change organizations have been moving onto the cloud over the past year during COVID and the fact that alternative or external data, depending on how you refer to it, has really, really blown up. And it's really exciting. This is coming in the form of data providers and data marketplaces, and everybody is kind of, more and more organizations are moving from rule-based decision-making to predictive decision making, and that's exciting. Now what's interesting about this company, Explorium, we're working with a lot of different types of customers but our long game has a real high upside. There's more and more companies that are starting to use data and are transformed or already are in the midst of their transformation. So they need outside data. And that challenge that I described is exists for all of them. So how does it really work? Today, if I don't have data outside, I have to think. It's based on hypothesis and it all starts with that hypothesis which is already prone to error from the get-go. You and I might be domain experts for a given use case. Let's say we're focusing on fraud. We might think about a dozen different types of data sources, but going out and getting it like I said, it takes a lot of time harmonizing it, cleaning it, and being able to use it takes even more time. And that's just for each one. So if we have to do that across dozens of data sources it's going to take far too much time and the juice isn't worth the squeeze. And so I'm going to forego using that. And a metaphor that I like to use when I try to describe what Explorium does to my mom. I basically use this connection to buying your first home. It's a very, very important financial decision. You would, when you're buying this home, you're thinking about all the different inputs in your decision-making. It's not just about the blueprint of the house and how many rooms and the criteria you're looking for. 
You're also thinking about external variables. You're thinking about the school zone, the construction, the property value, alternative or similar neighborhoods. That's probably your most important financial decision, or one of the largest at least. A machine learning model in production is an extremely important and expensive investment for an organization. Now, the problem is, as a consumer buying a home, we have all this data at our fingertips to find out all of those external inputs. Organizations don't, which was kind of crazy to me when I first got into this world. And so they're making decisions with their first-party data only. First-party data is wonderful data. It's the best, it's representative, it's high quality, it's high value for their specific decision-making and use cases, but it lacks context. And there's so much context, in the form of location-based data and business information, that can inform decision-making and isn't being used. It translates to sub-optimal decision-making, let's say. >> Yeah, and I think one of the insights around looking at signal data in context is that by merging it with first-party data, it creates a huge value window. It gives you observational data, maybe potentially insights into customer behavior. So totally agree, I think that's a huge observation. You guys are definitely on the right side of history here. I want to get into how it plays out for the customer. You mentioned the different industries; obviously data's in every vertical, and vertical specialization with data has to be very metadata-driven. I mean, metadata in oil and gas is different than in fintech. Some overlap, but for the most part you've got to have that acute context for each one. How are you guys working? Take us through an example of someone getting it right, getting the right setup: take us through the use case of how someone onboards Explorium, how they put it to use, and what are some of the benefits?
>> So let's break it down into a three-step phase, and let's use that example of fraud from earlier. An organization would have historical data on how many customers actually turned out to be fraudulent at the end of the day. So this use case, and it's a core business problem, comes with an intention to reduce that fraud. They would provide, going with your description earlier, something similar to an Excel file. This can be pulled from any database out there; we're working with loads of them. And they would provide what's called training data. This training data is their historical data, and it has as an output the outcome, the conclusion: was this business fraudulent or not? Yes or no. Binary. The platform would understand that data itself to train a model with external context in the form of enrichments. These data enrichments at the end of the day are important and relevant, but their purpose is to generate signals. So to your point, signals are the bottom line: what everyone's trying to achieve, identify, discover, and even engineer, by using the data they have and the data they have yet to integrate with. So the platform would connect to your data, infer and understand the meaning of that data, and, based on this matching of internal plus external context, automate the process of distilling signals, or, as these are referred to in machine learning, features. And these features are really the bread and butter of your modeling efforts. If you can leverage features that come from data outside of your org, and they're quantifiably valuable, which the platform measures, then you're putting yourself in a position to generate an edge in your modeling efforts. Meaning now you might reduce your fraud rate, so your customers get a much better, more compelling offer or service or price point. It impacts your business in a lot of ways.
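The three-step flow described here (historical labels in, external enrichment, signal discovery) can be sketched with a toy model. Everything below is illustrative: synthetic data, an invented external signal, and scikit-learn standing in for the platform's modeling engine; none of it is Explorium's actual implementation.

```python
# Toy sketch: train a fraud classifier on first-party data alone, then on
# first-party data enriched with one external signal. The data is synthetic
# and the external signal is deliberately constructed to drive the label.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 2000

# First-party features: what the organization already holds (noise here).
first_party = rng.normal(size=(n, 3))

# External enrichment: e.g. a location- or business-based risk score the
# org lacks. In this toy setup it actually determines the fraud label.
external_signal = rng.normal(size=(n, 1))
is_fraud = (external_signal[:, 0] > 0.8).astype(int)

enriched = np.hstack([first_party, external_signal])

Xb_tr, Xb_te, Xe_tr, Xe_te, y_tr, y_te = train_test_split(
    first_party, enriched, is_fraud, random_state=0)

baseline = DecisionTreeClassifier(max_depth=4, random_state=0).fit(Xb_tr, y_tr)
with_signal = DecisionTreeClassifier(max_depth=4, random_state=0).fit(Xe_tr, y_tr)

baseline_acc = accuracy_score(y_te, baseline.predict(Xb_te))
enriched_acc = accuracy_score(y_te, with_signal.predict(Xe_te))
print(f"first-party only: {baseline_acc:.2f}, with external signal: {enriched_acc:.2f}")
```

With the signal constructed this way, the enriched model should clearly outperform the first-party-only baseline, which is the "edge" being described.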
What Explorium is bringing to the table in terms of value is a single access point to a huge universe of external data. It expedites your time to value: rather than data analysts, data engineers, and data scientists spending a significant amount of time on data preparation, they can now spend most of their time on feature or signal engineering. That's the more fun and interesting part, and they can scale their modeling efforts. So: time to value, access to a huge universe of external context, and scale. >> So I see two things here. Just make sure I get this right, 'cause it sounds awesome. So one, the core engineering assets, whether it's the platform engineers or the data engineers, are optimized for getting more signal, which is more impactful for context acquisition, looking at contexts that might have a business outcome, versus wrangling and doing mundane, heavy lifting. >> Yeah, so with it, sorry, go ahead. >> And the second one is you create a democratization for analysts or business people who are used to dealing with spreadsheets and just want to play with data and get a feel for it, or experiment, do querying, try to match planning with policy - >> Yeah, so the way I like to communicate this is that Explorium is this one-two punch. It's got this technology layer that provides entity resolution, so matching with external data, which is otherwise a manual endeavor; Explorium has automated that piece. The second is a huge universe of outside data. So this circumvents procurement. You don't have to go out and spend all of these one-off efforts on finding data, organizing it, cleaning it, etc.
You can use Explorium as your single access point and gateway to external data and match it, so this will accelerate your time to value and ultimately the amount of valuable signals that you can discover and leverage through the platform, and feed this into your own pipelines or whatever system or analytical need you have. >> Zach, great stuff. I love talking with you and I love the hot startup action here, 'cause again, you're on the net new wave here. Like anything new, I was just talking to a colleague here. (indistinct) When you have something new, it's like driving a car for the first time. You need someone to give you some driving lessons, or to figure out how to operationalize it or take advantage of the one-two punch, as you pointed out. How do you guys get someone up and running? 'Cause let's just say I'm bought into this, so no-brainer, you got my attention, but I still don't understand. Do you provide a marketplace of data? Do I need to get my own data? Do I bring my own data to the party? Do you guys provide relationships with other data providers? How do I get going? How do I drive this car? How do you answer that? >> So first, explorium.ai offers a free trial, and we're a product-focused company. So a practitioner, maybe a data analyst, a data engineer, or a data scientist, would use this platform to enrich their analytics, so BI decision-making, or any models that they're working on, either in production or being trained. Now, oftentimes models that are being trained don't actually make it to production because they don't meet a minimum threshold, meaning they're not going to have a positive business outcome if they're deployed. With Explorium you can now bring variety into that and increase the chances that the model being trained will actually be deployed, because it's being fed with the right data: the data that you need, not just the data that you have.
So how a business would start working with us would typically be with a use case that has high business value. Maybe this is a fraud or risk use case in a B2B or even B2SMB context, or a marketing use case: LTV modeling, lookalike modeling, lead acquisition and generation for CPGs, and field sales optimization. The platform would explore and understand your data, enrich it automatically, generate and discover new signals from external data plus your own, and feed these into either a model that you have in-house or end to end in the platform itself. We provide customer success to help you build out your first model, perhaps, and hold your hand through that process. But typically, after a few months, most of our customers are building and running multiple models in production on their own. And that's really exciting, because we're helping organizations move from rule-based decision-making, and we're being their bridge to data science. >> Awesome. I noticed that in your title you handle global partnerships and channels, which I'm assuming means you guys have a network and ecosystem you're working with. What are some of the partnerships and channel relationships that you bring to bear in the marketplace? >> So data and analytics, this space, is very much an ecosystem. Our customers are working across different clouds, working with all sorts of vendors and technologies. Basically, they have a pretty big stack. We're a part of that stack, and we want to play symbiotically within our customers' stacks so that we can contribute value whether they sit here, there, or in another place. Our partners range from consulting and system integration firms, those that are building out the blueprint for a digital transformation or actually implementing that transformation, and we contribute value in both of these cases as a technology innovation layer with our product.
And a customer would then consume Explorium afterwards, after that transformation is complete, as a part of their stack. We're also working with a lot of the different cloud vendors. Our customers are all cloud-based, and data enrichment is becoming more and more relevant with some wonderful machine-learning tools, be they AutoML or even the data marketplaces that are popping up, which is very exciting. What we're bringing to the table as an edge is accelerating the connection between the data that I think I want as a company and how to actually extract value from that data. Being part of this ecosystem means that we can be, and should be, working with a lot of different partners to contribute incremental value to our end customers. >> Final question I want to ask you: if I'm in a conference room with my team and someone says, "Hey, we should be rethinking our external data," what would I say? How would I pound my fist on the table or raise my hand and say, "Hey, I have an idea, we should be thinking this way"? What would be my argument to the team to re-imagine how we deal with external data? >> So it might be a scenario where, rather than banging your hands on the table, you're banging your head on the table, because it's such a challenging endeavor today. Companies have to think about: what's the right data for my specific use cases? I need to validate that data. Is it relevant? Is it real? Is it representative? Does it have good coverage, good depth, and good quality? Then I need to procure that data, and this is about getting a license for it. I need to integrate that data with my own, which means I need some in-house expertise to do so. And then, of course, I need to monitor and maintain that data on an ongoing basis.
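The vetting steps listed here (validate relevance, coverage, and quality before procuring and integrating) can be sketched as a simple pre-licensing check. All names, data, and thresholds below are hypothetical illustrations, not Explorium functionality:

```python
# Minimal sketch of vetting a candidate external dataset before licensing it:
# check join-key coverage against first-party records and basic field quality.
def vet_external_data(first_party_keys, external_rows, key_field, value_field,
                      min_coverage=0.8, max_null_rate=0.2):
    """Return (ok, report) for a candidate external dataset."""
    external_by_key = {row[key_field]: row for row in external_rows}

    # Coverage: what fraction of our own records could this data enrich?
    matched = [k for k in first_party_keys if k in external_by_key]
    coverage = len(matched) / len(first_party_keys)

    # Quality: how often is the enrichment field actually populated?
    nulls = sum(1 for row in external_rows if row.get(value_field) in (None, ""))
    null_rate = nulls / len(external_rows)

    report = {"coverage": coverage, "null_rate": null_rate}
    return coverage >= min_coverage and null_rate <= max_null_rate, report

# Hypothetical example: business IDs we hold vs. a candidate risk-score feed.
ours = ["biz-1", "biz-2", "biz-3", "biz-4"]
feed = [{"id": "biz-1", "risk": 0.2}, {"id": "biz-2", "risk": None},
        {"id": "biz-3", "risk": 0.7}, {"id": "biz-4", "risk": 0.1},
        {"id": "biz-9", "risk": 0.5}]
ok, report = vet_external_data(ours, feed, "id", "risk")
```

The point of the interview is that doing this by hand for dozens of sources is exactly the toil a data-acquisition platform is meant to absorb.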
All of this is a pretty big thing to undertake and undergo, and having a partner to facilitate that external data integration, ongoing refresh, and monitoring, and being able to trust that it's all harmonized and high quality, and that I can find the valuable data without having to manually pick and choose and discover it myself, is a huge value add, particularly the larger the organization. Because there's so much data out there, and a lot of noise out there too. And so if I can, through a single partner or access point, tap into that data and quantify what's relevant for my specific problem, then I'm putting myself in a really good position and optimizing the allocation of my very expensive and valuable data analyst and engineering resources. >> Yeah, I think one of the things you mentioned earlier, which I thought was a huge point and a good call-out, was that it goes beyond the first-party data. Even from an internal view, some of the best, most successful innovators that we've been covering at cloud scale are extending their first-party data to external providers. They're in the value chains of solutions that share their first-party data with other suppliers, and that's just, again, more of an extension of the first-party data. You're taking it to a whole 'nother level: there's an external set of data beyond it that's even more important. I think this is a fascinating growth area and I think you guys are onto it. Great stuff. >> Thank you so much, John. >> Well, I really appreciate you coming on, Zach. Final word: give a quick plug for the company. What are you up to, and what's going on? >> What's going on with Explorium? We are growing very fast. We're a very exciting company. I've been here since the very early days, and I can tell you that we have a stellar working environment and a very strong, down-to-earth, high-work-ethic culture.
Our offices in San Mateo, New York, and Tel Aviv are growing rapidly. As you mentioned earlier, we raised our series C, which brings Explorium's total raised to, I think, $127 million over the past two years and some change. And whether you want to partner with Explorium, work with us as a customer, or join us as an employee, we welcome that. I encourage everybody to go to explorium.ai and check us out: read some of the interesting content there around data science, the processes, and the business outcomes that a lot of our customers are seeing, and join a free trial, so you can check out the platform and everything it has to offer, from the machine learning engine to the signal studio, as well as what type of information might be relevant for your specific use case. >> All right, Zach, thanks for coming on. Zach Booth, director of global partnerships and channels at explorium.ai. The next big thing in cloud, featuring Explorium, as part of our AI track. I'm John Furrier, host of theCUBE. Thanks for watching.

Published Date : Jun 24 2021



A Day in the Life of an IT Admin | HPE Ezmeral Day 2021


 

>>Hi, everyone. Welcome to Ezmeral Day. My name is Yasmin Joffey. I'm the director of systems engineering for Ezmeral at HPE. Today we're joined by my colleague, Don Wake, a technical marketing engineer, who will talk to us about a day in the life of an IT administrator through the lens of the Ezmeral Container Platform. We'll be answering your questions in real time, so if you have any questions, please feel free to put them in the chat, and we should have some time at the end for some live Q and A. Don, want to go ahead and kick us off? >>All right. Thanks a lot, Yasir. Yeah, my name is Don Wake. I'm the tech marketing guy, and welcome to Ezmeral Day, a day in the life of an IT admin, and happy St. Patrick's Day at the same time. I hope you're wearing green; virtual pinch if you're not. So we're just going to go through some quick things, talk about the needs of the modern business to set the stage, and go right into a demo. So what is the need here that we're trying to fulfill with the Ezmeral Container Platform? It's all rooted in analytics. Modern businesses are driven by data. They are also application-centric, and the separation of applications and data, or rather the relationship between the two, has never been more important: applications are very data hungry. These days, they consume data in all new ways. The applications themselves are virtualized, containerized, and distributed everywhere, and optimizing every decision and every application has become a huge problem to tackle for every enterprise. So we look at data science, for example, as one big use case here, and it's really a team sport. Today I'm wearing the hat of, perhaps, the operations team, or a software engineer working on continuous integration and continuous development, integrating with source control, and I'm supporting these data scientists and data analysts. I also have some resource control: I can decide whether or not the data science team gets a particular cluster of compute and storage so that they can do their work. So this is the solution that I've been given as an IT admin, and that is the Ezmeral Container Platform.
And so I'm in the background here, making sure that he's got access to his cluster storage protected, make sure it's, um, you know, his training models are up, he's got service endpoints, connecting him to, um, you know, his source control and making sure he's got access to all that stuff. So he's got like a taxi ride prediction model that he's working on and he has a Jupiter notebook and models. So why don't we, um, get hands on and I'll just jump right over it. >>It was no container platform. So this is a web UI. So this is the interface into the container platform. Our centralized control plane, I'm using my active directory credentials to log in here. >>And >>When I log in, I've also been assigned a particular role, uh, with regard to how much of the resources I can access. Now, in my case, I'm a site admin you can see right up here in the upper right hand, I'm a site admin and I have access to lots and lots of resources. And the one I'm going to be focusing on today is a Kubernetes cluster. Um, so I have a cluster I can go in here and let's say, um, we have a new data scientists come on board one. I can give him his own resources so he can do whatever he wants, use some GPU's and not affect other clusters. Um, so we have all these other clusters already created here. You can see here that, um, this is a very busy, um, you know, production system. They've got some dev clusters over here. >>I see here, we have a production cluster. So he needs to produce something for data scientists to use. It has to be well protected and, and not be treated like a development resource. So under his production cluster, I decided to create a new Kubernetes cluster. And literally I just push a button, create Kubernetes cluster once I've done that. And I'll just show you some of the screens and this is a live environment. 
So this is, I could actually do it all my hosts are used up right now, but I wouldn't be able to go in here and give it a name, just select, um, some hosts to use as the primary master controller and some workers answer a few more questions. And then once that's done, I have now created a special, a whole nother Kubernetes cluster, um, that I could also create tenants from. >>So tenants are really Kubernetes. Uh namespaces so in addition to taking hosts and Kubernetes clusters, I can also go to that, uh, to existing clusters and now carve out a namespace from that. So I look at some of the clusters that were already created and, um, let's see, we've got, um, we've got this year is an example of a tenant that I could have created from that production cluster. And to do that here in the namespace, I just hit create and similar to how you create a cluster. You can now carve down from a given cluster and we'll say the production cluster and give it a name and a description. I can even tell it, I want this specific one to be an AI ML project, um, which really is our ML ops license. So at the end of the day, I can say, okay, I'm going to create an ML ops tenant from that cluster that I created. >>And so I've already created it here for this demo. And I'm going to just go into that Kubernetes namespace now that we also call it tenant. I mean, it's like, multitenancy the name essentially means we're carving out resources so that somebody can be isolated from another environment. First thing I typically do. Um, and at this point I could also give access to this tenant and only this tenant to my data scientist. So the first thing I typically do is I go in here and you can actually assign users right here. So right now it's just me. But if I want it to, for example, give this, um, to Terry, I could go in here and find another user and assign him from this lead, from this list, as long as he's got the proper credentials here. 
So you can see here that all these other users have Active Directory credentials; when we created the cluster itself, we also made sure it integrated with our Active Directory, so that only authorized users can get in. Let's say the first thing I want to do is make sure that when Terry or I do Jupyter notebook work, we're connected straight up to the GitHub repository. He gives me a link to GitHub and says, hey, this is all of my cluster work: I've got my source control there, my scripts, my Python notebooks, my Jupyter notebooks. So I create a configuration and say, okay, here's a Git repo, here's the link to it, here's his username, and I can use a token; this is actually a private repo, and using a token is the standard Git interface. And the cool thing after that is you can go in here and actually copy the authorization secret. This gets into the Kubernetes world: if you want to make sure you have secure integration with things like your source control, or perhaps your Active Directory, that's all maintained in secrets. So you can take that secret, and when I then create his notebook, I can put it right here in this launch YAML and say, hey, connect this Jupyter notebook up with this secret so he can log in. Once I've launched this Jupyter notebook cluster, it is now, within my Kubernetes tenant, really a pod. If I want to, I can go right into a terminal for that Kubernetes tenant and use kubectl; these are standard, CNCF-certified Kubernetes commands: kubectl get pods. When I do this, it tells me all of the active pods, and within those pods, the containers that I'm running. So I'm running quite a few pods and containers here in this artificial intelligence and machine learning tenant, which is kind of cool. Also, if I wanted to, I could download the config for kubectl, and then I can do something like this, where on my own system, where I'm perhaps more comfortable: kubectl get pods. This is running on my laptop; I just had to refresh my kubeconfig and give it the IP address and authorization information in order to connect from my laptop to that endpoint. From a CI/CD perspective, an IT admin usually wants to use tools right on the desktop. So here I am, back in my web browser; I'm also here on the dashboard of this Kubernetes tenant, and I can see how it's doing.
So, you know, just to show you that again, I'll go back to the container platform. And in fact, the data scientist, uh, could do the same thing. Attitude put a notebook back to platform. So here's this project repository. So this is other big point. So now putting on my storage admin hat, you know, I've got this shared, um, storage, um, volume that is managed for me by the ESMO data fabric. >>Um, in, in here, you can see that the data scientist, um, from his get repo is able to through Jupiter notebook directly, uh, copy his code. He was able to run as Jupiter notebook and create this XG boost, uh, model. So this file can then be registered in this AIML tenant. So he can go in here and register his model. So this is, you know, this is really where the data scientist guy can self-service kick off his notebooks, even get a deployment end point so that he can then inference his cluster. So here again, another URL that you could then take this and put it into like a postman rest URL and get answers. Um, but let's say he wants to, um, he's been doing all this work and I want to make sure that his, uh, data's protected, uh, how about creating a mirror. >>So if I want to create a mirror of that data, now I go back to this other, uh, and this is the, the, uh, data fabric embedded in a very special cluster called the Picasso cluster. And it's a version of the ASML data fabric that allows you to launch what was formerly called Matt bar as a Kubernetes cluster. And when you create this special cluster, every other cluster that you create is automatically, uh, gets things like that. Tenant storage. I showed you to create a shared workspace, and it's automatically managed by this, uh, data fabric. Uh, and you're even given an end point to go into the data fabric and then use all of the awesome features of ASML data fabric. So here I can just log in here. And now I'm at the, uh, data fabric, web UI to do some data protection and mirroring. >>So >>Let's go over here. 
Let's say I want to, uh, create a mirror of that tenant. So I forgot to note what the name of my tenant was. I'm going to go back to my tenant, the name of the volume that I'm playing with here. So in my AIML tenant, I'm going to go to my source, control my project repository that I want to protect. And I see that the ESMO data fabric has created 10 and 30 as a volume. So I'll go back to my, um, data fabric here, and I'm going to look for 10 and 30. And if I want to, I can go into tenant 30, >>Okay. >>Down here, I can look at the usage. I can look at all of the, you know, I've used very little of the, uh, allocated storage that I want, but let's, uh, you know what, let's go ahead and create a volume to mirror that one. So very simple web UI that has said create volume. I go in here and I say, I want to do a, a tenant 30 mirror. And I say, mirror the mirror volume. Um, I want to use my Picasso cluster. I want to use tenant 30. So now that's actually looking up in the data fabric, um, database there's 10 and 30 K. So it knows exactly which one I want to use. I can go in here and I can say, you know, ext HCP, tenant, 30 mirror, you know, I can give it whatever name I want and this path here. >>And that's a whole nother, uh, demo is this could be in Tokyo. This could be mirrored to all kinds of places all over the world, because this is truly a global name, split namespace, which is a huge differentiator for us in this case, I'm creating a local mirror and that can go down here and, um, I can add, uh, audit and encryptions. I can do, um, access control. I can, you know, change permissions, you know, so full service, um, interactivity here. And of course this is using the web UI, but there's also rest API interfaces as well. So that is pretty much the, the brunt of what I wanted to show you in the demo. Um, so we got hands on and I'm just going to throw this up real quick and then come back to Yasser. 
See if he's got any questions he has received from anybody watching, if you have any new questions. >>Yeah. We've got a few questions. Um, we can, uh, just take some time to go, hopefully answer a few. Um, so it, it does look like you can integrate or incorporate your existing get hub, uh, to be able to, um, extract, uh, shared code or repositories. Correct? >>Yeah. So we have that built in and can either be, um, get hub or bit bucket it's, you know, pretty standard interface. So just like you can go into any given, get hub and do a clone of a, of a repo, pull it into your local environment. We integrated that directly into the gooey so that you can, uh, say to your, um, AIML tenant, uh, to your Jupiter notebook. You know, here's, here's my GitHub repo. When you open up my notebook, just connect me straight up. So it saves you some, some steps there because Jupiter notebook is designed to be integrated with get hub. So we have get hub integrated in as well or bit bucket. Right. >>Um, another question around the file system, um, has the map, our file system that was carried over, been modified in any way to run on top of Kubernetes. >>So yeah, I would say that the map, our file system data fabric, what I showed here is the Kubernetes version of it. So it gives you a lot of the same features, but if you need, um, perhaps run it on bare metal, maybe you have performance, um, concerns, um, you know, you can, uh, you can also deploy it as a separate bare metal instance of data fabric, but this is just one way that you can, uh, use it integrated directly into Kubernetes depends really the needs of, of the, uh, the user and that a fabric has a lot of different capabilities, but this is, um, it has a lot of the core file system capabilities where you can do snapshots and mirrors, and it it's of course, striped across multiple, um, multiple disks and nodes. And, uh, you know, Matt BARR data fabric has been around for years. 
It's, uh, and it's designed for integration with these, uh, analytic type workloads. >>Great. Um, you showed us how you can manage, um, Kubernetes clusters through the ASML container platform you buy. Um, but the question is, can you, uh, control who accesses, which tenant, I guess, namespace that you created, um, and also can you restrict or, uh, inject resource limitations for each individual namespace through the UI? >>Oh yeah. So that's, that's a great question. Yes. To both of those. So, um, as a site admin, I had lots of authority to create clusters, to go into any cluster I wanted, but typically for like the data scientist example I used, I would give him, I would create a user for him. And there's a couple of ways you can create users. Um, and it's all role-based access control. So I could create a local user and have container platform authenticate him, or I can say integrate directly with, uh, active directory or LDAP, and then even including which groups he has access to. And then in the user interface for the site admin, I could say he gets access to this tenant and only this tenant. Um, another thing you asked about is his limitations. So when you create the tenant to prevent that noisy neighbor problem, you can, um, go in and create quotas. >>So I didn't show the process of actually creating a Quentin, a tenant, but integral to that, um, flow is okay, I've defined which cluster I want to use. I defined how much memory I want to use. So there's a quota right there. You could say, Hey, how many CPU's am I taking from this pool? And that's one of the cool things about the platform is that it abstracts all that away. You don't have to really know exactly which host, um, you know, you can create the cluster and select specific hosts, but once you've created the cluster, it's not just a big pool of resources. 
So you can say: Bob over here, he's only going to get 50 of the hundred CPUs available, he's only going to get X gigabytes of memory, and he's only going to get this much storage that he can consume. You can then safely hand something off and know they're not going to take all the resources, especially the GPUs, which will be expensive; you want to make sure that one person doesn't hog all the resources. So absolutely, quotas are built in there. >>Fantastic. Well, I think we are out of time. We have a list of other questions, and we will absolutely reach out and get all your questions answered, for those of you who asked questions in the chat. Don, thank you very much. Thanks everyone else for joining. Don, will this recording be made available for those who couldn't make it today? >>I believe so. Honestly, I'm not sure what the process is, but yeah, it's being recorded, so they must have done that for a reason. >>Fantastic. Well, Don, thank you very much for your time, and thank everyone else for joining. Thank you.
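The quota logic just described (Bob gets 50 of the 100 CPUs and cannot exceed his cap) can be sketched as a toy allocator. This is an illustration of the idea, not the platform's implementation; the resource names and numbers are taken from the example above:

```python
# Toy quota allocator mirroring the idea above: the cluster is a pool of
# resources, and each allocation is checked so one user can't hog CPUs,
# memory, or GPUs.

class QuotaError(Exception):
    pass

class ResourcePool:
    def __init__(self, cpus: int, mem_gb: int, gpus: int):
        self.free = {"cpus": cpus, "mem_gb": mem_gb, "gpus": gpus}

    def allocate(self, who: str, **request) -> dict:
        # validate the whole request first, so a rejected request
        # leaves the pool untouched
        for res, amount in request.items():
            if amount > self.free.get(res, 0):
                raise QuotaError(f"{who}: not enough {res}")
        for res, amount in request.items():
            self.free[res] -= amount
        return dict(request)

pool = ResourcePool(cpus=100, mem_gb=512, gpus=8)
pool.allocate("bob", cpus=50, mem_gb=128, gpus=2)  # Bob gets 50 of 100 CPUs
print(pool.free["cpus"])  # 50
```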

Published Date : Mar 17 2021



A Day in the Life of Data with the HPE Ezmeral Data Fabric


 

>>Welcome everyone to A Day in the Life of Data with the HPE Ezmeral Data Fabric. The session is being recorded and will be available for replay at a later time, when you want to come back and view it again. Feel free to add any questions that you have into the chat, and Chad and I will be more than willing to answer your questions. And now let me turn it over to Jimmy Bates. >>Thanks. Let me go ahead and share my screen here and we'll get started. >>Hey everyone. Once again, my name is Jimmy Bates. I'm a director of solutions architecture here for HPE Ezmeral in the Americas. Today I'd like to walk you through a journey: how our everyday life is evolving, how everything about our world continues to grow more connected, and how here at HPE we support the data that represents that digital evolution for our customers with the HPE Ezmeral Data Fabric. To start with, let's define that term, data. The concept of data can be simplified to a record of life's events, no matter if it's personal, professional, or mechanical in nature. Data is just records that represent and describe what has happened, what is happening, or what we think will happen. And it turns out the more complete a record we have of these events, the easier it is to figure out what comes next. I like to refer to that as the omnipotence protocol. Let's look at this from the personal perspective of two very different people. Let me introduce you to James. He's a native citizen of the digital world; he's been a career professional in the IT world for years. He's always on, always connected. He loves to get all the information he needs on a smartphone. He works constantly with analytics. He predicts what his customers need, what they want, where they are, and how best to reach them. He's fully embraced the use of data in his life. This is Sue SCA. She's a bit of an opposite to James.
She hasn't yet immigrated to our digital world. She's been dealing with the changes that are prevalent in our times, and she started a new business that gives her customers the option of expressing their personalities in the masks they wear. She wants to make sure her customers can upload images, logos, and designs, in order to deliver that customized mask, to brighten their interactions with others while being safe as they go about their day. But she needs a crash course in the digital journey. She's recently, as most of us have, transitioned from an office culture to a work-from-home culture, and she wants to continue to grow the revenue of that venture on the side. >>At the core of these personalities is a journey that is representative of a common challenge we're all facing today. Our world has been steadily shrinking as our ability to reach out to one another has steadily increased. We're all on that journey together: to know more about what is happening, to be connected to what our business is doing, to be instantly responsive to our customer needs, and to deliver personalized service to every individual. At Ezmeral, we see this across every industry: the challenge of providing tailored experiences to potential customers in a connected world; of providing constant information on the deliveries we requested, or an easier commute to our destination; of shifting inventories to just-in-time arrival for our fabrications; of identifying quality issues in real time to alter the production of each product, so it's tailored to the request of the end user; of delivering energy in smarter, more efficient ways, without injury, while protecting the environment; and of identifying emerging medical threats and delivering personalized treatments safely. >>And at the core of all of these changes, all of these different industries, is data.
If you look at the major technology trends, they've been evolving down this path for some time now. We're well into our cloud journey. The mobile platform world is now just part of our core strategies. IoT is feeding constant streams of data, often over those mobile platforms, and the edge is increasingly just part of our core. All of this, combined with the massive amounts of data becoming available through it, is driving autonomous solutions with machine learning and AI. This is just one aspect of the data journey that we're on, but for success it's got to be paired with action. >>Well, if we take a look at James and Sue SCA, we can start to see, with the investments in those actions, how they're realizing their goals. Service efforts are focused on delivering new data-driven applications in new ways that are smaller in nature and rapidly iterated, to respond to the digital needs of our new world; on containerization to deploy and manage those apps anywhere in our connected world; they need to be secure, with a real-time streaming architecture from the beginning to allow for continual interaction with our changing customer demands; and all of this, especially in our current environment, while running cost-reduction initiatives. This is just the current world that our solutions must live in. With that framework in mind, I'd like to take the remainder of our time and walk through some of the use cases where we at HPE have helped organizations through this journey with the Ezmeral Data Fabric. >>Let's start with what's happening in the mobile world. In fact, the HPE Ezmeral Data Fabric is being used by a number of companies to provide infinitely personalized experiences.
In this case it could be James, it could be Sue SCA, it could be anyone who opens up their smartphone in the morning, quickly checking what's transpiring in the world with a selection of curated, relevant articles, images, and videos provided by data-driven algorithmic workloads. All that data, the logs, the recommendations, and the delivery of those recommendations, is handled through a variety of companies using HPE Ezmeral software, which provides a very personalized experience for our users. In addition, other companies monitor the service quality of those mobile devices to ensure optimized connectivity as users move throughout their day. The same is true for digital communication, for video communication, what we're doing right now, especially in these days when it's our primary method of connecting as we deal with limited physical engagements. There's been a clear spike in the usage of these types of services. HPE Ezmeral is helping a number of these companies deliver on real-time telemetry analysis: predicting demand, monitoring latency and user experience, analyzing in real time, and responding with autonomous adjustments to maintain pleasant experiences for all participants involved. >>Another area where the HPE Ezmeral Data Fabric is playing a crucial role is the daily experience inside our automobiles. We invest a lot of ourselves in our cars. We expect tailored experiences that help us stay safe and connected as we move from one destination to another. In the areas of autonomous driving and the connected car, a number of major car companies in the world are using our data fabric to take autonomous driving to the next level, effectively collecting all data from sensors and cameras, then feeding it back into a global data fabric, so that the engineers who develop cars can train the next generation of driving algorithms that make our driving experience safer and more autonomous going forward.
>>Now let's take a look at a different mode of travel. The airline industry is being impacted very differently today from the car companies. With our software we help airlines, travel agencies, and even us as consumers deal with pricing calculations and challenges. For air traffic services, we deliver services around route predictions, on-time arrivals, weather patterns, and tagging and tracking luggage. We help people with flight connections and with figuring out the best options for their travel. We collect mountains of data and secure it in a global data fabric, so that, provided back in analyzed form, this stressed industry can gain some very interesting insights and provide competitive offerings and better services to us as travelers. >>This is also true for powering biometrics at scale. We work with the biggest biometrics databases in the world, providing the back end for their enormous biometric authentication pursuit. Just to give you a rough idea, biometric authentication is done with a number of different data points: fingerprints, iris scans, numerous facial features. All of these data points are captured for every individual and uploaded into the database, such that when a user requests services, their biometrics can be pulled and validated in seconds. From a scale perspective, they're onboarding 1 million people a day, more than 200 million a year, with a hundred percent business continuity and the option to multi-master a global data fabric as needed, ensuring that users will have no issues securely accessing their pension payouts, medical services, or whatever other types of services they may be guaranteed. >>Pivoting to a very different industry, even agriculture is being impacted in digital ways. Using the HPE Ezmeral Data Fabric, we help farmers become more digital.
We help them predict weather patterns and optimize seed production. We even help seed producers create custom seed for very specific weather and ground conditions. We combine all of these things to help optimize production and ensure we can feed future generations. In some cases, all of these data sources, collected at the edge, can be provided back to insurance companies to help farmers issue claims when micro-scale weather patterns affect farms in negative ways. We all benefit from optimized farming, and the HPE Ezmeral Data Fabric is there to assist in that journey. We provide the framework and the workload guidance to collect relevant data, analyze it, and optimize food production. Our customers demonstrate that the agricultural industry is most definitely immigrating to our digital world. >>Now that we've got the food, we need to ship it, along with everything else, all over the world. Ezmeral software can be found in action in many of the largest logistics companies in the world. Just tracking things with greater efficiency can lead to astounding insights. What flights and ships did the package take? What hands held it along its journey? What weather conditions did it encounter? What customs office did it go through? How much of it is requested and being delivered? This, along with hundreds of other telemetry points, can be used to provide very accurate trade and economic predictions about what's going on with trade in the world. These data sets are being used very intensively to understand economic conditions and plan for the consequences of future events. We also help answer more basic questions about shipping containers, like: where is my container located? Is my container still on the correct ship? Surprisingly, this helps cut down on those pesky little events like lost containers.
It's, it's the never ending patterns found with other patterns that none of it can be fully understood unless the micro is maintained in context to the macro. You can't really understand these small patterns unless you maintain that overall understanding of the entire DNA structure to help the HVS mold data fabric can be found across every aspect of the medical field. Most recently was there providing the software framework to collect genomic sequencing, landing it in the data fabric, empowering connected availability for analysis to predict and find patterns of significance to shorten the effort it takes to identify those potential triggers and make things like vaccines become becoming available. In record time. >>Data is about people at HPE asthma. We keep people connected all around the world. We do this in a variety of ways. We we've already looked at several of the ways that that happens. We help you find data. You need, we help you get from point a to point B. We help make sure those birthday gifts show up on time. Some other interesting ways we connect people via recipes, through social platforms and online services. We help people connect to that new recipe that is unexpected, but may just be the kind of thing you need for dinner tonight at HPDs where we provide our customers with the power to deliver services that are tailored to the individual from edge to core, from containers to cloud. Many of the services you encounter everyday are delivered to you through an HV as oral global data fabric. You may not see it, but we're there in the morning in the morning when you get up and we're there in the evening. Um, when you wind down, um, at HPE as role, we make data globally available across everywhere that your business needs to go. Um, I'd like to thank everyone, uh, for the time that you've given us today. And I'd like to turn it back over and open up the floor for questions at this time, >>Jimmy, here's a question. 
What are the ways consumers can get started with the HPE Ezmeral Data Fabric? >>Well, there are several ways to get started. First off, we have software available that you can download, and there's extensive documentation and use cases posted on our website. We have services that we offer, like assessment services, that can come in and help you assess the data challenges you're having, whether you're just dealing with a scale issue, a security issue, or trying to migrate to a more containerized approach. We have getting-started bundles. So there are all kinds of services that help you get started on your journey. >>So what does a typical first deployment look like? >>Well, that's a very interesting question. A typical first deployment really varies depending on where you're at in the journey. Are you James? Are you Sue SCA? It really depends on where you're at in your journey. But a typical deployment is involved. We like to come in and do workshops to really understand your specific challenges and problems, so that we can determine what solutions are best for you. When we've settled on that, the first deployment is typically a service offering bundled with the software, to get you started along the way. As you move forward, if you're more mature and you already have existing container solutions, or existing large-scale data aspects, it's really about the specific use case of the current problem you're dealing with. Every solution is tailored toward the individual challenges and problems each one of us is facing. >>Great. They mentioned data fabric is part of the Ezmeral family.
So how does data fabric pair with the other solutions within Ezmeral? >>Well, I like to say there are three main areas from a software standpoint, four main areas when you count some of our offerings with the GreenLake solution. There's the data fabric offering, which is really focused on delivering data at scale for AI/ML workloads, big data workloads, and containerized workloads. There is the Ezmeral Container Platform, which solves some of the same problems but is focused more on compute delivery in a hundred percent Kubernetes environment. We also have security offerings, which, in this containerized world, help you handle the different aspects of securing those applications, so that when containerized applications move from one framework or one infrastructure to another, the security goes with them and they can operate in a zero-trust environment. And of course, all of this has the option of being available as a service, including the hardware, through some of our GreenLake offerings. So those are the areas that pair with the HPE data fabric when you look at the entire Ezmeral portfolio. >>Well, thanks, Jimmy, really appreciate it. That's all the questions we have right now. So is there anything that you'd like to close with?
Um, while they're all will well, we will, what we went over today is a lot of the general areas and the general concepts that we're all on together in a journey, but the devil's always in the details. It's about understanding the specific challenges in the organization and, and as moral software is designed to help adapt, um, and, and empower your growth in your, in your company. So that you're focused on your business, in the complexity of delivering services across this connected world. That's what as will takes off your plate so that you don't have to worry about that. It just works, and you can focus on the things that impact your business more directly. >>Okay. Well, we really thank everyone for coming today and hope you learned, uh, an idea about how data fabric can begin to help your business with it. All of a sudden analytics, thank you for coming. Thanks.

Published Date : Mar 17 2021



A Day in the Life of a Data Scientist


 

>>Hello, everyone. Welcome to the Day in the Life of a Data Scientist talk. My name is Terry Chang. I'm a data scientist on the Ezmeral Container Platform team. With me in the chat room, moderating the chat, I have Matt Maccaux as well as Doug Tackett, and we're going to dive straight into what we can do with the Ezmeral Container Platform and how we can support the role of a data scientist. >>So, just a quick agenda. I'm going to do some introductions and set the context of what we're going to talk about, and then we're going to dive straight into the Ezmeral Container Platform. We're going to walk straight through what a data scientist will do, pretty much a day in the life of the data scientist, and then we'll have some question and answer. Big data has been the talk within the last few years, the last decade or so, and with big data there are a lot of ways to derive meaning. A lot of businesses are trying to utilize their applications and optimize every decision within those applications using data. Previously there was a lot of focus on data analytics, but recently we've seen a lot of data being used for machine learning: taking any data they can and sending it off to the data scientists to start doing some modeling and prediction. >>So that's where we're seeing modern businesses rooted in analytics, and data science itself is a team sport. We're seeing that we need more than data scientists to do all this modeling. We need data engineers to take the data, massage it, and do some data manipulation in order to get it right for the data scientists. We have data analysts who monitor the models, and we even have the data scientists themselves, who build and iterate through multiple different models until they find one that is satisfactory to the business needs.
Then, once they're done, they can send it off to the software engineers, who will actually build it into their application, whether it's a mobile app or a web app. And then we have the operations team, assigning the resources and monitoring it as well. So we're really seeing data science as a team sport, and it does require a lot of different expertise. Here's the basic machine learning pipeline that we see in the industry now. At the top we have the training environment, and this is an entire loop: we'll have some registration, we'll have some inferencing, and at the center of it all is the data prep, as well as your repositories, such as for your data or any of your GitHub repositories, things of that sort. So we're seeing the machine learning industry follow this very basic pattern, and at a high level, glancing through very quickly, this is what the machine learning pipeline looks like on the Ezmeral Container Platform. At the top left we'll have our project repository, which is our persistent storage. We'll have some training clusters, a notebook, an inference deployment engine, and a REST API, all sitting on top of the Kubernetes cluster. And the benefit of the container platform is that this is all abstracted away from the data scientist. So I will actually go straight into that. Just to preface, before we go into the Ezmeral Container Platform: what we're going to look at is an example machine learning problem, trying to predict how long a specific taxi ride will take. With a Jupyter notebook, the data scientist can take all of this data, do their data manipulation, train a model on a specific set of features, such as the location of a taxi ride and the duration of a taxi ride, and then use the model to figure out what kind of prediction we can get on a future taxi ride.
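The prep-train-register-infer loop just described can be reduced to a minimal skeleton. This is my own sketch of the pipeline's shape, not Ezmeral's API; the function names, the in-memory registry, and the toy (distance, duration) taxi data are all assumptions for illustration:

```python
# Minimal end-to-end skeleton of the pipeline above: data prep feeds
# training, the trained model is registered, and inference serves
# predictions from the registry.

def prep(raw):
    """Data prep: drop records with missing values."""
    return [r for r in raw if None not in r]

def train(rows):
    """Toy 'model': mean minutes of trip duration per unit of distance."""
    rate = sum(dur / dist for dist, dur in rows) / len(rows)
    return {"rate": rate}

registry = {}  # stands in for the model registry / project repository

def register(name, model):
    registry[name] = model

def infer(name, dist):
    """Serve a prediction from the registered model."""
    return registry[name]["rate"] * dist

raw = [(2.0, 10.0), (4.0, 22.0), (None, 9.0)]  # (distance, duration) pairs
register("taxi-v1", train(prep(raw)))
print(round(infer("taxi-v1", 3.0), 2))  # 15.75
```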
>>So that's the example we will talk through today. I'm going to hop out of my slides and jump into my web browser. Let me zoom in on this. Here I have a Jupyter environment, and this is all running on the container platform. All I need is actually this link, and I can access my environment. As a data scientist, I can grab this link from my IT admin or my system administrator, and I can quickly start iterating and coding. On the left-hand side of the Jupyter environment, we actually have a file directory structure. This is already synced up to my Git repository, which I will show in a little bit on the container platform, so I can quickly pull any files that are in my GitHub repository. I can even push with a button here, and I can open up this Python notebook. With all the unique features of the Jupyter environment, I can start coding. Each of these cells can run Python code, and specifically, on the Ezmeral Container Platform team, we've actually built our own in-house line magic commands. These are unique commands we can use to interact with the underlying infrastructure of the container platform. The first line magic command I want to mention is the command called %attachments. When I run this command, I get the available training clusters that I can send training jobs to. This specific notebook has pretty much been created for me to quickly iterate and develop a model; I don't have to use all the resources, and I don't have to allocate a full set of GPU boxes to my little Jupyter environment. With the training cluster, I can attach these individual data science notebooks to those training clusters, and the data scientists can utilize those resources as a shared environment.
We have another line magic command, it's called percent percent Python training. This is how we're going to utilize that training cluster. So I will prepare the cell percent percent with the name of the training cluster. And this is going to tell this notebook to send this entire training cell, to be trained on those resources on that training cluster. So the data scientists can quickly iterate through a model. They can then format that model and all that code into a large cell and send it off to that training cluster. So because of that training cluster is actually located somewhere else. It has no context of what has been done locally in this notebook. So we're going to have to do and copy everything into one large cell. >>So as you see here, I'm going to be importing some libraries and I'm in a, you know, start defining some helper functions. I'm going to read in my dataset and with the typical data science modeling life cycle, we're going to have to take in the data. We're going to have to do some data pre-processing. So maybe the data scientists will do this. Maybe the data engineer will do this, but they have access to that data. So I'm here. I'm actually getting there to be reading in the data from the project repository. And I'll talk about this a little bit later with all of the clusters within the container platform, we have access to some project repository that has been set up using the underlying data fabric. So with this, I have, uh, some data preprocessing, I'm going to cleanse some of my data that I noticed that maybe something is missing or, uh, some data doesn't look funky. >>Maybe the data types aren't correct. This will all happen here in these cells. So once that is done, I can print out that the data is done cleaning. I can start training my model. So here we have to split our data, set into a test, train, uh, data split so that we have some data for actually training the model and some data to test the model. So I can split my data there. 
I create my XGBoost object to start doing my training; XGBoost is a decision-tree-based machine learning algorithm, and I fit my data with it. Then I do some prediction, and in addition, I track some of the metrics and print them out. These are common metrics that data scientists want to see when they train an algorithm: >>whether the accuracy is improving, whether the loss is improving, the mean absolute error, things like that. These are all things data scientists want to see. At the end of this training job I save the model; I save it back into the project repository, which we'll have access to. And at the end, I print out the end time. So I can execute that cell, and since I've already executed it, you'll see all of these print statements happening here: importing the libraries, the training run, reading in data, et cetera. All of this has been printed out from that training job. And in order to glance through it, we get an output with a unique history URL. >>When we send the training job to that training cluster, the training cluster sends back a unique URL, which we use with the last line magic command I want to talk about, called %logs. %logs parses out that response from the training cluster, and we can track in real time what is happening in that training job. So, quickly, we can see that the data scientist has a sandbox environment available to them. They have access to their Git repository, and they have access to a project repository where they can read in their data and save the model. It's a very quick, interactive environment for data scientists to do all of their work, and it's all provisioned on the Ezmeral Container Platform.
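The flow Terry walks through (split the data, fit, predict, report a metric such as mean absolute error) can be sketched with nothing but the standard library. A trivial mean predictor stands in for the XGBoost model here, so treat this as an illustration of the split-and-score shape rather than the demo's actual code:

```python
import random

# Illustrative stand-in for the split/fit/predict/score flow described above.
def train_test_split(xs, ys, test_frac=0.25, seed=42):
    idx = list(range(len(xs)))
    random.Random(seed).shuffle(idx)          # deterministic shuffle
    cut = int(len(idx) * (1 - test_frac))
    train, test = idx[:cut], idx[cut:]
    return ([xs[i] for i in train], [xs[i] for i in test],
            [ys[i] for i in train], [ys[i] for i in test])

def mean_absolute_error(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

xs = list(range(8))
ys = [2.0 * x for x in xs]
x_train, x_test, y_train, y_test = train_test_split(xs, ys)

baseline = sum(y_train) / len(y_train)   # "train" a trivial mean predictor
preds = [baseline for _ in x_test]       # "predict" on the held-out split
print(round(mean_absolute_error(y_test, preds), 2))
```

In the demo itself, scikit-learn's `train_test_split` and an `XGBRegressor` would replace the hand-rolled split and the mean predictor, but the shape of the loop is the same.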
It's also all abstracted away. Here, I want to mention again that this URL is being surfaced through the container platform. >>The data scientist doesn't have to interact with that at all. But let's take a step back. That's the day-to-day in the life of the data scientist; now let's go back into the container platform and walk through how it was all set up for them. Here is my login page to the container platform. I log in as my user, and this brings me to the view of the MLOps tenant within the container platform. This is where everything has been set up for me; the data scientist doesn't have to see this if they don't need to. What I'll walk through now are the topics I mentioned previously that we would come back to. First is the project repository. This project repository comes with each tenant that is created on the platform. >>This is nothing more than a shared, collaborative workspace in which any data scientist who is allocated to this tenant has a client where they can visually see all of their data and all of their code. It's actually taking a piece of the underlying data fabric and using that as your project repository. You can see here I have some code; I can create and see my scoring script, and I can see the models that have been created within this tenant. It's a powerful tool in which you can store your code, store any of your data, and have the ability to read and write from any of your Jupyter environments or any of your created clusters within this tenant. So, a very cool feature for quickly interacting with your data. >>The next thing I want to show is the source control. Here is where you would plug in all of your information for your source control.
And if I edit this, you'll actually see all the information that I've passed in to configure the source control. On the back end, the container platform takes these credentials and connects the Jupyter notebooks you create within this tenant to that Git repository. So this is the information that I've passed in; if GitHub is not of interest, we also have support for Bitbucket here as well. Next, I want to show you that we do have these notebook environments. The notebook environment was created here, and you can see that I have a notebook called Terry notebook, and this is all running on the Kubernetes environment within the container platform. Either the data scientist can come here and create their notebook, or their project admin can create it for them. >>All you'd have to do is come here to these notebook endpoints. The container platform maps the notebook to a specific port, and you can just give this link to the data scientist; the link will bring them to their own Jupyter environment, and they can start doing all of their modeling just as I showed in the previous Jupyter environment. Next, I want to show the training cluster. This is the training cluster that was created, to which I can attach my notebook to start utilizing those training resources. And the last thing I want to show is the model deployment cluster. Once a model has been saved, we have a model registry in which we can register the model into the platform, and then the last step is to create a deployment cluster. So here on my screen, I have a deployment cluster called taxi deployment. >>All these serving endpoints have been configured for me, and most importantly, this model endpoint. The deployment cluster actually wraps the trained model with a Flask wrapper and adds a REST endpoint to it. So, quickly,
I can operationalize my model by taking this endpoint and creating a curl command, or even a POST request. Here I have my trusty Postman tool, in which I can format a POST request. I've taken the endpoint from the container platform and formatted my body right here. These are some of the features that I want to send to the model: I want to know how long this specific taxi ride, at this location, at this time of day, would take. So I can go ahead and send that request, and quickly I'll get an output for the ride. >>The duration will be about 2,600 seconds. So, pretty much, we've walked through how a data scientist can quickly interact with their notebook and train their model. Then, coming into the platform, we saw the project repository and the source control; we can register the model within the platform; and then, quickly, we can operationalize that model with our deployment cluster and have it up and running and available for inference. So that wraps up the demo. I'm going to pass it back to Doug and Matt and see if they want to come off mute and whether there are any questions. Matt, Doug, you there? Okay. >>Yeah, hey Terry, sorry; just had some trouble getting off mute there. No, that was an excellent presentation. I think there are generally some questions that come up when I talk to customers around how integrated into the Kubernetes ecosystem this capability is, and where this sort of Ezmeral layer stops and the open source technologies, like Kubeflow as an example, begin. >>Yeah, sure, Matt. So this is kind of one layer up: we have our MLOps tenant, and this is all running on a piece of a Kubernetes cluster. If I log back out and go into the site admin view, this is where you would see all the Kubernetes clusters being created. And it's all abstracted away from the data scientists; they don't have to know Kubernetes.
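The Postman call described above can just as easily be scripted. The sketch below builds the same kind of POST with Python's standard library only; the endpoint URL and feature names are placeholders, since the real values come from the deployment cluster's serving endpoint:

```python
import json
import urllib.request

# Hypothetical serving endpoint surfaced by the deployment cluster.
ENDPOINT = "http://container-platform.example.com:10001/predict"

# Features for the taxi-duration model; the names here are illustrative.
payload = {"pickup_hour": 17, "pickup_zone": 42, "distance_km": 6.3}

req = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Against a live deployment cluster you would actually send it:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))   # e.g. a predicted duration in seconds
print(req.get_method(), json.loads(req.data))
```

The same request can be issued with curl by pasting the endpoint and the JSON body onto the command line.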
They just interact with the platform if they want to. But here in the site admin view, I have this Kubernetes dashboard, and here on the left-hand side I have all my Kubernetes sections. If I just add some compute hosts, whether they're VMs or cloud compute hosts like EC2 instances, we can have these resources abstracted away from us to then create a Kubernetes cluster. So, moving on down, I have created this Kubernetes cluster utilizing those resources. >>If I go ahead and edit this cluster, you'll actually see that I have these hosts, and with a simple click-and-drag method I can move different hosts around to configure my Kubernetes cluster. Once my Kubernetes cluster is configured, I can then create a Kubernetes tenant, or in this case, a namespace. Once I have this namespace available, I can go into that tenant, and as my user, I don't actually see that it is running on Kubernetes. In addition, with our MLOps tenants, you have the ability to bootstrap Kubeflow. Kubeflow is an open source machine learning framework that runs on Kubernetes, and we have the ability to link that up as well. Coming back to my MLOps tenant: what I showed is the Ezmeral Container Platform version of MLOps, but you can see here that we've also integrated Kubeflow, a nod to HPE's commitment to utilizing open source. It's all configured within our platform. So, hopefully... >>Yeah, actually, Terry, can you hear me? It's Doug. There were a couple of other questions about Kubeflow that came in. I wonder whether you could just comment on why we've chosen Kubeflow, because I know there was a question about MLflow instead, and what the differences are between MLflow and Kubeflow. >>Yeah, sure. So, just to reiterate, there are some questions about Kubeflow, and I'm just... >>Yeah, obviously one of the people watching saw the Kubeflow dashboard there, I guess.
And so they couldn't help but get excited about it. But there was another question about MLflow versus Kubeflow and what the difference is between them. >>Yeah. So Kubeflow is an open source framework that Google developed. It's a very powerful framework that comes with a lot of other unique tools on Kubernetes. With Kubeflow, you have the ability to launch other notebooks, and you can utilize different Kubernetes operators, like the TensorFlow and PyTorch operators. You can use some of the frameworks within Kubeflow to do training, like Kubeflow Pipelines, which let you visually see your training jobs within Kubeflow. It also has a plethora of serving mechanisms, such as Seldon, for deploying your machine learning models; you have KFServing, and you have TF Serving. So Kubeflow is a very powerful tool for data scientists if they want a full end-to-end open source stack and know how to use Kubernetes. It's just another way to do your machine learning model development. MLflow, on the other hand, is a different piece of the machine learning pipeline: it mainly focuses on model experimentation and comparing different models during training, and it can be used with Kubeflow. >>They're complementary, Terry, I think is what you're saying. Sorry, I know we are dramatically running out of time now. That was a really fantastic demo. Thank you very much indeed. >>Exactly, thank you. So I think that wraps it up. One last thing I want to mention: there is a slide that I want to show in case you have any other questions. You can visit hpe.com/ezmeral or hpe.com/containerplatform if you have any questions. And that wraps it up, so thank you guys.

Published Date : Mar 17 2021



Ankur Jain, Merkle & Rafael Mejia, AAA Life | AWS re:Invent 2019


 

>>Live from Las Vegas, it's theCUBE, covering AWS re:Invent 2019, brought to you by Amazon Web Services along with its ecosystem partners. >>Welcome back to theCUBE. We are live at AWS re:Invent 19, Lisa Martin with John Furrier. We've been having lots of great conversations, John, and we're about to have another one, because we always love to talk about customer proof in the pudding. Please welcome a couple of guests: we have Rafael, director of analytics and data management from AAA Life. Welcome. Thanks for having me, really appreciate it. Our pleasure. And from Merkle, Ankur Jain, the SVP of cloud platforms. Welcome. Thank you. Thank you so much, pleasure to be here. So here we are, and this sea of people around us is growing exponentially by the hour. Ankur, let's start with you: give our audience an understanding of Merkle, who you are and what you do. >>Yeah, absolutely. Merkle is a global performance marketing agency. We are part of the Dentsu Aegis Network, and it's about 9,000 to 10,000 people worldwide; it's a global agency. What differentiates Merkle from the other marketing agencies is our deep roots in a data-driven approach. We embrace technology; it's embedded in all the solutions that we take to market, and that's what we pride ourselves on. So that's a high-level pitch about Merkle and what differentiates us. As for my role, I lead the cloud transformation for Merkle. Think of my team as the think tank that brings in new technology and comes up with new ways of rolling out solutions, productized solutions, disruptive solutions, which help our clients, big Fortune brands such as AAA Life Insurance, transform their marketing ecosystem. >>So let's go ahead and dig in. A lot of folks probably know AAA Life, but Rafael, give us a little bit of an overview. This is a 50-year-old organization.
>>So we celebrate our 50th anniversary this year, actually; we were founded in 1969. At AAA Life Insurance, we endeavor to be the provider of choice for AAA members, helping them protect what matters most to them, and we offer a diverse set of insurance products across just about every channel. We engaged with Merkle in 2018, actually, to build a solution that allows us to even better serve the needs of the members. As for my role, I lead our analytics and data management work, helping us collect data, manage it better, and better leverage it to support the needs of members. >>So, Rafael, I can't even imagine the volumes of data that you're dealing with, but it's also people's data, right? This is about life insurance, the volume of it. What were some of the things that made you say, all right, guys, we need to change how we're managing the data, because we know there's probably a lot more business value, maybe new services, that we can build on it? >>So that was it. As an organization, and I want to underscore what you said, we make no compromises when it comes to the safety of our members' data, and we take every step possible to ensure that it is managed in a responsible and safe way. But we knew that on the platform we had prior to this, we weren't as agile as we wanted to be. We would find that routine processes would take spans of weeks to run, and that just didn't allow us to provide the member experience that we wanted. So we built this new solution, and this solution updates every day; there are no longer multi-week cycle times, and cumbersome processes now happen in real time, which allows us to go to market with more accurate and more responsive programs for our members. >>Can you guys talk about the Amazon and AWS solution? How are you using Amazon Redshift?
Kinesis? Are you using multiple databases? Give us a peek into the Amazon services that you guys are taking advantage of, Ankur. >>Yeah, sure. So basically, when we were approached by AAA Life to come in and present ourselves and our credentials, one thing that differentiated our solution pitch was bringing Amazon to the forefront, because one of the issues that Rafael and his team were facing was scalability. The performance was not up to par; I believe you guys were on a two-week cycle, where the data was refreshed every two weeks. How could we turn that around? It could only be possible through the disruptive technologies that Amazon brings to the forefront. So what we built is a complete Amazon-based, cloud-native architecture. We leveraged AWS Redshift as the data warehouse platform to integrate billions and billions of rows from a hundred-plus sources that we bring in on a daily basis. >>In fact, some of the sources are refreshed on a real-time basis. We are capturing real-time interactions of users on the website and then letting AAA Life make real-time decisions on how to personalize their experience. So AWS Redshift is definitely the centerpiece. We are also leveraging a cloud-native ELT (extract, load, and transform) technology, a third-party tool, but again, a very cloud-native technology. The whole solution leverages Python to some extent, and Rafael can talk about AI and machine learning and how they are leveraging the AWS ecosystem there. >>Yeah. So, Ankur said it right: one thing that differentiated Merkle was that cloud-first approach. We looked at what all of the analysts were saying, and we went to all the key vendors in this space.
We saw the architectures, and when Merkle walked in and presented that AWS architecture, it was great for me, because it immediately made sense. There was no wizardry around "I hope this database scales." I was confident that Redshift and Lambda and DynamoDB would fit our use cases, so it became a lot more about whether we were solving the right business problem, and less about whether we had the right technologies. So in addition to what Ankur mentioned, we're leveraging R and RStudio in AWS for our machine learning models, as well as Tableau for business intelligence.
What do you think the impacts are going to be more uptake on the data science side? What do you think the impact of this and the, so I think for, I think we're going to see, um, that a lot of our use cases are going to part a lot less effort to spin up. >>So we're going to see much more, much faster pilots. We're going to have a much clearer sense of is this worth it? Is this something we should continue to invest in and to me we should drive and I expect that a lot, much larger percentage of my team, the analysts are going to be involved in data and data science and machine learning. So I'm really excited about that. And also the ability to inquire, to integrate best practices into what we're doing out of the gate. Right? So software engineers figured out profiling, they figured out the bugging and these are things that machine learners are picking up. Now the fact that you're front and center is really excited. Superbowl moment. You can be like the new England Patriots, 17 straight AFC championship games. Boston. Gosh, I could resist. Uh, they're all Seattle. They're all Seattle here and Amazon. I don't even bring Seattle Patriots up here and Amazon, >>we are the ESPN of tech news that we have to get in as far as conversation. But I want to kind of talk a little bit, Raphael about the transformation because presumably in, in every industry, especially in insurance, there are so many born in the cloud companies that are a lot, they're a lot more agile and they are chasing what AAA life and your competitors and your peers are doing. What your S establishing with the help of anchor and Merkel, how does this allow you to actually take the data that you had, expand it, but also extract insights from maybe competitive advantages that you couldn't think about before? >>Yeah, so I think, uh, so as an organization, even though we're 50 years old, one of the things that drew me to the company and it's really exciting is it's unrelated to thrusting on its laurels, right? 
I think there's tremendous hunger and appetite within our executive group to better serve our members and to serve more members. And what this technology is allowed is the technology is not a limiting factor. It's an enabling factors. We're able to produce more models, more performant models, process more of IO data, build more features. Um, we've managed to do away with a lot of the, you know, if you take it and you look at it this way and squeeze it and maybe it'll work and systematize more aspects of our reporting and our campaign development and our model development and the observability, the visibility of just the ability to be agile and have our data be a partner to what we're trying to accomplish. That's been really great. >>You talked about the significant reduction in cycle times. If we go back up to the executive suite from a business differentiation perspective, is the senior leadership at AAA understanding what this cloud infrastructure is going to enable their business to achieve? >>Absolutely. So, so our successes here I think have been instrumental in encouraging our organization to continue to invest in cloud. And uh, we're an active, we're actively considering and discussing additional cloud initiatives, especially around the areas of machine learning and AI. >>And the auger question for you in terms of, of your expertise, in your experience as we look at how cloud is changing, John, you know, educate us on cloud cloud, Tuto, AI machine learning. What are, as, as these, as businesses, as industries have the opportunity to for next gen cloud, what are some of the next industries that you think are really prime to be completely transformed? >>Um, I'm in that are so many different business models. 
If you look around, one thing I would like to actually touch upon what we are seeing from Merkel standpoint is the digital transformation and how customers in today's world they are, you know, how brands are engaging with their customers and how customers are engaging with the brands. Especially that expectations customer is at the center stage here they are the ones who are driving the whole customer engagement journey, right? How all I am browsing a catalog of a particular brand on my cell phone and then I actually purchased right then and there and if I have an issue I can call them or I can go to social media and log a complaint. So that's whole multi channel, you know, aspect of this marketing ecosystem these days. I think cloud is the platform which is enabling that, right? >>This cannot happen without cloud. I'm going to look at, Raphael was just describing, you know, real time interaction, real time understanding the behavior of the customer in real time and engaging with them based on their need at that point of time. If you have technologies like Sage maker, if you have technologies like AWS Redship you have technologies like glue, Kinesis, which lets you bring in data from all these disparate sources and give you the ability to derive some insights from that data in that particular moment and then interact with the customer right then and there. That's exactly what we are talking about. And this can only happen through cloud so, so that's my 2 cents are where they are, what we from Merkel standpoint, we are looking into the market. That's what we are helping our brands through to >>client. I completely agree. I think that the change from capital and operation, right to no longer house to know these are all the sources and all the use cases and everything that needs to happen before you start the project and the ability to say, Hey, let's get going. 
Let's deliver value in the way that we've had and continue to have conversations and deliver new features, new stores, a new functionality, and at the same time, having AWS as a partner who's, who's building an incremental value. I think just last week I was really excited with the changes they've made to integrate Sage maker with their databases so you can score from the directly from the database. So it feels like all these things were coming together to allow us as a company to better off on push our aims and exciting time. >>It is exciting. Well guys, I wish we had more time, but we are out of time. Thank you Raphael and anchor for sharing with Merkel and AAA. Pleasure. All right. Take care. Or John furrier. I am Lisa Martin and you're watching the cube from Vegas re-invent 19 we'll be right back.

Published Date : Dec 3 2019


Susan Wilson, Informatica & Blake Andrews, New York Life | MIT CDOIQ 2019


 

(techno music) >>From Cambridge, Massachusetts, it's theCUBE, covering the MIT Chief Data Officer and Information Quality Symposium 2019. Brought to you by SiliconANGLE Media. >>Welcome back to Cambridge, Massachusetts, everybody. We're here with theCUBE at the MIT Chief Data Officer and Information Quality Conference. I'm Dave Vellante with my co-host, Paul Gillin. Susan Wilson is here; she's the vice president and data governance leader at Informatica. Blake Andrews is the corporate vice president of data governance at New York Life. Folks, welcome to theCUBE, thanks for coming on. >>Thank you. >>Thank you. >>So, Susan, interesting title: VP, data governance leader, Informatica. What are you leading at Informatica? >>We're helping our customers realize their business outcomes and objectives. Prior to joining Informatica about seven years ago, I was actually a customer myself, so oftentimes I'm working with our customers to understand where they are, where they're going, and how best to help them, because we recognize data governance is more than just a tool; it's a capability that represents the people, the processes, and the culture, as well as the technology. >>Yeah, so you've walked the walk, and you can empathize with what your customers are going through. And Blake, your role as the corporate VP, but more specifically the data governance lead? >>Right. I lead the data governance capabilities and execution group at New York Life. We're focused on providing skills and tools that enable governance activities across the enterprise at the company. >>How long has that function been in place? >>We've been in place for about two and a half years now. >>So, I don't know if you guys heard Mark Ramsey this morning, the keynote, but basically he said, okay, we started with the enterprise data warehouse, we went to master data management, then we did this top-down enterprise data model, and that all failed. So we said, all right, let's hand it to governance.
Here you go guys, you fix our corporate data problem. Now, right tool for the right job but, and so, we were sort of joking, did data governance fail? No, you always have to have data governance. It's like brushing your teeth. But so, like I said, I don't know if you heard that, but what are your thoughts on that sort of evolution that he described? As sort of, failures of things like the EDW to live up to expectations and then, okay guys, over to you. Is that a common theme? >> It is a common theme, and what we're finding with many of our customers is that they had tried many of, if you will, the methodologies around data governance, right? Around policies and structures. And we describe this as the journey from Data 1.0, which was more application-centric reporting, to Data 2.0 and data warehousing (with a lot of the failed attempts, if you will, at centralizing all of your data), to now Data 3.0, where we look at the explosion of data, the volumes of data, the number of data consumers, the expectations of the chief data officer to solve business outcomes; crushing under the scale of, I can't fit all of this into a centralized data repository, I need something that will help me scale and become more agile. And so, that message does resonate with us, but we're not saying data warehouses don't exist. They absolutely do for trusted data sources, but the ability to be agile and to address many of your organization's needs and to be able to service multiple consumers is top-of-mind for many of our customers. >> And the mindset from 1.0 to 2.0 to 3.0 has changed. From, you know, data as a liability, to now data as this massive asset. It's sort of-- >> Value, yeah. >> Yeah, and the pendulum has swung. It's almost like a see-saw. Where, and I'm not sure it's ever going to flip back, but it is to a certain extent; people are starting to realize, wow, we have to be careful about what we do with our data. But still, it's go, go, go.
But, what's the experience at New York Life? I mean, you know. A company that's been around for a long time, conservative, wants to make sure it's risk averse, obviously. >> Right. >> But at the same time, you want to keep moving as the market moves. >> Right, and we look at data governance as really an enabler and a value-add activity. We're not a governance practice for the sake of governance. We're not there to create a lot of policies and restrictions. We're there to add value and to enable innovation in our business and really drive that execution, that efficiency. >> So how do you do that? Square that circle for me, because a lot of people think, when people think security and governance and compliance they think, oh, that stifles innovation. How do you make governance an engine of innovation? >> You provide transparency around your data. So, it's transparency around, what does the data mean? What data assets do we have? Where can I find that? Where are my most trusted sources of data? What does the quality of that data look like? So all those things together really enable your data consumers to take that information and create new value for the company. So it's really about enabling your value creators throughout the organization.
The last twelve months or so have really been focused on operationalizing governance. So we've got the foundations in place, now it's about implementing tools to further augment those capabilities, help assist our data stewards, and give them a better skill set and a better tool set to do their jobs. >> Are you, sort of, crowdsourcing the process? I mean, you have a defined set of people who are responsible for governance, or is everyone taking a role? >> So, it is a two-pronged approach; we do have dedicated data stewards. There are approximately 15 across various lines of business throughout the company. But we are building towards a data democratization aspect. So, we want people to be self-sufficient in finding the data that they need and understanding the data. And then, when they have questions, relying on our stewards as a network of subject matter experts who also have some authorizations to make changes and adapt the data as needed. >> Susan, one of the challenges that we see is that the chief data officers oftentimes are not involved in some of these skunkworks AI projects. They're sort of either hidden, maybe not even hidden, but they're in the line of business, they're moving. You know, there's a mentality of move fast and break things. The challenge with AI is, if you start operationalizing AI and you're breaking things without data quality, without data governance, you can really affect lives. We've seen it, in some of these unintended consequences. I mean, Facebook is the obvious example and there are many, many others. But, are you seeing that? How are you seeing organizations dealing with that problem?
And a lot of that is through the use of our technology enablers to basically go out and find where the data is and what people are using, and to be able to provide a mechanism for them to collaborate in terms of, hey, how do I get access to that? I didn't realize you were the SME for that particular component. And then also, did you realize that there is a policy associated with the data that you're managing, and it can't be shared externally or with certain consumer data sets. So, the objective really is around how to create a platform to ensure that anyone in your organization, whether I'm in the line of business and don't have a technical background, or someone who does have a technical background, can come and access and understand that information and connect with their peers. >> So you're helping them to discover the data. What do you do at that stage? >> What we do at that stage is creating insights for anyone in the organization to understand it from an impact analysis perspective. So, for example, if I'm going to make changes, as well as discovery. Where exactly is my information? And so we have-- >> Right. How do you help your customers discover that data? >> Through the machine learning and artificial intelligence capabilities of, specifically, our data catalog, which allow us to do that. So we use things like similarity-based matching, which helps us to identify sensitive data. It doesn't have to be named as such; it could sit in a column called miscellaneous text one. But through our ability to scan and discover, we can identify that what's in that column is potentially a social security number. It might have resided there over years of having this data, but you may not realize that it's still stored there. Our ability to identify that and report it out to the data stewards as well as the data analysts, as well as to the privacy individuals, is critical.
So, with that being said, then they can actually identify the appropriate policies that need to be adhered to, along with it in terms of quality, in terms of, is there something that we need to archive. So that's where we're helping our customers in that aspect. >> So you can infer from the data, the metadata, and then, with a fair degree of accuracy, categorize it and automate that. >> Exactly. We've got a customer that actually ran this, and they said that, you know, it took three people three months to physically tag where all this information existed across something like 7,000 critical data elements. And, basically, after the setup and the scanning procedures, within seconds we were able to get to 90% precision. Because, again, we've dealt a lot with metadata. It's core to our artificial intelligence and machine learning. And it's core to how we built out our platforms to share that metadata, to do something with that metadata. It's not just about sharing the glossary and the definition information. We also want to automate and reduce the manual burden. Because we recognize that at that scale, manual documentation, manual cataloging and tagging just, >> It doesn't work. >> It doesn't work. It doesn't scale. >> Humans are bad at it. >> They're horrible at it. >> So I presume you have a chief data officer at New York Life, is that correct? >> We have a chief data and analytics officer, yes. >> Okay, and you work within that group? >> Yes, that is correct. >> Do you report in to that? >> Yes, so-- >> And that individual, yeah, describe the organization. >> So that sits in our lines of business. Originally, our data governance office sat in technology. And then, in early 2018 we actually re-orged into the business under the chief data and analytics officer when that role was formed.
So we sit under that group along with a data solutions and governance team that includes several of our data stewards and also some others, some data engineer-type roles. And then there's our center for data science and analytics as well, which contains a lot of our data science teams and that type of work. >> So in thinking about some of these, what I was describing to Susan as skunkworks projects, is the data team, the chief data officer's team, involved in those projects, or is it sort of a, go run water through the pipes, get an MVP, and then you guys come in? How does that all work? >> We're working to try to centralize that function as much as we can, because we do believe there's value in the left hand knowing what the right hand is doing in those types of things. So we're trying to build those communication channels and build that network of data consumers across the organization.
>> That's actually something that's a little bit further down our road map, but one of the things that we have started doing is looking at our taxonomies for structured data and aligning those with the taxonomies that we're using to classify unstructured data. So, that's something we're in the early stages with, so that when we get to the process of looking at more of our unstructured content, we already have a good feel that there's alignment between the way that we think about and organize those concepts. >> Have you identified automation tools that can help to bring structure to that unstructured data? >> Yes, we have. And there are several tools out there that we're continuing to investigate and look at. But that's one of the key things that we're trying to achieve through this process: bringing structure to unstructured content. >> So, the conference. First year at the conference. >> Yes. >> Kind of key takeaways, things that were interesting to you, learnings? >> Oh, yes, well the number of CDOs that are here and what's top of mind for them. I mean, it ranges from, how do I stand up my operating model? We just had a session just about 30 minutes ago. A lot of questions around, how do I set up my organization structure? How do I stand up my operating model so that I can be flexible? To, right, the data scientists, to the folks that are more traditional in structured and trusted data. So these things are still top-of-mind, because they're recognizing the market is also changing. And the growing amount of expectations, not only solving business outcomes, but also regulatory compliance; privacy is also top-of-mind for a lot of customers. In terms of, how would I get started? And what's the appropriate structure and mechanism for doing so? So we're getting a lot of those types of questions as well.
So, the good thing is many of us have had years of experience in this phase and the convergence of us being able to support our customers, not only in our principles around how we implement the framework, but also the technology is really coming together very nicely. >> Anything you'd add, Blake? >> I think it's really impressive to see the level of engagement with thought leaders and decision makers in the data space. You know, as Susan mentioned, we just got out of our session and really, by the end of it, it turned into more of an open discussion. There was just this kind of back and forth between the participants. And so it's really engaging to see that level of passion from such a distinguished group of individuals who are all kind of here to share thoughts and ideas. >> Well anytime you come to a conference, it's sort of any open forum like this, you learn a lot. When you're at MIT, it's like super-charged. With the big brains. >> Exactly, you feel it when you come on the campus. >> You feel smarter when you walk out of here. >> Exactly, I know. >> Well, guys, thanks so much for coming to theCUBE. It was great to have you. >> Thank you for having us. We appreciate it, thank you. >> You're welcome. All right, keep it right there everybody. Paul and I will be back with our next guest. You're watching theCUBE from MIT in Cambridge. We'll be right back. (techno music)
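As an aside, the column-scanning idea Susan describes, flagging a column whose contents look like social security numbers regardless of what the column is named, can be sketched with simple pattern matching. This is an illustrative toy, not Informatica's actual implementation (which, per the conversation, uses machine learning and similarity-based matching rather than fixed rules); the function names, patterns, and threshold below are all assumptions.

```python
import re

# Content patterns for a few common sensitive data types.
# These regexes are illustrative assumptions, not a catalog vendor's rules.
PATTERNS = {
    "ssn": re.compile(r"^\d{3}-\d{2}-\d{4}$"),
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
}

def classify_column(values, threshold=0.9):
    """Return the label whose pattern matches at least `threshold`
    of the non-empty sample values, or None if nothing matches."""
    sample = [v for v in values if v]
    if not sample:
        return None
    for label, pattern in PATTERNS.items():
        hits = sum(1 for v in sample if pattern.match(v))
        if hits / len(sample) >= threshold:
            return label
    return None

# A column named "miscellaneous text one" that actually holds SSNs:
print(classify_column(["123-45-6789", "987-65-4321", ""]))  # ssn
```

Once a column is classified, the catalog can attach the matching policy (for example, "cannot be shared externally") and route a report to the data stewards and privacy team, which is the workflow described above.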

Published Date : Aug 2 2019


Adam Smiley Poswolsky, The Quarter Life Breakthrough - PBWC 2017 - #InclusionNow - #theCUBE


 

>> Hey welcome back everybody. Jeff Frick here with the Cube. We're in San Francisco at the Professional Business Women of California Conference, the 28th year, I think Hillary must be in the neighborhood because everyone is streaming up to the keynote rooms. It's getting towards the end of the day. But we're excited to have Adam Smiley on. He's the author of The Quarter-Life Breakthrough. Welcome Adam. >> Great to be here, thanks for having me. >> Absolutely. So you gave a talk a little bit earlier on, I assume the theme of kind of your general thing. Would you just, Quarter-Life Breakthrough, what is Quarter-Life Breakthrough? >> So this is a book about how to empower the next generation. How young people can find meaning in their careers and their lives. So the subtitle of the book is Invent Your Own Path, Find Meaningful Work, and Build a Life That Matters. So everyone talks about millennials, you hear them in the news, "Oh they're the lazy generation," >> Right, right. >> "The entitled generation." The Me, Me, Me generation. I actually think that couldn't be further from the truth. So the truth is that actually 50% of millennials would take a pay cut to find work that matches their values. 90% want to use their skills for good. So my book is a guide for people to find purpose in their careers and really help them find meaning at the workplace and help companies empower that generation at work. >> So from being the older guy, so then is it really incumbent, you know, because before people didn't work for good, they worked for paycheck, right. They went, they punched in, they got paid, they went home. So is it really incumbent then on the employers now to find purposeful work? And how much of it has to be purposeful? I mean, unfortunately, there's always some of that, that grimy stuff that you just have to do. So what's the balance? >> Yeah and it's not to say that millennials don't want a paycheck, everyone wants money. 
I obviously want to make more money than less money. But it's also that this generation is really looking for meaning in the workplace. And one of the main things, if you look at all the studies, whether it's the Deloitte Millennial Study or the IBM Study, this is a generation that wants to move the needle forward on social issues at work. Not just after work or on the weekends, but at the workplace. And I think it's incumbent upon companies to really think about how they're providing those opportunities for purpose. Both in the mission of the company, what someone's doing every day, and opportunities outside of work, whether it's service projects, paid sabbaticals for people to do purpose-driven projects, really thinking about how someone is inspired to do mission during work every day. >> Right, it's interesting, Bev Crair at the keynote talked about, the question I think was, do you have to separate, kind of your personal views from your professional views and your social life? And she made a very powerful statement, she's like, "I'm comfortable enough with my employer that I can say what I feel and if there's ever a question they can ask me about it. But I don't gate what I say based on my employer as long as I'm being honest and truthful." So you know it's an interesting twist on an old theme. Where before you kind of had your separate worlds. You know, you had your work life and your home life, but now between email and text and social media, there's no separation anymore, and work has really invaded into the personal. So is that why the personal has to kind of invade back into the work?
I'm not going to tell someone exactly what I think of them or tell the boss to go screw themselves or insult somebody or put on social media something that's secret that we're doing at the company. But I think that people want to feel that they get to show up who they are, have their beliefs echoed at the workplace, be able to be their full self, their full values, their mission, their goals, have that reflected in what they do, and have people at the company actually acknowledge that. You're not just an employee, I actually know what's going on in your life. I know what your dreams are, I know what your family's going through. I care about where you're headed, not just today or while you work here, but when you leave the company. Because that's the other thing, is that we're accepting that most of the people entering the workforce now or starting a new job, they're going to be there on average two to three years, maybe four, five, or six years. They're not going to be there ten, fifteen, twenty years like they used to be. So how do you actually empower someone to make an impact while they're there. And help them find the next lily pad, as they call it. The next opportunity. Because they're going to have a lot of those lily pads as they go throughout their career. >> It's interesting. We interviewed a gal named Marcia Conrad at an IBM event many years ago. She just made a really funny observation, she's like, "You know, people come in and you interview them and they're these really cool people and that's why you hire them, because they've got all these personality traits and habits and hobbies and things that they do, and energy." And then they come into the company, and then the old-school, you drop the employee, you know manual on top of them, basically saying stop being you. Stop being the person that we just hired. So that's completely flipped up on its head. 
>> Right, one of the things I talked about in the session today was this idea of stay interviews versus exit interviews. Normally when we do performance management it's kind of like, okay, you're leaving, what did you think? Why do that only when someone leaves? Do it to ask, what would make you stay? What do you want to accomplish while you're here? And you're not being graded against what everyone else is being graded on; what do you want to be graded on? What are your goals? What are your metrics for success? Performance achievement versus just performance measurement, I think, is very important for this generation, because otherwise it's like, well why am I being judged on standards that were written in 1986? This is what I'm trying to do here. >> It's interesting, even Jeff Immelt at GE, they've thrown out the annual review because it's a silly thing. You kind of collect your data two weeks before and the other fifty weeks everybody is just working. I have another hypothesis I want to run by you though. On this kind of purpose-driven. Today so many more things are as a service, transportation as a service; there seems to be less emphasis on things and more emphasis on experiences. It also feels like it's easier to see your impact, whether it's writing a line of code, or doing something in social media. And you know there was an interesting campaign Casey Neistat participated in a couple of weeks ago, right. They raised $2 million and basically got Turkish Airlines to fly in a couple hundred thousand metric tons of food to Somalia. And my question is, is it just because you can do those things so much easier and see an impact? Is that why, kind of this, increased purposefulness, I'm struggling on the word.
People are seeing what's going on in the world in a way that they've never been exposed to before with social media, with communication technology. It's up front and center. I think also that as technology takes over our lives, you see this with kind of statistics around depression and anxiety, people are starved for that in-person connection. They're starved for that meaning, that actual conversation. We're always doing this, but really a lot of data shows that people experience true joy, true fulfillment, true connection, true experience is what you're talking about, when they're in a room with someone. So people want that. So it's kind of a return back to that purpose-driven life, that purpose-driven tribe, village experience because the rest of the time we're on our phones. And yeah, it's cool, but something's missing. So people are starting to go back to work and be like, "I want that inspiration" that other generations may have gotten from church or from outside of work, or from their community, or from their village, or from the elders, or from a youth group or something. They're like, "I want that in the workplace. I want that everyday." >> Well so this is more top-down right? I mean I just think again, kind of the classic, back in the day, you were kind of compelled to give x percentage of your pay to United Way or whatever. And that was like this big aggregation mechanism that would roll up the money and distribute it to God-knows-where. Completely different model than, and you can see, because of social media and ubiquitous cell phones all over the place, you can actually see who that kid is, that's getting your thing on the other side. >> And it's empowering someone to say, "Okay this is what's important to me. These are the causes that I'd like to support. This is where I want my money to go and here's why." >> So what do you think's the biggest misunderstanding of millennials from old people like me or even older hopefully? 
>> Well one thing that I do think that millennials don't get right is the importance of patience. I think a lot of times people say, you know, "oh millennials, they want things to happen too quickly." I think that that's true. I think that my generation, I'm going to be the first to admit and say that we need to do a better job of being patient, being persistent. You can't expect things to happen overnight. You can't expect to start a job and in two months get promoted or to feel like you're with the Board of Directors. Things take time. At the same time, it's incumbent upon older generations to listen to these young people, to make them feel like they have a voice, to make them feel like they're heard and that their ideas matter, even if they don't have the final say, to make them feel like they actually matter. Because I think sometimes people assume that they don't know anything. They don't know everything, but they have some really brilliant ideas and if you listen to those ideas they might actually be really good for the company both in terms of profit and purpose. So that's one thing I would say. >> Okay, just, so first time with this show, just get your impressions of the show. >> Oh it's great. This is a great show. You all are doing a great job, a great interview. >> No not our show. The PBWC, I mean of course we're doing a good job, we have you on. I mean the PBWC. >> It's a great, you know for me, it's real exciting to be at the end of an event where I'm one of the only male speakers. Because usually, I've been doing the speaking circuit thing now for a year or two. And I go to these events, I go to panels, I go to conferences, keynotes, and it's mostly male speakers, which is a huge problem. There's far far far fewer women and people of color speaking at these events than men. 
And one of the things I'm really trying to change is that, and also pay equity around speaking, because I talked to some of my female colleagues about what they were paid for a specific event, and they'll say, "Well they covered my transportation, they covered my Lyft and a salad, or my hotel maybe." I'm like, well I got paid $5000. That's messed up. We did the same amount of work. We did the same panel or the same keynote, with similar experience levels. That's messed up. And so I'm trying to change that by doing this thing called the Women Speaker Initiative. Which is a mentorship program to empower more women and people of color to be speakers, and then to make sure that they're paid fairly when compared to men. >> So how do people get involved with that? >> They should just go to my website, smileyposwolsky.com and check out the Women Speaker Initiative. >> Alright, well Adam, thanks for taking a few minutes out of your day. Great great topic and I'm sure, look forward to catching up again later. >> Thanks so much for having me. >> Alright. He's Adam, I'm Jeff. You're watching theCube. We're at the Professional Business Women of California conference, twenty eighth year. Thanks for watching.

Published Date : Mar 31 2017


Mattia Ballerio, Elmec Informatica | The Path to Sustainable IT


 

(upbeat music) >> We're back talking about the path to sustainable IT, and now we're going to get the perspective from Mattia Ballerio, who is with Elmec Informatica, an IT services firm in the beautiful Lombardy region of Italy, north of Milano. Mattia, welcome to theCUBE. Thanks so much for coming on. >> Thank you very much, Dave. Thank you. >> All right, before we jump in, tell us a little bit more about Elmec Informatica. What's your focus? Talk about your unique value add to customers. >> Yeah. So basically Elmec Informatica is a mid-sized company from the northern part of Italy, and a managed service provider in the IT area. The main focus of Elmec is to bring digital transformation and innovation to our clients, with a focus on infrastructure services, workplace services, and also cybersecurity services. And we try to follow the path of our clients to digital transformation and innovation through technology and sustainability. >> Yeah, obviously very hot topics right now. Sustainability and environmental impact are growing areas of focus among leaders across all industries, particularly acute right now in Europe with the energy challenges. You've talked about things like sustainable business. What does that mean? What does that term speak to, and what can others learn from it? >> Yeah, at Elmec, our approach to sustainability is grounded in science and values, and is also customer, territory, and employee centered. I mean, we conduct regular assessments to understand the most significant environmental and social issues for our business, with the goal of prioritizing what we do for a sustainable future. Our service delivery methodology, employee care, and relationships with local suppliers, the local area, and institutions are major factors for us in building such a responsible strategy.
Specifically, during the past year we have been particularly focused on defining sustainability governance in the company based on stakeholder engagement, defining material issues, establishing quantitative indicators to monitor, and setting medium to long term goals. >> Okay, so you have a lot of data. You can go into a customer, you can do an assessment, you can set a baseline, and then you have other data by which you can compare that and understand what's achievable. So what's your vision for sustainable business? That strategy, how has it affected your business in terms of the evolution? 'Cause this hasn't always been as hot a topic as it is today. And is it a competitive advantage for you? >> Yeah, yeah. For all intents and purposes, sustainability is a competitive advantage for Elmec. I mean, it's so because, at a time of profound transformation in the world of work, CSR issues make a company more attractive when searching for new talent to enter the workforce of our company. In addition, efforts to ensure people's proper work-life balance are a strong retention factor. And regarding our business proposition, Elmec attempts to meet high standards of sustainability and reliability. Our green data center, as you said, is a prime example of this approach, as at the same time is the reconditioning activity that is done to give a second life to technology devices that come back from rental. I mean, our customers' inquiries with respect to Elmec's sustainability are increasingly frequent and in depth, which is why we monitor our performance and invest in certifications such as EcoVadis or ISO 14001. >> Got it. So in a previous life, I actually did some work with power companies, and there were two big factors in IT that affected the power consumption. Obviously virtualization was a big one; if you could consolidate servers, you know, that was huge.
But the other was the advent of flash storage. We used to actually go in with the engineers, and the power company would put in alligator clips to measure an all-flash array versus, you know, the spinning disk, and it was a big impact. So, you want to talk about your experience with Pure Storage. You use FlashArray and the Evergreen architecture. Can you talk about your experience there? Why did you make the decision to select Pure Storage? How does that help you meet sustainability and operational requirements? Do those benefits scale as your customers grow? What's your experience been? >> Yeah. It was basically an easy answer to our business needs. Okay, because you said before that in Elmec we manage a lot of data, okay. And in the past we saw that the constraints of managing so much data were very, very difficult in terms of power consumption, or simply for the space of storing the data. And when Pure came to us and shared their products and their vision of the data management journey for Elmec Informatica, it was very easy to choose Pure. Why? With the values and the numbers, we created a business case, and we saw that our power consumption was reduced by more than 90% compared to the technology that we used in the past, okay? And so of course you have to manage a gradual deployment of flash technology storage, but it was a good target. So we have tried to monitor the adoption of flash technology, and also to monitor the power consumption and the efficiency that the Pure technology brings to our IT systems, and of course the IT systems of our clients. And so this is the first good part of our trip with Pure. And after that, we also approached the long-term sustainability of choosing Pure technology storage.
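The business case Mattia describes, comparing the power draw of the legacy arrays against the flash deployment, comes down to simple arithmetic. A minimal sketch, where every input figure is a hypothetical placeholder rather than Elmec's actual data:

```python
# Illustrative energy business case: legacy disk storage vs. flash storage.
# All input figures are hypothetical placeholders, not Elmec's real numbers.

HOURS_PER_YEAR = 24 * 365  # 8760

def annual_kwh(avg_watts: float) -> float:
    """Convert an average power draw in watts to kWh per year."""
    return avg_watts * HOURS_PER_YEAR / 1000.0

legacy_watts = 12000.0       # assumed draw of the old disk-based arrays
flash_watts = 1100.0         # assumed draw of the replacement flash arrays
tariff_eur_per_kwh = 0.30    # assumed electricity price

legacy_kwh = annual_kwh(legacy_watts)
flash_kwh = annual_kwh(flash_watts)

reduction_pct = (legacy_kwh - flash_kwh) / legacy_kwh * 100
savings_eur = (legacy_kwh - flash_kwh) * tariff_eur_per_kwh

print(f"Energy reduction: {reduction_pct:.1f}%")   # over 90% with these inputs
print(f"Annual savings:  EUR {savings_eur:,.0f}")
```

With these placeholder inputs the reduction lands just above 90%, in the same range Mattia cites; the point of the sketch is that the comparison needs only measured wattage and a tariff.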
You mentioned the Evergreen model of Pure, and of course this was a game changer for us, because it allows us to extend the life cycle management of our data centers, but it also improves the ease of using the technology on our technical side, okay. So we are much more efficient than in the past with the choice of Pure Storage technologies, okay. And of course this ease of use allows us to bring this value to all our clients that put their data in our data centers. >> So, you talked about how you've seen 90% improvement relative to previous technologies. I'll put you on the spot here, because I was on Pure's website, and I saw in their ESG report a comparison with a generic competitor. I'm presuming that competitor was not, you know, a 2010 spinning disk system. So I'm curious as to the results that you're seeing with Pure in terms of footprint and power usage; you're referencing some of that, and we heard some metrics from Nicole and Ajay earlier in the program. Do you think, again I'm going to put you on the spot, do you think that Pure's architecture, and the way they've applied, whether it's machine intelligence or the Evergreen model, et cetera, is more competitive than other platforms that you've seen? >> Yeah, of course, it is more competitive. Because basically it allows a service provider to make a much more efficient value proposition and offer services that bring more value to the customers. Okay, so the customer is always at the center of a service provider's proposition. And trying to adopt the methodology, and also the value that Pure has inside, by design, in the technology, is for us very, very important and very, very strategic.
Because, like through a glass, we can transfer the values of Pure technologies to our service provider clients. >> Okay Mattia, let's wrap, and talk about the near term, 2023, and then longer term. It looks like sustainability is a topic that's here to stay, unlike when we were putting alligator clips on storage arrays trying to help customers get rebates; that just didn't have legs, it was too complicated. Now it's a topic that everybody's measuring. What's next for Elmec in its sustainability journey? What advice might you have for sustainability leaders that want to make a meaningful impact on the environment, but also on the bottom line? >> Okay. So, sustainability is fortunately a widely spread concept, and our role in this great game is to define a strategy aligned with the common and fundamental goals for the future of the planet, and capable of expressing our inclinations and particularities. Elmec's sustainability goals in the near future, I can say, will basically be three. One, define a sustainability plan, okay. It's fundamental to define a sustainability plan. Two, it's very important to monitor our emissions, and we will calculate our carbon footprint, okay. And, last but not least, produce a certifiable and comprehensive sustainability report, with respect to the demands of customers, suppliers, and also partners. Okay, so I can say that these three targets will be our direction in the future, okay? >> Yeah, so I mean, pretty straightforward. Make a plan. You've got to monitor and measure; you can't improve what you can't measure. So you're going to set a baseline, you're going to report on that, you're going to analyze the data, and you're going to make continuous improvement. >> Yep. >> Mattia, thanks so much for joining us today and sharing your perspectives from the northern part of Italy. Really appreciate it. >> Yep. Thank you for having me on board. Thank you very much.
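Dave's closing checklist, set a baseline, monitor, report, and continuously improve, can be sketched as a minimal monitoring loop. The metric names and monthly figures below are hypothetical, purely to show the shape of the calculation:

```python
# Minimal sketch of the "baseline, monitor, report" loop described above.
# Readings are hypothetical monthly data-center energy figures in kWh.

baseline_kwh = 50000.0  # assumed baseline month, measured before any changes

monthly_readings = {
    "Jan": 50000.0,
    "Feb": 47200.0,
    "Mar": 44100.0,  # e.g. after consolidating onto more efficient storage
}

def pct_change_vs_baseline(reading: float, baseline: float) -> float:
    """Negative result means consumption dropped relative to the baseline."""
    return (reading - baseline) / baseline * 100

report = {month: round(pct_change_vs_baseline(kwh, baseline_kwh), 1)
          for month, kwh in monthly_readings.items()}

print(report)  # {'Jan': 0.0, 'Feb': -5.6, 'Mar': -11.8}
```

The same loop generalizes to any indicator worth baselining (power, footprint, e-waste): measure first, then express every later reading against that fixed reference.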
>> It was really our pleasure. Okay, in a moment I'm going to be back to wrap up the program and share some resources that could be valuable in your sustainability journey. Keep it right there. (upbeat music)

Published Date : Dec 7 2022



Pure Storage | The Path to Sustainable IT


 

>>In the early part of this century, we're talking about the 2005 to 2007 timeframe, there was a lot of talk about so-called green IT. And at that time there was some organizational friction. For example, the line was that the CIO never saw the power bill, so he or she didn't care, or that the facilities folks rarely talked to the IT department. So it was kind of a split brain. And then the '07-'08 financial crisis really created an inflection point in a couple of ways. First, it caused organizations to kind of pump the brakes on IT spending, and then they took their eye off the sustainability ball. And the second big trend, of course, was the cloud model, which became a benchmark for IT simplicity, automation, and efficiency, with the ability to dial down and dial up capacity as needed. >>And the third was that by the end of the first decade of the two thousands, the technology of virtualization was really hitting its best stride. And then you had innovations like flash storage, which largely eliminated the need for these massive farms of spinning mechanical devices that sucked up a lot of power. And so really these technologies began their march to mainstream adoption. And as we progressed through the 2020s, the effects of climate change really came into focus as a critical component of ESG: environmental, social, and governance. Shareholders have come to demand metrics around sustainability. Employees are often choosing employers based on their ESG posture. And most importantly, companies are finding that savings on power, cooling, and footprint have a bottom-line impact on the income statement. Now you add to that the energy challenges around the world, particularly facing Europe right now, the effects of global inflation, and even more advanced technologies like machine intelligence. >>
Hello and welcome to the Path to Sustainable It Made Possible by Pure Storage and Collaboration with the Cube. My name is Dave Valante and I'm one of the host of the program, along with my colleague Lisa Martin. Now, today we're gonna hear from three leaders on the sustainability topic. First up, Lisa will talk to Nicole Johnson. She's the head of Social Impact and Sustainability at Pure Storage. Nicole will talk about the results from a study of around a thousand sustainability leaders worldwide, and she'll share some metrics from that study. And then next, Lisa will speak to AJ Singh. He's the Chief Product Officer at Pure Storage. We've had had him on the cube before, and not only will he share some useful stats in the market, I'll also talk about some of the technology innovations that customers can tap to address their energy consumption, not the least of which is ai, which is is entering every aspect of our lives, including how we deal with energy consumption. And then we'll bring it back to our Boston studio and go north of Italy with Mattia Ballero of Elec Informatica, a services provider with deep expertise on the topic of sustainability. We hope you enjoyed the program today. Thanks for watching. Let's get started >>At Pure Storage, the opportunity for change and our commitment to a sustainable future are a direct reflection of the way we've always operated and the values we live by every day. We are making significant and immediate impact worldwide through our environmental sustainability efforts. The milestones of change can be seen everywhere in everything we do. Pure's Evergreen Storage architecture delivers two key environmental benefits to customers, the reduction of wasted energy and the reduction of e-waste. Additionally, Pure's implemented a series of product packaging redesigns, promoting recycled and reuse in order to reduce waste that will not only benefit our customers, but also the environment. 
Pure is committed to doing what is right and leading the way with innovation. That has always been the Pure difference: making a difference by enabling our customers to drive out energy usage in their data storage systems by up to 80%. Today, more than 97% of Pure arrays purchased six years ago are still in service. And tomorrow, our goal for the future is to reduce Scope 3 emissions. Pure is committing to further reducing our sold products' emissions by 66% per petabyte by 2030. All of this means what we said at the beginning: change that is simple, and that is what it has always been about. Pure has a vision for the future, today, tomorrow, forever. >>
But we actually did not find that we found sustainability to be really important no matter where the respondents were located. >>So very interesting at Pure sustainability is really at the heart of what we do and has been since our founding. It's interesting because we set out to make storage really simple, but it turns out really simple is also really sustainable. And the products and services that we bring to our customers have really powerful outcomes when it comes to decreasing their, their own carbon footprints. And so, you know, we often hear from customers that we've actually really helped them to significantly improve their storage performance, but also allow them to save on space power and cooling costs and, and their footprint. So really significant findings. One example of that is a company called Cengage, which is a global education technology company. They recently shared with us that they have actually been able to reduce their overall storage footprint by 80% while doubling to tripling the performance of their storage systems. So it's really critical for, for companies who are thinking about their sustainability goals, to consider the dynamic between their sustainability program and their IT teams who are making these buying decisions, >>Right? Those two teams need to be really inextricably linked these days. You talked about the fact that there was really consistency across the regions in terms of sustainability being of high priority for organizations. You had a great customer story that you shared that showed significant impact can be made there by bringing the sustainability both together with it. But I'm wondering why are we seeing that so much of the vendor selection process still isn't revolving around sustainability or it's overlooked? What are some of the things that you received despite so many people saying sustainability, huge priority? 
>>Well, in this survey, the most commonly cited challenge was really around the fact that there was a lack of management buy-in. 40% of respondents told us this was the top roadblock. So getting, I think getting that out of the way. And then we also just heard that sustainability teams were not brought into tech purchasing processes until after it's already rolling, right? So they're not even looped in. And that being said, you know, we know that it has been identified as one of the key departments to supporting a company sustainability goals. So we, we really want to ensure that these two teams are talking more to each other. When we look even closer at the data from the respondents, we see some really positive correlations. We see that 65% of respondents reported that they're on track to meet their sustainability goals. And the IT of those 65%, it is significantly engaged with reporting data for those sustainability initiatives. We saw that, that for those who did report, the sustainability is a top priority for vendor selection. They were twice as likely to be on track with their goals and their sustainability directors said that they were getting involved at the beginning of the tech purchasing program. Our process, I'm sorry, rather than towards the end. And so, you know, we know that to curb the impact of climate crisis, we really need to embrace sustainability from a cross-functional viewpoint. >>Definitely has to be cross-functional. So, so strong correlations there in the report that organizations that had closer alignment between the sustainability folks and the IT folks were farther along in their sustainability program development, execution, et cetera, those co was correlations, were they a surprise? >>Not entirely. You know, when we look at some of the statistics that come from the, you know, places like the World Economic Forum, they say that digitization generated 4% of greenhouse gas emissions in 2020. 
So, and that, you know, that's now almost three years ago, digital data only accelerates, and by 2025, we expect that number could be almost double. And so we know that that communication and that correlation is gonna be really important because data centers are taking up such a huge footprint of when companies are looking at their emissions. And it's, I mean, quite frankly, a really interesting opportunity for it to be a trailblazer in the sustainability journey. And, you know, perhaps people that are in IT haven't thought about how they can make an impact in this area, but there really is some incredible ways to help us work on cutting carbon emissions, both from your company's perspective and from the world's perspective, right? >>Like we are, we're all doing this because it's something that we know we have to do to drive down climate change. So I think when you, when you think about how to be a trailblazer, how to do things differently, how to differentiate your own department, it's a really interesting connection that IT and sustainability work together. I would also say, you know, I'll just note that of the respondents to the survey we were discussing, we do over half of those respondents expect to see closer alignment between the organization's IT and sustainability teams as they move forward. >>And that's really a, a tip a hat to those organizations embracing cultural change. That's always hard to do, but for those two, for sustainability in IT to come together as part of really the overall ethos of an organization, that's huge. And it's great to see the data demonstrating that, that those, that alignment, that close alignment is really on its way to helping organizations across industries make a big impact. I wanna dig in a little bit to here's ESG goals. What can you share with us about >>That? Absolutely. 
So as I mentioned peers kind of at the beginning of our formal ESG journey, but really has been working on the, on the sustainability front for a long time. I would, it's funny as we're, as we're doing a lot of this work and, and kind of building our own profile around this, we're coming back to some of the things that we have done in the past that consumers weren't necessarily interested in then but are now because the world has changed, becoming more and more invested in. So that's exciting. So we did a baseline scope one, two, and three analysis and discovered, interestingly enough that 70% of our emissions comes from use of sold products. So our customers work running our products in their data centers. So we know that we, we've made some ambitious goals around our Scope one and two emissions, which is our own office, our utilities, you know, those, they only account for 6% of our emissions. So we know that to really address the issue of climate change, we need to work on the use of sold products. So we've also made a, a really ambitious commitment to decrease our carbon emissions by 66% per bed per petabyte by 2030 in our product. So decreasing our own carbon footprint, but also affecting our customers as well. And we've also committed to a science-based target initiative and our road mapping how to achieve the ambitious goals set out in the Paris agreement. >>That's fantastic. It sounds like you really dialed in on where is the biggest opportunity for us as Pure Storage to make the biggest impact across our organization, across our customers organizations. There lofty goals that pure set, but knowing what I know about Pure, you guys are probably well on track to, to accomplish those goals in record time, >>I hope So. 
>>Talk a little bit about advice that you would give to viewers who might be at the very beginning of their sustainability journey and really wondering what are the core elements besides it, sustainability, team alignment that I need to bring into this program to make it actually successful? >>Yeah, so I think, you know, understanding that you don't have to pick between really powerful technology and sustainable technology. There are opportunities to get both and not just in storage right in, in your entire IT portfolio. We know that, you know, we're in a place in the world where we have to look at things from the bigger picture. We have to solve new challenges and we have to approach business a little bit differently. So adopting solutions and services that are environmentally efficient can actually help to scale and deliver more effective and efficient IT solutions over time. So I think that that's something that we need to, to really remind ourselves, right? We have to go about business a little bit differently and that's okay. We also know that data centers utilize an incredible amount of, of energy and, and carbon. And so everything that we can do to drive that down is going to address the sustainability goals for us individually as well as, again, drive down that climate change. So we, we need to get out of the mindset that data centers are, are about reliability or cost, et cetera, and really think about efficiency and carbon footprint when you're making those business decisions. I'll also say that, you know, the earlier that we can get sustainability teams into the conversation, the more impactful your business decisions are going to be and helping you to guide sustainable decision making. 
>>So shifting sustainability and IT left almost together really shows that the correlation between those folks getting together in the beginning with intention, the report shows and the successes that peers had demonstrate that that's very impactful for organizations to actually be able to implement even the cultural change that's needed for sustainability programs to be successful. My last question for you goes back to that report. You mentioned in there that the data show a lot of organizations are hampered by management buy-in, where sustainability is concerned. How can pure help its customers navigate around those barriers so that they get that management buy-in and they understand that the value in it for >>Them? Yeah, so I mean, I think that for me, my advice is always to speak to hearts and minds, right? And help the management to understand, first of all, the impact right on climate change. So I think that's the kind of hearts piece on the mind piece. I think it's addressing the sustainability goals that these companies have set for themselves and helping management understand how to, you know, how their IT buying decisions can actually really help them to reach these goals. We also, you know, we always run kind of TCOs for customers to understand what is the actual cost of, of the equipment. And so, you know, especially if you're in a, in a location in which energy costs are rising, I mean, I think we're seeing that around the world right now with inflation. Better understanding your energy costs can really help your management to understand the, again, the bigger picture and what that total cost is gonna be. Often we see, you know, that maybe the I the person who's buying the IT equipment isn't the same person who's purchasing, who's paying the, the electricity bills, right? And so sometimes even those two teams aren't talking. 
And there's a great opportunity there, I think, to just to just, you know, look at it from a more high level lens to better understand what total cost of ownership is. >>That's a great point. Great advice. Nicole, thank you so much for joining me on the program today, talking about the new report that on sustainability that Pure put out some really compelling nuggets in there, but really also some great successes that you've already achieved internally on your own ESG goals and what you're helping customers to achieve in terms of driving down their carbon footprint and emissions. We so appreciate your insights and your thoughts. >>Thank you, Lisa. It's been great speaking with you. >>AJ Singh joins me, the Chief Product Officer at Peer Storage. Aj, it's great to have you back on the program. >>Great to be back on, Lisa, good morning. >>Good morning. And sustainability is such an important topic to talk about. So we're gonna really unpack what PEER is doing, we're gonna get your viewpoints on what you're seeing and you're gonna leave the audience with some recommendations on how they can get started on their ESG journey. First question, we've been hearing a lot from pure AJ about the role that technology plays in organizations achieving sustainability goals. What's been the biggest environmental impact associated with, with customers achieving that given the massive volumes of data that keep being generated? >>Absolutely, Lisa, you can imagine that the data is only growing and exploding and, and, and, and there's a good reason for it. You know, data is the new currency. Some people call it the new oil. And the opportunity to go process this data gain insights is really helping customers drive an edge in the digital transformation. It's gonna make a difference between them being on the leaderboard a decade from now when the digital transformation kind of pans out versus, you know, being kind of somebody that, you know, quite missed the boat. 
So data is super critical and and obviously as part of that we see all these big benefits, but it has to be stored and, and, and that means it's gonna consume a lot of resources and, and the, and therefore data center usage has only accelerated, right? You can imagine the amount of data being generated, you know, recent study pointed to roughly by twenty twenty five, a hundred and seventy five zetabytes, which where each zettabyte is a billion terabytes. So just think of that size and scale of data. That's huge. And, and they also say that, you know, pretty soon, today, in fact in the developed world, every person is having an interaction with the data center literally every 18 seconds. So whether it's on Facebook or Twitter or you know, your email, people are constantly interacting with data. So you can imagine this data is only exploding. It has to be stored and it consumes a lot of energy. In fact, >>It, oh, go ahead. Sorry. >>No, I was saying in fact, you know, there's some studies have shown that data center usage literally consumes one to 2% of global energy consumption. So if there's one place we could really help climate change and, and all those aspects, if you can kind of really, you know, tamp down the data center, energy consumption, sorry, you were saying, >>I was just gonna say, it's, it's an incredibly important topic and the, the, the stats on data that you provided and also I, I like how you talked about, you know, every 18 seconds we're interacting with a data center, whether we know it or not, we think about the long term implications, the fact that data is growing massively. As you shared with the stats that you mentioned. If we think about though the responsibility that companies have, every company in today's world needs to be a data company, right? And we consumers expect it. We expect that you are gonna deliver these relevant, personalized experiences whether we're doing a transaction in our personal lives or in business. 
But what requirements do technology companies have to really start bringing down their carbon footprints? >>No, absolutely. If you can think about it, just to kind of finish up the data story a little bit, the explosion is to the point where, in fact, it was just recently in the news that Ireland went up and said, sorry, we can't have any more data centers here, we just don't have the power to supply them. That was big in the news, and you know, all the hyperscalers were scratching their heads. I know they've since figured out a way around it, but it's getting there. Some organizations and jurisdictions are saying pretty much no more data centers allowed, you know, we just can't do it. And so, as you said, companies like Pure, I mean, our view is that we have an opportunity here to really do our bit for climate change and be able to, you know, drive a sustainable environment. >>And at Pure we believe that, you know, today's data success really ultimately hinges on energy efficiency; to really be energy efficient means you are gonna be successful long term with data. Because if you think of classic data infrastructures, the legacy infrastructures, you know, we've got disk infrastructures, hybrid infrastructures, flash infrastructures, low-end systems, medium-end systems, high-end systems. So a lot of silos, you know, a lot of inefficiency across the silos, because the data doesn't get used across them. In fact, you know, today a lot of data centers are not really built with kind of an efficiency and environmental mindset. So there's a big opportunity there. >>So AJ, talk to me about some of the steps that Pure is implementing. As its chief product officer, I would love to get your thoughts: what steps is it implementing to help Pure's customers become more sustainable? >>No, absolutely. 
So essentially we are all inherently motivated, Pure and everybody else, to solve problems for customers and really move the status quo forward, right? You know, innovation, that's what we are all about. And while we are doing that, the challenge is how do you make technology, and the data we feed into it, faster, smarter, scalable obviously, but more importantly sustainable. And you can do all of that, but if you miss the sustainability bit, you're kind of missing the boat. And I also feel from an ethical perspective that's really important for us: not only do you do all the other things, but you also make it sustainable. In fact, companies are realizing this; 80% today do in fact report out on sustainability, which is great. And 80% of leadership at companies, you know, CEOs and senior executives, say they've been impacted by some climate change event, whether it's a fire where they had to evacuate, or floods or storms or hurricanes, you name it, right? 
And so it's important that organizations can reach alignment with their IT teams and challenge their IT teams to continue to lead, you know, for the organization, the sustainability aspects. >>I'm curious, aj, when you're in customer conversations, are you seeing that it's really the C-suite plus it coming together and, and how does peer help facilitate that? To your point, it needs to be able to deliver this, but it's, it's a board level objective these days. >>Absolutely. We're seeing increasingly, especially in Europe with the, you know, the war in Ukraine and the energy crisis that, you know, that's, that's, you know, unleashed. We definitely see it's becoming a bigger and bigger board level objective for, for a lot of companies. And we definitely see customers in starting to do that. So, so in particular, I do want to touch briefly on what steps we are taking as a company, you know, to to to make it sustainable. And obviously customers are doing all the things we talked about and, and we're also helping them become smarter with data. But the key difference is, you know, we have a big focus on efficiency, which is really optimizing performance per wat with unmatched storage density. So you can reduce the footprint and dramatically lower the power required. And and how efficient is that? You know, compared to other old flash systems, we tend to be one fifth, we tend to take one fifth the power compared to other flash systems and substantially lower compared to spinning this. >>So you can imagine, you know, cutting your, if data center consumption is a 2% of global consumption, roughly 40% of that tends to be storage cause of all the spinning disc. So you add about, you know, 0.8% to global consumption and if you can cut that by four fifths, you know, you can already start to make an impact. So, so we feel we can do that. And also we're quite a bit more denser, 10 times more denser. 
So imagine one fifth the power and one tenth the footprint, but then we take it a step further, because okay, you've got the storage system in the data center, but what about the end-of-life aspect? What about the waste and reclamation? So we also have something called non-disruptive upgrades. Using our AI technology in Pure1, we can start to sense when a particular part is going to fail, and just before it goes to failure, we actually replace it in a non-disruptive fashion. So the customer's data is not impacted, and then we recycle that part. So you get a full end-to-end life cycle, you know, all the way from the time you deploy, with much lower power and a much smaller footprint, but then also at the back end, you know, a reduction in e-waste and those kinds of things. >>That's a great point that you bring up in terms of the reclamation process. It sounds like Pure does that on its own; the customer doesn't have to be involved in that. >>That's right. And we do that as part of our Evergreen, you know, service that we offer. A lot of customers sign up for that service, and in fact we tell them, hey, you know, that part's about to go, we're gonna come in, we're gonna swap it out, and then we actually recycle that part. >>The power of AI. Love that. What are some of the things that companies can do if they're early in this journey on sustainability? What are some of the specific steps companies can take to get started and maybe accelerate that journey, as climate change and things like it are becoming more and more of a daily topic on the news? >>No, absolutely. There's a lot of things companies can do. In fact, there are four items that we're gonna highlight. The first one is, you know, they can just start by doing a materiality assessment, and a materiality assessment essentially engages all the stakeholders to find out which specific issues are important for the business, right? 
So you identify your key priorities that intersect with what the stakeholders want, you know, your different groups from sales, customers, partners, different departments in the organization. For example, when we conducted our materiality assessment, we felt our product was the biggest area of focus that could contribute a lot towards, you know, making an impact from a sustainability standpoint. That's number one. Number two, companies can also think about taking an as-a-service approach. The beauty of the as-a-service approach is that your customers are buying outcomes with SLAs, and when you are starting to buy outcomes with SLAs, you can start small and then grow as you consume more. That way you don't have systems sitting idle waiting for you to consume more, right? And that's the beauty of the as-a-service approach. So for example, for us, you know, we have something called Evergreen//One, which is our as-a-service offer, where essentially customers only use, and have turned on, as much as they're consuming. So that reduces the waste associated with underutilized systems, right? That's number two. Number three is you can also optimize your supply chains end to end, right? Basically by making sure you're reusing and recycling packaging and eliminating waste, so you can recycle it back to your suppliers. And you can also choose a sustainable supplier network that follows good practices, you know, across the globe; supply chains that are responsive and diverse can really help you. There's also a big business benefit. 
>>You can also handle surges in demand. For example, for us during the pandemic, with the global supply chain shortages, whereas most of our competitors' lead times went to 40, 50 weeks, our lead times went from three to six weeks, cuz you know, we had this sustainable supply chain. And so all of these things, those three things, are important, but the fourth thing, I'd say, is more cultural. And the cultural thing is how do you actually begin to have sustainability become a core part of your ethos at the company, across all the departments. And at Pure it's definitely big for us, you know, around sustainability, starting with product design, but in all of the areas as well. If you follow those four items, that's a great place to start. >>That's great advice, great recommendations. You talk about sustainable supply chain optimization; we've been having a lot of conversations with businesses and vendors alike about that and how important it is. You bring up a great point too on supplier diversity, we could have a whole conversation on that. Yes. But I'm also glad that you brought up culture. That's huge for organizations to adopt an ESG strategy and really drive sustainability in their business. It has to become, to your point, part of their ethos. Yes. It's challenging; cultural change management is challenging, although I think with climate change and the things that are so public, it's more top of mind for folks. But it's a great point that the organization really as a whole needs to embrace the sustainability mindset so that it, as an organization, lives and breathes that. Yes. And the last question for you is advice. So you outlined the four steps organizations can take. I like how you made that quite simple. 
What advice would you give organizations who are on that journey to adopting those actions, as you said, as they look to really build and deploy and execute an ESG strategy? >>No, absolutely. And so obviously, you know, the advice is gonna come from a company like Pure, our background kind of being a supplier of products. And so, you know, our advice is for companies that have products: the products that you sell to your customers, especially if they've got hardware components in them, usually tend to be the biggest generators of e-waste from a sustainability standpoint. So it's really important to have an intentional design approach towards your products with sustainability in mind. It's not something that you can handle at the very back end; you design it up front into the product, so that sustainable design becomes very intentional. For us, for example, doing these non-disruptive upgrades had to be designed up front, so that, you know, one of our repair people could go into a customer shop and be able to pull out a card and put in a new card without any change in the customer system. That non-disruptive approach has to be designed into the hardware and software systems to be able to pull that off. And that intentional design enables you to recover pieces just when they're about to fail and then put them through a, you know, waste recovery process. So that's the one thing I would say: that philosophy, again, it comes down to, if that is seeping into the culture, into your core ethos, you will start to do that type of work. 
So I mean, it's an important thing. You know, look, this year, with the spike in energy prices and gas prices going up, it's super important that all of us do our bit and start to drive products that are fundamentally sustainable, not just at the initial install point, but from an end-to-end, full-life-cycle standpoint. >>Absolutely. And I love that you brought up intention. Everything that Pure is doing is with such thought and intention, and really, for organizations in any industry to become more sustainable and to develop an ESG strategy, to your point, it all needs to start with intention, and of course that cultural adoption. AJ, it's been so great to have you on the program talking about what Pure is doing to help organizations really navigate that path to sustainable IT. We appreciate your insights and your time. >>Thank you, Lisa. Pleasure being on board. >>At Pure Storage, the opportunity for change and our commitment to a sustainable future are a direct reflection of the way we've always operated and the values we live by every day. We are making a significant and immediate impact worldwide through our environmental sustainability efforts. The milestones of change can be seen everywhere, in everything we do. Pure's Evergreen storage architecture delivers two key environmental benefits to customers: the reduction of wasted energy and the reduction of e-waste. Additionally, Pure's implemented a series of product packaging redesigns promoting recycling and reuse in order to reduce waste, which will not only benefit our customers but also the environment. Pure is committed to doing what is right and leading the way with innovation. That has always been the Pure difference: making a difference by enabling our customers to drive down energy usage in their data storage systems by up to 80%. Today, more than 97% of Pure arrays purchased six years ago are still in service. 
And tomorrow? Our goal for the future is to reduce Scope 3 emissions: Pure is committing to further reducing our sold products' emissions by 66% per petabyte by 2030. All of this means what we said at the beginning, change that is simple, and that is what it has always been about. Pure has a vision for the future: today, tomorrow, forever. >>We're back talking about the path to sustainable IT, and now we're gonna get the perspective from Mattia Valerio, who is with Elec Informatica, an IT services firm in the beautiful Lombardy region of Italy, north of Milano. Mattia, welcome to the Cube. Thanks so much for coming on. >>Thank you very much, Dave. Thank you. >>All right, before we jump in, tell us a little bit more about Elec Informatica. What's your focus? Talk about your unique value add to customers. >>Yeah, so basically Elec Informatica is a mid-sized company from the north part of Italy and is a managed service provider in the IT area. Okay. So the main focus area of Elec is bringing digital transformation and innovation to our clients, with a focus on infrastructure services, workplace services, and also cybersecurity services. Okay. And we try to follow the path of our clients to digital transformation and innovation through technology and sustainability. Okay. >>Yeah. Obviously very hot topics right now. Sustainability, environmental impact, they're growing areas of focus among leaders across all industries, and particularly acute right now in Europe with, you know, the energy challenges. You've talked about things like sustainable business. What does that mean? What does that term, yeah, you know, speak to, and what can others learn from it? >>Yeah. Our approach to sustainability is grounded in science and values, and also in our customers and territory, but it is employee-centered too. 
I mean, we conduct regular assessments to understand the most significant environmental and social issues for our business, with the goal of prioritizing what we do for a sustainable future. Our service delivery methodology, employee care, and relationships with local suppliers, the local area, and institutions are major factors for us in building such a responsibility strategy. Specifically, during the past year we have been particularly focused on defining sustainability governance in the company based on stakeholder engagement, defining material issues, establishing quantitative indicators to monitor, and setting medium- to long-term goals. >>Okay, so you have a lot of data. You can go into a customer, you can do an assessment, you can set a baseline, and then you have other data by which you can compare that and understand what's achievable. So what's your vision for sustainable business? You know, that strategy, how has it affected your business in terms of its evolution? Cuz this hasn't always been as hot a topic as it is today. And is it a competitive advantage for you? >>Yeah, yeah. For all intents and purposes, sustainability is a competitive advantage for Elec. I mean, it's so because at a time of profound transformation in the world of work, CSR issues make a company more attractive when searching for new talent to enter our workforce. In addition, efforts to ensure people's proper work-life balance are a strong retention factor. And regarding our business proposition, Elec attempts to meet high standards of sustainability and reliability. Our green data center, as you said, is a prime example of this approach, and at the same time there is a reconditioning activity that is done to give a second life to technology devices that come back from rental. 
I mean, our customer inquiries with respect to sustainability are increasingly frequent and in-depth, which is why we monitor our performance and invest in certifications such as EcoVadis or ISO 14001. Okay. >>Got it. So in a previous life I actually did some work with power companies, and there were two big factors in IT that affected power consumption. Obviously virtualization was a big one; if you could consolidate servers, you know, that was huge. But the other was the advent of flash storage. We used to actually go in with the engineers, and the power company would put in alligator clips to measure an all-flash array versus, you know, the spinning disk, and it was a big impact. So I wanna talk about your experience with Pure Storage. You use FlashArray and the Evergreen architecture. Can you talk about your experience there? Why did you make the decision to select Pure Storage? How does that help you meet sustainability and operational requirements? Do those benefits scale as your customers grow? What's your experience been? >>Yeah, it was basically an easy answer to our business needs. Okay. Because, as you said before, at Elec we manage a lot of data, okay? And in the past we saw that the constraints of managing so much data were very difficult to handle in terms of power consumption, or simply for the space of storing the data. And when Pure came to us and shared their products and their vision of the data management journey for Elec Informatica, it was very easy to choose Pure. Why? With values and numbers: we created a business case and we saw that our power consumption was more than 90% lower than with the previous technology that we used in the past. Okay. And so of course you have to manage a gradual deployment of flash storage technology, but it was a good target. 
>>So we have tried to monitor the adoption of flash technology, and also to monitor the power consumption and the efficiency that the Pure technology brings to our IT systems and of course the IT systems of our clients. And so this is the first good part of our trip with Pure. And after that, we also approached the long-term sustainability of choosing Pure storage technology. You mentioned the Evergreen model from Pure, and of course this was, again, a challenge for us, because it allows us to extend the life cycle management of our data centers, but it also allows us to improve the ease of using the technology on our technical side. Okay. So we are much more efficient than in the past with the choice of Pure storage technologies. Okay. Of course, this ease of use, let me say, allows us to bring this value to all our clients that put their data in our data centers. >>So you talked about how you've seen a 90% improvement relative to previous technologies. I hate to put you on the spot, because I was on Pure's website and I saw in their ESG report a comparison with a generic competitor, presuming that competitor was not, you know, a 2010 spinning disk system. But I'm curious as to the results that you're seeing with Pure in terms of footprint and power usage; you're referencing some of that, and we heard some metrics from Nicole and AJ earlier in the program. Do you think, again, I'm gonna put you on the spot, do you think that Pure's architecture, and the way they've applied whether it's machine intelligence or the Evergreen model, et cetera, is more competitive than other platforms that you've seen? >>Yeah, of course. 
It is more competitive because basically it allows a service provider to make a much more efficient value proposition and offer services that bring more value to the customers. Okay. So the customer is always at the center of a service provider's proposition, and trying to adopt the methodology and also the value that Pure has inside, by design, in the technology is for us very important and very strategic, because we can in turn try to transfer the values of Pure technologies to our service provider clients. >>Okay. Mattia, let's wrap and talk about sort of near term, 2023, and then longer term. It looks like sustainability is a topic that's here to stay, unlike when we were putting alligator clips on storage arrays trying to help customers get rebates; that just didn't have legs, it was too complicated. Now it's a topic that everybody's measuring. What's next for Elec in its sustainability journey? What advice might you have for sustainability leaders that wanna make a meaningful impact on the environment, but also on the bottom line? >>Okay, so sustainability is fortunately a widely spread concept, and our role in this great game is to define a strategy aligned with the common and fundamental goals for the future of the planet, and capable of expressing our inclination, our particularities, and our sustainability goals in the near future. I can say there will be basically three steps. One, define a sustainability plan; it's fundamental to define a sustainability plan. Two, it's very important to monitor emissions, and we will calculate our carbon footprint. Okay? And last but not least, produce a certifiable and comprehensive sustainability report with respect to the demands of customers, suppliers, and also partners. Okay. So I can say that these three targets will be our direction in the future. Okay. >>Yeah. 
So I mean, pretty straightforward. Make a plan. You gotta monitor and measure; you can't improve what you can't measure. So you're gonna set a baseline, you're gonna report on that. Yep. You're gonna analyze the data and you're gonna make continuous improvement. >>Yep. >>Mattia, thanks so much for joining us today and sharing your perspectives from the northern part of Italy. Really appreciate it. >>Yeah, thank you for having me aboard. Thank you very much. >>It was really our pleasure. Okay, in a moment I'm gonna be back to wrap up the program and share some resources that could be valuable in your sustainability journey. Keep it right there. >>Sustainability is becoming increasingly important and is hitting more RFPs than ever before as a critical decision point for customers. Environmental benefits are not the only impetus; rather, bottom-line cost savings are proving that sustainability actually means better business. You can make a strong business case around sustainability, and you should. Many more organizations are setting mid- and long-term goals for sustainability and putting forth published metrics for shareholders and customers. Whereas early green IT initiatives at the beginning of this century were met with skepticism and somewhat disappointing results, today vendor R&D is driving innovation in system design, semiconductor advancements, automation, and machine intelligence that's really beginning to show tangible results, thankfully. Now remember, all these videos are available on demand at thecube.net, so check them out at your convenience, and don't forget to go to siliconangle.com for all the enterprise tech news of the day. You also want to check out purestorage.com. >>There are a ton of resources there. As an aside, Pure is the only company I can recall to allow you to access resources like a Gartner Magic Quadrant without forcing you to fill out a lead gen form. So thank you for that, Pure Storage, I love that. There's no squeeze page on that. 
No friction. It's kind of on brand there for Pure. Well done. But to the topic today, sustainability: there's some really good information on the site around ESG, Pure's environmental, social, and governance mission. So there's more in there than just sustainability. You'll see some transparent statistics on things like gender and ethnic diversity, and of course you'll see that Pure has some work to do there, but kudos for publishing those stats transparently and setting goals so we can track your progress. And there's plenty on the sustainability topic as well, including some competitive benchmarks, which are interesting to look at and may give you some other things to think about. We hope you've enjoyed The Path to Sustainable IT, made possible by Pure Storage and produced with theCUBE, your leader in enterprise and emerging tech coverage.

Published Date : Dec 5 2022



Dan Molina, nth, Terry Richardson, AMD, & John Frey, HPE | Better Together with SHI


 

(futuristic music) >> Hey everyone. Lisa Martin here for theCUBE, back with you. Three guests join me. Dan Molina is here, the co-president and chief technology officer at NTH Generation. And I'm joined once again by Terry Richardson, North American channel chief for AMD, and Dr. John Frey, chief technologist, sustainable transformation at HPE. Gentlemen, it's a pleasure to have you on theCUBE. Thank you for joining me. >> Thank you, Lisa. >> Dan, let's have you kick things off. Talk to us about how NTH Generation is addressing the environmental challenges that your customers are having while meeting the technology demands of the future, that those same customers are no doubt having. >> It's quite an interesting question, Lisa. In our case, we have been in business since 1991, and we started by providing highly available computing solutions. So this is great for me, to be partnered here with HPE and AMD, because we want to provide quality computing solutions. And back in the day, since 1991, saving energy, reducing footprint in the data center, and saving on cooling costs were very important. Over time those became even more critical components of our solutions design. As you know, as a society we started becoming more aware of the benefits, and the fact that we have a responsibility back to society to contribute with our social and environmental responsibility. So one of the things that we continue to do, and we started back in 1991, is to make sure that we're designing compute solutions based on clients' actual needs. We go out of our way to collect real performance data, real IT resource consumption data.
And then we architect solutions using best in the industry components like AMD and HPE to make sure that they were going to be meeting those goals and energy savings, like cooling savings, footprint reduction, knowing that instead of maybe requiring 30 servers, just to mention an example maybe we're going to go down to 14 and that's going to result in great energy savings. Our commitment to making sure that we're providing optimized solutions goes all the way to achieving the top level certifications from our great partner, Hewlett Packard Enterprise. Also go deep into micro processing technologies like AMD but we want to make sure that the designs that we're putting together actually meet those goals. >> You talked about why sustainability is important to NTH from back in the day. I love how you said that. Dan, talk to us a little bit about what you're hearing from customers as we are seeing sustainability as a corporate initiative horizontally across industries and really rise up through the C-suite to the board. >> Right, it is quite interesting Lisa We do service pretty much horizontally just about any vertical, including public sector and the private sector from retail to healthcare, to biotech to manufacturing, of course, cities and counties. So we have a lot of experience with many different verticals. And across the board, we do see an increased interest in being socially responsible. And that includes not just being responsible on recycling as an example, most of our conversations or engagements that conversation happens, 'What what's going to happen with the old equipment ?' as we're replacing with more modern, more powerful, more efficient equipment. And we do a number of different things that go along with social responsibility and environment protection. And that's basically e-waste programs. 
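Dan's 30-to-14 server example above can be put into rough numbers. A back-of-the-envelope sketch, with an assumed 500 W average draw per server and an assumed data-center PUE of 1.6 (illustrative figures, not taken from the discussion):

```python
# Rough annual energy comparison for consolidating 30 servers down to 14.
# Per-server wattage and PUE are illustrative assumptions, not measured figures.
HOURS_PER_YEAR = 24 * 365


def annual_kwh(servers: int, watts_per_server: float, pue: float = 1.6) -> float:
    """Annual facility energy (kWh) for a group of servers.

    PUE (power usage effectiveness) scales the IT load up to include
    cooling and power-delivery overhead.
    """
    return servers * watts_per_server * pue * HOURS_PER_YEAR / 1000.0


before = annual_kwh(30, 500)  # legacy footprint
after = annual_kwh(14, 500)   # consolidated footprint
savings = before - after

print(f"before: {before:,.0f} kWh/yr, after: {after:,.0f} kWh/yr")
print(f"saved:  {savings:,.0f} kWh/yr ({savings / before:.0%})")
```

Under these assumptions the consolidation saves a bit over half the annual facility energy; swapping in measured draw and a facility's actual PUE turns this into a defensible estimate.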
As an example, we also have a program where we actually donate some of that older equipment to schools, and that is quite something, because we're helping an organization save energy and footprint, basically the things that we've been talking about, but at the same time the older equipment, even though it's not saving that much energy, still serves a purpose in society. In certain schools and things of that nature, the underprivileged, who are not as able to afford computing equipment, can now benefit and be productive to society. So it's not just about energy savings; there are so many other factors around corporate social responsibility. >> So it sounds like, Dan, a very comprehensive end to end vision that NTH has around sustainability. Let's bring John and Terry into the conversation. John, we're going to start with you. Talk to us a little bit about how HPE and NTH are partnering together. What are some of the key aspects of the relationship, from HPE's perspective, that enable you both to meet not just your corporate sustainable IT objectives, but those of your customers? >> Yeah, it's a great question. And one of the things that HPE brings to bear is 20 years' experience on sustainable IT: white papers, executive workbooks and a lot of expertise for how we bring optimized solutions to market. If the customer doesn't want to manage those pieces himself, we have our as-a-service solution, HPE GreenLake. But our sales force won't get to every customer across the globe that wants to take advantage of this expertise. So we partner with companies like NTH to know the customer better, to develop the right solution for that customer. And with NTH's relationships with the customers, they can constantly help the customer optimize those solutions and see where there are perhaps areas of opportunity that may be outside of HPE's own portfolio, such as client devices, where they can bring that expertise to bear to help the customer have a better total customer experience.
>>And that is critical, that better overall comprehensive total customer experience. As we know, on the other end all customers are demanding customers like us, who want data in real time, who want access. We also want the corporate and the social responsibility of the companies that we work with. Terry, bringing you into the conversation, talk to us a little bit about AMD. How are you helping customers to create what really is a sustainable IT strategy, from what often starts out as sustainability tactics? >> Exactly. And to pick up on what John and Dan were saying, we're really energized about the opportunity to allow customers to accelerate their ability to attain some of their more strategic sustainability goals. You know, since we started on our current data center CPU and GPU offerings, each generation we continue to focus on increasing the performance capability with great sensitivity to the efficiency, right? So as customers are modernizing their data center and achieving their own digital transformation initiatives, we are able to deliver solutions through HPE that really address a greater performance per watt, which is a core element in allowing customers to achieve the goals that John and Dan talked about. So, you know, as a company we're fully on board, with some very public positions around our own sustainability goals, but working with terrific partners like NTH and HPE allows us to together bring those enabling technologies directly to customers. >> Enabling and accelerating technologies. Dan, let's go back to you. You mentioned some of the things that NTH is doing from a sustainability approach: the social and the community concern, energy use savings, recycling. But this goes, from NTH's perspective, all the way to things like outreach and fairness in the workplace. Talk to us a little bit about some of those other initiatives that NTH has fired up.
Absolutely. Well, at NTH, since the early days, we have invested heavily in modern equipment, and we have placed that at NTH labs. Just like HPE labs, we have NTH labs, and that's where we do a great deal of testing to make sure that our clients, right, our joint clients, are going to have high quality solutions; we're not just talking about it, we actually test them. So that is definitely an investment. In being conscious about energy conservation, we have programs and scripts to shut down equipment that is not needed at the time, right. So we're definitely conscious about it. So I wanted to mention that example. Another one is, we all went through a pandemic, and this is still ongoing from some perspectives. And that forced pretty much all of our employees, at least for some time, to work from home. Being an IT company, we're very proud that we made that transition almost seamlessly. And we're very proud that, you know, people who continue to work from home, they're saving of course gasoline, time, traffic, all those benefits that go with reducing transportation. And don't get me wrong, I mean, sometimes it is important to still have face to face meetings, especially with new organizations where you want to establish trust. But for the most part we have become a hybrid workforce type of organization. At the same time, we're also implementing our own hybrid IT approach, which is what we talk to our clients about. So there are certain workloads, certain applications, that truly belong in public cloud or Software as a Service. And there are other workloads that truly belong, that stay, in your data center. So a combination, done correctly, can result in significant savings, not just money, but also, again, energy consumption.
Other things that we're doing: I mentioned trade-in programs. Again, very proud that, you know, we use e-waste programs to make sure that IT equipment is properly disposed of and it's not going to end in a landfill somewhere, but also, again, donating to schools, right? And very proud about that one. We have other outreach programs. Normally at the end of the year we do some substantial donations, and we encourage our employees, my coworkers, to donate. And we match those donations to organizations like Operation USA; they provide health and education programs to recover from disasters. Another one is the Salvation Army, where basically they fund rehabilitation programs that heal addictions, change lives and restore families. We also donate to the San Diego Zoo. We also believe in the whole ecosystem, of course, and we're very proud to be part of that. They are supporting more than 140 conservation projects and partnerships in 70 countries, and we're part of that donation. And our owner has been part of the board, or he was for a number of years. Mercy House down in San Diego, where we have our headquarters: they have programs for the homeless that they're servicing. Also the Save a Life Foundation, which educates the youth to help prevent sudden cardiac arrest. So programs like that; we're very proud to be part of the donations. Again, it's not just about energy savings, but so many other things as part of our corporate social responsibility program. Other things that I wanted to mention: everything in our buildings, in our offices, multiple locations, we have now turned into LED. So again, we're eating our own dog food, as they say, but that is definitely some significant energy savings. And then lastly, I wanted to mention, and this is more what we do for our customers, the whole HPE GreenLake program. We have a growing number of clients, especially in Southern California, and some of those are quite large, like school districts, like counties.
And we feel very proud that in the old days customers would buy IT equipment for the next three to five years. Right? And they would buy extra because obviously they're expecting some growth while that equipment must consume energy from day one. With a GreenLake type of program, the solution is sized properly. Maybe a little bit of a buffer for unexpected growth needs. And anyway, but with a GreenLake program as customers need more IT resources to continue to expand their workloads for their organizations. Then we go in with 'just in time' type of resources. Saving energy and footprint and everything else that we've been talking about along the way. So very proud to be one of the go-tos for Hewlett Packard Enterprise on the GreenLake program which is now a platform, so. >> That's great. Dan, it sounds like NTH generation has such a comprehensive focus and strategy on sustainability where you're pulling multiple levers it's almost like sustainability to the NTH degree ? See what I did there ? >> (laughing) >> I'd like to talk with all three of you now. And John, I want to start with you about employees. Dan, you talked about the hybrid work environment and some of the silver linings from the pandemic but I'd love to know, John, Terry and then Dan, in that order how educated and engaged are your employees where sustainability is concerned? Talk to me about that from their engagement perspective and also from the ability to retain them and make them proud as Dan was saying to work for these companies, John ? >> Yeah, absolutely. One of the things that we see in technology, and we hear it from our customers every day when we're meeting with them is we all have a challenge attracting and retaining new employees. And one of the ways that you can succeed in that challenge is by connecting the work that the employee does to both the purpose of your company and broader than that global purpose. So environmental and social types of activities. 
So for us, we actually do a tremendous amount of education for our employees. At the moment, all of our vice presidents and above are taking climate training as part of our own climate aspirations to really drive those goals into action. But we're opening that training to any employee in the company. We have a variety of employee resource groups that are focused on sustainability and carbon reduction. And in many cases, they're very loud advocates for why aren't we pushing a roadmap further? Why aren't we doing things in a particular industry segment where they think we're not moving quite as quickly as we should be. But part of the recognition around all of that as well is customers often ask us when we suggest a sustainability or sustainable IT solution to them. Their first question back is, are you doing this yourselves? So for all of those reasons, we invest a lot of time and effort in educating our employees, listening to our employees on that topic and really using them to help drive our programs forward. >> That sounds like it's critical, John for customers to understand, are you doing this as well? Are you using your own technology ? Terry, talk to us about from the AMD side the education of your employees, the engagement of them where sustainability is concerned. >> Yeah. So similar to what John said, I would characterize AMD is a very socially responsible company. We kind of share that alignment in point of view along with NTH. Corporate responsibility is something that you know, most companies have started to see become a lot more prominent, a lot more talked about internally. We've been very public with four key sustainability goals that we've set as an organization. And we regularly provide updates on where we are along the way. Some of those goals extend out to 2025 and in one case 2030 so not too far away, but we're providing milestone updates against some pretty aggressive and important goals. 
I think, you know, as a technology company, regardless of the role that you're in there's a way that you can connect to what the company's doing that I think is kind of a feel good. I spend more of my time with the customer facing or partner facing resources and being able to deliver a tool to partners like NTH and strategic partners like HPE that really helps quantify the benefit, you know in a bare metal, in terms of greenhouse gas emissions and a TCO tool to really quantify what an implementation of a new and modern solution will mean to a customer. And for the first time they have choice. So I think employees, they can really feel good about being able to to do something that is for a greater good than just the traditional corporate goals. And of course the engineers that are designing the next generation of products that have these as core competencies clearly can connect to the impact that we're able to make on the broader global ecosystem. >> And that is so important. Terry, you know, employee productivity and satisfaction directly translates to customer satisfaction, customer retention. So, I always think of them as inextricably linked. So great to hear what you're all doing in terms of the employee engagement. Dan, talk to me about some of the outcomes that NTH is enabling customers to achieve, from an outcomes perspective those business outcomes, maybe even at a high level or a generic level, love to dig into some of those. >> Of course. Yes. So again, our mission is really to deliver awesome in everything we do. And we're very proud about that mission, very crispy clear, short and sweet and that includes, we don't cut corners. We go through the extent of, again, learning the technology getting those certifications, testing those in the lab so that when we're working with our end user organizations they know they're going to have a quality solution. 
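Terry's "performance per watt" framing can be made concrete with a small sizing sketch. All numbers below are invented for illustration (not AMD benchmark data): a node that does more work per watt lets a fleet hit the same aggregate throughput with fewer nodes and less power.

```python
# Hypothetical sizing comparison: all throughput and power numbers are invented
# to illustrate the performance-per-watt idea, not vendor benchmark data.
import math
from dataclasses import dataclass


@dataclass
class ServerGen:
    name: str
    throughput: float  # work units/sec a single node sustains
    watts: float       # sustained power draw per node

    @property
    def perf_per_watt(self) -> float:
        return self.throughput / self.watts


def nodes_needed(gen: ServerGen, target_throughput: float) -> int:
    """Whole nodes required to sustain the target aggregate throughput."""
    return math.ceil(target_throughput / gen.throughput)


old = ServerGen("legacy", throughput=100.0, watts=400.0)
new = ServerGen("current", throughput=260.0, watts=450.0)
target = 3000.0  # aggregate work units/sec the fleet must sustain

for gen in (old, new):
    n = nodes_needed(gen, target)
    print(f"{gen.name}: {n} nodes, {n * gen.watts / 1000:.1f} kW, "
          f"{gen.perf_per_watt:.2f} units/W")
```

With these made-up figures, the higher perf-per-watt generation covers the same load with far fewer nodes, which is the effect a TCO tool like the one Terry mentions would quantify with real data.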
And part of our vision has been to provide industry leading transformational technologies and solutions for example, HPE and AMD for organizations to go through their own digital transformation. Those two words have been used extensively over the last decade, but this is a multi decade type of trend, super trend or mega trend. And we're very proud that by offering and architecting and implementing, and in many cases supporting, with our partners, those, you know, best in class IT cyber security solutions were helping those organizations with those business outcomes, their own digital transformation. If you extend that Lisa , a Little bit further, by helping our clients, both public and private sector become more efficient, more scalable we're also helping, you know organizations become more productive, if you scale that out to the entire society in the US that also helps with the GDP. So it's all interrelated and we're very proud through our, again, optimized solutions. We're not just going to sell a box we're going to understand what the organization truly needs and adapt and architect our solutions accordingly. And we have, again, access to amazing technology, micro processes. Is just amazing what they can do today even compared to five years ago. And that enables new initiatives like artificial intelligence through machine learning and things of that nature. You need GPU technology , that specialized microprocessors and companies like AMD, like I said that are enabling organizations to go down that path faster, right? While saving energy, footprint and everything that we've been talking about. So those are some of the outcomes that I see >> Hey, Dan, listening to you talk, I can't help but think this is not a stretch for NTH right? 
Although, you know, terms like sustainability and reducing carbon footprint might be, you know more in vogue, the type of solutions that you've been architecting for customers your approach, dates back decades, and you don't have to change a lot. You just have new kind of toys to play with and new compelling offerings from great vendors like HPE to position to your customers. But it's not a big change in what you need to do. >> We're blessed from that perspective that's how our founders started the company. And we only, I think we go through a very extensive interview process to make sure that there will be a fit both ways. We want our new team members to get to know the the rest of the team before they actually make the decision. We are very proud as well, Terry, Lisa and John, that our tenure here at NTH is probably well over a decade. People get here and they really value how we help organizations through our dedicated work, providing again, leading edge technology solutions and the results that they see in our own organizations where we have made many friends in the industry because they had a problem, right? Or they had a very challenging initiative for their organization and we work together and the outcome there is something that they're very, very proud of. So you're right, Terry, we've been doing this for a long time. We're also very happy again with programs like the HPE GreenLake. We were already doing optimized solutions but with something like GreenLake is helping us save more energy consumption from the very beginning by allowing organizations to pay for only what they need with a little bit of buffer that we talked about. So what we've been doing since 1991 combined with a program like GreenLake I think is going to help us even better with our social corporate responsibility. 
>>I think what you guys have all articulated beautifully in the last 20 minutes is how strategic and interwoven the partnerships between HPE, AMD and NTH are, and what you're enabling customers to achieve: those outcomes, plus what you're doing internally to do things like reduce waste, reduce carbon emissions, and ensure that your employees are proud of who they're working for. Those are all fantastic, guys. I wish we had more time, 'cause I know we are just scratching the surface here. We appreciate everything that you shared with respect to sustainable IT and what you're enabling the end user customer to achieve. >> Thank you, Lisa. >> Thanks. >> Thank you. >> My pleasure. For my guests, I'm Lisa Martin. In a moment, Dave Vellante will return to give you some closing thoughts on sustainable IT. You're watching theCUBE, the leader in high tech enterprise coverage.

Published Date : Sep 15 2022



Kristen Newcomer & Connor Gorman, Red Hat | Kubecon + Cloudnativecon Europe 2022


 

>>theCUBE presents KubeCon and CloudNativeCon Europe 2022, brought to you by Red Hat, the Cloud Native Computing Foundation and its ecosystem partners. >>Welcome to Valencia, Spain, and KubeCon + CloudNativeCon Europe 2022. I'm Keith Townsend, along with my cohost Enrico Signoretti, senior IT analyst at GigaOm. We are talking to amazing people, creators, people contributing to all these open source projects. Speaking of open source, Enrico, talk to me about the flavor of this show versus a traditional vendor show, with all these open source projects and open source based companies. >>Well, first of all, I think that the real difference is that this is a real conference. So, real people talking about, you know, projects, about the open source stuff; the experiences are, you know, on stage, and there are not really too many product pitches. It's about the people. It's about the projects. It's about the challenges they had, how they, you know, overcame some of them. And, uh, that's the main difference. I mean, it's very educative, informative, and the kind of people is different. I mean, developers, you know, SREs; you know, you find hands-on people, people that really do stuff. That's a real difference. I mean, uh, quite challenging discussing with them, really, because they're really opinionated. >>So we're gonna talk to a company that has had boots on the ground doing open source since almost the start, mm-hmm <affirmative>: Kirsten Newcomer, director of hybrid platform security at Red Hat, and, uh, Connor Gorman, senior principal software engineer at Red Hat. So Kirsten, we're gonna start with you: security and Kubernetes. You know, Kubernetes, it's a race car. If I wanted security, I'd drive a minivan. <laugh>
I think, I think though, if we stick with your, your car analogy, right, we have seen cars in cars and safety in cars evolve over the years to the point where you have airbags, even in, you know, souped up cars that somebody's driving on the street, a race car, race cars have safety built into, right. They do their best to protect those drivers. So I think while Kubernetes, you know, started as something that was largely, you know, used by Google in their environment, you know, had some perimeter based security as Kubernetes has become adopted throughout enterprises, as people. And especially, you know, we've seen the adoption accelerate during the pandemic, the move to both public cloud, but also private cloud is really accelerated. Security becomes even more important. You can't use Kubernetes in banking without security. You can't use it, uh, in automotive without security telco. >>And Kubernetes is, you know, Telco's adoption, Telco's deploying 5g on Kubernetes on open shift. Um, and, and this is just so the security capabilities have evolved over time to meet the customers and the adopters really red hat because of our enterprise customer base, we've been investing in security capabilities and we make those contributions upstream. We've been doing that really from the beginning of our adoption of Kubernetes, Kubernetes 1.0, and we continue to expand the security capabilities that we provide. And which is one of the reasons, you know, the acquisition of stack rocks was, was so important to us. >>And, and actually we are talking about security at different levels. I mean, so yeah, and different locations. So you are securing an edge location differently than a data center or, or, or maybe, you know, the cloud. So there are application level security. So there are so many angles to take this. >>Yeah. And, and you're right. I mean, I, there are the layers of the stack, which starts, you know, can start at the hardware level, right. 
And then the operating system, the Kubernetes orchestration all the services, you need to have a complete Kubernetes solution and application platform and then the services themselves. And you're absolutely right. That an edge deployment is different than a deployment, uh, on, you know, uh, AWS or in a private da data center. Um, and, and yet, because there is this, if you, if you're leveraging the heart of Kubernetes, the declarative nature of Kubernetes, you can do Kubernetes security in a way that can be consistent across these environments with the need to do some additions at the edge, right? You may, physical security is more important at the edge hardware based encryption, for example, whereas in a, in a cloud provider, your encryption might be at the cloud provider storage layer rather than hardware. >>So how do you orchestrate, because we are talking about orchestration all day and how do you orchestrate all these security? >>Yep. So one of the things, one of the evolutions that we've seen in our customer base in the last few years is we used to have, um, a small number of large clusters that our customers deployed and they used in a multi-tenant fashion, right? Multiple teams from within the organization. We're now starting to see a larger number of smaller clusters. And those clusters are in different locations. They might be, uh, customers are both deploying in public cloud, as well as private, you know, on premises, um, edge deployments, as you mentioned. And so we've invested in, uh, multi cluster management and, or, you know, sort of that orchestration for orchestrators, right? The, and because again of the declarative nature of Kubernetes, so we offer, uh, advanced cluster management, red hat, advanced cluster management, which we open sourced as the multi cluster engine CE. Um, so that component is now also freely available, open source. We do that with everything. 
So if you need a way to ensure that you have managed the configuration appropriately across all of these clusters in a declarative fashion, right, it's still YAML, it's written in YAML: use ACM, use MCE, in combination with a GitOps approach, right, to manage that, uh, to ensure that you've got that environment consistent. And, and then, but then you have to monitor, right. You have to... >>And I'm wondering where, in all of this, StackRox fits in. >>I mean, yeah, sure. And so, um, you know, we took a Kubernetes-native approach to securing all of this, right. And, we have to say, there's like three major life cycles. You have the build life cycle, right? You're building these immutable images to go deploy to production, right, that should never change, that are, you know, locked at a point in time. And so you can do vulnerability scanning, you can do compliance checks at that point, right, in the build phase. But then you put those in a registry, and then those go and get deployed on top of Kubernetes. And you have the configuration of your application, you know, including any vulnerabilities that may exist in those images; you have the RBAC permissions, right: how much access does it have to the cluster? Is it exposed on the internet, right? What can you do there? >>And then finally you have the runtime perspective of: is my pod, is my container, actually doing what I think it's supposed to do? Is it accessing all the right things? Is it running all the right processes? And then even taking that runtime information and influencing the configuration through things like network policies, where we have a feature called process baselining with which you can say exactly what processes are supposed to run in this pod. Um, and then influencing configuration in that way, to kind of be like, yeah, this is what it's doing, and let's go stamp this, you know, declaratively, so that when you deploy it the next time you already have security built in at the Kubernetes level.
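The process-baselining idea Connor describes can be illustrated in miniature: lock in the set of processes a pod is expected to run, then flag anything observed outside that baseline at runtime. A toy sketch of the concept, not the StackRox implementation; the process names are made up:

```python
# Toy illustration of process baselining: any process observed at runtime
# that is outside the locked baseline gets flagged. Names are invented.
def check_baseline(baseline: set[str], observed: set[str]) -> set[str]:
    """Return runtime processes that are not in the declared baseline."""
    return observed - baseline


baseline = {"nginx", "nginx-worker"}           # expected processes, locked in
observed = {"nginx", "nginx-worker", "curl"}   # what the pod actually ran

violations = check_baseline(baseline, observed)
if violations:
    print(f"anomalous processes: {sorted(violations)}")
```

In the real feature the baseline is learned from observation and then locked, and a violation can raise an alert or feed back into declarative configuration, as described above.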
>>So we've talked about a couple of different topics: the abstraction layers, and security around DevOps. So I have multi-tenancy to deal with, I have to think about how I'm going to secure the Kubernetes infrastructure itself, and then I have what it seems like you've been talking about here, Connor, which is DevSecOps, mm-hmm <affirmative>, and the practice of securing the application through policy. Are customers really getting what's under the hood of DevSecOps? >>Do you wanna start? Or, yeah. >>I mean, I think yes and no. Some organizations are definitely getting it right, and they have teams that are helping build things like network policies, which provide network segmentation. I think this is huge for compliance and multi-tenancy, right? Just like containers: one of the main benefits of containers is that they provide isolation between your applications. And everyone's familiar with the network firewall, which provides network segmentation, but now, in between your applications inside Kubernetes, you can create network segmentation. And so we have some folks that are super, super far along that path and creating those, and we have some folks who have no network policies except the ones that get installed with our products. And then we say, okay, how can we help you start leveraging these things, creating maybe just basic namespace isolation or things like that, and then trying to push that back into more of the declarative approach. >>So some of what I think we hear from what Connor just teed up is that real DevSecOps requires breaking down silos between developers, operations, and security, including network security teams. And the Kubernetes paradigm requires, in some ways it actually forces, involvement of developers in things like network policy for the SDN layer, right?
You need to, you know, the application developer knows what kinds of communication his or her app needs in order to function. So they need to define, they need to figure out, those network policies. Now, some network security teams are not familiar with YAML, and they're not necessarily familiar with software-defined networking. So there's this whole question of how we do network security in collaboration with the engineering team, and that's one of the things I worry about. DevSecOps is technology, but it's people and process too. >>Right. And one of the things I think people are very comfortable adopting early on is vulnerability scanning, but they haven't yet started to think about the network security angle. This is one area where not only do we have the ability in ACS, StackRox, today to recommend a network policy based on a running deployment and then make it easy to deploy it, but we're also working to shift that left, so that you can analyze app deployment data prior to it being deployed, generate a network policy, test it out in staging, and go from the beginning. But again, people shift vulnerability analysis left and tend to stop there; you need to add app config analysis and network communication analysis, and then we need appropriate security gates at deployment time. We need the right automation that helps inform the developers. Not all developers have security expertise, and not all security people understand a CI/CD pipeline. So we need to get the right set of information to the right people, in the place they're used to working, in order to really do that infinity loop. >>Do you see this as a natural progression for developers? Do they really hit a wall before finding out that they need to progress in this methodology? Or, I know... >>What else? Yeah. So I think initially there's a period of transition, right?
Where sometimes there's the opinion: oh, I ship my application, that's what I get paid for, that's what I do. <laugh> But Kubernetes has basically increased the velocity of developers on top of the platform to just deploy their own code. Some people have commits going to production; every commit on the repo goes to production. And so security is even more at the forefront there. So I think initially you do hit a little bit of a wall: security scans in CI, you could get some failures and some pushback. But as long as these are very informative and actionable, developers always want to do the right thing. I mean, we all want to ship secure code. >>And so if you can inform them, hey, this is why we do this, or here's the information about this, I think that's really important. Because then, when I'm sending my next commits, I'm thinking, okay, these are some constraints to keep in mind. It's a mindset shift, but I think the tooling that we know and love and use on top of Kubernetes is the best way to convey that information, given the honestly significantly smaller security teams relative to the number of developers pushing all of this code. >>So let's scale out. Talk to me about the larger landscape: projects like KubeLinter, OPA, different areas of investment in security. Talk to me about where customers are making investments. >>You wanna start with KubeLinter? >>Sure. So KubeLinter was an open source project from when we were still a private company, and it was really around taking some of the functionality in our product and making it available to everyone, to basically check configuration, bridging both DevOps and SecOps. There are some things around privileged containers, right?
You usually don't want to deploy those into your environment unless you really need to. But there are other things: okay, do I have anti-affinity rules? You can run 10 replicas of a pod on the same node, and now your failure domain is a single node; you want them on different nodes. And so you can do a bunch of checks just around configuration and DevOps best practices. We've actually seen quite a bit of adoption. I think we have almost 2,000 stars on GitHub, and we're super happy to see people adopt it and integrate it into their pipelines. It's a single binary, so it's been super easy for people to take it into their CI/CD, start running things through it, and get valuable insights into what configurations they should change. >>And then if you were asking about things like OPA, Open Policy Agent, and OPA Gatekeeper: OPA has been around for a while, and the community added OPA Gatekeeper as an admission controller for Kube. There's also Kyverno, another open source project doing admission control, as the Kubernetes community has decided to deprecate pod security policies, which had a level of complexity but were one of the key security capabilities and gates built into Kubernetes itself. OpenShift is going to continue to have security context constraints, which are very similar; by default on an OpenShift cluster, a regular user cannot deploy a privileged pod or a pod that has access to the host network. And there's SELinux configuration on by default that also protects against container escapes to the file system, or mitigates them. >>So pod security policies were one way to ensure that kind of constraint on what the developer did. Developers might not have had awareness of what was important in terms of the level of security.
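The anti-affinity check described above is a good example of what a configuration linter catches. A hypothetical Deployment fragment that satisfies it (all names and the image are invented for illustration) asks the scheduler to spread replicas across nodes so that one node failure doesn't take out every copy:

```yaml
# Hypothetical Deployment sketch: podAntiAffinity keyed on the node
# hostname forces each replica onto a different node, so the replicas
# no longer share a single failure domain.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api                      # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: api
              topologyKey: kubernetes.io/hostname
      containers:
        - name: api
          image: example.com/api:1.0   # hypothetical image
```

A linter run in CI can flag a Deployment that lacks this stanza before it ever reaches a cluster, which is exactly the shift-left pattern being discussed.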
And so again, KubeLinter and tools like that can help inform the developer in the tools they use, and then a solution like OPA Gatekeeper or SCCs is something that runs on the cluster. So if something got through the pipeline, or somebody isn't using one of these tools, those gates can be leveraged to ensure that the security posture of the deployment is what the organization wants, and with OPA Gatekeeper you can do very complex policies. And >>Lastly, talk to me about Falco and Clair. What about Falco... >>Falco, yep, absolutely. So Falco is great runtime analysis, and something that StackRox leveraged early on. So >>Yeah, we leveraged some libraries from Falco. We use either an eBPF probe or a kernel module to detect runtime events, and we primarily focus on network and process activity as the angles there. And then Clair, it's now within Red Hat again <laugh>, through the acquisition of CoreOS, but we forked it and added a bunch of things around language vulnerabilities and other aspects that we wanted. And, you know, the code bases have diverged a little bit; Clair is on v4, and we were based off v2, but I think we've both added a ton of really great features. So I'm really looking forward to combining all of those features. We have two best-of-breed scanners right now, and I'm thinking, okay, what can we do when we put them together? That's something I'm really excited about. >>So in your roadmap you're aiming at putting everything together, again well orchestrated, well integrated, to also get a simplified experience, because that could be the >>Point. Yeah.
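For readers who haven't seen one, the runtime checks Falco performs are themselves driven by declarative YAML rules. This is a minimal, hypothetical sketch of the kind of detection being described (flagging an unexpected shell starting inside a container), not a rule taken from any shipped ruleset:

```yaml
# Hypothetical Falco rule sketch: fire a warning whenever an
# interactive shell process is spawned inside any container, which
# is often a sign of an attacker, or an operator, doing something
# unexpected at runtime.
- rule: Shell spawned in container
  desc: Detect a shell process starting inside a container
  condition: spawned_process and container and proc.name in (bash, sh, zsh)
  output: "Shell started in container (user=%user.name container=%container.name command=%proc.cmdline)"
  priority: WARNING
```

The same event stream that triggers a rule like this is what feeds the process-baselining feature mentioned earlier: observed runtime behavior gets turned back into declarative configuration.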
And, as you mentioned, it's sort of that orchestration of orchestrators. Leveraging the Kubernetes operator principle to deliver an opinionated Kubernetes platform has been one of the key things we've done, and we're doing that for security as well: out-of-the-box security policies and principles based on best practices with StackRox that can be leveraged in the community or with Red Hat Advanced Cluster Security, combining our two scanners into one Clair-based scanner, contributing back to Falco, all of these things. >>Well, that speaks to the complexity of open source projects; there's a lot of overlap, and reconciling that is a very difficult thing. Kirsten, Connor, thank you for joining theCUBE. Connor, you're now a CUBE alum; welcome to the elite group. Great. From Valencia, Spain, I'm Keith Townsend, along with Enrico Signoretti, and you're watching theCUBE, the leader in high-tech coverage.

Published Date: May 19, 2022


Breaking Analysis: Cyber, Blockchain & NFTs Meet the Metaverse


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is "Breaking Analysis" with Dave Vellante. >> When Facebook changed its name to Meta last fall, it catalyzed a chain reaction throughout the tech industry. Software firms, gaming companies, chip makers, device manufacturers, and others have joined the hype machine. Now, it's easy to dismiss the metaverse as futuristic hyperbole, but do we really believe that tapping on a smartphone, staring at a screen, or two-dimensional Zoom meetings are the future of how we work, play, and communicate? As the internet itself proved to be larger than we ever imagined, it's very possible, and even quite likely, that the combination of massive processing power, cheap storage, AI, blockchains, crypto, sensors, AR, VR, brain interfaces, and other emerging technologies will combine to create new and unimaginable consumer experiences, and massive wealth for creators of the metaverse. Hello, and welcome to this week's Wikibon CUBE Insights, powered by ETR. In this "Breaking Analysis" we welcome cyber expert, hacker, gamer, NFT expert, and founder of ORE System, Nick Donarski. Nick, welcome, thanks so much for coming on theCUBE. >> Thank you, sir, glad to be here. >> Yeah, okay, so today we're going to traverse two parallel paths: one that took Nick from security expert and PenTester to NFTs, tokens, and the metaverse, while we simultaneously explore the complicated world of cybersecurity in the enterprise, and how blockchain, crypto, and NFTs will provide key underpinnings for digital ownership in the metaverse. We're going to talk a little bit about blockchain and crypto to get things started, some of the realities and misconceptions, and how innovations in those worlds have led to the NFT craze. We'll look at what's really going on in NFTs and why they're important as both a technology and a societal trend.
Then, we're going to dig into the tech and try to explain why and how blockchain and NFTs are going to lay the foundation for the metaverse. And, finally, who's going to build the metaverse, and how long is it going to take? All right, Nick, let's start with you. Tell us a little bit about your background, your career. You started as a hacker at a really, really young age, and then got deep into cyber as a PenTester. You did some pretty crazy stuff. You have some great stories about sneaking into buildings; you weren't just doing it all remote. Tell us about yourself. >> Yeah, so I mean, really, I started a long time ago. My dad was really the foray into technology. I wrote my first program on an Apple IIe in BASIC in 1989, so I like to say I was born on the internet, if you will. In high school, at 16, I incorporated my first company, doing just tech support for parents and teachers. And then in 2000 I transitioned into security and have focused there ever since. I joined Rapid7, and after they picked up Metasploit, I joined HP. I was one of the founding members of ShadowLabs, and I've really been part of the information security and cyber community all throughout, whether training or speaking at various different conferences. My biggest thing, and my most awesome moments, aside from the various things I've broken into, is really when I get to work with somebody who's coming up in the industry, somebody new, and see that light bulb moment of really understanding the technology, understanding an idea, or just getting it when it comes to that kind of stuff. >> Yeah, and when you think about what's going on in crypto and NFTs and, okay, now the metaverse, you get to see some of the most innovative people. Now I want to first share a little bit of data on enterprise security and maybe get Nick to comment.
We've reported over the past several years on the complexity in the security business and the numerous vendor choices that SecOps pros face, and this chart really tells that story in the cybersecurity space. It's an X-Y graph we've shown many times from the ETR surveys, where the vertical axis is a measure of spending momentum called Net Score, and the horizontal axis is Market Share, which represents each company's presence in the data set. A couple of points stand out. First, it's really crowded. That red dotted line you see there, that's 40%; above that line on the Net Score axis marks highly elevated spending momentum. Now, let's just zoom in a bit. I've cut the data to those companies with more than a hundred responses in the survey, and you can see on this next chart it's still very crowded, but a few call-outs are noteworthy. First, companies like SentinelOne, Elastic, Tanium, Datadog, Netskope, and Darktrace were all above that 40% line in the previous chart, but they've fallen off. They still have a decent presence in the survey, over 60 responses, but under that hundred. And you can see Auth0, now Okta after the big $7 billion acquisition, with the highest Net Score; CrowdStrike's up there; Okta classic, their kind of enterprise business; and Zscaler and others above that line. You see Palo Alto Networks and Microsoft, very impressive because they're both big and still above that elevated spending velocity. So, Nick, kind of a long-winded intro, and a little bit off topic, but I wanted to start here because this is the life of a SecOps pro. They lack the talent and the capacity to keep the bad guys fully at bay, and so they have to keep throwing tooling at the problem, which adds to the complexity. And as a PenTester and hacker, this chaos and complexity means cash for the bad guys, doesn't it? >> Absolutely.
You know, the more systems these organizations find to integrate, the more components there are, and the more dollars and cents in terms of the time and the engineers who need to be responsible for those tools. The more hands in the cookie jar, if you will, when it comes to the security architecture, the more links, or avenues for attack, are built into the system. And really one of the biggest things organizations face is having engineers who are qualified and technical enough to support that architecture as well, 'cause buying a tool from a vendor, deploying it, and putting it on a shelf is fine, but if it's not tuned properly, or not connected properly, that security tool can just open up more avenues of attack for you. >> Right, okay, thank you. Now, let's get into the meat of the discussion for today and talk a little bit about blockchain and crypto. I saw a Substack post the other day that was ripping Matt Damon for peddling crypto in TV ads, saying crypto is just a big pyramid scheme, it's all about allowing criminals to be anonymous, and it's ransomware and drug trafficking. And yes, there are definitely scams, you've got to be careful, and there are lots of dangers out there, but these are common criticisms in the mainstream press that overlook the fact, by the way, that IPOs and SPACs are just as much of a pyramid scheme. Now, I'm not saying there shouldn't be more regulation, there should, but Bitcoin was born out of the 2008 financial crisis, and cryptocurrency, when you think about it, is really the confluence of software engineering, cryptography, and game theory. There's some really powerful innovation being created by the blockchain community. Crypto and blockchain are really at the heart of a new decentralized platform being built out, whereas today you've got a few large internet companies.
They control the protocols and the platform. Now, the aspiration of people like yourself is to create new value opportunities, and there are many more chances for the little guys and girls to get in on the ground floor, and blockchain technology underpins all this. So, Nick, what's your take? What are some of the biggest misconceptions around blockchain and crypto? And do you even pair those two in the same context? What are your thoughts? >> So, I mean, really, we like to separate ourselves and say that we are a blockchain company, as opposed to necessarily saying (indistinct) anything like that. We leverage those tools. We leverage cryptocurrencies, we leverage NFTs, and those types of things within there, but blockchain is a technology, the underlying piece, that can be used and utilized by a very large number of different organizations out there. So, with cryptocurrency, a lot of that negative context comes from the fear of something new, without having the regulation in place, without having the rules in place. And we're a big proponent of: we want the regulation, right? We want to do right. We want to do it by the rules. We want to do it under the context of what should be done. And we also want to help write those rules, because a lot of the lawmakers, a lot of the lobbyists, have a certain aspect or a certain goal when they're pushing these things. Our goal is simplicity. We want the ability for the normal, average person to interact with crypto, interact with NFTs, interact with the blockchain. And saying "blockchain," in quotes, is very ambiguous, 'cause there are many different things that blockchain can be. The easiest way to understand blockchain is simply as a distributed database. That's really the core of what blockchain is. It's a record-keeping mechanism that allows you to reference that data. And the beauty of it is that it's, quote unquote, immutable.
You can't edit that data. So, especially when we're talking about blockchain as the underlying layer for future technologies, things like security, where you have logging and record keeping, or sales, where you may have multiple different locations (indistinct) users from different locations around the globe, it creates a central repository that provides distribution and security, in the sense that you're ensuring your data and validating where that data exists and when it was created. Those types of things are what blockchain really is. If you go back to the history, very early on, Bitcoin absolutely was made as a way of not having to deal with the Fed. That was the core functionality of the initial crypto, and then a lot of the illicit trades, those black markets, jumped onto it because of what it could do. But the maturity of the technology, where we are now versus, say, back in '97, is a much different world of blockchain, and a much different world of cryptocurrency. You still have to be careful, because with any fad you're still going to have that FUD, that fear, uncertainty, and doubt being sold out there, which spurs a lot of those types of scams, and a lot of those things that target end users that we face as security professionals today. You still get mailers that go out looking for people to give up their social security number during tax time. Snail mail is considered a very ancient technology, but it still works; a portion of the population still falls for those tricks, phishing, whatever it might be. It's all about trying to make people fear that change. And I think that as we move into the future, the simpler and the more comfortable these types of technologies become, the easier it is to bring normal users on board to use these things.
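The "distributed database you can't edit" idea can be sketched in a few lines. This is an illustrative toy, not any production chain: each record commits to the hash of the previous one, so editing an old entry breaks every link after it.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents deterministically."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, data: str) -> None:
    """Append a block that embeds the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev": prev})

def verify(chain: list) -> bool:
    """Recompute every link; any edited block breaks the chain."""
    for i in range(1, len(chain)):
        if chain[i]["prev"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
append_block(chain, "alice pays bob 5")
append_block(chain, "bob pays carol 2")
assert verify(chain)

chain[0]["data"] = "alice pays bob 500"  # tamper with history
assert not verify(chain)                  # the edit is detectable
```

Real networks add consensus and replication on top of this, but the tamper-evidence that makes the ledger "immutable" is just this hash-linking.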
>> You know, I want to ask you about that, Nick, because you mentioned immutability, and there are a lot of misconceptions about that. I had somebody tell me one time, "Blockchain's BS," and they say, "Well, hold on a second, they say it's immutable, but you can hack Coinbase," or whatever it is. So I guess a couple of things. One is that the killer app for blockchain became money, and we learned a lot through that. You had Bitcoin, and it really wasn't programmable through its interface, and then Ethereum comes out. I know you know a lot about Ethereum, and you have Solidity, which is a lot simpler, but it ain't JavaScript, which is ubiquitous. And so the initial ICOs, and probably still the ones today, the white papers, had a lot of security flaws in there, and I'm sure you can talk to that. But maybe you can help square that circle about immutability and security. I've mentioned game theory before; it's harder to hack Bitcoin and the Bitcoin blockchain than it is to mine, and that's why people mine. But maybe you could add some context to that. >> Yeah, you know, it goes for just about any technology out there. When you're talking about blockchain specifically, the majority of the attacks happen against the applications and the smart contracts actually running on the blockchain, as opposed to the blockchain itself. And like you said, the impact, whether that's loss of revenue or loss of tokens or whatever it is, in most cases results from something like a phishing attack: you gave up your credentials, somebody said paste your private key in here and you win a cookie, or whatever it might be. Those are still the fundamental pieces. When you're talking about the various different networks out there, the overall security really depends on the blockchain.
The more distributed and the more stable the network is, the more stable any of the code is going to be. The underlying architecture of any system is the key to success when it comes to overall security. So the blockchain itself is immutable, in the sense that the owning nodes have to be trusted. If you look at distributed networks, something like Ethereum or Bitcoin, where you have those proof-of-work systems, that disperses the information across far more remote locations. So the more dispersed that information is, the less likely it is to be impacted by one small instance. If you look at the DAO hack, or a lot of the other vulnerabilities that have existed on the blockchain, it's more about the code. And like you said, Solidity being as new as it is, it's not JavaScript. The industry is very early and very infantile as far as the developers skilled in doing this, and with that comes the inexperience and the gaps in knowledge that you don't close until a language is 10 or 12 years old, like JavaScript. >> And the last thing I'll say about this topic, and we'll move on to NFTs, but NFTs relate, is that, again, I said earlier that the big internet giants have pretty much co-opted the platform. If you wanted to invest in Linux in the early days, there was no way to do that. You maybe had to wait until Red Hat came up with its IPO, and there's your pyramid scheme, folks. But with crypto, which is, again, as Nick was explaining, underpinned by the blockchain, you can actually participate in early projects. Now, you've got to be careful, 'cause there are a lot of scams, and many of them are going to blow out, if not most of them, but there are some gems out there. Because, as Nick was describing, you've got this decentralized platform that has scaling or performance issues, and people are solving those problems, essentially building out a new internet.
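The proof-of-work mechanism referenced above can be sketched the same way: miners search for a nonce whose hash meets a difficulty target, which makes producing blocks (and therefore rewriting history) computationally expensive while keeping verification cheap. A toy illustration with a tiny difficulty, for speed:

```python
import hashlib

def mine(data: str, difficulty: int = 3) -> int:
    """Brute-force a nonce such that sha256(data + nonce) starts
    with `difficulty` hex zeros. Costly to produce on purpose."""
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def valid_proof(data: str, nonce: int, difficulty: int = 3) -> bool:
    """Verification is a single hash, regardless of mining cost."""
    digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = mine("block-1")
assert valid_proof("block-1", nonce)
```

The asymmetry is the point: each extra zero of difficulty multiplies the expected mining work by 16, while checking a claimed proof stays a single hash. That is why out-mining an honest, dispersed network is harder than simply participating in it.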
But I want to get into NFTs, because it's sort of the next big thing here before we get into the metaverse. Nick, why should people pay attention to NFTs? Why do they matter? Are they really an important trend? And what are the societal and technological impacts that you see in this space? >> Yeah, I mean, NFTs are a very new technology, and ultimately an NFT is just another entry on the blockchain, just another piece of data in the database. But it's how it's leveraged, in the grand scheme of how we as users see it, that matters. The classic idea of an NFT is that it's just the art, no better than the poster on your wall. But in some of the new applications is where you actually get that utility function. In the case of video games, games and gamers in general already use digital items, already use digital points, as with, say, Call of Duty points; those are just different versions of digital currencies. World of Warcraft gold, I like to affectionately say, was the very first cryptocurrency. There was a Harvard course taught on the economy of WoW; there was a black market where you could trade your endgame gold for fiat currencies; and there are even places around the world where you could purchase real-world items and stay at hotels for World of Warcraft gold. So the adoption of blockchain simply provides a more stable and more diverse technology for those same types of systems. You're going to see that carry over into shipping and logistics, where you need a single repository for data that multiple locations, multiple shippers, multiple global efforts need access to. In the current context, that data is sitting on a shipping log or on somebody's desk. All of those types of paper transactions can be represented as NFTs on the blockchain; it's simply that representation.
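Stripped of the art framing, an NFT is just a unique ledger entry mapping a token to its current owner. This minimal, hypothetical sketch of that bookkeeping (no real chain or standard involved; the class and item names are invented) captures the non-fungibility and the ownership transfer being described:

```python
class TokenLedger:
    """Toy non-fungible token registry: each token id is unique
    and maps to exactly one owner at a time."""

    def __init__(self):
        self._owners: dict[str, str] = {}

    def mint(self, token_id: str, owner: str) -> None:
        # Non-fungible: a given id can only ever be created once.
        if token_id in self._owners:
            raise ValueError("token already exists")
        self._owners[token_id] = owner

    def transfer(self, token_id: str, sender: str, recipient: str) -> None:
        # Only the current owner may hand the token off.
        if self._owners.get(token_id) != sender:
            raise PermissionError("only the current owner can transfer")
        self._owners[token_id] = recipient

    def owner_of(self, token_id: str) -> str:
        return self._owners[token_id]

ledger = TokenLedger()
ledger.mint("sword-001", "alice")             # an in-game item...
ledger.transfer("sword-001", "alice", "bob")  # ...traded to another player
assert ledger.owner_of("sword-001") == "bob"
```

A shipping record, a game item, or a contract clause all fit this shape; what the blockchain adds over this toy is that the ownership table is replicated and tamper-evident rather than sitting in one process.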
And once you break away from the idea that this is just a piece of art, or just a cryptocurrency, you get into a world where you can apply that NFT technology to a lot more things than most people think of today. >> Yeah, and of course you mentioned art; when Beeple's digital art sold for, whatever it was, 60, 65, 69 million dollars, that caught a lot of people's attention. But there's a virtually infinite number of applications for this. One of the Washington Wizards tokenized portions of his contract, maybe creating a new kind of bond; those are really interesting use cases and opportunities. And that kind of segues into the latest hot topic, which is the metaverse. You've said yourself that blockchain and NFTs are foundational elements of the metaverse. So first, what is the metaverse to you, and where do blockchain and NFTs fit in? >> Sure. So, I mean, I affectionately refer to the metaverse as just VR. Essentially, we've been playing virtual reality games and all the rest for a long time; VR has been out there for a long time. So most people's interpretation or idea of the metaverse is a virtual reality version of yourself, and that idea, once it becomes yourself, is where things like NFT items, blockchain, and digital currencies come in. Because if you have a manufacturer, take an organization like Nike, they'll want to put their shoes into the metaverse, because we as humans want to individualize ourselves. We go out and we want that one-of-one shoe, or that t-shirt, or whatever it is, and we're going to want to represent that same individuality in our virtual selves. So NFTs, crypto, and all of those digital currencies that we've known as gamers, like I was saying, are going to play a very similar role inside the metaverse. >> Yeah, okay. So basically you're going to take your physical world into the metaverse.
You're going to be able to, as you just mentioned, acquire things. I loved your WoW example. And so let's stay on this for a bit, if we may. Of course, Facebook spawned a lot of speculation and discussion about the concept of the metaverse, and really, as you pointed out, it's not new. You talked about Second Life; it really started in 2003, and it's still around today. It's small; I read recently its creator is coming back into the company. And books were written in the early 90s that used the term metaverse. But Nick, talk about how you see this evolving, what role you hope to play with your company and your community in the future, and who builds the metaverse, when is it going to be here? >> Yeah, so, I mean, right now, and we actually just got back from CES last week, the metaverse is a very big buzzword. You're going to see a lot of integration of what people are calling, quote unquote, the metaverse. There were organizations that were showing virtual office space, virtual malls, virtual concerts, and those types of experiences. And the one thing right now that I don't think a lot of organizations have grasped is how to make one metaverse. There's no real Ready Player One OASIS, if you will, yet. There's a lot of organizations that are creating their version of the metaverse, which, then again, is just like how every other software and game vendor out there has their version of cryptocurrency and their version of NFTs. You're going to see it start to pop up, especially as Oculus is going to come down in price, especially as you get new technologies, like some of the VR glasses that look more augmented reality and look more like regular glasses that you're wearing, things like that. The easier that those technologies become to adopt into our normal lifestyle, as far as looks and feels, the faster that stuff's going to actually come out to the world. 
But when it comes to, like, what we're doing, we believe that the metaverse should actually span multiple different blockchains, multiple different segments, if you will. So what ORE System is doing is actually building the underlying architecture and technologies for developers to bring their metaverse to. You can leverage the ORE Systems NFTs, what we like to call our utility NFTs, as an in-game item in one game, or you can take it over and it could be a t-shirt in another game. The ability to have that cross support within the ecosystem is what really no one has grasped yet. Most of the organizations out there are using a very classic business model: get the user in the game, make them spend their money in the game, make all their game stuff good only in their game. And that's where the developer has you; they have you in their bubble. Our goal, and what we like to affectionately say, is we want to bring white collar tools and technology to blue collar folks. We want to make it simple. We want to make it off the shelf, and we want to make it less cost prohibitive, faster, and cheaper to actually get out to all the users. We do it by supporting the technology. That's our angle. If you support the technology and you support the platform, you can build a community that will build all of the metaverse around them. >> Well, and so this is interesting because, if you think about some of the big names, Microsoft is talking about it, and obviously we mentioned Facebook. They have essentially walled gardens. Now, yeah, okay, I can take a TikTok and pump it into Instagram, fine, but they're really siloed off. And what you're saying is in the metaverse, you should be able to buy a pair of sneakers in one location and then bring it to another one. >> Absolutely, that's exactly it. 
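Nick's cross-game item model, one shared ledger that every game reads but no single game owns, can be sketched in a few lines. This is an illustrative toy, not ORE Systems' actual architecture; every class, item, and wallet name here is made up:

```python
# Toy model of cross-game NFT utility: ownership lives in one shared
# registry (standing in for a blockchain), and each game only *reads*
# that registry. All names here are illustrative, not a real API.

class ItemRegistry:
    """Ledger mapping token_id -> owner wallet, like an NFT contract."""
    def __init__(self):
        self._owners = {}
        self._meta = {}
        self._next_id = 1

    def mint(self, wallet, metadata):
        token_id = self._next_id
        self._next_id += 1
        self._owners[token_id] = wallet
        self._meta[token_id] = metadata
        return token_id

    def owner_of(self, token_id):
        return self._owners[token_id]

    def transfer(self, token_id, from_wallet, to_wallet):
        # Open-market trades are just owner-authorized transfers.
        if self._owners[token_id] != from_wallet:
            raise PermissionError("only the current owner can transfer")
        self._owners[token_id] = to_wallet

class Game:
    """A game renders whatever the registry says the player owns."""
    def __init__(self, name, registry):
        self.name = name
        self.registry = registry

    def inventory(self, wallet):
        return [t for t, owner in self.registry._owners.items()
                if owner == wallet]

registry = ItemRegistry()
shirt = registry.mint("alice", {"kind": "t-shirt"})
# The same token is visible in two different games' inventories:
print(Game("game one", registry).inventory("alice"))
print(Game("game two", registry).inventory("alice"))
```

The point of the sketch is the separation of concerns: the registry is the single source of truth for ownership, while each game is just a renderer, which is what lets an item outlive any one title.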
>> And so my original kind of investment in, attractiveness, if you will, to crypto was that the little guy can get in early. But I worry that some of these walled gardens, these big internet giants, are going to try to co-opt this. So I think what you're doing is right on, and I think it's aligned with the objectives of consumers and the users, who don't want to be forced into a pen. They want to be able to live freely. And that's really what you're trying to do. >> That's exactly it. You know, when you buy an item, say a skin in Fortnite or a skin in Call of Duty, it's only good in that game. And not even in the franchise; it's only good in that version of the game. In the case of what we want to do, you can not only have that carry over with your character. So say you buy a really cool shirt, and you've got that in your Call of Duty, or in our case, in Osiris Protocol, which is our proof-of-concept video game to show that this whole thing actually works. You can actually go in and get a gun in Osiris Protocol, and if we release Osiris Protocol two, you'll be able to take that to Osiris Protocol two. Now the benefit of that is you're going to be the only one in the next version with that item, if you haven't sold it or traded it or whatever else. So we don't lock you into a game. We don't lock you into a specific application. You own that. You can trade that freely with other users. You can sell that on the open market. We're embracing what used to be considered the black market. I don't understand why a lot of video games were always against the skins and mods and all the rest. For me as a gamer, coming up through the many, many years of various different Call of Duties and everything in my time, I wish I could still have some of that gear. I still have a World of Warcraft account. I wasn't on Vanilla; Burning Crusade was my foray, but I still have a character. 
If you look at it that way, if I had that WoW character and that gear were NFTs, in theory I could actually pass that on to my kid, who could carry on that character. And it would actually increase in value because it's NFT-backed. And then if needed, you could trade those on the open market and all the rest. It just makes gaming a much different thing. >> I love it. All right, Nick, hey, we're out of time, but I got to say, Nick Donarski, thanks so much for coming on the program today, sharing your insights, and really good luck to you in building out your technology platform and your community. >> Thank you, sir, it's been an absolute pleasure. >> And thank you for watching. Remember, all these episodes are available as podcasts, just search "Breaking Analysis Podcast", and you'll find them. I publish pretty much every week on siliconangle.com and wikibon.com. And you can reach me @dvellante on Twitter or comment on my LinkedIn posts. You can always email me at david.vellante@siliconangle.com. And don't forget, check out etr.plus for all the survey data. This is Dave Vellante for theCUBE Insights, powered by ETR. Happy 2022, be well, and we'll see you next time. (upbeat music)

Published Date : Jan 17 2022


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Nike | ORGANIZATION | 0.99+
Microsoft | ORGANIZATION | 0.99+
Dave Vellante | PERSON | 0.99+
Netskope | ORGANIZATION | 0.99+
2003 | DATE | 0.99+
Datadog | ORGANIZATION | 0.99+
Darktrace | ORGANIZATION | 0.99+
Nick Donarski | PERSON | 0.99+
SentinelOne | ORGANIZATION | 0.99+
Nick | PERSON | 0.99+
Elastic | ORGANIZATION | 0.99+
Tanium | ORGANIZATION | 0.99+
1989 | DATE | 0.99+
Palo Alto Networks | ORGANIZATION | 0.99+
Palo Alto | LOCATION | 0.99+
10 | QUANTITY | 0.99+
HP | ORGANIZATION | 0.99+
Facebook | ORGANIZATION | 0.99+
Call of Duty | TITLE | 0.99+
ORE System | ORGANIZATION | 0.99+
40% | QUANTITY | 0.99+
2000 | DATE | 0.99+
Osiris Protocol two | TITLE | 0.99+
Oculus | ORGANIZATION | 0.99+
First | QUANTITY | 0.99+
69 million | QUANTITY | 0.99+
Matt Damon | PERSON | 0.99+
World of Warcraft Gold | TITLE | 0.99+
Okta | ORGANIZATION | 0.99+
World of Warcraft | TITLE | 0.99+
JavaScript | TITLE | 0.99+
Call of Duties | TITLE | 0.99+
first program | QUANTITY | 0.99+
Zscaler | ORGANIZATION | 0.99+
theCUBE Studios | ORGANIZATION | 0.99+
Burning Crusade | TITLE | 0.99+
Osiris Protocol | TITLE | 0.99+
each company | QUANTITY | 0.99+
two | QUANTITY | 0.99+
one | QUANTITY | 0.98+
single repository | QUANTITY | 0.98+
ETR | ORGANIZATION | 0.98+
siliconangle.com | OTHER | 0.98+
david.vellante@siliconangle.com | OTHER | 0.98+
first company | QUANTITY | 0.98+
Linux | TITLE | 0.98+
CES | EVENT | 0.98+
Shadowlabs | ORGANIZATION | 0.98+
today | DATE | 0.98+
over 60 responses | QUANTITY | 0.98+
both | QUANTITY | 0.98+
more than a hundred responses | QUANTITY | 0.98+
Boston | LOCATION | 0.97+
two parallel paths | QUANTITY | 0.97+
Harvard | ORGANIZATION | 0.97+
Rapid7 | ORGANIZATION | 0.97+
this year | DATE | 0.97+
early 90s | DATE | 0.97+
16 | QUANTITY | 0.97+
first | QUANTITY | 0.97+
BASIC | TITLE | 0.97+
one game | QUANTITY | 0.97+
one location | QUANTITY | 0.97+
One | QUANTITY | 0.96+
last fall | DATE | 0.96+
one small instance | QUANTITY | 0.96+
Auth0 | ORGANIZATION | 0.96+
theCUBE | ORGANIZATION | 0.95+
2008 financial crisis | EVENT | 0.95+
Fortnite | TITLE | 0.95+
two-dimensional | QUANTITY | 0.95+

Justin Borgman, Starburst and Teresa Tung, Accenture | AWS re:Invent 2021


 

>>Hey, welcome back to theCUBE's continuing coverage of AWS re:Invent 2021. I'm your host, Lisa Martin. This is day two, our first full day of coverage. But day two, we have two live sets here with AWS and its ecosystem partners, two remote sets, over a hundred guests on the program. We're going to be talking about the next decade of cloud innovation, and I'm pleased to welcome back two Cube alumni to the program. Justin Borgman is here, the co-founder and CEO of Starburst, and Teresa Tung, the cloud first chief technologist at Accenture. Guys, welcome back to theCUBE. Thank you. Thank you for having me. Good to have you back. So, Teresa, I was doing some research on you, and I see you are the most prolific inventor at Accenture, with over 220 patents and patent applications. That's huge. Congratulations. Thank you. Thank you. And I love your title. I think it's intriguing. I'd like to learn a little bit more about your role, cloud first chief technologist. Tell me about it. >>Well, I get to think about the future of cloud, and if you think about it, cloud powers everything: experiences in our everyday lives, in our homes, in our cars, in our stores. So pretty much I get to be Q, right, to the rest of Accenture's James Bond. >>And you're Q. I like that. Wow, what a great analogy. Justin, talk to me a little bit, I know Starburst has been on the program before, but give me a little bit of an overview of the company, what you guys do. What were some of the gaps in the market that you saw a few years ago and said, we have an idea to solve this? Sure. >>So Starburst offers a distributed query engine, which essentially means we're able to run SQL queries on data anywhere, uh, could be in traditional relational databases, data lakes, in the cloud, on-prem. And I think that was the gap that we saw, which was basically that people had data everywhere and really had a challenge with how they analyze that data. 
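The distributed query engine Justin describes decouples the query from where rows physically live. As a rough illustration of that idea only, here is a toy in-memory sketch; it is not Starburst or Trino code, and the classes and the hash join are simplified stand-ins:

```python
# Toy "federated query" sketch: one query interface over several
# catalogs that could live in different systems, so the analyst's
# code does not change when the data moves. Conceptual only.

class Catalog:
    """Stand-in for a data source (warehouse, lake, database)."""
    def __init__(self, rows):
        self.rows = rows                 # each row is a dict

    def scan(self, predicate=lambda r: True):
        return [r for r in self.rows if predicate(r)]

class QueryEngine:
    def __init__(self):
        self.catalogs = {}

    def register(self, name, catalog):
        self.catalogs[name] = catalog

    def join(self, left, right, key):
        """Hash join across two catalogs, wherever they 'live'."""
        index = {}
        for row in self.catalogs[left].scan():
            index.setdefault(row[key], []).append(row)
        out = []
        for row in self.catalogs[right].scan():
            for match in index.get(row[key], []):
                out.append({**match, **row})
        return out

engine = QueryEngine()
# An "on-prem warehouse" and a "cloud data lake" behind one interface:
engine.register("warehouse", Catalog([{"id": 1, "customer": "acme"}]))
engine.register("lake", Catalog([{"id": 1, "clicks": 42}]))
print(engine.join("warehouse", "lake", key="id"))
```

The design choice being illustrated is the abstraction layer: callers name catalogs logically, so swapping a source's physical location touches `register` and nothing else.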
And, uh, my co-founders are the creators of an open source project originally called Presto, now called Trino. And it's how Facebook and Netflix and Airbnb and, and a number of the internet companies run their analytics. And so our idea was basically to take that, commercialize that, and make it enterprise grade for the thousands of other companies that are struggling with data management, data analytics problems. >>And that's one of the things we've seen explode during the last 22 months, among many other things, is data, right? Every company these days has to be a data company. If they're not, there's a competitor in the rear view mirror ready to come and take that place. We're going to talk about the data mesh. Teresa, we're going to start with you. This is a relatively new concept. Talk to us about what a data mesh is and why organizations need to embrace this >>Approach. So there's a canonical definition of data mesh with four attributes, and any data geek or data architect really resonates with them. So number one, it's really rooted in decentralized domain ownership. So data is not within a single line of business, within a single entity, within a single partner; it has to be across different domains. Second is publishing data as products. And so instead of these really, you know, technology solutions, data sets, data tables, really thinking about the product and who's going to use it. The third one is really around self-service infrastructure. So you want everybody to be able to use those products. And finally, number four, it's really about federated and global governance. So even though they're products, you really need to make sure that you're doing the right things with the data. >>We're not talking about a single tool here, right? This is more of an approach, a solution. >>It is a data strategy first and foremost, right? So companies, they are multi-cloud, they have many projects going on, they are on premise. 
So what do you do about it? And so that's the reality of the situation today, and it's first and foremost a business strategy and framework to think about the data. And then there's a new architecture that underlies and supports that. >>Justin, talk to me about when you're having customer conversations. Obviously organizations need to have a core data strategy that runs the business. They need to be able to truly democratize data access across all business units. What are some of your customer conversations like? Are customers really embracing the data strategy vision and approach? >>Yeah, well, I think as you alluded to, you know, every business is data-driven today, and the pandemic, if anything, has accelerated digital transformation and that move to become data-driven. So it's imperative that every business of every shape and size really put the power of data in the hands of everyone within their organization. And I think part of what's making data mesh resonate so well is that decentralization concept that Teresa spoke about. Like, I think companies acknowledge that data is inherently decentralized. They have a lot of different database systems, different teams, and data mesh is a framework for thinking about that which not only acknowledges that reality, but also embraces it and basically says there's actually advantages to this decentralized approach. And so I think that's what's driving the interest level in the data mesh, uh, paradigm. And it's been exciting to work with customers as they think about that strategy. And I think that, you know, essentially every company in the space is, is in transition, whether they're moving, uh, from on-prem to the cloud or from one cloud to another cloud or undergoing that digital transformation. They have left behind data everywhere. And so they're, they're trying to wrestle with how to grasp that. 
>>And there's, we know that there's so much value in data. The need is to be able to get it, to be able to analyze it quickly, in real time. I think another thing we learned in the pandemic is that real-time is no longer a nice-to-have. It is essential for businesses in every organization. So, Teresa, let's talk about how Accenture and Starburst are working together to take the data mesh from a concept, a framework, and put this into production, into execution. >>Yeah. I mean, many clients are already doing some aspect of the data mesh. As I listed those four attributes, I'm sure everybody thought, I'm already doing some of this. And so a lot of that is reviewing your existing data projects and looking at it from a data product lens. Look at Amazon, right? Amazon, famous for being customer obsessed. In data, we're not always customer obsessed. We put up tables, we put up data sets, feature stores. Who's actually going to use this data? What's the value from it? And I think that's a big change. And so a lot of what we're doing is helping apply that product lens, a literal product lens, and thinking about the customer. 
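Teresa's four attributes can be made concrete with a small, hypothetical sketch: a product carries its owning domain and intended consumer, and a mesh applies global policies at publish time while leaving ownership with the domain. All names are illustrative, not any vendor's API:

```python
# Minimal sketch of the four data mesh attributes: decentralized
# domain ownership, data as a product, self-service discovery, and
# federated governance. Illustrative names throughout.

from dataclasses import dataclass, field

@dataclass
class DataProduct:
    name: str
    domain: str               # 1. owned by a domain, not a central team
    description: str          # 2. a product: says who/what it is for
    tags: set = field(default_factory=set)

class Mesh:
    """3. self-service: any team can discover published products.
       4. federated governance: global policies checked at publish."""
    def __init__(self, policies):
        self.policies = policies          # callables: product -> bool
        self.catalog = {}

    def publish(self, product):
        if not all(policy(product) for policy in self.policies):
            raise ValueError(f"{product.name} violates a global policy")
        self.catalog[product.name] = product

    def discover(self, tag):
        return [p.name for p in self.catalog.values() if tag in p.tags]

# One global rule applied to every domain: a product must say who it's for.
mesh = Mesh(policies=[lambda p: bool(p.description)])
mesh.publish(DataProduct("claims-monthly", "insurance",
                         "for the actuarial team", {"finance"}))
print(mesh.discover("finance"))
```

The governance hook runs centrally, but what gets published, and what it contains, stays with each domain, which is the "federated" part.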
And then finally, the, what if, if I have this data and my partners have this data in this mesh, and I can use it, I can ask a lot of what if and, and kind of game out scenarios about what if I did things differently, all of this in a very virtualized data-driven fashion, >>Right? Well, we've been talking about being data-driven for years and years and years, but it's one thing to say that it's a whole other thing to actually be able to put that into practice and to use it, to develop new products and services, delight customers, right. And, and really achieve the competitive advantage that businesses want to have. Just so talk to me about how your customer conversations have changed in the last 22 months, as we've seen this massive acceleration of digital transformation companies initially, really trying to survive and figure out how to pivot, not once, but multiple times. How are those customer conversations changing now is as that data strategy becomes core to the survival of every business and its ability to thrive. >>Yeah. I mean, I think it's accelerated everything and, and that's been obviously good for companies like us and like Accenture, cause there's a lot of work to be done out there. Um, but I think it's a transition from a storage centric mindset to more of an analytics centric mindset. You know, I think traditionally data warehousing has been all about moving data into one central place. And, and once you get it there, then you can analyze it. But I think companies don't have the time to wait for that anymore. Right there, there's no time to build all the ETL pipelines and maintain them and get all of that data together. We need to shorten that time to insight. And that's really what we, what we've been focusing on with our, with our customers, >>Shorten that time to insight to get that value out of the data faster. Exactly. Like I said, you know, the time is no longer a nice to have. It's an absolute differentiator for folks in every business. 
And as, as in our consumer lives, we have this expectation that we can get whatever we want on our phone, on any device, 24 by seven. And of course now in our business lives, we're having the same expectation, but you have to be able to unlock that access to that data, to be able to do the analytics, to make the decisions based on what the data say. Are you, are you finding our total? Let's talk about a little bit about the go to market strategy. You guys go in together. Talk to me about how you're working with AWS, Theresa, we'll start with you. And then Justin we'll head over to you. Okay. >>Well, a lot of this is powered by the cloud, right? So being able to imagine a new data business to run the analytics on it and then push it out, all of that is often cloud-based. But then the great thing about data mesh it's it gives you a framework to look at and tap into multi-cloud on-prem edge data, right? Data that can't be moved because it is a private and secure has to be at the edge and on-prem so you need to have that's their data reality. And the cloud really makes this easier to do. And then with data virtualization, especially coming from the digital natives, we know it scales >>Just to talk to me about it from your perspective that the GTL. >>Yeah. So, I mean, I think, uh, data mesh is really about people process and technology. I think Theresa alluded to it as a strategy. It's, it's more than just technology. Obviously we bring some of that technology to bear by allowing customers to query the data where it lives. But the people in process side is just as important training people to kind of think about how they do data management, data analytics differently is essential thinking about how to create data as a product. That's one of the core principles that Theresa mentioned, you know, that's where I think, um, you know, folks like Accenture can be really instrumental in helping people drive that transformational change within their organization. 
And that's >>Hard. Transformational change is hard with, you know, the last 22 months. I've been hard on everyone for every reason. How are you facilitating? I'm curious, like to get Theresa, we'll start with you, your perspectives on how our together as servers and Accenture, with the power of AWS, helping to drive that cultural change within organizations. Because like we talked about Justin there, nobody has extra time to waste on anything these days. >>The good news is there's that imperative, right? Every business is a digital business. We found that our technology leaders, right, the top 10% investors in digital, they are outperforming are the laggards. So before pandemic, it's times to post pep devek times five, so there's a need to change. And so data is really the heart of the company. That's how you unlock your technical debt into technical wealth. And so really using cloud and technologies like Starburst and data virtualization is how we can actually do that. >>And so how do you, Justin, how does Starburst help organizations transfer that technical debt or reduce it? How does the D how does the data much help facilitate that? Because we talk about technical debt and it can, it can really add up. >>Yeah, well, a lot of people use us, uh, or think about us as an abstraction layer above the different data sources that they have. So they may have legacy data sources today. Um, then maybe they want to move off of over time, um, could be classical data, warehouses, other classical, uh, relational databases, perhaps they're moving to the cloud. And by leveraging Starburst as this abstraction, they can query the data that they have today, while in the background, moving data into the cloud or moving it into the new data stores that they want to utilize. And it sort of hides that complexity. It decouples the end user experience, the business analyst, the data scientists from where the data lives. 
And I think that gives people a lot of freedom and a lot of optionality. And I think, you know, the only constant is change. Um, and so creating an architecture that can stand the test of time, I think is really, really important. >>Absolutely. Speaking of change, I just saw the announcement about Starburst galaxy fully managed SAS platform now available in all three major clouds. Of course, here we are at AWS. This is a, is this a big directional shift for servers? >>It is, you know, uh, I think there's great precedent within open source enterprise software companies like Mongo DB or confluent who started with a self managed product, much the way that we did, and then moved in the direction of creating a SAS product, a cloud hosted, fully managed product that really I think, expands the market. And that's really essentially what we're doing with galaxy galaxy is designed to be as easy as possible. Um, you know, Starburst was already powerful. This makes it powerful and easy. And, uh, and, and in our view, can, can hopefully expand the market to thousands of potential customers that can now leverage this technology in a, in a faster, easier way, >>Just in sticking with you for a minute. Talk to me about kind of where you're going in, where services heading in terms of support for the data mesh architecture across industries. >>Yeah. So a couple of things that we've, we've done recently, and whether we're doing, uh, as we speak, one is, uh, we introduced a new capability. We call star gate. Now star gate is a connector between Starburst clusters. So you're going to have a Starbucks cluster, and let's say Azure service cluster in AWS, a Starbucks cluster, maybe an AWS west and AWS east. And this basically pushes the processing to where the data lives. So again, living within this construct of, uh, of decentralized data that a data mesh is all about, this allows you to do that at an even greater level of abstraction. 
So it doesn't even matter what cloud region the data lives in or what cloud entirely it lives in. And there are a lot of important applications for this, not only latency in terms of giving you fast, uh, ability to join across those different clouds, but also, uh, data sovereignty constraints, right? >>Um, increasingly important, especially in Europe, but increasingly everywhere. And, you know, if your data isn't Switzerland, it needs to stay in Switzerland. So starting date as a way of pushing the processing to Switzerland. So you're minimizing the data that you need to pull back to complete your analysis. And, uh, and so we think that's a big deal about, you know, kind of enabling a data mash on a, on a global scale. Um, another thing we're working on back to the point of data products is how do customers curate and create these data products and share them within their organization. And so we're investing heavily in our product to make that easier as well, because I think back to one of the things, uh, Theresa said, it's, it's really all about, uh, making this practical and finding quick wins that customers can deploy, deploy in their data mess journey, right? >>This quick wins are key. So Theresa, last question to you, where should companies go to get started today? Obviously everybody has gotten, we're still in this work from anywhere environment. Companies have tons of data, tons of sources of data, did it, infrastructure's already in place. How did they go and get started with data? >>I think they should start looking at their data projects and thinking about the best data products. I think just that mindset shift about thinking about who's this for what's the business value. And then underneath that architecture and support comes to bear. And then thinking about who are the products that your product could work better with just like any other practice partnerships, like what we have with AWS, right? Like that's a stronger together sort of thing, >>Right? 
So there's that kind of that cultural component that really strategic shift in thinking and on the architecture. Awesome guys, thank you so much for joining me on the program, coming back on the cube at re-invent talking about data mesh really help. You can help organizations and industry put that together and what's going on at service. We appreciate your time. Thanks again. All right. For my guests, I'm Lisa Martin, you're watching the cubes coverage of AWS reinvent 2021. The cube is the leader in global live tech coverage. We'll be right back.

Published Date : Nov 30 2021


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Lisa Martin | PERSON | 0.99+
Theresa | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
Teresa Tung | PERSON | 0.99+
Justin Borkman | PERSON | 0.99+
Justin Borgman | PERSON | 0.99+
Teresa | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
Justin | PERSON | 0.99+
Europe | LOCATION | 0.99+
Switzerland | LOCATION | 0.99+
Starburst | ORGANIZATION | 0.99+
Accenture | ORGANIZATION | 0.99+
Second | QUANTITY | 0.99+
thousands | QUANTITY | 0.99+
Netflix | ORGANIZATION | 0.99+
Facebook | ORGANIZATION | 0.99+
third one | QUANTITY | 0.99+
pandemic | EVENT | 0.98+
four attributes | QUANTITY | 0.98+
Both | QUANTITY | 0.98+
today | DATE | 0.98+
24 | QUANTITY | 0.98+
first | QUANTITY | 0.98+
Airbnb | ORGANIZATION | 0.98+
over 220 patents | QUANTITY | 0.97+
over a hundred guests | QUANTITY | 0.97+
2021 | DATE | 0.97+
one | QUANTITY | 0.96+
Starbucks | ORGANIZATION | 0.96+
single partner | QUANTITY | 0.96+
Presto | ORGANIZATION | 0.96+
single line | QUANTITY | 0.96+
seven | QUANTITY | 0.95+
confluent | ORGANIZATION | 0.95+
10% | QUANTITY | 0.94+
one central place | QUANTITY | 0.94+
one thing | QUANTITY | 0.93+
single tool | QUANTITY | 0.92+
day two | QUANTITY | 0.92+
next decade | DATE | 0.92+
single entity | QUANTITY | 0.92+
star gate | TITLE | 0.92+
Mongo DB | ORGANIZATION | 0.91+
last 22 months | DATE | 0.91+
two life | QUANTITY | 0.91+
Starburst | TITLE | 0.88+
last 22 months | DATE | 0.87+

David Chou & Derrick Pledger, Leidos | AWS re:Invent 2021


 

>>Welcome back to theCUBE's continuous coverage of AWS re:Invent 2021, live in Las Vegas. I'm Lisa Martin, pleased to be here in person. We are actually with AWS and its massive ecosystem of partners running one of the industry's largest and most important hybrid tech events of the year. We've got two live sets, over a hundred guests, two remote studios. I'm pleased to welcome two guests from Leidos here with me next: Derrick, the VP and director of digital modernization, and David Chou, the director of cloud capabilities. Derrick and David, welcome to the program. Thanks for having us. Great to be here in person. Isn't it? >>Absolutely. Last year we missed out, so we've got to get it all in this week. >>Exactly. And well, this is day one, and the amount of people that are in here, there's a lot of noise in the background, I'm sure the audience can hear; it is, is really nice. AWS has done such a great job of getting us all in here nice and safely. So let's go ahead and start. Leidos is coming off a very strong Q3. When we look at the things that have happened, nearly all defense and classified customers are engaged in digital modernization efforts. We've seen so much acceleration of that in the last 20 months, but let's talk about some of the current challenges, Derrick, that customers are facing across operations and sustainment with respect to the need to modernize. >>Sure, sure thing. Um, so over the past two years, we spent the better part of all that time trying to really figure out what are our customers' hardest problems. And, you know, that's across the health vertical, the DOD vertical, uh, the intel vertical, uh, you name it. We spent a lot of time trying to figure it out. And we kept coming up on three recurring themes, one of which is the explosion of data. There's so much data being generated across our customers' environments, um, there's not enough human brain power to deal with it all, right? 
So we need to be able to apply technology in a way that reduces the cognitive burden on operators who must do operations and sustainment to get to a business outcome. Uh, the second one, and most importantly for us, is advanced cyber threats. We've all heard about the Colonial Pipeline hack. >>We've heard about SolarWinds. The scary part about that is, what about the hacks that we don't know about? Right. And that's something that here at Leidos we're really focused on: applying technology, cyber AI/ML, in a way that we can detect when someone's in our environments or in our customers' environments, and then we can, obviously, um, do some remediation and get them out of our environment, so mission operations are not compromised. And then lastly, customer environments are heterogeneous. You have cloud, you have on-premise infrastructure, uh, you have edge devices, IOT devices. It's very difficult to be able to do management and orchestration over all these different devices, all the different platforms that are out there. So working in concert with AWS, we built a solution to be able to do just that, which we'll talk about a little later. David, anything else that you want to add? >>We talked about the explosion of data, the cybersecurity landscape changing dramatically, and the customers needing to be able to modernize and leverage the power of technology. Yeah. >>So our customers, uh, we have basically three areas where we see our customers having challenges. One of them: once they get to the cloud, they don't have transparency on cost and usage, right? Uh, once they get there, the engineers are excited, the mission explodes with extra activities, um, but our customers don't have a sense of where the cost is going and how that relates to their mission, right? So we help them figure out: okay, your cost is going up, which is fine, because it's applying to your mission and it's helping you actually be more successful than before. Right?
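One common way to build the cost-and-usage transparency described above is to tag every cloud resource with the mission it supports and roll spend up by that tag (AWS cost allocation tags work along these lines). The sketch below is only an illustration: the record shape and the `mission` tag name are invented for this example, not a real Cost Explorer schema.

```python
from collections import defaultdict

def cost_by_mission(usage_records):
    """Roll up raw usage-cost records by their 'mission' tag.

    Each record is a dict like {"service": ..., "cost": ..., "tags": {...}};
    untagged spend is grouped under 'unattributed' so it stays visible
    instead of silently disappearing from the report.
    """
    totals = defaultdict(float)
    for rec in usage_records:
        mission = rec.get("tags", {}).get("mission", "unattributed")
        totals[mission] += rec["cost"]
    return dict(totals)

records = [
    {"service": "EMR", "cost": 120.0, "tags": {"mission": "logistics"}},
    {"service": "S3", "cost": 30.0, "tags": {"mission": "logistics"}},
    {"service": "EC2", "cost": 55.5, "tags": {}},  # untagged spend
]
print(cost_by_mission(records))
```

The design choice here is to surface untagged spend as its own bucket, since the point of the exercise is showing where cost is *not* yet connected to a mission.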
And the other area is, uh, they need a multi-platform strategy that doesn't impact their existing environment, right? They don't have the practicality or the funding that's required to just rip and replace everything, and you can't do that. You have to maintain your mission. >>They have to maintain a lot of critical capability that they already have, but at the same time figure out, how am I going to add the extensions and the new capabilities, right? And we have certain ways that we can do that to allow them to start getting into the cloud, leveraging a lot of additional capability that they never had before, but maintaining the investment that they've made in past years to maintain their mission success. Right. Uh, and then the third is upskilling. So we found that a lot of people have a hard time: once we move them into AWS specifically, their operational duties and things change, and there's a big gap there in terms of training, uh, getting familiar with how that impacts their process and methodology. And that's where we help them a lot, uh, navigating that evolution and how they do that work. >>That's excellent. That upskilling is critical, as things are changing so dramatically. You talked about data and the cybersecurity changes, Derrick. And you know, every company, every branch of the federal government is probably a data company or data organization, or if it's not, it has to become one. But the cyber threats are crazy, the things that have been going on in the last 20 months, the acceleration of ransomware, ransomware as a service. You talked about Colonial; we only hear about the big ones. It's no longer will we get hit by ransomware, or will we be hacked; it's when. Talk to me about some of those challenges, and also the need to be able to deliver real-time data as real-time missions are going on, in that real-time is now no longer a nice-to-have. >>Right? So, um, it's a great question.
And one of the things that I'll say is there are some studies out there that said 75% of the computing, uh, that will be happening over the next 10 years will be at the edge, right? So we're not going to be able to go out to the edge, collect all this data, and ship it back to a centralized place to process it. We're not going to be able to do that. What we have to do is take capability that may have been cloud-enabled and push that capability to the edge, whether that be AI/ML or your mission applications, and we need to be able to exploit data in near real time, um, which allows us to make mission-critical decisions at the point of need. There's not going to be enough time to collect a big swath of data and move it back across bandwidth that is, in many cases, temporarily constrained. We just can't do it that way. So I think moving as much capability to the edge as possible, in order for us to be able to make an impact in near real time, that's what we need to do across all of our verticals, not just DOD, but on the healthcare side, the Intel side, you name it. We've got to be able to move capability as far forward as possible. >>And Derrick, staying with you for a minute: where are those verticals with respect to embracing that, adopting that, being ready to be able to take on those technologies? Because culturally, I can imagine, you know, for legacy, storied organizations, change is hard. >>Change is hard. Um, and one of the strategies that we've tried to implement within that context is that the legacy systems, the culture that is already out there, we're not just going to be able to turn all of that off, right? We're going to have to make sure that the new capabilities and the legacy systems co-exist. So that's one of the reasons that we have an approach where we use microservices, very much API-driven, such that, uh, you know, a mission-critical system that may have been online for the last 20 years...
We're not just going to turn it off, but what we can do is start to build sidecar capabilities, microservices, to extend the capability of that system without rebuilding it. We can't build our way out of all the technical debt. What we can do is figure out how we need to extend this capability to get to a mission need, and build a microservice that's very thin, that's very lightweight. And that's how you start to connect the dots between your mission applications, the data, the data centricity that we talked about, and other capabilities that need access to data to be able to effectuate a decision. >>You make it sound so easy, Derrick. >>It's certainly not easy, but in working with AWS, we really have taken this forward and we're really deploying, uh, similar capabilities today. Um, so it's really the way that we have to modernize. We have to be able to do it step by step, strangle out the old as we bring in the new, right? >>So David, let's talk about the AWS partnership, what you guys are doing, and the critical importance of being able to help the verticals modernize at speed, at scale, in real time. Talk to me about what Leidos and AWS are doing together. >>So we work with AWS very closely. Um, for every engagement we have with our customers, we have AWS at our side; we do the reviews of their architecture and their approach. We take into account the data strategy of the organization along with their cloud strategy, uh, because we found that you have to combine their cloud and their data strategy, because of the volumes of data that Derrick talked about, right, that they need to integrate. And so we come up with a custom strategy and a roadmap for them to adopt that without, like Derrick said, um, deprecating any old capabilities that they currently have, and extending out into the cloud. Those areas are what we strive to get them through.
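The sidecar-microservices approach described above, extending a legacy system through thin, API-driven services rather than rebuilding it, is essentially the strangler fig pattern: a facade routes each operation either to a new microservice or to the untouched legacy system, so capability migrates one route at a time. A minimal sketch, with invented handler and operation names:

```python
class StranglerFacade:
    """Route requests to new microservices where they exist,
    falling back to the untouched legacy system otherwise."""

    def __init__(self, legacy_handler):
        self.legacy_handler = legacy_handler
        self.routes = {}  # operation name -> new microservice handler

    def migrate(self, operation, handler):
        """Take over one operation with a new, lightweight service."""
        self.routes[operation] = handler

    def handle(self, operation, payload):
        handler = self.routes.get(operation, self.legacy_handler)
        return handler(payload)

# The legacy system keeps serving everything not yet migrated.
facade = StranglerFacade(lambda p: f"legacy:{p}")
facade.migrate("status", lambda p: f"microservice:{p}")

print(facade.handle("status", "unit-7"))    # served by the new sidecar
print(facade.handle("resupply", "unit-7"))  # still served by legacy
```

The point of the pattern is that the legacy system never has to be switched off or rewritten wholesale; each thin route can be cut over, tested, and rolled back independently.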
And we talk a lot about the digital enterprise, and how, from a Leidos point of view, we see that as building an API ecosystem for our customer, right? Because the API is really the key. And if you look at companies like Twilio that have an API-first approach, that's what's allowed them to integrate very old technology, like telephones, into the new cloud, right? So that approach is really the unique approach that we've taken with our customers to see the success that we've seen. >>Well, can you tell me, David, sticking with you for a minute, about upskilling? I know that AWS has a big focus on that. It's got a restart program for helping folks that were unemployed or underemployed during the pandemic. But the upskilling, as we talked about during this interview, is incredibly important as things are changing so quickly. Is there any sort of upskilling kind of partnership that you're doing with AWS? >>Uh, so as a partner, we ourselves get a lot of free upskilling and training, uh, as an AWS premier partner. Um, but also with our customers, we're able to customize and build specific training plans and curriculums targeted specifically for the operators, right? They don't come from a technology background like we do; they come from a mission background. So we can modify and understand what they need to learn and what they don't really need to worry about so much, and target exactly what they need to do, so they can just do their day-to-day jobs and their duties for the mission. >>That's what it's all about. Derrick, can you share an example that you think really speaks volumes to Leidos and AWS together helping customers modernize? >>One thing I like about AWS is that the partnership is what we describe as a deep technical partnership. It's not just transactional. It's not like, hey, buy these X services and we'll do this. I have a great example from this year.
We kicked off a pilot with an Army customer, and we actually leveraged AWS ProServe, so we were literally building a proof of concept together. So in 90 days, what we did was get the customer to understand, by moving more to native AWS services, EMR, uh, to be more specific, that you can save money on tons of licensing costs that you otherwise would have had to pay. After the pilot was over, we recognized that we would save the government $1.2 million, and they have now said, yes, let's go AWS native, which is, uh, a methodology that we want to stamp out and use continually, because the more that you adopt the native services, the faster you're going to be able to move. Because as soon as you deploy a system, it's already legacy. When you start to use the native services, as more services come online, we're sort of the glue there to make sure the services that AWS is deploying, we then bring that innovation into our customer environment. So saving the government $1.2 million is a big deal for us. >>It's huge. And I'm sure that's one of many examples of significant outcomes that you're helping the verticals achieve. Absolutely. >>Yeah. One of our >>Core focuses. That's excellent. And also to do it so quickly: in 90 days, to be able to show the Army significant savings is a huge, uh, kudos to Leidos and to AWS. David, talk to me a little bit about, from a partnership perspective, how do you guys go into joint organizations together? I imagine one of the most important things is that transparency from the verticals' perspective, whether it's DOD or health or Intel. Talk to me about that kind of unified partnership, and what is the customer experience? I imagine one team. >>Yes. So we engage with our AWS counterparts at the very beginning of an engagement. So they have their dedicated teams.
We have our dedicated teams, and we are fully transparent with each other about what the customers are facing, and we both focus on the customer pain points, right? What's really going to drive the customer. Um, and that's how we approach the customer, so the customer sees us as a single team. Uh, we do things like build out what we call the Well-Architected Framework Review, or WAFR for short, right? And that allows us to make sure that we're leveraging all the best practices from AWS, from their clients on the commercial side, and we can leverage that into the government, right? They can get a lot of learnings and lessons learned that they don't have to repeat, because some of the commercial companies who are ahead of us have done the hard learning, right? And we can incorporate that into their mission and into their operations. >>That's critical, because there isn't the time, right? I think that's one of the things that the pandemic has taught us: like we talked about, real-time data is no longer a nice-to-have. But even from a training and from a deployment perspective, that needs to be done incredibly efficiently, and we're talking about probably large groups of people, I imagine, with Leidos folks, AWS folks, and the verticals. So that coordination between, I imagine, what are probably two fairly culturally aligned organizations is critical, no? >>Yeah. One of the things that we put in place is this idea of a badgeless environment. So that means you could be a Leidos person, you could be an AWS person; there's no badge. We're just sitting there, we're here to do good work, to bring value to a customer. And that's something that's really fantastic about the relationship that we do have. So every week we are literally building things together, and that's what the government, that's what the public sector folks expect. No one's going to own it all.
You have to be able to work together to be able to bring value to our customers across all the verticals. >>I like that badgeless environment; that's critical for organizations to work together harmoniously, given that the data explosion just continues, as does the edge explosion and the IOT device explosion, and more and more complexity comes into the environment. So that badgeless environment, I imagine, David, from your perspective, is really critical to the success of every mission that you're working on. >>Yeah. I mean, I think the badgeless approach is critical. Without it, the existing teams have a hard time building that trust and feeling like we're part of that team, right? Trust is really important in mission success. And so when we enter a new arena, we try to build that trust as quickly as we can, show them that, you know, we're there to help them with their mission, and we're not really there for anything else. So they feel comfortable sharing, you know, the really deep pain points that they're not really sharing all the time. And that's what allows Leidos specifically to really be successful with them, because they share all their skeletons and we don't judge them, right? We say, okay, here's your problems, here's some solutions, here are the pros and cons, and we figure out a solution together, right? It's a really build-together sort of mindset that makes us successful. Okay. >>Togetherness is key. Last question, guys: what are some of the things that attendees can learn and feel and see from Leidos this week at re:Invent? >>Want to take that one? >>Um, yeah. So with Leidos, um, we have, uh, various custom, uh, processes with AWS, uh, because of our premier partnership. We have the MSSP competency that we just got as a launch partner. So there's a lot of interaction that we have with AWS.
Um, anytime that AWS sees that there is an opportunity for us to talk to a customer or a potential vendor, they'll pull us in. So if you come by the booth and you need to talk to an SSI, they'll pull us in and we'll have those conversations. >>Excellent. Guys, thank you so much for joining me, talking about Leidos, AWS, what you guys are doing together, and how you're helping transform government. You make it sound easy; like I said, Derrick, I know that it's not, but it's great to hear the transparency with which you guys are all working. Thank you so much for your time. >>Thank you. >>Thank you. >>My pleasure. For my guests, I'm Lisa Martin. You're watching theCUBE, the leader in global live tech coverage.

Published Date : Nov 30 2021



Lisa Lorenzin, Zscaler | AWS re:Invent 2021


 

>>Welcome to theCUBE's continuing coverage of AWS re:Invent 2021. I'm your host, Lisa Martin. We are running one of the industry's most important and largest hybrid tech events of the year, with AWS and its ecosystem partners. We have two live studios, two remote studios, and over 100 guests. So stick around as we talk about the next 10 years of cloud innovation. I'm very excited to be joined by another Lisa, from Zscaler. Lisa Lorenzin is here with me, the field CTO for the Americas. She's here to talk about Zscaler's mission to make doing business and navigating change a simpler, faster, and more productive experience. Lisa, welcome to the program. >>Thank you. It's a pleasure to be here. >>So let's talk about Zscaler and AWS. Talk to me about the partnership, what you guys are doing together. >>Yeah, definitely. Zscaler is a strategic security ISV partner with AWS. So we provide AWS customers with zero trust, secure remote access to AWS, and this can improve their security posture as well as their user experience with AWS. Zscaler recently announced that we are the first and only cloud security service to achieve the FedRAMP High authorization to operate, and that FedRAMP ZPA service is built on AWS GovCloud. Zscaler is also an AWS Marketplace seller, where our customers can purchase our zero trust exchange services as well as request our high-value security assessments. We're excited about that, as we're seeing a rapid increase in customer adoption of Zscaler via the AWS Marketplace. We've vetted our software on AWS edge services that support emerging use cases, including 5G, IOT, and OT. So for example, Zscaler runs on Wavelength, Outposts, Snowball, and Snowcone, and Zscaler has strategic partnerships with leading AWS service providers and system integration partners, including Verizon, NTT, BT, Accenture, Deloitte, and many of the leading national and regional AWS consulting partners. >>Great summary there.
So you mentioned something I want to get more understanding on. It sounds like it's a differentiator for Zscaler. You said that you guys recently announced you're the first and only cloud security service to achieve FedRAMP High, uh, an ATO built on AWS GovCloud. Talk to me about what the significance of that is. >>The IL5 authorization to operate means that we are able to protect federal assets for the Department of Defense, as well as for the civilian agencies. It just extends the certification of our cloud by the government to ensure that we meet all of the requirements to protect the military side of the house, as well as the civilian side of the house. >>Got it, super important there. Let's talk about zero trust. It's a super hot topic. We've seen so many changes to the threat landscape during the pandemic. What are some of the ways that Zscaler and AWS are helping customers tackle this together? >>Well, I'd actually like to answer that by telling a little bit of a story. Um, Growmark is one of our Zscaler and AWS success stories. When they had to send everyone home to work from home overnight, the quote that we heard from them was, the users just went home and nothing changed. ZPA made work from anywhere just work, and they were able to maintain complete business continuity. So even though their employees might have had poor internet service at home, or, you know, otherwise challenging infrastructure, if you've got a bunch of kids in the neighborhood doing remote school and everyone's working from home, you don't have the reliability, or maybe the bandwidth capacity, that you would when you're sitting in an office. And Zscaler Private Access is a cloud-delivered zero trust solution that leverages dynamic, resilient, TLS-encrypted tunnels to connect the user to an application, rather than putting an endpoint on a network.
And the reason that's important is it makes for a much more reliable and resilient service, even in environments that may not have the best connectivity. I live out in the county; I really, some days, think that there's a hamster on a wheel somewhere in my cable modem network. And I am a consumer of this, right? I connect over Zscaler Private Access, I'm protected by Zscaler Internet Access, and so I access our internal applications that are running in AWS this way as well. And it makes a huge difference. Growmark really started with an SAP migration to AWS, and this was long before the pandemic. So they started out looking for that better user experience and the zero trust capability. They were able to ensure that their SAP environment was dark to the internet, even though it was running in the cloud. And that put them in a position to leverage that zero trust service when the pandemic was upon us. >>That ability, or that quote that you mentioned, "it just worked," was absolutely critical for all of us in every industry. And I'm sure a lot of folks who were trying to manage working from home, with spouses at home and kids doing, you know, school online, also felt like you with the hamster on the wheel and their internet access. But being able to have that business continuity was table stakes, especially early on, for most organizations. We saw a lot of digital transformation, and a lot of acceleration of it, in the last 20 months during the pandemic. Talk to me about how Zscaler helps customers from a digital transformation perspective, and maybe what some of the things were that you saw in the last 20 months that accelerated it. >>Absolutely. Um, another example would be Jefferson Health. And really, as we saw during the pandemic, as you say, it accelerated a lot of the existing trends of mobility, but also migration to the cloud.
And when you move applications to the cloud, honestly, it's a complex environment, and maybe the controls and the risk landscape are not as well understood. So Zscaler also has another solution, which is our cloud security posture management, and this is really ensuring that the configuration of the environment that those workloads run in is controlled, understood, and correctly coordinated and configured. So as Jefferson Health migrated to a cloud-first model, they were able to leverage Zscaler's workload posture to measure and control that risk. Again, it's an environment where the combination of AWS and Zscaler together gives them a flexible, resilient solution that they can be confident is correctly configured and thoroughly locked down. >>And that's critical for businesses in any organization, especially with how quickly things changed in the last 20 months or so. I do wonder how your customer conversations have changed. As I introduced you as the field CTO of the Americas for Zscaler, I'm sure you talk with a lot of customers. How has the security posture, um, zero trust, how has that risen up within the organizational chain? Is that something that the board is concerned about? >>My gosh, yes. And zero trust really has gone through the Gartner hype cycle: you've got the introduction, the peak of interest, the trough of despair, and then really rising back into what's actually feasible. Only, zero trust has done that on a timeline of over a decade. When the term was first introduced, I was working with firewall, VPN, and NAC technology, and frankly, we didn't necessarily have the flexibility, the scalability, or the resilience to offer true zero trust. You can try to do that with network security controls, but when you're really protecting a user connecting to an application, you've got an abstraction-layer mismatch. What we're seeing now is the reemergence of zero trust as a priority.
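Cloud security posture management of the kind described above amounts to evaluating each workload's configuration against a rule set and flagging drift. The sketch below invents a tiny two-rule format purely for illustration; real CSPM services, Zscaler's included, use far richer policy engines and benchmarks.

```python
# Each rule is (setting name, required value); findings list the violations.
# These two rules and the config field names are invented for this sketch.
RULES = [
    ("public_access", False),
    ("encryption_at_rest", True),
]

def posture_findings(workloads):
    """Return (workload, setting) pairs where configuration drifts from policy."""
    findings = []
    for wl in workloads:
        for setting, required in RULES:
            if wl["config"].get(setting) != required:
                findings.append((wl["name"], setting))
    return findings

workloads = [
    {"name": "sap-prod", "config": {"public_access": False, "encryption_at_rest": True}},
    {"name": "dev-box", "config": {"public_access": True, "encryption_at_rest": True}},
]
print(posture_findings(workloads))
```

Running checks like this continuously, rather than once at deployment, is what turns a configuration review into posture management.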
And this was greatly accelerated, honestly, by the cybersecurity executive order that came out a few months ago from the Biden administration, which made zero trust a priority for the federal government and the public sector, but also raised visibility on zero trust for the private sector as well. >>We're looking at zero trust as a way to perhaps ward off some of these high-profile breaches and outages, like the whole Colonial Pipeline situation. That was based on some legacy technology for remote access that was exploited and led to a breach, and they had to take their entire infrastructure offline to mitigate it. If we can look at more modern delivery mechanisms and more sophisticated controls for zero trust, that helps the board address a number of challenges, ranging from, obviously, risk management, but also agility and cost reduction, in an environment where, more than ever, belts are being tightened and new ways of delivering applications are being considered. But the ability to innovate is more important than ever. >>It is more important than ever, the ability to innovate, but in a really changing security landscape. I'm glad to hear that you're seeing this change as a result of the executive order that President Biden put down in the summer. That's good news; it sounds like there's some progress being made there. But we saw a lot in the last 20, 22 months or so, you mentioned Colonial Pipeline, with ransomware becoming a household word, and also becoming a matter of when companies in any industry get hit, versus if. It's no longer kind of that choice anymore. So talk to me about some of the threats and some of the stats that Zscaler has seen, particularly in the last 20, 22 months. >>Oh gosh. Well, let's see. I'm just going to focus on the last 12 months, because that's really where we've got some of the best data. We've seen a 500% increase in ransomware delivered over encrypted channels.
And what that means is it's really critical to have scalable SSL inspection that can operate at wire speed without impeding the user experience or delaying critical server communications, activities that need to happen without any additional introduced latency. So if you think about what that takes: the Zscaler Internet Access solution is protecting users' outbound access in the same way that Zscaler Private Access protects access to private resources. So we're really seeing more and more organizations recognizing that both of these services are necessary to deliver comprehensive zero trust. You have to protect and control the outbound traffic to make sure that nothing good leaks out and nothing bad sneaks in. >>And at the same time, you have to protect and control the inbound traffic, and inbound is, you know, a much broader definition with apps in the data center and in the cloud these days. We're also seeing that 30% of malware is delivered through trusted applications like file shares or collaboration tools. So it's no longer enough to only inspect web traffic; now you have to be able to inspect all flavors of traffic when you're doing that outbound protection. Another good example where Zscaler and AWS work together here is in Amazon WorkSpaces. There's a huge trend towards desktop as a service, for example, and organizations are starting to recognize that they need to protect both the user experience and also the onward connectivity in Amazon WorkSpaces, the same way that they would for a traditional end-user device. So we see Zscaler running in Amazon WorkSpaces instances to protect that outbound traffic and control that inbound traffic as well. Another big area: the ransomware infections are not the problem, it's the result. Over half of ransomware infections include data theft or leakage.
And that is a double whammy, because you get what's called double extortion, where not only do you have to pay to unlock your machines, but you have to pay not to have that stolen data exposed to the rest of the world. So it's more important than ever to be able to break that kill chain as early as possible, to ensure that the user or the server traffic itself isn't exposed to the initial infection vector. If you do happen to get an infection vector that sneaks through, you need to be able to control the lateral movement so that it doesn't spread in your environment. And then if both of those controls fail, you also need the outbound protections, such as CASB and DLP, to ensure that even if they get into the environment, they can't exfiltrate any of the data that they find. As a result, we're seeing that the largest security risk today is lateral movement inside the corporate network, and that's one of the things that makes these ransomware double-extortion situations such a problem. >>Last question for you, and we've got about a minute left. I'm curious, you said over 50% of ransomware attacks are now double extortion. How do you guys help customers combat that? >>So we really deliver a solution that eliminates a lot of the attack surface and a lot of the risk. We have no inbound listener, unlike a traditional VPN, so the outbound-only connections mean you don't have the external attack surface. You can write granular policy controls to eliminate lateral movement. And because we integrate with customers' existing identity and access management, we can eliminate the credential exposure that can lead to a larger spread in a compromised environment. We also eliminate the problem of unpatched gateways, which led to things like Colonial Pipeline or some of the other major breaches we've seen recently. And we can remove that single point of failure, so you can rely on dynamic, optimized traffic distribution for all of these secure services.
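The zero trust model described above authorizes an authenticated user to one named application under explicit policy, deny by default, which is what removes the lateral movement a flat network allows. A toy policy check, with invented user and application names (a real broker would also weigh device posture, context, and identity-provider signals):

```python
# Explicit allow-list of (user, application) pairs; everything else is denied,
# so a compromised account cannot roam to unrelated apps the way it could
# once inside a flat network.
POLICY = {
    ("alice", "sap-finance"),
    ("alice", "hr-portal"),
    ("bob", "sap-finance"),
}

def authorize(user, app):
    """Deny-by-default, per-application access decision."""
    return (user, app) in POLICY

print(authorize("alice", "hr-portal"))  # True: explicitly allowed
print(authorize("bob", "hr-portal"))    # False: no policy, no access
```

The design point is that the decision is made per user and per application, never per network segment, so there is no "inside" from which to move laterally.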
Basically, what we're trying to do is make it simpler and more secure at the same time. >>Simpler and more secure at the same time is what everyone needs, regardless of industry. Lisa, thank you for joining me today, talking about Zscaler in AWS, zero trust, the threat landscape that you're seeing, and also how Zscaler and AWS together can help customers mitigate those growing risks. We appreciate your insights and your thoughtfulness. >>Thank you. >>For Lisa Lorenzin, I'm Lisa Martin. You're watching theCUBE's coverage of AWS re:Invent. Stick around, more great content coming up next.
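The "no inbound listener" model described in the interview, where the protected side only ever dials out, so it exposes no listening port to the internet, can be sketched in a few lines. This toy runs both roles in one process with plain sockets; it is a conceptual illustration only, not Zscaler's actual design, and all names are invented.

```python
# Sketch: a connector next to the protected app dials OUT to a broker.
# User traffic then rides that outbound tunnel back in, so the app side
# never needs an externally reachable listener.
import socket
import threading

def broker(server_sock: socket.socket, ready: threading.Event) -> None:
    ready.set()
    conn, _ = server_sock.accept()     # broker accepts the outbound dial
    conn.sendall(b"user-request")      # user traffic rides the tunnel
    conn.close()

# Only the broker listens; the connector side has no inbound listener.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))             # ephemeral port on loopback
srv.listen(1)
ready = threading.Event()
t = threading.Thread(target=broker, args=(srv, ready))
t.start()
ready.wait()

connector = socket.socket()            # connector initiates outbound only
connector.connect(srv.getsockname())
request = connector.recv(1024)         # traffic arrives over the tunnel
t.join()
connector.close()
srv.close()
print(request.decode())                # -> user-request
```

Because the connection is always initiated from inside, an external scanner sees no open port on the application side, which is the attack-surface reduction the answer above contrasts with a traditional VPN concentrator.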

Published Date : Nov 30 2021


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
AWS | ORGANIZATION | 0.99+
Lisa Lorenzen | PERSON | 0.99+
Lisa Martin | PERSON | 0.99+
Deloitte | ORGANIZATION | 0.99+
Lisa Lorenzin | PERSON | 0.99+
BT | ORGANIZATION | 0.99+
30% | QUANTITY | 0.99+
500% | QUANTITY | 0.99+
Accenture | ORGANIZATION | 0.99+
two remote studios | QUANTITY | 0.99+
Lisa | PERSON | 0.99+
first | QUANTITY | 0.99+
Amazon | ORGANIZATION | 0.99+
two life studios | QUANTITY | 0.99+
one | QUANTITY | 0.99+
over 100 guests | QUANTITY | 0.99+
both | QUANTITY | 0.99+
Gartner | ORGANIZATION | 0.99+
over 50% | QUANTITY | 0.99+
This year | DATE | 0.99+
Biden | PERSON | 0.99+
first model | QUANTITY | 0.98+
2021 | DATE | 0.98+
Growmark | ORGANIZATION | 0.97+
single point | QUANTITY | 0.97+
Zscaler | ORGANIZATION | 0.97+
CASBY | ORGANIZATION | 0.97+
zero trust | QUANTITY | 0.97+
pandemic | EVENT | 0.97+
today | DATE | 0.97+
over a decade | QUANTITY | 0.95+
Americas | LOCATION | 0.94+
Verizon NTT | ORGANIZATION | 0.94+
America | LOCATION | 0.94+
Zscaler | TITLE | 0.91+
last 12 months | DATE | 0.91+
last 20 months | DATE | 0.9+
IOT | TITLE | 0.89+
80 challenging infrastructure | QUANTITY | 0.88+
a minute | QUANTITY | 0.86+
last 20 | DATE | 0.83+
ZPA | TITLE | 0.83+
ATO | ORGANIZATION | 0.82+
Z scaler | TITLE | 0.81+
Jefferson | PERSON | 0.81+
ZScaler | ORGANIZATION | 0.81+

Nick Volpe and Kym Gully AWS Executive Summit 2021


 

(upbeat music) >> Hello and welcome back to theCUBE's coverage of AWS Executive Summit at re:Invent 2021. I'm John Furrier, your host of theCUBE. This segment is about surviving and thriving in the digital revolution that's happening, the digital transformation that's changing businesses. We've got two great guests here with Guardian Life: Nick Volpe, CIO of Individual Markets at Guardian Life, and Kym Gully, CTO of Life and Annuities at Accenture. Accenture obviously doing a lot of cutting-edge work, Guardian changing the game. Nick, thanks for coming on. Kym, thanks for coming on. >> Thanks John, good to be here. >> So, well, before I get into the question, I want to just set the table a little bit. The pandemic has given everyone a mandate. The good projects are exposed. The bad projects are exposed. Everyone can see what's happening, because the pandemic forced everyone to identify what's working, what's not working, and what to double down on. Innovation for customers is a big focus, but now with the pandemic easing and coming out of it, the world's changed. This is an opportunity for businesses. Nick, this is something that you guys are focused on. Can you take us through what Guardian Life's doing in this post-pandemic changeover as cloud goes next level? >> Yeah, thanks John. So the immediate need in the pandemic situation was about the new business capability. For those familiar with insurance, traditionally life insurance underwriting and disability underwriting are very in-person: fluids, labs, attending physician statements. And when March of 2020 broke, that all came to an abrupt halt. Doctors' offices were closed. Testing centers were either closed or inundated with COVID testing.
So we had to come up with some creative ways to digitize our new business, adapt the application, adapt our medical questionnaires, and also get creative on some of our underwriting standards that put us at certain limits and certain levels and when we needed the fluids. So we were pretty quick, we were agile about decisions there. And we moved from about a 40 to 50% adoption rate of our electronic applications to north of 98% across the board. In addition, we saw some opportunities for products and more capabilities beyond new business. So after we weathered the storm, we started to take a step back and, like you said, look at what we were doing, have a start, stop, continue conversation internally to say, this digitization is the new norm. How do we meet it from every angle, not just new business? And that's where we started to look at our policy administration systems, moving more to the cloud and leveraging the cloud to its fullest extent versus just a lift and shift. >> Kym, I want to get your perspective at Accenture. I've done a lot of interviews in the past, I think, 18 months, a lot of use cases with Accenture, almost in every vertical, where you guys are almost like the firefighters, getting called in to help out, 'cause the cloud actually now is an enabler. How do you see the impact of the pandemic reverberating through? I mean, obviously you guys come to the table, you guys bring it in. I mean, what's your perspective on this? >> So, yeah, it's really interesting. I think the most interesting fact is, as we talk about it, Nick raised such a strong area in our business, underwriting, and how we can expedite that. It's been on the table for a number of years, but the industry has been very slow or reluctant to embrace it. And the pandemic became an enforcer, to be honest. A lot of the companies were thinking about it prior, but that's it, they'll think about it.
I mean, even Accenture, we launched a huge three-year investment to get clients into cloud and digital transformation, but the pandemic just expedited everything. Now the upside is, clients that were in a well-advanced stage of planning were easily able to adopt, but clients that weren't were really left behind. So we became very, very busy just supporting the clients that didn't have as much forethought as the likes of Guardian, et cetera. >> Nick, it brings up a good point. I want to get your reaction to see if you agree. I mean, people who didn't put their toe in the cloud or jump in the deep end really got caught flat-footed when the pandemic hit, because they weren't prepared. People who were either ingratiated in with the cloud, had active projects, or even had full deployments in there did well. What's your take on that? >> Yeah, the enablement we had and the gift we were given by starting our cloud journey in, I want to say, 2016-17, was that we really started moving to the cloud. And I think we were the only insurer that moved production load to the cloud. At that point, most insurers were putting their development environments, maybe even their SIT environments, in the cloud, but Guardian had the strategy of getting out of the data center and moving to a much more flexible, scalable environment architecture using the AWS cloud. So we completed our journey into the cloud by 2018-19, and we were at the point of really capitalizing versus moving. So we were able to move very quickly, very nimbly. When the pandemic hit, or in any digital situation, we have that flexibility and capacity that AWS provides us to really respond to our customers and our customers' needs. So we were one of the more fortunate insurers that were well into our cloud journey and at the point of optimization versus the point of moving. >> Let's talk about the connection with Accenture's life insurance and annuity platform, also known as ALIP, I think the acronym is. What was that? Why was that relevant?
What was that all about? >> Yeah, so I'll go first and then Kym, you can jump in and see if you agree with me. >> He essentially helped with that, love it. (laughs) >> Yeah, you would suspect you would, right John? >> Yeah. (laughs) >> Like I said, our new business focus was the original, like the emergency situation when the pandemic hit. But as we went further into it and realized the mortality and morbidity and the needs and wants of our customers, which is a major focus of Guardian, really having the client at the center of every conversation we have, we realized that there was a real opportunity for product. And as products continue to change, you had regulations like 7702 coming out, where you had to reprice the entire portfolio to be able to sell it by January 1, 2022. We realized our current systems for policy admin were not matching the digital capabilities that we had moved to the cloud. So we embarked on a very extensive RFP with Accenture and a few other vendors that would come to the table and work with us. And we just really got to a place where the combination of our desire to be on the cloud, be flexible, and be capable for our customers married really well with the industry knowledge and the capabilities that Accenture brought to the table with the ALIP platform. Their book of business, their current infrastructure, their configuration versus development, really all aligned with our need for flexible, fast time to market. We're looking to cut development times significantly. We're looking to cut testing times significantly. And as of right now, it's all proving true between the cloud capability and the ALIP capability. We are reaping the benefits of having this new platform coming up live very soon here. >> Before I get to Accenture's perspective, I want to just ask you a quick follow-up on that, Nick, if you don't mind. You basically talked us through, okay, I can see what's happening here.
You get with Accenture, take advantage of what they've got going on. You get into the cloud, you start getting the efficiencies, you get the cultural change. What refactoring have you seen? What's your vision, I should say? Because clearly there's a playbook. You get in the cloud, re-platform, you get the cultural fit, you understand the personnel issues, how to tap the resources, then you've got to look for innovation where you can start changing how you do things to refactor the business model. >> Yeah, so I think that, specifically to this conversation, that's around the product capability. For all too long, insurance companies have had three specific sleeves of insurance products. We've had individual life, individual disability, and individual annuities, each of them serving a specific purpose in the customer's life. What this platform and this cloud platform allow us to do is start to think about, can we create the concept of a single wrapper? Can we bring some of these products together? Can we centralize the buying process? And with ALIP behind the scenes, you don't have that drag. I kind of equate it to building a Ferrari and attaching a trailer to it, and that's what we were doing today. Our digital front-ends, our new business capabilities, are all being anchored down or slowed down by our traditional mainframe back-ends. By introducing Accenture on the cloud in AWS, we now have our Ferrari fully free to run as fast as it can, versus anchoring this massive trailer to it. So it really was a matter of bringing our product innovation up to the digital front-end innovation that we'd been working on for two or three years prior. >> I mean, this is kind of the Amazon way. You decouple things, you decompose, you don't want to have a drag. And with containers, we're seeing companies look at existing legacy in a way that's different.
Could you talk about how you guys look at that internally, Nick? Because a lot of CIOs are saying, hey, you know what, I can have the best of both worlds. I don't have to kill the old to bring in the new, but I can certainly modernize everything. What's your reaction to that? >> Yeah. And I think that's our exact path forward. We don't feel like we need to boil the ocean. We're going after this surgically for the things that we think are going to be most impactful to our customers. So legacy blocks of business that are sitting out there, that are completely closed, they're not our concern. It's really hitching this new ALIP capability to the next generation of products, the next generation of customer needs, understanding data. Data capture is very important. So if you look at the mainframes and what we're living on now, it's all about the owner of the policy. You lose connection with the beneficiary or the insured. What these new platforms allow us to do is really understand the household around the products that they're buying. I know it sounds simple, but with that data architecture, that data infrastructure on these newer platforms and in the cloud, you can churn it faster, you have scale to do more analysis, but you're also able to capture it in a much cleaner way. On the traditional systems, you're talking about what we intimately call the blob on the mainframe: your first name, your last name, your address, all in one free-form field sitting in some database. It's very hard to discern. On these new platforms, given our need and our desire to be deeper in our clients' lives, understanding their needs, ALIP coupled with AWS, with our new business capabilities on the front-end, really puts together that true customer value chain. That's going to differentiate us. >> Kym, okay, CTO of ALIP, as he calls it, the acronym for the service you have. This is a great example. I hate to use the word on-ramp 'cause that sounds so old.
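Nick's contrast between the mainframe "blob" and structured household capture can be sketched in a few lines. The classes and field names below are invented for illustration; they are not Guardian's or ALIP's actual data model.

```python
# A legacy free-form field: owner, insured, and address fused together.
# Who is who? Answering that means brittle string parsing.
legacy_blob = "JANE DOE JOHN DOE 12 MAIN ST SPRINGFIELD"

from dataclasses import dataclass, field

@dataclass
class Party:
    name: str
    role: str                      # "owner", "insured", or "beneficiary"

@dataclass
class Policy:
    number: str
    parties: list = field(default_factory=list)

    def household(self) -> set:
        """Distinct people attached to this policy."""
        return {p.name for p in self.parties}

policy = Policy("WL-1001", [
    Party("Jane Doe", "owner"),
    Party("John Doe", "insured"),
    Party("Jane Doe", "beneficiary"),
])

# Structured capture turns "understand the household" into a simple query;
# the blob above would need error-prone parsing first.
print(sorted(policy.household()))   # -> ['Jane Doe', 'John Doe']
```

The design point is the one made in the interview: once each party and role is a distinct field, household-level analysis scales with the platform, whereas a free-form blob caps what you can ever learn from the data.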
But in a way, in vertical markets, you're seeing the power of the cloud, because the data and the AI can be freed up and you can take advantage of all the heavy lifting by providing some platform or some support, with Amazon, with your expertise. This is a great use case of that, I think. And this is, I think, a future trend where development can be faster, value can be delivered faster, and your customers don't have to build all the lower-level abstractions, if you will. Can you describe the Accenture relationship with your customers? Because this is a really great use case. >> Yeah, it is. Our philosophy is simple: let's not reinvent the wheel. And with cloud and the native services that AWS provides, we want to focus on the business of what the system needs to do and not all the little side bits. We can get a great service that's fully managed, that has security patches and updates. We want to focus on the real deal, like Nick wants to focus on the business and not so much what's underneath it. That's my problem, I'm focusing on that, and we work together in a nice little gel. You have the relatively new term, no code/low code. It's strange, because a modern system like ALIP has been that way for a number of years. Basically it means, I don't want to make code changes, I just want to be able to configure it. So now more people can have access to make changes, and we can even get it to the point where it's the people sitting there dealing with the clients. That would be the ultimate, where they can innovate and come up with ideas and try things, because we've got it so simple. We're not there yet, let's be realistic, but that's the ultimate goal. So ALIP and the no code/low code have been around for quite some time, and maybe we should take advantage of that, but I think we're missing one thing.
So as good as the platform is, with the cloud moving in, leveraging native services, using the built-in security that comes with all that, extending the function, and then being able to tap into the InsurTech, FinTech, internet of things and quickly adapt, I think the partnership is big. It's a very strong part of the exercise. So you can have the product, but without the people that work well together, I think it's also a big challenge. All programs have their idiosyncrasies and there are a lot of challenges along the way. There's one really small, simple example I can use. I'd say Guardian is one of our industry's market leaders in how they approach security. They really do lead the way out there. They're very strict, very responsible, which is such a pleasure to say, but at the end of the day, you still need to run a business. Because we're a partnership and we all have the same challenges, we want to get to success, and we were able to work together quite quickly. We planned out the right approach that maximized the security but also progressed the business, and we applied that into the overall program. So I think it is the product, definitely, it is everything Nick said and elaborated on, but I'd like to point out there's a big part of the partnership that makes it a success as well. >> Yeah, great, great call out there. Nick, let's get your reaction on that, because I want to get to the customer side of it. This enablement platform is the new thing. I mean, platforms have been around for a while, but the notion of buying tools and having platforms is now interesting, 'cause you have to take this low code/no code capability. I mean, you've still got code, there's some coding going on, but what it means is ease of use, composing, and being fast. Platforms are super important. That requires real architecture and partnership. What's your reaction? >> Yeah, so I think I'll tie it all together between AWS and ALIP, and here's the beauty of it.
So we have something called LaunchPad, where we're able to quickly stand up an ALIP instance for development capabilities because of our Amazon relationship. And then to Kym's point, we have been successful with 85% or more of all the work we've done with ALIP being configuration versus code, and actually I'd venture to say 90%. So that's extremely powerful when you think about the speed to market and our need to be product-innovative. So if our developers, and even our analysts that sit on the business side, can come in and quickly stand up a development environment, start to play with actuarial calculations and new product features and functions, and then spin that up to a higher-end development environment, you now have the perfect coupling of a new policy administration system that has flexibility and configuration with a cloud provider like Amazon and AWS that allows us to move quickly with environments. Whereas in days past, you'd have to have an architecture team come in and stand up the servers, and I'm going way back, but like buy the boxes, put the boxes in place, and wire them down. This combination of ALIP and AWS has really brought a new capability to Guardian, and we're really excited about it. >> I love that little comparison. Let me just quickly ask you: compared to the old way, give us an order of magnitude of the pain and timing involved versus what you just described, standing up something very quickly and getting value, and having people shift their intellectual capital into value activities versus undifferentiated heavy lifting. >> Yes, I'll give you real dates. So we really engaged with Accenture on the ALIP program right before Thanksgiving of last year. We had our environment stood up and running, all of our DEV, SIT, and UAT, by the February-March timeframe on AWS, and we are about to launch our first product configuration onto the ALIP platform this coming November.
So within a year, we've taken arguably decades of product innovation from our mainframes and built it onto the ALIP platform on the Amazon cloud. I don't know that you can do that in any other type of environment or partnership. >> That's amazing. That's just a great example to me of where cloud scale and real refactoring and business agility play out. So congratulations. I've got to ask you now, on the customer side you mentioned, you guys love providing value to the customers. What is the impact to the customer? Okay, now you're a customer, Guardian Life's customer. What's the impact to them? Can you share how you see that rendering itself in the marketplace? >> Yeah, so clearly AWS has rendered tons of value to the customer across the value stream, whether it be our new business capability, our underwriting capability, or our ability to process data and use their scale. I mean, it just goes on and on with AWS. But specifically around ALIP, the new API environment that we have, the connectivity that we can now make with the new back-end policy admin systems, has really brought us to a new level, whether it be repricing, product innovation, responding to claims capabilities, or responding to servicing capabilities that the customer might need. We're able to introduce more self-service. So if you think about it, from the back-end policy admin going forward to our client portal, we're able to expose more transactions to self-serve, so we minimize calls to the call center, minimize the frustration of hold times, and allow customers to come onto the portal and do more and interact more with their policies, because we're on this new, more modern cloud environment and a new, more modern policy admin. So we're delivering new capabilities to the customer from beginning to end, being on the cloud with ALIP. >> Okay, final question. What's next for Guardian Life's journey here with Accenture? What are your plans? What do you want to knock down for the next year? What's on your mind? What's next?
>> So that's an easy question. We've had this roadmap planned since we first started talking to Accenture, at least I've had it in my head. We want off all of our policy admin systems for new business come end of 2025. So we've got about four policy admin systems maintaining our different lines of business, our individual disability, our life insurance, and our annuities, four systems that are weighing us down a little bit. We have a glide path and a roadmap with Accenture as a partner to get off of all of these for new business capability by end of 2024, and I'm being gracious to my teams when I say that, I'd like to go a little bit sooner. And then we begin to migrate the most important blocks of business, the ones that cause the most angst and most concern with the executive leadership team, and then complete the product. But along the way, given regulation, given new customer needs, meeting the needs of the customer's changing life, we're going to have parallel tracks. So I envision we continue to have this flywheel of migration turning, but then we begin another flywheel right next to it that says we're going to innovate now on the new platform as well. So ultimately, John, next year, if I could have my entire whole life block as it stands today on the new admin platform, and one or two new product innovations on the platform as well by the 3rd quarter, 4th quarter of next year, that would be a success as far as I'm concerned. >> Awesome, you guys have it all planned out. I love it, and I have such a passion for how technology powers business. And this is such a great story for next gen, where the modernization trend is today and where it's going. So Nick, appreciate it. Kym, thanks for coming on with Accenture. Nick, just one more easy question for you since I've got you here. You guys are doing a lot of great work.
For other CIOs out there that are going through this right now, wherever they are on the spectrum, missed the cloud wave, getting in now, this notion of re-platforming and then refactoring the business is a playbook we're seeing emerge. People can get the benefits of going to the cloud, certainly for efficiency, but now it opens up the aperture for different kinds of business models, with more data access, with machine learning. This refactoring seems to be the new hot thing, where the best minds are saying, wow, we could do more, even more. What's your vision? What would you share with those folks out there, the CIOs? What should they be thinking? What's their approach? What advice would you give?
>> Yeah, so a lot of the mistakes we make as CIOs come from going for the white hot core first. We went the other way. We went for the newer digital assets. We went for the stuff that wasn't as concerning to the business, should we fall over, should there be an outage, should there be anything. So you avoid the white hot core and improve with your peripherals: easier moves to the cloud, portals, broker portals, beneficiary portals, simple AIX frames moving to the cloud and becoming cloud native, new builds. So we started with all those peripheral pieces of the architecture, and we avoided the white hot core, because that's where you start to get those very difficult conversations about, I don't know if I'm ready to move, and I don't see the obvious benefit of moving a dividend-generating policy admin system to the cloud, like why? When you prove it out and you put the other things out there and prove you can be successful, the conversation to move your white hot core out to leverage the cloud and new admin platforms becomes a much easier one, because you've cut your teeth on something much less detrimental to the business should it go awry.
>> What's the old expression? Put water through the pipes, get some reps in, get the team ready through training, whatever metaphor you use, that's what you're essentially saying there. Get your sea legs, get practice. >> Exactly. >> Then go for the hard stuff. >> It's such a valid point, John. We see a lot of different approaches across a lot of different companies, and the biggest challenge is that the core is the biggest part. And if you start with that, it can be the scariest part. I've seen companies trip up big time, and it becomes such a bubble of spend, which really knocks you back for years, and you lose confidence in your strategy and everything else. And you're only as strong as your weakest link. So whether you do the outside first or the inside first, until the journey is complete, you're never going to maximize. So it was a very different, new, and great approach that they took, doing a learning curve around the easiest stuff and then hitting the core. >> Yeah, well, that's a great point. One quick follow-up on that: talk about the impact on the personnel, Kym and Nick, because there's a morale issue going on too. I won't say training, but there's re-skilling, there's the rigor. If you're refactoring, you are re-skilling, you're doing new things. The impact on morale and confidence matters; you certainly don't want to be in the white hot core unconfident. >> Maybe I should go first, 'cause it's Nick's stuff, so he probably might want to say a lot, yeah. What we see with a lot of insurance companies is that they grow through acquisition. They're very large companies, grown over time, buying companies with businesses and systems and bringing it all in, and they usually bring tenured staff. So getting the staff to the next generation is extremely important, because that staff knows everything that you've got today but is not so aware of what's coming up in the future. And there is a transition, and people shouldn't feel threatened, but there is change, and people do need to adapt and evolve, and it should be fun and interesting. But it is a challenge at that turnover point, who's controlling what, and then you get the concerns and people get paranoid. So it is a true HR issue that you need to manage through.
>> Nick, you're the final word here. Go for it. >> Yeah, John, I'll give you a story that I think will sum the whole thing up about the excitement versus contention we see here at Guardian. I have a 50-year veteran on my legacy platform team, and this person is so excited, got themselves certified in Amazon, and is now leading the charge to bring our mainframes onto ALIP, one of the most essential people on the program. And I've actually had Accenture tell me, if I had a person like this on every one of my engagements, who is not only knowledgeable of the legacy but is so excited to move to the new, I don't think I'd have a failed implementation. So that's the kind of backing Guardian's putting behind this. We are absolutely focusing on re-skilling. We are not going to the market. We're giving everyone the opportunity, and we have an amazing take-up rate. And again, like I said, a 50-year veteran who probably could have retired 10 years ago is so excited, reeducated themselves, and is now a key part of this implementation. >> And who wouldn't want to drive a Ferrari when you see it come in, I mean, versus the trailer. Great story, Nick. Thank you for coming on, great insight. Kym, great stuff from Accenture, as always a great story here. We're at the heart of the real focus all companies are feeling right now: surviving and thriving, coming out of the pandemic with a growth strategy and a business model powered by technology. So thanks for sharing the story, appreciate it. >> Thanks John, appreciate it. >> Okay, it's theCUBE's coverage of AWS Executive Summit at re:Invent 2021. I'm John Furrier, your host of theCUBE. Thanks for watching. (bright music)
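The configuration-over-code idea that runs through the interview above, 85-90% of the work being configuration, so a new product variant is a data change rather than a code change, can be illustrated with a toy sketch. The product names, rates, and premium formula below are invented for illustration and have nothing to do with ALIP's actual model.

```python
# Product behavior lives in data an analyst can edit; the engine stays generic.
PRODUCT_CONFIG = {
    "term_life_20": {"base_rate": 0.45, "smoker_factor": 2.0},
    "whole_life":   {"base_rate": 1.10, "smoker_factor": 1.8},
}

def monthly_premium(product: str, coverage: float, smoker: bool) -> float:
    """Generic engine: behavior comes entirely from configuration."""
    cfg = PRODUCT_CONFIG[product]
    rate = cfg["base_rate"] * (cfg["smoker_factor"] if smoker else 1.0)
    return round(coverage / 1000 * rate, 2)

# Launching a new product variant is a config change, not a code change.
PRODUCT_CONFIG["term_life_10"] = {"base_rate": 0.30, "smoker_factor": 2.2}
print(monthly_premium("term_life_10", 250_000, smoker=False))  # -> 75.0
```

The payoff is the one Nick and Kym describe: when behavior is data, more people (analysts, even client-facing staff) can safely change it, and each change skips the build-and-deploy cycle that pure code requires.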

Published Date : Oct 27 2021


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
John | PERSON | 0.99+
Nick Volpe | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
Kim | PERSON | 0.99+
John Furrier | PERSON | 0.99+
Nick | PERSON | 0.99+
Kim Gully | PERSON | 0.99+
January 1, 2022 | DATE | 0.99+
Accenture | ORGANIZATION | 0.99+
2016 | DATE | 0.99+
one | QUANTITY | 0.99+
AWS | ORGANIZATION | 0.99+
next year | DATE | 0.99+
Guardian | ORGANIZATION | 0.99+
March of 2020 | DATE | 0.99+
90% | QUANTITY | 0.99+
three-year | QUANTITY | 0.99+
end of 2024 | DATE | 0.99+
50-year | QUANTITY | 0.99+
3rd quarter | DATE | 0.99+
2018 | DATE | 0.99+
85% | QUANTITY | 0.99+
Guardian Life | ORGANIZATION | 0.99+
Ferrari | ORGANIZATION | 0.99+
Each | QUANTITY | 0.99+
18 months | QUANTITY | 0.99+
17 | DATE | 0.99+
ALIP | ORGANIZATION | 0.98+
end of 2025 | DATE | 0.98+
one thing | QUANTITY | 0.98+
first | QUANTITY | 0.98+
both worlds | QUANTITY | 0.97+
today | DATE | 0.97+
One | QUANTITY | 0.97+
Thanksgiving | EVENT | 0.97+
two great guests | QUANTITY | 0.97+
first product | QUANTITY | 0.97+

Devon Reed, Dell Technologies | CUBE Conversation, September 2021



>>Hello, I'm John Furrier with theCUBE, here for a CUBE Conversation with Devon Reed, senior director of APEX offer product management at Dell Technologies. Devon, great to see you. Congratulations on APEX and the momentum and the big news. >>Yeah, thank you for having me here, John. It's a pleasure to be here with you, and I can't wait to talk to you about this stuff. >>So we chatted at great length about APEX at Dell Technologies World. First, give us an update: what's the new news, and how has it progressed since Dell Tech World? What's the big update on the product and the news you're launching today? >>Yeah, it's been a fantastic journey here, John. Since Dell Technologies World, we've learned a ton from our customers, and the reception has been extremely positive. We're seeing a ton of interest from our customers, we're building demand, and we're learning a lot. But if we boil it down to what we're really learning here, it's that customers are living in a cloud-first world. What that means is that customers want to move to the public cloud, because the public cloud brings simplicity, agility, and the ability to pay for only what they use, and they don't need to manage their infrastructure. However, what we're hearing from our customers as well is that they're a little hesitant to move all of their workloads to the public cloud, because there are certain performance requirements, latency requirements, and security requirements that are still best met by on-prem infrastructure. And that's the beauty of APEX: it brings the simplicity of the public cloud together with the security and the performance of the private cloud. >>I want to get your thoughts real quick before I move on to the news, because this comes up a lot in conversations.
In fact, I just had a conversation this morning, on camera and also off-camera, around virtualization of data, right? On premises, the bare-metal growth is there. So you're starting to see, from a performance standpoint... and security, we get that; there aren't many reasons to be on premises purely for security anymore. But performance, you brought that up. Talk more about that real quick, because I think it's getting more traction than people thought: there's a performance gain on-premise with some of the new tech. What's your reaction to that? >>Yeah, exactly. I think that's a great callout, John. Especially as you get into some of these new applications where the computation needs to be directly next to the data it's processing, latency and performance are extremely important. We hear that day in and day out from our customers. That's why it's really important to focus not only on public cloud environments but on-premise infrastructure, and that's where APEX really helps customers bridge that gap. >>And for the folks watching, there's a great interview: search Devon's name and look at last year's announcement. We covered APEX in detail, so some great content there; go check that out. I've got to ask about the news. You had some new announcements at VMworld earlier today. What can you tell us about the news? >>Yeah, we did, John. This is an amazing year for Dell at VMworld in general. There's a ton of announcements that have come out of the collaboration between VMware and Dell, but for APEX specifically, and that's what I'm here to talk to you about, we're introducing a new offer to the APEX portfolio. This offer we call APEX Cloud Services with VMware Cloud. What this really is, is a full infrastructure-as-a-service stack utilizing Dell's hyper-converged infrastructure.
So it's integrated storage, networking, and compute, and we combine that with the VMware virtualization stack and the services. It's a solution that's managed by Dell and designed for six nines of availability. And again, going back to what customers are asking for, it gives customers the performance and the security, and it also provides those consistent operations across their multi-cloud environments. >> What's the driver behind the customer requirements, then? Is there a specific use case that jumps off the page on the managed service? Could you share why the traction? >>Yeah, you know, this space is growing really rapidly, and it's a new space. As we talk to more and more customers, we learn there's a ton of different use cases, a ton of different deployments that are coming to the forefront. But if I really boil it down, there are a few that are rising to the top. First and foremost, we see a lot of deployments in VDI, and the driver behind that is that some of those environments are complex. What the customers are trying to do is offload those IT administrative tasks and have companies like Dell manage them, and that's what we're doing for them. Another one is really around latency and security, keeping applications from suffering high latency while still getting the security benefits. 
Now, what we've seen is a lot of interest from very large enterprises that actually want to build out and modernize their data centers. So they're either consolidating their data centers or trying to move to a fully automated hybrid cloud situation, and I'm talking very large deployments of VMware-based private cloud capabilities. And one other place where we're seeing a lot of interest in these sorts of capabilities is large, distributed, edge-style use cases.
So think of use cases where you have hundreds of remote office locations or thousands of retail locations that are very difficult for customers to manage, and we take that burden away from our customers. >> Thanks for laying out the customer scenarios and the use cases. Good stuff. I've got to ask you about the solution now; it appears that it was jointly developed with VMware. Is that right? And if so, can you tell me more about that? >>Yeah, exactly, John. This is amazing. The amount of collaboration that has gone into this solution with VMware is incredible, and really it's based on customer feedback. Based on that feedback, we saw a real need to take the best of VMware's software and services capabilities and Dell's world-leading infrastructure capabilities, and combine them with the simplicity and agility that APEX provides. So we've been working with VMware very tightly over the past year and more to develop this solution. It's been a great journey, and we've been spending a ton of time with the VMware team building this. Customers really love what VMware Cloud enables, and customers love APEX, so it's a really powerful combo, and we think it's really the next rocket ship for the combined companies here. >>I think the VDI piece and the use cases you mentioned only get more relevant, and more complex at the same time, with the whole shift in working environments: the work from home, the future of work, the blurring of the lines between the home and the corporate network. We thought it was hard before; it's going to get even more complicated. So the pressure's on to abstract away the complexity. Totally relevant. >>And the demand for these kinds of solutions, you know, the interest is doubling.
It seems like almost every six months there's a lot more interest, especially as we progress through this pandemic and the environment that we're living in. >> So I've got to ask you, going forward: great progress since the last time we chatted at Dell Technologies World. What's ahead for Dell and the VMware partnership? Tell us more. How does that look? What's the trajectory, and can you share any specifics? What can we expect? What's the headroom? What should customers expect? >>Yeah, you know, we get that question a lot, and really, although we are going to be separating as different entities, the collaboration and the level of joint development that we have between the two companies couldn't be stronger, and we do not expect that to change. We're just getting started on this thing, and there's a lot more to come for sure. >> What's the biggest thing that you're excited about? Obviously APEX has been on a good trajectory, the progress has been great, and the market's in your favor. What's exciting for you right now? Where do you see the action? Where's the fun for you in this? What's your take? >> You know, for me the fun always comes down to customers: understanding what the customers want, where the solution works and where it doesn't, and really working with our customers to understand their problems.
So that's where I get my energy in this whole thing, and to see the pipeline grow and the sales coming in, that's just really exciting for me as we embark on this new as-a-service world for the multicloud era. It's fantastic, John. >>You know, the one-click, buy-as-you-go, consumption-based model is the trend, along with infrastructure as code, which is a cloud ethos, and now on premises with security and performance. It seems like we're seeing a second wave of virtualization kick in on premises, where now you're in a cloud operating model across storage, compute, and networking: almost a reboot, a reset, an extension. It seems like a second life of innovation. What's your reaction to that? >>Yeah, I definitely agree with you, John. From a vision perspective, we're just starting out, and if we think about the power and the breadth of the portfolio that Dell has, it is unmatched in the industry. So first and foremost, there's a lot more from a solution perspective that we can bring to the floor, and I think that's really exciting; I like the position that we have there. In terms of collaboration with VMware, we're just getting started there too. I spend almost half of my day with VMware employees, which is an incredible amount of collaboration, and there's so much more that we've talked about in our roadmap to really build out this vision, when you start thinking about not just virtualization but these new operating environments, including Kubernetes and Tanzu capabilities.
And, you know, how do you hit different use cases with not only hyper-converged infrastructure but different types of infrastructure as well? Then you start to span not only the prem but the co-location facilities and the edge, and you bring this all together under the APEX console. And I like our future. >>Console-based provisioning, easy. Congratulations on the big news, APEX Cloud Services with VMware Cloud. For the folks watching who are going to come in and maybe adopt the solution, the managed service, what can they expect from Dell? >> What you can expect is a very simple experience. Everything starts and ends with what we call our APEX console. So from the time customers want to learn about our services, to getting quotes, to actually transacting the service, to operating the infrastructure, it's all from that console. And then we provide a full set of services under the covers, so a customer doesn't need to worry about the actual infrastructure management, and we provide customer success managers for every account. So we are there with you along every step of the journey to make this as seamless and easy as possible. It's a fantastic experience for our customers, and that's one of the things they really love about APEX: that kind of white-glove service we're providing. >>Devon, great to see you. Devon Reed, senior director of Dell APEX offer product management. Congratulations on the success of APEX Cloud Services with VMware Cloud, the big news here at VMworld with Dell Technologies. I'm John Furrier with a CUBE Conversation, breaking it down and bringing the news to you. Thanks for watching.

Published Date : Oct 5 2021

