Satish Iyer, Dell Technologies | SuperComputing 22


 

>>We're back at Supercomputing 22 in Dallas, winding down the final day here. A big show floor behind me, lots of excitement out there, wouldn't you say, Dave? >>Oh, it's crazy. I mean, any time you have NASA presentations going on, and steampunk iterations of cooling systems, you know, it's >>The greatest. I've been to hundreds of trade shows, and I don't think I've ever seen NASA exhibiting at one like they are here. Dave Nicholson is my co-host. I'm Paul Gillin. With us is Satish Iyer, vice president of emerging services at Dell Technologies. Satish, thanks for joining us on theCUBE. >>Thank you, Paul. >>What are emerging services? >>Emerging services are the growth areas for Dell: telecom, cloud, edge. We especially focus on all the growth vectors for the company. >>And one of the key areas that comes under your jurisdiction is called APEX. Now, I'm sure there are people who don't know what APEX is. Can you give us a quick definition? >>Absolutely. APEX is Dell's foray into cloud, and I manage the APEX services business. This is our way of bringing the cloud experience to our customers, on-prem and in colo. >>But it's not a cloud. I mean, you don't have a Dell cloud, right? It's infrastructure as >>A service. It's infrastructure, platform, and solutions as a service. We don't have our own equivalent of a public cloud, but this is a multi-cloud world, so customers want to consume where they want to consume. This is Dell's way of supporting a multi-cloud strategy for our customers. >>You mentioned something just ahead of us going on air, a great way to describe APEX: to contrast APEX with CapEx, there's no C, no cash up front necessary. I thought that was great. Explain that a little more.
Well, >>I mean, you know, one of the main things about cloud is the consumption model, right? Customers would like to pay for what they consume. They would like to pay on a subscription, and not prepay CapEx ahead of time. They want that economic option. So I think that's one of the key tenets for anything in cloud, and it's important for us to recognize that. APEX is basically a way by which customers pay for what they consume, right? So that's absolutely a key tenet of how we designed APEX. >>And among those services are high performance computing services. Now, I was not familiar with that as an offering in the APEX line. What constitutes a high performance computing APEX service? >>Yeah, I mean, this conference is great. Like you said, there are so many HPC and high performance computing folks here. But fundamentally, if you look at the high performance computing ecosystem, it is quite complex, right? And when you call something an APEX HPC offer, it brings a lot of the cloud economics and cloud experience to the HPC offer. So fundamentally, it's about the ability for customers to pay for what they consume. It's where Dell takes on a lot of the day-to-day management of the infrastructure, so that customers don't need to do the grunt work of managing it, and they can really focus on the actual workloads they run on the HPC ecosystem. So it is a high performance computing offer, but instead of them buying the infrastructure and running all of that by themselves, we make it super easy for customers to consume and manage it, across proven designs which Dell implements across these verticals. >>So what makes it a high performance computing offering, as opposed to a rack of PowerEdge servers? What do you add in to make it >>HPC?
Ah, that's a great question. So this is a platform, right? We are not just selling infrastructure by the drink. We launched two validated designs, one for life sciences and one for manufacturing. So we actually know how these pieces work together; they are validated, tested solution designs. And it's a platform, so we integrate the software on top. It's not just the infrastructure: we integrate a cluster manager, we integrate a job scheduler, we integrate a container orchestration layer. A lot of these things customers would have to do by themselves if they just bought the infrastructure. So basically we are giving a platform, an ecosystem, for our customers to run their workloads on, and making it easy for them to consume. >>Now, is this available on premises for customers? >>Yeah, we make it available both ways. We make it available on-prem for customers who want those economics, and we also make it available in a colo environment if customers want to extend colo as their on-prem environment. So we do both. >>What are the requirements for a customer before you roll that equipment in? How do they have to set the groundwork? >>Well, I think fundamentally it starts with what the actual use case is, right? If you look at the two validated designs we talked about, one for healthcare and life sciences and one for manufacturing, they do have fundamentally different requirements in terms of what you need from those infrastructure systems.
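[Editor's note] The platform point above — infrastructure alone versus an integrated stack with a cluster manager, job scheduler, and container orchestration — can be sketched as the set of layers a customer would otherwise have to assemble themselves. The component names below are illustrative assumptions, not Dell's actual bill of materials.

```python
# Illustrative sketch: the gap between raw infrastructure and an
# integrated HPC platform is the set of software layers the customer
# no longer has to assemble themselves.

RAW_INFRASTRUCTURE = {"compute nodes", "interconnect", "storage"}

VALIDATED_DESIGN = RAW_INFRASTRUCTURE | {
    "cluster manager",          # provisioning / node lifecycle
    "job scheduler",            # batch queueing for HPC workloads
    "container orchestration",  # containerized workload layer
}

def customer_must_integrate(platform):
    """Layers left for the customer to integrate, relative to a validated design."""
    return VALIDATED_DESIGN - platform

print(sorted(customer_must_integrate(RAW_INFRASTRUCTURE)))
# ['cluster manager', 'container orchestration', 'job scheduler']
print(sorted(customer_must_integrate(VALIDATED_DESIGN)))  # []
```

The set difference makes the "platform, not just infrastructure" argument concrete: with the integrated design, nothing is left on the customer's plate.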
So, you know, the customers initially figure out whether they require something with a lot of memory-intensive loads, or something with a lot of compute power. It all depends on what they require in terms of the workloads. And then we do have t-shirt sizing: we have small, medium, and large, we have multiple infrastructure options, CPU core options. Sometimes a customer would also say, you know what, alongside the regular CPUs, I also want some GPU power on top of that. So those are determinations a customer typically makes as part of the ecosystem, right? Those are things they would talk to us about, to say, what is my best option for the kinds of workloads I want to run? And then they can make a determination on how they would actually go. >>So this is probably a particularly interesting time to be looking at something like HPC via APEX, with this season of rolling thunder from various partners that you have. >>Yep. >>We're all expecting that Intel is going to be rolling out new CPU sets. You have your 16th generation of PowerEdge servers coming out, PCIe Gen 5, and all of the components from partners like Nvidia and Broadcom, et cetera, plugging into them. What does that look like from your perch, in terms of talking to customers who maybe are doing things traditionally and are likely not on 15G, not generation 15 servers, but probably more like 14? You're offering a pretty huge uplift. What do those conversations look >>Like? So, talking about partners, right?
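[Editor's note] The small/medium/large sizing choice described above can be sketched as a simple decision rule on workload requirements. The tier names and thresholds are invented for illustration and are not the actual APEX HPC catalog.

```python
# Hypothetical sketch of t-shirt sizing an HPC workload: map memory,
# CPU, and GPU needs to an illustrative configuration tier.

def pick_config(mem_gb_per_node, cpu_cores, needs_gpu):
    """Return an illustrative small/medium/large tier for an HPC workload."""
    if needs_gpu:
        return "large+gpu"          # GPU power layered on top of CPU nodes
    if mem_gb_per_node > 512 or cpu_cores > 96:
        return "large"              # memory- or compute-intensive loads
    if mem_gb_per_node > 128 or cpu_cores > 32:
        return "medium"
    return "small"

print(pick_config(64, 16, False))   # small
print(pick_config(256, 64, False))  # medium
print(pick_config(768, 128, True))  # large+gpu
```

In practice this determination is made in conversation with the vendor against the validated designs, but the shape of the decision — memory-bound vs. compute-bound vs. GPU-accelerated — is the same.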
I mean, of course Dell, you know, we don't bring any solutions to the market without really working with all of our partners, whether that's at the infrastructure level, like you talked about — Intel, AMD, Broadcom, all the chip vendors — all the way to the software layer: we have cluster managers, we have Kubernetes orchestrators. So usually what we do is bring the best in class, whether it's a software player or a hardware player, and bring it together as a solution. We give the customers a choice, and the customers always want to pick what they know actually works, right? And one of the main aspects, especially when you talk about bringing these things as a service: >>We take a lot of guesswork away from our customer. A good example in HPC is capacity. These are very intensive, very complex systems, right? So customers would like to buy a certain amount of capacity, grow, and, you know, come back down. Giving them the flexibility to consume more if they want, giving them the buffer, and coming back down — all of those things are very important as we design these offers. Customers are given a choice, but they don't need to worry, oh, what happens if I have a spike? There's already buffer capacity built in. So those are awesome things when we talk about things as a service. >>When customers are doing their ROI analysis, buying CapEx on-prem versus using APEX, is there a crossover point, typically, at which it's probably a better deal for them to go on-prem? >>Yeah. I mean, specifically talking about HPC, right?
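[Editor's note] The consumption model described above — a committed base, a pre-provisioned buffer for spikes, and pay-for-what-you-use beyond the commitment — can be sketched with toy numbers. The fee structure and rates below are invented for illustration, not APEX pricing.

```python
# Minimal sketch of metered capacity with a built-in spike buffer: the
# customer pays a committed base fee, and usage above the commitment
# (up to the pre-provisioned buffer) is billed per unit.

def monthly_charge(used_units, committed, buffer, base_fee, overage_rate):
    """Pay the committed base; overage within the buffer is billed per unit."""
    billable_overage = min(max(used_units - committed, 0.0), buffer)
    return base_fee + billable_overage * overage_rate

# Customer committed to 100 units with a 50-unit buffer:
print(monthly_charge(80, 100, 50, 1000.0, 12.0))   # 1000.0 — under commitment
print(monthly_charge(130, 100, 50, 1000.0, 12.0))  # 1360.0 — spike absorbed by buffer
```

The point of the buffer is visible in the second call: the spike is served without re-procurement, and only the consumed overage is billed.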
I mean, you know, we do have a lot of customers who consume high performance compute in the public cloud, and that's not going to go away, right? But there are certain reasons why they would look at on-prem or, for example, a colo environment. One of the main reasons has purely to do with cost. These are pretty expensive systems, and there is a lot of ingress and egress, a lot of data going back and forth. In the public cloud, it costs money to put data in and to pull data back out. The second one is data residency and security requirements. A lot of this is proprietary information — we talked about life sciences, where there's a lot of research. >>In manufacturing, a lot of it is just-in-time decision making: you are on a factory floor and you've got to be able to act, so there is a latency requirement. A lot of things play into this beyond just cost, but data residency requirements and ingress/egress are big ones. When you're talking about massive amounts of data you want to push in and pull back, customers would like to keep it close, keep it local, and get a good price point. >>Nevertheless, we were just talking to Ian Coley from AWS, and he was talking about how customers need to move workloads back and forth between the cloud and on-prem, something they're addressing with Outposts. You are very much in the on-prem world. Do you have, or will you have, facilities for customers to move workloads back and forth? >>Yeah, I wouldn't necessarily put it that way; Dell's cloud strategy is multi-cloud, right? So it kind of falls into three parts. Some workloads are always suited for public cloud; it's easier to consume, right?
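[Editor's note] The cost argument above — recurring per-GB data-transfer fees in the public cloud versus a flatter on-prem/colo bill — can be sketched as a toy monthly comparison. All numbers, including the egress rate, are invented for illustration; real pricing varies widely.

```python
# Toy break-even sketch: heavy, recurring data movement makes cloud
# egress fees grow with usage, while a colo/on-prem system is closer
# to a fixed monthly cost.

def cheaper_option(onprem_monthly, cloud_compute_monthly, egress_gb, egress_rate=0.09):
    """Compare a flat on-prem/colo cost against cloud compute plus transfer fees."""
    cloud = cloud_compute_monthly + egress_gb * egress_rate
    return "cloud" if cloud < onprem_monthly else "on-prem/colo"

# Light data movement: cloud wins; heavy movement flips the answer.
print(cheaper_option(50_000, 40_000, egress_gb=10_000))    # cloud
print(cheaper_option(50_000, 40_000, egress_gb=200_000))   # on-prem/colo
```

This is the crossover the speakers describe: the compute itself may be cheaper in the cloud, but at HPC-scale data volumes the transfer line item can dominate.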
There are, you know, customers who consume on-prem, and customers consuming in colo. And we also have Dell's amazing pieces of software, like our storage software: we make some of these available for customers to consume as software IP in the public cloud, right? So this is our multi-cloud strategy. We announced Project Alpine, for example; if you look at that, customers are basically saying, I love your Dell IP in this storage product, can you make it available in this public environment, whichever of the hyperscale players it is. So if we do all of that, it shows it's not always tied to our infrastructure, right? Customers want to consume the best of Dell, and if it needs to be consumed in hyperscale, we can make it available. >>Do you support containers? >>Yeah, we do support containers on HPC. We have two container orchestrators we support, including Singularity, so customers have both options. >>What kind of customers are you signing up for the HPC offerings? Are they university research centers, or does it tend to be smaller companies? >>You know, the last three days, this conference has been great; we probably had many, many customers talking to us, somewhere in the range of 40 to 50. A lot of interest from educational institutions, universities, and research, to your point; a lot of interest from manufacturing and factory floor automation, where customers want to do dynamic simulations on the factory floor. There is also quite a bit of interest from life sciences and pharma because, like I said, we have two designs, one for life sciences and one for manufacturing, both with different dynamics on the infrastructure.
So yeah, quite a few: definite interest from academics, from life sciences, from manufacturing. We also have a lot of financials, big banks who want to simulate a lot of brokerage and financial data, because we announced some really optimized hardware at Dell especially for financial services. So there's quite a bit of interest from financial services as well. >>That's great. We often think of Dell as the organization that eventually democratizes all things in IT. And in that context, you know, this is Supercomputing 22; HPC has been like the little sibling trailing behind the supercomputing trend, but we have definitely seen it move out of pure academia into the business world, and Dell is clearly a leader in that space. How has APEX overall been doing since you rolled out that strategy? It's been a couple of years now, hasn't it? >>Yeah, it's been less than two years. >>How are mainstream Dell customers embracing APEX, versus the traditional 18-month to three-year CapEx upgrade cycle? >>I mean, look, I think there is absolutely strong momentum for APEX. Like Paul pointed out earlier, we started with making the infrastructure and the platforms available for customers to consume as a service. We have options where Dell can fully manage everything end to end and take a lot of the pain points away — because managing a cloud-scale environment for customers is hard — and we also have options where a customer says, you know what, I actually have a pretty sophisticated IT organization; I want Dell to manage the infrastructure up to this layer, up to the guest operating system, and I'll take care of the rest, right?
So we are seeing customers come to us with various requirements, saying, I can do up to here, you take all of this pain away from me — or, you do everything for me. >>It all depends on the customer. So we do have wide interest, and I would say our products and portfolio in APEX are expanding. We are also learning: we are getting a lot of feedback from customers on what they would like to see in some of these offers, like the example we just talked about of making some of the software IP available in a public cloud, where they look at Dell as a software player. That is also absolutely critical. So I think we are giving customers a lot of choices; we are democratizing, like you said, and expanding the customer's choices. >>And I think — we're almost out of time, but I do want to be sure we get to Dell validated designs, which you've mentioned a couple of times. What's the purpose of these designs? How specific are they? >>They are — I mean, again, we look at these industries, and we have a huge installed base of customers utilizing HPC across the Dell ecosystem, a lot of them CapEx customers, so we have an active customer profile. These validated designs take into account a lot of customer feedback and partner feedback on how they utilize this. And when you build these solutions, which are end to end and integrated, you need to start anchoring on something, right? A lot of these workloads have different characteristics. So these validated designs basically give a very good jump-off point for customers; that's the way I look at it.
So a lot of them will come to the table with — they don't come with a blank sheet of paper. They say, these are the characteristics of what I want, and this is a great point for me to start from, right? And plus, it's the power of validation, really: we test, validate, and integrate, so they know it works. All of that is hypercritical when you talk to customers. >>And you mentioned healthcare and manufacturing; other designs? >>We just announced a validated design for financial services as well, I think a couple of days ago at the event. So yes, we are expanding all those Dell Validated Designs so that we can give our customers a choice. >>We're out of time. Satish Iyer, thank you so much for joining us. You really are at the center of the move to subscription — to everything as a service, everything on a subscription basis — and on the leading edge of where your industry is going. Thanks for joining us. >>Thank you, Paul. Thank you, Dave. >>Paul Gillin with Dave Nicholson here from Supercomputing 22 in Dallas, wrapping up the show this afternoon. Stay with us; there's more to come soon.

Published Date : Nov 17 2022


INSURANCE: Improve Underwriting with Better Insights


 

>>Good afternoon — or evening, depending on where you are — and welcome to this breakout session around insurance: improve underwriting with better insights. >>First and foremost, let's summarize very quickly who we're with and what we're talking about today. My name is Monique Hesseling, and I'm the managing director at Cloudera for the insurance vertical. Cloudera has a sizeable presence in insurance. We have been working with insurance companies for a long time now, over 10 years, which in terms of insurance is maybe not that long, but for technology it really is. And we're working with some of the largest companies in the world, across the continents of the world. However, we also do a significant amount of work with smaller insurance companies, especially around specialty exposures, and with the regionals and the mutuals, in property and casualty, general insurance, life, annuity, and health. So we have a vast experience of working with insurers, and we'd like to talk a little bit today about what we're seeing recently in the underwriting space and what we can do to support the insurance industry there. >>Recently, what we have been seeing — and it has actually accelerated as a result of the pandemic we all have been going through — is that insurers are putting even more emphasis on accounting for every individual customer's risks, whether a commercial client or a personal insurance customer, in a dynamic and bespoke way. Dynamic means that risks and risk assessments change very regularly, right? Companies go into different business situations, people behave differently. Risks are changing all the time, and they're changing per person; they're not changing generically. My risk at a certain point in time in travel, for example, might be very different from any of your risks, right?
So what technology has started to enable is underwriting and assessing those risks at very specific, individual levels. And you can see that insurers are investing in that capability: the value of artificial intelligence in underwriting is growing dramatically, as you see from some of the quotes here. Also, risks that were historically very difficult to assess — such as networks and global supply chains, or workers' compensation, which has a lot of moving parts all the time, and anything that deals with rapidly changing risks, exposures, people, and businesses — have been supported more and more by technology such as ours to help account for that. >>And this is a bit of a difficult slide, so bear with me for a second here. What this slide shows, specifically for underwriting, is how data-driven insights help manage underwriting. What you see on the left side of this slide is the progress insurers make in analytical capabilities. Quite often the first steps are around reporting, and that tends to be run from a data warehouse or operational data store, with star-schema data models. Reporting really is, quite often, a BI function — a business intelligence function: on a regular basis, it informs the company of what has taken place. Now, in the second phase, the middle color blue, the next step for insurers at this stage is to get into descriptive analytics. And what descriptive analytics really do is try to describe what we're learning in reporting. So we're seeing certain events and findings and numbers and trends happening in reporting, and in the descriptive phase we describe what this means and why it is happening. And then ultimately — and this is the holy grail, the end goal — we like to get to predictive analytics.
So we like to try to predict what is going to happen: which risk is a good one to underwrite, what next policy a customer might need or want, which claims — as we discuss in another session today — might become fraudulent, and which ones we can move straight through because there aren't supposed to be any issues with them, both on the underwriting and the claims side. That's what every insurer is shooting for right now, but most of them are not fully there yet. So on the right side of this slide, specifically for underwriting, we like to show what types of data generally are being used in underwriting use cases, in the different phases of analytics maturity that I just described.
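[Editor's note] The predictive end goal described above — deciding which claims can move "straight through" and which need review — would in practice be a trained statistical or machine-learning model; the crude rule set below is only a stand-in to make the idea concrete, with invented thresholds.

```python
# Toy stand-in for a predictive straight-through-processing score:
# claims that look routine flow through; unusual ones go to review.

def triage_claim(amount, prior_claims, days_since_policy_start):
    """Return 'straight-through' or 'review' for a claim, using toy rules."""
    suspicious = (
        amount > 10_000                  # unusually large claim
        or prior_claims >= 3             # frequent claimant
        or days_since_policy_start < 30  # claim very soon after binding
    )
    return "review" if suspicious else "straight-through"

print(triage_claim(900.0, 0, 400))     # straight-through
print(triage_claim(15_000.0, 1, 400))  # review
```

The business value is the same whether the scorer is rules or a model: routine claims are settled automatically, and scarce adjuster time goes to the flagged ones.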
You know, we see it a lot in claims first notice of loss, but we also see it for underwriting purposes that policies all done out at pretty much say sent me pictures of your five most valuable assets in your home and we'll price your home and all its contents for you. So we start seeing more and more movements towards those, as I mentioned earlier, dynamic and bespoke types of underwriting. >>So this is how Cloudera supports those initiatives. So on the left side, you see data coming into your insurance company. There are all sorts of different states, Dara. Some of them aren't managed and controlled by you. Some audits you get from third parties and we'll talk about Della medics in a little bit. It's one of the use cases, the move into the data life cycle, the data journey. So the data is coming into your organization. You collected, you store it, you make it ready for utilization. You plop it, eat it in an operational environment for processing what in an analytical environment for analysis. And then you close on the loop and adjusted from the beginning if necessary, no specifically for insurance, which is if not the most regulated industry in the world it's coming awfully close. And it will come in as a, as a very admirable second or third. >>Um, it's critically important that that data is controlled and managed in the correct way on all the different regulations that, that we are subject to. So we do that in the cloud era share data experiment experience, which is where we make sure that the data is accessed by the right people. And that we always can track who did watch to any point in time to that data. Um, and that's all part of the Cloudera data platform. Now that whole environment that we run on premise as well as in the cloud or in multiple clouds or in hybrid, most insurers run hybrid models, which are part of that data on premise and part of the data and use cases and workloads in the cloud. 
We support enterprise use cases around on the writing in risk selection, individualized pricing, digital submissions, quote processing, the whole quote, quote bound process, digitally fraud and compliance evaluations and network analysis around, um, service providers. So I want to walk you through some of the use cases that we've seen in action recently that showcases how this >>Work in real life. First one >>Is to seize that group plus Cloudera, um, uh, full disclosure is obviously for the people that know a Dutch health insurer. I did not pick the one because I happen to be Dutch is just happens to be a fantastic use case and what they were struggling with as many, many insurance companies is that they had a legacy infrastructure that made it very difficult to combine data sets and get a full view of the customer and its needs. Um, as any ensure customer demands and needs are rapidly changing competition is changing. So C-SAT decided that they needed to do something about it. And they built a data platform on Cloudera that helps them do a couple of things. It helps them support customers better or proactively. So they got really good in pinging customers on what potential steps they need to take to improve on their health in a preventative way. >>But also they sped up rapidly their, uh, approvals of medical procedures, et cetera. And so that was the original intent, right? It's like serve the customers better or retain the customers, make sure what they have the right access to the right services when they need us in a proactive way. As a side effect of this, um, data platform. They also got much better in, um, preventing and predicting fraud and abuse, which is, um, the topic of the other session we're running today. So it really was a good success and they're very happy with it. And they're actually starting to see a significant uptick in their customer service, KPIs >>And results. 
>>The other one I wanted to quickly mention is Octo. As most of you know, Octo is a very, very large telematics data provider, globally speaking, and has been working with Cloudera for quite some time. I want to showcase this one because it shows what we can do with massive amounts of data. For Octo, we analyze on Cloudera 5 million connected cars with 11 billion data points, and they create the algorithms and models insurers use to run insurance telematics programs: pay as you drive, pay when you drive, pay how you drive. This whole telematics part of insurance is growing very fast too. It is still in proof-of-concept, mini-project kinds of initiatives at many insurers, but what we're seeing is that companies are starting to offer more and more services around it. So they become preventative and predictive too. Now you get to the programs that tell me as a driver: "Monique, you've been in the car for two hours; maybe it's time to take a break. We see there's a Starbucks coming up on the right" — or any coffee shop that's part of a bigger chain — "and we know, because you have that app on your phone, that you are a Starbucks user, so if you stop there, we'll give you a 50% discount on your regular coffee." So we start seeing these types of programs coming through to, again, keep people safe and keep cars safe — but primarily, of course, the people in them. Those are the types of use cases we start seeing in the telematics space. >>This looks more complicated than it is, so bear with me for a second. This is a commercial example, because we see a lot of data work going on in commercial insurance; it's not only a personal insurance thing. Commercial is near and dear to my heart; it's where I started. I actually worked for a long time in global energy insurance.
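[Editor's note] The pay-as-you-drive / pay-how-you-drive programs described above can be sketched as a base premium scaled by usage and by simple behavioral signals derived from telematics data points. The weights and thresholds below are invented for illustration; they are not Octo's or any insurer's actual model.

```python
# Hypothetical usage-based premium: a usage component (miles driven)
# times a behavior multiplier (harsh braking, night driving).

def usage_based_premium(base, miles, harsh_brakes_per_100mi, night_share):
    """Pay-as-you-drive (miles) plus pay-how-you-drive (behavior) pricing."""
    pay_as_you_drive = base * min(miles / 1000.0, 1.5)  # usage component, capped
    behavior = 1.0
    if harsh_brakes_per_100mi > 5:
        behavior += 0.15          # frequent harsh braking surcharge
    if night_share > 0.3:
        behavior += 0.10          # mostly-night driving surcharge
    return round(pay_as_you_drive * behavior, 2)

print(usage_based_premium(100.0, 500, 2, 0.1))    # 50.0  — low mileage, smooth driving
print(usage_based_premium(100.0, 1200, 8, 0.4))   # 150.0 — heavy, harsh, night driving
```

The same per-trip data points that feed pricing also enable the preventative services mentioned above (break reminders, routing suggestions), which is why the programs tend to expand beyond pure premium calculation.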
So what this one really explains is how we can use sensors on people's outfits and people's clothes to manage risks and underwrite risks better. There are programs now for manufacturing companies and for oil and gas where the people that work in those places have sensors as part of their work outfits. And that does a couple of things. It helps in workers' comp underwriting and claims, because you can actually see where people are moving, what they are doing and how long they're working.

Some of these sensors even track some very basic health-related information, like blood pressure, heartbeat and temperature, stuff like that. So those are all good things. The other thing it does: it helps collect data on the specific risks and exposures. Again, we're getting more and more to individual underwriting, or individual risk underwriting, for insurance companies that insure these commercial enterprises. So they started giving discounts if the workers wear sensors. And ultimately, if there is an unfortunate event, like a big accident or a big loss, it helps first responders very quickly identify where those workers are and if, and how, they're moving, which is all very important to figure out who to help first in case something bad happens. Right? So these are the types of data that quite often get implemented in one specific use case and then get broadly moved to, or deployed into, other use cases: to help price risks better, keep risks better controlled and managed, and provide preventative care.

So these were some of the use cases that we run in the underwriting space that we are very excited to talk about. So as a next step, what we would like you to do is consider opportunities in your own companies to advance risk assessment specific to your individual customers' needs.
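To make the first-responder side of the wearable-sensor idea above concrete, here is a small illustrative sketch: given each worker's last-known sensor position on a site grid, find who is within reach of an incident, nearest first. The coordinate scheme, names and radius are hypothetical simplifications, not a description of any actual program:

```python
import math

def workers_near_incident(last_positions, incident, radius_m=100.0):
    """last_positions maps worker_id -> (x_m, y_m) on a site coordinate
    grid in meters. Return the ids of workers within radius_m of the
    incident point, nearest first, so responders know who to reach."""
    hits = []
    ix, iy = incident
    for worker_id, (x, y) in last_positions.items():
        dist = math.hypot(x - ix, y - iy)  # straight-line distance
        if dist <= radius_m:
            hits.append((dist, worker_id))
    return [worker_id for _, worker_id in sorted(hits)]
```

The same position stream that powers this safety lookup is what feeds the workers' comp underwriting and discount use cases mentioned above.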
And again, customers can be people, they can be enterprises, they can be any insurable entity, right? Please visit cloudera.com/solutions/insurance, where you will find all our documentation, assets and thought leadership around the topic. And if you ever want to chat about this, you know, please give me a call or schedule a meeting with us. I get very passionate about this topic; I'll gladly talk to you forever. If you happen to be based in the US and you ever need somebody to filibuster on insurance, please give me a call; I'll easily fill 24 hours on this one. So please schedule a call with me. I promise to keep it short. So thank you very much for joining this session. And as a last thing, I would like to remind all of you: read our blogs, read our tweets, read our thought leadership around insurance. And as we all know, insurance is sexy.

Published Date : Aug 5 2021



INSURANCE V1 | CLOUDERA


 

>>Good morning, good afternoon or good evening, depending on where you are, and welcome to this session: reduce claims fraud with data. We're very excited to have you all here. My name is Monique Hesseling and I'm Cloudera's managing director for the insurance vertical. First and foremost, we want to let you know that we know insurance. We have done it for a long time: collectively, and personally I've done it for over 30 years. And as proof of that, we want to let you know that we do data management work for the top global companies in the world and in North America, across property and casualty, general insurance, health, and life and annuities. But besides that, we also take care of the data needs of smaller insurance companies and specialty companies. So if you're not one of the huge global conglomerates in the world, you are still perfectly fine with us.

So why are we having this topic today? Really, digital claims and digital claims management is accelerating, and that's based on a couple of things. First and foremost, customers are asking for it: customers have gotten used to doing their work more digitally over the last year or two. And secondly, over the last year or almost two by now, with the changes that we made in our work processes and in society at large around COVID, both regulators and companies have enabled digital processing and the digital journey to a degree that they've never done before. Now, that had some really good impacts for claims handling. It meant that customers were more satisfied; they felt they had more control over the claims process and the claims experience. It also reduced, in a lot of cases, both in commercial lines as well as in personal lines, the time periods that it took to settle a claim. However, the more digital you go, the more access points it opens up for fraud and illicit activities.
So, unfortunately, we saw indicators of fraud and fraud attempts, you know, creeping up over that period. So we thought it was a good moment to look at some use cases and some approaches insurers can take to manage that even better than they already are.

And this is how we plan to do that, and this is how we see this in action. On the left side, you see the progression of data analytics and data utilization; in this case we're talking about claims fraud, but it's a generic picture. And really what it means is that most companies that start with data efforts pretty much start around data warehousing and analytics around BI and reporting, which pretty much is understanding what we know, right? Taking the data that we already have and utilizing it to understand better what we already know. Now, when we move to the middle blue color, we get into different types of analytics. We get into exploratory data science, we get to predictions, and we start getting into the space of describing what we can learn from what we know, but also start moving slowly into predicting. So first of all, learn and gather insights from what we already know, and then start augmenting that with other data sets and other findings, so that we can start predicting for the future what might happen.
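The step from the descriptive phase to the predictive phase described above can be illustrated with a deliberately tiny sketch: first describe historical fraud rates per claim category, then reuse them as a naive score for new claims. This is an illustration of the progression only (real predictive models use far richer features and trained algorithms), and every name and number here is made up:

```python
from collections import defaultdict

def fraud_rates_by_category(history):
    """Descriptive step: from historical claims given as
    (category, was_fraud) pairs, compute the observed fraud
    rate for each claim category."""
    totals = defaultdict(int)
    frauds = defaultdict(int)
    for category, was_fraud in history:
        totals[category] += 1
        if was_fraud:
            frauds[category] += 1
    return {c: frauds[c] / totals[c] for c in totals}

def score_claim(category, rates, prior=0.02):
    """Naive predictive step: score a new claim by its category's
    historical fraud rate, falling back to a portfolio-wide prior
    for categories we have never seen before."""
    return rates.get(category, prior)
```

The point of the sketch is the shape of the journey: reporting counts what happened, the descriptive step turns counts into rates and explanations, and the predictive step turns those learnings back onto incoming claims.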
So for reporting, we used a TPA data, policy verification, um, claims file staff data, that it tends to be heavily structured and already within the company itself. And when you go to the middle to the more descriptive basis, you start getting into unstructured data, you see a lot of instructor texts there, and we do a use case around that later. >>And this really enables us to better understand what the scenarios are that we're looking at and where the risks are around. In our example today, fraud, abuse and issues of resources. And then the more you go to the upper right corner, you see the outside of the baseball field, people refer to it, you see new unstructured data sources that are being used. You tend to see the more complex use cases. And we're looking at picture analysis, we're looking at voice analysis there. We're looking at geolocation. That's quite often the first one we look at. So this slide actually shows you the progress and the path in complexity and in utilization of data and analytical tool sets to manage data fraud, fraud, use cases, optimally. >>Now how we do that and how we look at at a Cloudera is actually not as complicated as, as this slight might want to, um, to, to give you an impression. So let's start at the left side at the left side, you see the enterprise data, which is data that you as an organization have, or that you have access to. It doesn't have to be internal data, but quite often it is now that data goes into a data journey, right? It gets collected first. It gets manipulated and engineered so that people can do something with it. It gets stored something, you know, people need to have access to it. And then they get into analytical capabilities who are inside gathering and utilization. Now, especially for insurance companies that all needs to be underpinned by a very, very strong security and governance, uh, environment. Because if not the most regulated industry in the world, insurance is awfully close. 
>>And if it's not the most regulated one, it's a close second. So it's critically important that insurers know, um, where the data is, who has access to it for Rodriguez, uh, what is being used for so terms like lineage, transparency are crucial, crucially important for insurance. And we manage that in the shared data experience. So it goes over the whole Cloudera platform and every application or tool or experience you use would include Dao. And on the right side, you see the use cases that tend to be deployed around claims and claims fraud, claims, fraud management. So over the last year or so, we've seen a lot of use cases around upcoding people get one treatment or one fix on a car, but it gets coded as a more expensive one. That's a fraud scenario, right? We see also the more classical fraud things and we see anti money laundering. So those are the types of use cases on the right side that we are supporting, um, on the platform, uh, around, um, claims fraud. >>And this is an example of how that actually looks like now, this is a one that it's actually a live one of, uh, a company that had, um, claims that dealt with health situations and being killers. So that obviously is relevant for health insurers, but you also see it in, um, in auto claims and counterclaims, right, you know, accidents. There are a lot of different claims scenarios that have health risks associated with it. And what we did in this one is we joined tables in a complex schema. So we have to look at the claimant, the physician, the hospital, all the providers that are involved procedures that are being deployed. Medically medicines has been utilized to uncover the full picture. Now that is a hard effort in itself, just for one claim and one scenario. But if you want to see if people are abusing, for example, painkillers in this scenario, you need to do that over every instant that is member. 
>>This claimant has, you know, with different doctors, with different hospitals, with different pharmacies or whatever that classically it's a very complicated and complex, um, the and costly data operation. So nowadays that tends to be done by graph databases, right? So you put fraud rings within a graph database and walk the graph. And if you look at it here in batch, you can see that in this case, that is a member that was shopping around for being killers and went through different systems and different providers to get, um, multiple of the same big LR stat. You know, obviously we don't know what he or she did with it, but that's not the intent of the system. And that was actually a fraud and abuse case. >>So I want to share some customer success stories and recent, uh, AML and fraud use cases. And we have a couple of them and I'm not going to go in an awful lot of detail, um, about them because we have some time to spend on one of them immediately after this. But one of them for example, is voice analytics, which is a really interesting one. And on the baseball slide that I showed you earlier, that would be a right upper corner one. And what happened there is that an insurance company utilized the, uh, the voice records they got from the customer service people to try to predict which one were potentially fraud list. And they did it in two ways. They look at actually the contents of what was being said. So they looked at certain words that were being used certain trigger words, but they also were looking at tone of voice pitch of voice, uh, speed of talking. >>So they try to see trends there and hear trends that would, um, that would bring them for a potential bad situation. Now good and bad news of this proof of concept was it's. We learned that it's very difficult just because every human is different to get an indicator for bad behavior out of the pitch or the tone or the voice, you know, or those types of nonverbal communication in voice. 
But we did learn that it was easier to, to predict if a specific conversation needed to be transferred to somebody else based on emotion. You know, obviously as we all understand life and health situations tend to come with emotions, or so people either got very sad or they got very angry or so the proof of concept didn't really get us to a firm understanding of potential driverless situation, but it did get us to a much better understanding of workflow around, um, claims escalation, um, in customer service to route people, to the right person, depending on what they need. >>And that specific time, another really interesting one was around social media, geo open source, all sorts of data that we put together. And we linked to the second one that I listed on slide here that was an on-prem deployment. And that was actually an analysis that regulators were asking for in a couple of countries, uh, for anti money laundering scams, because there were some plots out there that networks of criminals would all buy the low value policies, surrendered them a couple of years later. And in that way, God criminal money into the regular amount of monetary system whitewashed the money and this needed some very specific and very, very complex link analysis because there were fairly large networks of criminals that all needed to be tied together, um, with the actions, with the policies to figure out where potential pain points were. And that also obviously included ecosystems, such as lawyers, administrative offices, all the other things, no, but most, you know, exciting. 
>>I think that we see happening at the moment and we, we, you know, our partner, if analytics just went live with this with a large insurer, is that by looking at different types that insurers already have, um, unstructured data, um, um, their claims nodes, um, repour its claims, filings, um, statements, voice records, augmented with information that they have access to, but that's not their ours such as geo information obituary, social media Boyd on the cloud. And we can analyze claims much more effectively and efficiently for fraud and litigation and alpha before. And the first results over the last year or two showcasing a significant degree is significant degrees in claims expenses and, um, and an increase at the right moment of what a right amount in claims payments, which is obviously a good thing for insurers. Right? So having said all of that, I really would like to give Sri Ramaswami, the CEO of infinite Lytics, the opportunity to walk you through this use case and actually show you how this looks like in real life. So Sheree, here >>You go. So >>Insurers often ask us this question, can AI help insurance companies, lower loss expenses, litigation, and help manage reserves better? We all know that insurance industry is majority. Majority of it is unstructured data. Can AI analyze all of this historically and look for patterns and trends to help workflows and improve process efficiencies. This is exactly why we brought together industry experts at infill lyrics to create the industries where very first pre-trained and prebuilt insights engine called Charlie, Charlie basically summarizes all of the data structured and unstructured. And when I say unstructured, I go back to what money basically traded. 
You know, it is including documents, reports, third-party, um, it reports and investigation, uh, interviews, statements, claim notes included as well at any third party enrichment that we can legally get our hands on anything that helps the adjudicate, the claims better. That is all something that we can include as part of the analysis. And what Charlie does is takes all of this data and very neatly summarizes all of this. After the analysis into insights within our dashboard, our proprietary naturally language processing semantic models adds the explanation to our predictions and insights, which is the key element that makes all of our insights >>Actually. So >>Let's just get into, um, standing what these steps are and how Charlie can help, um, you know, with the insights from the historical patterns in this case. So when the claim comes in, it comes with a lot of unstructured data and documents that the, uh, the claims operations team have to utilize to adjudicate, to understand and adjudicate the claim in an efficient manner. You are looking at a lot of documents, correspondences reports, third party reports, and also statements that are recorded within the claim notes. What Charlie basically does is crunches all, all of this data removes the noise from that and brings together five key elements, locations, texts, sentiments, entities, and timelines in the next step. >>In the next step, we are basically utilizing Charlie's built-in proprietary, natural language processing models to semantically understand and interpret all of that information and bring together those key elements into curated insights. And the way we do that is by building knowledge, graphs, and ontologies and dictionaries that can help understand the domain language and convert them into insights and predictions that we can display on the dash. Cool. 
And if you look at what has been presented in the dashboard, these are KPIs and metrics that are very interesting for a management staff or even the operations. So the management team can basically look at the dashboard and start with the summarized data and start to then dig deeper into each of the problematic areas and look at patterns at that point. And these patterns that we learn from not only from what the system can provide, but also from the historic data can help understand and uncover some of these patterns in the newer claims that are coming in so important to learn from the historic learnings and apply those learnings in the new claims that are coming in. >>Let's just take a very quick example of what this is going to look like a claims manager. So here the claims manager discovers from the summarized information that there are some problems in the claims that basically have an attorney involved. They have not even gone into litigation and they still are, you know, I'm experiencing a very large, um, average amount of claim loss when they compare to the benchmark. So this is where the manager wants to dig deeper and understand the patterns behind it from the historic data. And this has to look at the wealth of information that is sitting in the unstructured data. So Charlie basically pulls together all these topics and summarizes these topics that are very specific to certain losses combined with entities and timelines and sentiments, and very quickly be able to show to the manager where the problematic areas are and what are those patterns leading to high, severe claims, whether it's litigation or whether it's just high, severe indemnity payments. >>And this is where the managers can adjust their workflows based on what we can predict using those patterns that we have learned and predict the new claims, the operations team can also leverage Charlie's deep level insights, claim level insights, uh, in the form of red flags, alerts and recommendations. 
They can also be trained using these recommendations and the operations team can mitigate the claims much more effectively and proactively using these kind of deep level insights that need to look at unstructured data. So at the, at the end, I would like to say that it is possible for us to achieve financial benefits, leveraging artificial intelligence platforms like Charlie and help the insurers learn from their historic data and being able to apply that to the new claims, to work, to adjust their workflows efficiently. >>Thank you very much for you. That was very enlightening as always. And it's great to see that actually, some of the technology that we all work so hard on together, uh, comes to fruition in, in cost savings and efficiencies and, and help insurers manage potential bad situations, such as claims fraud batter, right? So to close this session out as a next step, we would really urge you to a Sasha available data sources and advanced or predictive fraud prevention capabilities aligned with your digital initiatives to digital initiatives that we all embarked on over the last year are creating a lot of new data that we can use to learn more. So that's a great thing. If you need to learn more at one to learn more about Cloudera and our insurance work and our insurance efforts, um, you to call me, uh, I'm very excited to talk about this forever. So if you want to give me a call or find a place to meet when that's possible again, and schedule a meeting with us, and again, we love insurance. We'll gladly talk to anyone until they say in parts of the United States, the cows come home about it. And we're dad. I want to thank you all for attending this session and hanging in there with us for about half an hour. And I hope you have a wonderful rest of the day. >>Good afternoon, I'm wanting or evening depending on where you are and welcome to this breakout session around insurance, improve underwriting with better insights. 
>>So first and foremost, let's summarize very quickly who we're with and what we're talking about today. My name is Monique Hesseling, and I'm the managing director at Cloudera for the insurance vertical. We have a sizeable presence in insurance; we have been working with insurance companies for a long time now, over 10 years, which in terms of insurance is maybe not that long, but for technology it really is. And we're working with, as you can see, some of the largest companies in the world, across the continents of the world. However, we also do a significant amount of work with smaller insurance companies, especially around specialty exposures, and with the regionals and the mutuals in property and casualty, general insurance, life, annuity and health. So we have vast experience of working with insurers. And we'd like to talk a little bit today about what we're seeing recently in the underwriting space and what we can do to support the insurance industry there.
The value of, um, artificial intelligence and underwriting is growing dramatically. As you see from some of those quotes here and also risks that were historically very difficult to assess such as networks, uh, vendors, global supply chains, um, works workers' compensation that has a lot of moving parts to it all the time and anything that deals with rapidly changing risks, exposures and people, and businesses have been supported more and more by technology such as ours to help, uh, gone for that. >>And this is a bit of a difficult slide. So bear with me for a second here. What this slide shows specifically for underwriting is how data-driven insights help manage underwriting. And what you see on the left side of this slide is the progress in make in analytical capabilities. And quite often the first steps are around reporting and that tends to be run from a data warehouse, operational data store, Starsky, Matt, um, data, uh, models and reporting really is, uh, quite often as a BI function, of course, a business intelligence function. And it really, you know, at a regular basis informs the company of what has been taken place now in the second phase, the middle dark, the middle color blue. The next step that is shore stage is to get into descriptive analytics. And what descriptive analytics really do is they try to describe what we're learning in reporting. >>So we're seeing sorts and events and sorts and findings and sorts of numbers and certain trends happening in reporting. And in the descriptive phase, we describe what this means and you know why this is happening. And then ultimately, and this is the holy grill, the end goal we like to get through predictive analytics. So we like to try to predict what is going to happen, uh, which risk is a good one to underwrite, you know, watch next policy, a customer might need or wants water claims as we discuss it. 
And not a session today, uh, might become fraud or lists or a which one we can move straight through because they're not supposed to be any issues with it, both on the underwriting and the claims side. So that's where every insurer is shooting for right now. But most of them are not there yet. >>Totally. Right. So on the right side of this slide specifically for underwriting, we would, we like to show what types of data generally are being used in use cases around underwriting, in the different faces of maturity and analytics that I just described. So you will see that on the reporting side, in the beginning, we start with rates, information, quotes, information, submission information, bounding information. Um, then if you go to the descriptive phase, we start to add risk engineering information, risk reports, um, schedules of assets on the commercial side, because some are profiles, uh, as a descriptions, move into some sort of an unstructured data environment, um, notes, diaries, claims notes, underwriting notes, risk engineering notes, transcripts of customer service calls, and then totally to the other side of this baseball field looking slide, right? You will see the relatively new data sources that can add tremendous value. >>Um, but I'm not Whitely integrated yet. So I will walk through some use cases around these specifically. So think about sensors, wearables, you know, sensors on people's bodies, sensors, moving assets for transportation, drone images for underwriting. It's not necessary anymore to send, uh, an inspection person and inspector or risk, risk inspector or engineer to every building, you know, be insurers now, fly drones over it, to look at the roofs, et cetera, photos. You know, we see it a lot in claims first notice of loss, but we also see it for underwriting purposes that policies out there. Now that pretty much say sent me pictures of your five most valuable assets in your home and we'll price your home and all its contents for you. 
So we start seeing more and more movement towards those, as I mentioned earlier, dynamic and bespoke types of underwriting.

So this is how Cloudera supports those initiatives. On the left side, you see data coming into your insurance company. There are all sorts of different data: some of them are managed and controlled by you, some others you get from third parties, and we'll talk about telematics in a little bit. They move into the data life cycle, the data journey. So the data comes into your organization; you collect it, you store it, you make it ready for utilization. You put it either in an operational environment for processing or in an analytical environment for analysis. And then you close the loop and adjust from the beginning if necessary. Now, this all applies especially to insurance, which, if it is not the most regulated industry in the world, is coming awfully close, and will come in as a very admirable second or third.
So I want to walk you to some of the use cases that we've seen in action recently that showcases how this work in real life. >>First one >>Is to seize that group plus Cloudera, um, uh, full disclosure. This is obviously for the people that know a Dutch health insurer. I did not pick the one because I happen to be dodged is just happens to be a fantastic use case and what they were struggling with as many, many insurance companies is that they had a legacy infrastructure that made it very difficult to combine data sets and get a full view of the customer and its needs. Um, as any insurer, customer demands and needs are rapidly changing competition is changing. So C-SAT decided that they needed to do something about it. And they built a data platform on Cloudera that helps them do a couple of things. It helps them support customers better or proactively. So they got really good in pinging customers on what potential steps they need to take to improve on their health in a preventative way. >>But also they sped up rapidly their, uh, approvals of medical procedures, et cetera. And so that was the original intent, right? It's like serve the customers better or retain the customers, make sure what they have the right access to the right services when they need it in a proactive way. As a side effect of this, um, data platform. They also got much better in, um, preventing and predicting fraud and abuse, which is, um, the topic of the other session we're running today. So it really was a good success and they're very happy with it. And they're actually starting to see a significant uptick in their customer service, KPIs and results. The other one that I wanted to quickly mention is Octo. As most of you know, Optune is a very, very large telemedics provider, telematics data provider globally. It's been with Cloudera for quite some time. >>This one I want to showcase because it showcases what we can do with data in mass amounts. 
So for Octo, we analyze on Cloudera 5 million connected cars on an ongoing basis, with 11 billion data points. And really what they're doing is creating the algorithms and the models that insurers use to run telematics insurance programs: pay as you drive, pay when you drive, pay how you drive. And this whole telematics part of insurance is actually growing very fast too, though still in sort of proof-of-concept, mini-project kinds of initiatives. But what we're seeing is that companies are starting to offer more and more services around it, so they become preventative and predictive too. So now you get to the programs that ping me as a driver, saying, "Monique, you've been in the car for two hours. >>Now, maybe it's time you take a break. We see that there's a Starbucks coming up on the ride, or any coffee shop that's part of a bigger chain. We know, because you have that app on your phone, that you are a Starbucks user. So if you stop there, we'll give you a 50 cents discount on your regular coffee." So we start seeing these types of programs coming through to, again, keep people safe and keep cars safe, but primarily, of course, the people in them. Those are the types of use cases that we start seeing in that telematics space. >>This looks more complicated than it is, so bear with me for a second. This is a commercial example, because we see a lot of data work going on in commercial insurance; it's not really a personal insurance thing. Commercial is near and dear to my heart; that's where I started. I actually, for a long time, worked in global energy insurance. So what this one really explains is how we can use sensors on people's outfits and people's clothes to manage risks and underwrite risks better. So there are programs now for manufacturing companies and for oil and gas where the people that work in those places have sensors as part of their work outfits. And it does a couple of things.
It helps in workers' comp underwriting and claims, because you can actually see where people are moving, what they are doing, and how long they're working. >>Some of them even track some very basic health-related information like blood pressure, heartbeat, temperature, and things like that. So those are all good things. The other thing it does is help collect data on the specific risks and exposures. Again, we're getting more and more to individual underwriting, or individual risk underwriting. So insurance companies that insure these commercial enterprises started giving discounts if the workers wear sensors, and ultimately, if there is an unfortunate event like a big accident or big loss, it helps first responders very quickly identify where those workers are and if, and how, they're moving, which is all very important to figure out who to help first in case something bad happens. Right? So these are the types of data that quite often get implemented in one specific use case and then get broadly deployed into other use cases, to help price risks better, keep risks better controlled and managed, and provide preventative care. Right? >>So these were some of the use cases that we run in the underwriting space that we are very excited to talk about. As a next step, what we would like you to do is consider opportunities in your own companies to advance risk assessment specific to your individual customer's needs. And again, customers can be people, they can be enterprises, they can be any insurable entity, right? Please visit cloudera.com/solutions/insurance, where you will find all our documentation, assets, and thought leadership around the topic. And if you ever want to chat about this, please give me a call or schedule a meeting with us. I get very passionate about this topic; I'll gladly talk to you forever.
If you happen to be based in the US and you ever need somebody to filibuster on insurance, please give me a call; I'll easily fill 24 hours on this one. So please schedule a call with me, I promise to keep it short. So thank you very much for joining this session. And as a last thing, I would like to remind all of you: read our blogs, read our tweets, read our thought leadership around insurance. And as we all know, insurance is sexy.
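The pay-as-you-drive / pay-how-you-drive programs described in the Octo use case reduce, at their simplest, to turning a stream of trip events into a score, plus preventive rules like the two-hour break prompt. A hedged sketch — the event names, penalty weights, and threshold below are invented for illustration, not Octo's actual models:

```python
def driving_score(events):
    """Toy pay-how-you-drive score: start at 100 and deduct points per
    risky event. Penalty weights are invented for illustration."""
    penalties = {"harsh_brake": 3, "speeding": 5, "night_driving": 1}
    return max(100 - sum(penalties.get(e, 0) for e in events), 0)

def break_suggestion(continuous_minutes, threshold_minutes=120):
    """Preventive rule from the talk: after roughly two hours at the
    wheel, prompt the driver to take a break."""
    return continuous_minutes >= threshold_minutes

trip = ["speeding", "harsh_brake", "harsh_brake"]
print(driving_score(trip))    # 100 - 5 - 3 - 3 = 89
print(break_suggestion(135))  # True: time to suggest that coffee stop
```

In production these scores come from models trained over billions of data points; the point here is only the shape of the loop — events in, score and preventive nudges out.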

Published Date : Aug 4 2021


Bren Briggs, Hypergiant | CUBE Conversation, July 2021


 

(digital music) >> Welcome to this CUBE Conversation. I'm Lisa Martin. Bren Briggs joins me next, the Director of DevOps and Cybersecurity at Hypergiant. Bren, welcome to theCUBE. >> Hey there, I'm glad to be here. >> You have a very cool background, which I wish we had time to get into, your mandolin playing, but we don't. Tell me a little bit about Hypergiant; this is a company that's new to me. >> So we are an AI and Machine Learning company, and we have a slogan we talk about a lot, it's almost tongue in cheek, "Tomorrowing Today," where we want to build and focus on technology that advances the state of the art. And we have this deep history and background in services, where we build custom solutions for companies that have data problems and that have AI and machine learning problems. They come to us and we help them make sense of their data, and we build a custom software solution from top to bottom. We help them with their data problems and the really difficult problems that they have, in a very specialized way. And yeah, that's what we do. It's really fun. >> "Tomorrowing Today," I like that. Build T-shirts with that on them. (Bren chuckles) So talk to me about the work that you guys are doing with SUSE Rancher Government Labs. You're doing some very cool work with the air force, help me understand that. >> Sure, so about a year and some change ago, we had a government contract, an air force contract, to develop some new, or just to basically run an experiment with some new, sensing technology onboard a satellite. So we built this satellite, and we were talking about how we're going to employ DevOps best practices on the satellite, and if that's even a thing that can be done, how we get these rides to space, and really thinking through the entire process. And as we did this, we were getting more and more deeply involved with a very, very new group. Actually, we kind of started at the same time.
A new group within the air force called Platform One. Platform One's mission is to bring DevSecOps to the DoD enterprise. And so as we were kind of starting off together and getting to know each other, Rob Slaughter, who started and ran Platform One for the first bit of its existence, said, "Hey, we're going to incorporate some Platform One stuff into this. Let's talk about just building an actual Platform One satellite and see what that looks like." And so that was kind of the start of this whole idea: what do we do, and how do we do DevSecOps in low Earth orbit? Can we put Kubernetes on a satellite, and will it work? >>And tell me some of the results? So, I used to work for NASA, so I would geek out on anything that has to do with the space program. But talk to me about some of the things that you uncovered bringing Kubernetes, AI, and machine learning to this outer Edge of Earth. >>I think the first thing that we learned is that, and it's an understatement to say it, space is hard. (Both laughing) But it really is. And what we learned was that it was hard in all of the ways that we did not expect. A lot of it had to do with just government and logistics. We learned that a lot of times it is difficult just to find a way to get into space, and then once you're there, how you operate in the conditions that you're in and how you can even communicate with your satellite, it's just a logistical adventure on top of all of the other engineering problems that you have while you're in low Earth orbit. The other thing that we figured out was that updates are difficult. While you're on orbit, they can be slow or fragmented, so it pays to get it right the first time, but that's not the nature of modern software development: you never get it right, and you're continually updating. So that was a problem that really nagged us for a while: after we did the wider experiment, how would we continuously update this, and what would we do?
And those ideas and questions fed into the experiment that became Sat One, and then the follow-on, much bigger experiment that became Edge One and the Edge working group. >>Tell me a little bit about the wider experiment; give me some context of how that relates to Platform One and Sat One. >>I can't (laughing), I can't really go into details about what wider did or anything like that. It was not a classified mission, it's just not something that I can disclose. >>Okay, got it. >>Sorry. >>So talk to me about some of the work that you guys are doing together, Hypergiant with SUSE, in terms of pushing forward the next generation of Kubernetes to low Earth orbit and beyond. >>Sure, so SUSE RGS, specifically Chris Nuber. Like, one of the things that I have to do is be a cheerleader for all of the amazing people that were on this project. And two people in particular, Chris Tacke and Chris Nuber, were instrumental in making this work. I was almost tangentially involved, doing some input and architecture and helping debug, but it was really Chris Tacke and Chris Nuber that made this thing, that built this thing and made it work. And Chris Nuber was our assigned resource from SUSE RGS. And he said, "Obviously SUSE is going to prefer SUSE products," and that makes sense. But there's a reason: the products that he implemented, the patterns that he implemented, and the architecture and expertise that he brought were second to none. I don't think that we could have done better with any other distribution of Kubernetes. He recommended K3s, a very lightweight Kubernetes distribution that had really good opinions. It's a single binary. It was very easy to deploy and manage and update, and it just really didn't break. That was the best thing that we were looking for (chuckles): it was one solid piece with no moving parts, relatively speaking.
And so Chris Nuber was essential in providing the Kubernetes architecture, while Chris Tacke was the one who helped us write some of the demo applications and build the failover and out-of-band interaction that we were going to have from the hardware on the satellite to the Kubernetes control plane. >>Very cool. It sounds like you had a great collaborative team there, which is essential in any environment. >>We did. >>And I liked how you described space as a logistical adventure; that reminds me very much of my days at NASA. (Bren laughing) It definitely is a logistical adventure, to put it mildly. Talk to me a little bit about the work that you're doing to define the Edge for the Department of Defense. That sounds very intriguing. >>Yeah, so this was almost a direct result of what happened with the Sat One experiment, where Rob Slaughter and a few of the other folks saw what we did with Sat One, which was, again, a logistical adventure. We built this entire thing, we worked so hard, and we were moving through flight readiness checks, and, as things happen, funding kind of went. And so you've got all this experience and this prototype that we're really confident is space-ready, and they said, "Hey, listen, you know, we have the same problem with terrestrial environments; they're nearly identical, the only difference is you don't have to worry about radiation nearly as much." (laughing) So then, you know, we joked about that and we started this new idea, this Edge One idea, as part of the ABMS program, where they're figuring out this new battlefield communications pattern of the future. And one of the things that they're really concerned about is secure processing, and how you do applications where people are stationed, which could be anywhere, in very remote locations.
That's what turned into Edge One. You know, we initially imagined Edge One as Sat One without wings, Earth-bound, and that grew into, well, what about submarines? What about carriers? What about command and control squadrons that are stationed in cities? What about special operators that are far forward? What about first responders who are moving into hazardous environmental conditions? Can you wear a Kubernetes cluster with, like, super low-power ARM chips? And so we started thinking of all these different applications of what Edge could be, anywhere from a five-volt board all the way up to a data center in a box. And that caused us to realize that we're going to break Edge into really three categories, based on the amount of material or resources needed to power it and how hard it is to get to. So we have the Near Edge, where you have data-center-like capabilities and it's easy to get to, because you have people stationed with it, but you may only have reach-back once every month or so. So think a ship that's underway, or an air-gapped system, or something like that. Then you have the Tiny Edge, which is exactly the more traditional idea that you think of when you think of Edge: really, really tiny compute. Maybe it's on a windmill or something, I don't really know; pick your thing to put Kubernetes on that should never have Kubernetes, that's the kind of thing. And then you've got the Far Edge, which is: if the control plane crashes, good luck, you're never getting to it. And so that would be a satellite. So really, for a lot of these, it depends on the failure mode. What happens when it fails, for the most part, defines what category you're going to be in. >>Tiny Edge, Near Edge, and Far Edge. I think Sir Richard Branson and his team went to the Far Edge (chuckles), low Earth orbit. >>He did (laughing). >>This last weekend, I guess, yeah.
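The three-tier taxonomy just laid out — keyed on failure mode and available resources — can be captured as a one-screen classification rule. The boundary conditions below are illustrative assumptions, not an official DoD definition:

```python
def classify_edge(reachable_on_failure, datacenter_class_resources):
    """Bucket a deployment into the Tiny/Near/Far Edge categories from
    the interview. The exact boundaries are invented for illustration."""
    if not reachable_on_failure:
        return "Far Edge"    # e.g. a satellite: if it crashes, good luck
    if datacenter_class_resources:
        return "Near Edge"   # e.g. a ship underway or an air-gapped site
    return "Tiny Edge"       # e.g. low-power compute on a windmill

print(classify_edge(reachable_on_failure=False,
                    datacenter_class_resources=False))  # Far Edge
print(classify_edge(True, True))                        # Near Edge
print(classify_edge(True, False))                       # Tiny Edge
```

The design point the interview makes is that reachability on failure dominates the classification, which is why it is the first branch here.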
That low Earth orbit does seem like it would be the Far Edge. Talk to me a little bit about, I mean, you talk about these applications from a defense perspective that vary dramatically. What are some of the important lessons that you've learned, besides, if it breaks in the Far Edge, you're not getting to it? >>Some of the important lessons that we learned. So I actually did this exact job in the air force. I was a combat communicator, which meant that, and by pure coincidence I'm back in this, like, I did not intend for this to happen, it's pure coincidence, (Lisa laughing) but, you know, we did communications, we went out to the Edge, right? We went out to the Near Edge and we did all of this stuff. And the biggest lesson, I think, from doing that and then going into this, is that the world doesn't have to revolve around SharePoint anymore, (Lisa laughing) because we can shape our own habitation. (Both laughing) >>That is good to know. >>If it can be done in SharePoint, the air force and the army will do it in SharePoint, I promise you. They've done some actually terrifying things with it. All joking aside, though, I think one of the things that we learned was the difference between something being complex and being complicated, when it came to systems engineering and management. This is a very complex system; it's actually orders of magnitude more complex than the current deployments that are out there, which are effectively VMware, where you're migrating virtual machines across multiple physical nodes in these remote data centers. But those are also complicated: it's really difficult to manage those deployments and the hardware. And I remember, when I was in combat comm, we had this 72-hour goal to get all of our systems up. And it was kind of like a 50-50 if we would make it, it felt like, most of the time, where you had priorities for getting things up and running.
And obviously, you know, certain applications weren't as important as others, so they were the ones that had to fall by the wayside if you were going to make your 72-hour mark. But I'm just thinking about how difficult it was to deploy and manage all of this stuff, and now with Kubernetes, yes, the complexity is far higher, but we can make it so it's not as complicated. We can offload a lot of that brain sweat to the people in the rear echelon, where they can connect in remotely after you come up and you get reach-back; they push your config, and your mission profile is there. And now you're focused on the mission, not on debugging pods; you're focused on the mission and not on why my virtual machine didn't migrate or something like that. And we can get applications that are built in-house and updated continuously, and we can verify and validate the sources of where these things are coming from. And all of these are important problems to everybody, not just the military, but the military tends to have the money and the ability to think about these things first, 'cause that's where these problems tend to get solved first. >>So interesting. You've sort of had this circular experience, being in the air force, now coming back and working on projects like this. What are some of the things that Hypergiant has learned? And some of the things that are next for Hypergiant as a company? >>I think that we are getting really good at being a small contractor in the Federal space, where we actually were just awarded an IDIQ with a cap of $950 million, in a small group of, I think, 23 other companies. And so that shows right there the investment that the Federal Government has in us and the potential that they see for us to build and deliver these highly tailored and specialized solutions. The other thing that we've learned is how to form coalitions to collaborate with a lot of these other smaller companies.
I think that the days of seeing the Defense Industrial Base dominated by the same four or five people are over. And it's not that these people, I mean, they've basically been propping up most of the defense industry for a very long time, and I think a lot of people would argue that this is a problem, right, you have this near monopoly of a very few players. But the other thing is that they're not as nimble; they grow by acquisition. We have this ability to be highly tailored and specialized, and we don't need to do everything in the world to survive. We can go and form coalitions with other groups to go solve a particular problem. Like, we're great at AI and ML, and we're great at DevSecOps, but maybe we're not so great at, you know, hardware or things like that. We can go partner up with these people and solve problems together, and we don't have to be a Boeing to do it, and you don't have to go hire a Boeing to do this. And I think that's really, really great, no slight to Boeing, but I think it's really great that it's a lot easier for smaller companies to do this. We are navigating this new world and we're bringing Agile into the government, and yeah, in some cases we have to drag them kicking and screaming into this decade, but, you know, that's what we're doing and I'm very excited to see it, because when I was in, Agile and DevOps were words you didn't say; you weren't allowed to do that. >>No. >>Now they've done a complete 180, it's really cool. >>That's cool. I imagine that brings in thought diversity, having more companies to work with, but to your point, the agility that you bring in as a smaller company, helping them to actually embrace Agile, that's huge, because historically that's not what government organizations are used to. So it sounds like they've learned a tremendous amount from working with small companies like Hypergiant.
>>I like that though. Platform One is a fantastic example. It really started as one of what we're calling software factories within the air force, and within the DOD, other branches have now started to replicate the pattern. So we have several software factories within the air force, and Platform One is like the DevSecOps software factory, and we have SkiCAMP and Space CAMP and Kobayashi Maru, and you're noticing a theme here (laughing), so they're very nerdy names. So we have these software factories, and there's all these projects being worked. But one of the amazing things I noticed when I showed up to work on the first day was that I had no idea who was uniformed and who was civilian. It was a completely badge-off, rank-off situation. Very few people showed up in uniform, and the ones that did typically had their blouse off, so you had no idea what their rank was. Everybody went by first name and we behaved like a start-up. And these civilians were coming from other startups like Hypergiant or Timo or other very small, very specialized groups, and SUSE RGS, of course, they were there too, embedded in several different teams. And so you have this quasi-company, this startup really, that got formed, and the culture varies, you know, Bay Area startup type in some ways, for both better and worse. I mean, we're definitely full tilt on (laughs) on the Agile train there, but it's like nothing I've ever seen inside the DOD. And they're not just learning from these small companies and from Agile companies, they're behaving like them. And it's spreading: they're seeing what work is getting done, what can be accomplished, and how you can continuously deliver value, instead of working for six or eight months and then showing the customer something and them hating it and you sending it back. You know, it's more of a continuous improvement type thing.
And I think that they're embracing that and I'm very excited to see it. >>That's important, 'cause changing a culture is incredibly hard, but seeing and hearing that they're embracing it is exciting. And I'm sure there are many more things you could talk about generally, but I've got to ask you: if somebody like SUSE gave you $250,000 and you could buy one of the tickets on Branson's next flight, would you do it? >>I mean, yeah, why would I not? Like, how can I pass up a trip, (Lisa laughing) you know, to go to the Edge of space? >>The Far Edge. >>Yeah, the Far Edge. Maybe I'll just, you know, hurl the satellite out the window while we're up there at the peak; I probably couldn't throw it quite that fast, but we'll see. (Lisa laughing) But yeah, no, I think I would take the trip, yeah, that'd be fun. >>You're brave. Braver than I am, I don't know. Well, Bren, it's been delightful talking to you. Thank you for sharing what you guys at Hypergiant and SUSE have been doing together with the Department of Defense, the exciting things going on there, and for the new definitions in my lexicon of the Edge. It's been great talking to you. >>Thank you, have a great day. >>You too. For Bren Briggs, I'm Lisa Martin. You're watching a CUBE Conversation. (digital music)
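The out-of-band failover interaction mentioned earlier in the interview — hardware on the satellite stepping in when the Kubernetes control plane stops answering — follows a classic watchdog pattern. A purely illustrative sketch (none of this is the actual flight code, and the function names are invented):

```python
def watchdog(health_check, recover, max_failures=3):
    """Run one watchdog pass: if the control plane fails the health
    check max_failures times in a row, trigger out-of-band recovery."""
    failures = 0
    while failures < max_failures:
        if health_check():
            return "healthy"
        failures += 1
    recover()
    return "recovered"

# Simulated control plane that only answers after a recovery.
state = {"up": False}

def health_check():
    return state["up"]

def recover():
    state["up"] = True  # stand-in for e.g. power-cycling via a hardware line

print(watchdog(health_check, recover))  # prints "recovered"
print(watchdog(health_check, recover))  # prints "healthy"
```

On real hardware the recovery path would be a watchdog timer or a power-cycle line rather than a Python callback; the structure being illustrated is only that N consecutive failed health checks trigger an out-of-band reset.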

Published Date : Jul 19 2021


Walter Bentley and Jason Smith, Red Hat | AnsibleFest 2020


 

(upbeat music) >> Narrator: From around the globe, it's theCUBE, with digital coverage of Ansible Fest 2020, brought to you by Red Hat. >> Welcome back to theCUBE's coverage, theCUBE virtual's coverage, of Ansible Fest 2020 virtual. We're not face to face this year. I'm your host John Furrier with theCUBE. We're virtual, and we're doing our part, getting the remote interviews with all the best thought leaders, experts, and of course the Red Hat experts. We've got Walter Bentley, Senior Manager of the Automation practice with Red Hat, and Jason Smith, Vice President of North American Services, back on theCUBE. We were in Atlanta last year in person. Guys, thanks for coming on virtually. Good morning to you. Thanks for coming on. >> Good morning John. Good morning, good morning. >> So since Ansible Fest last year, a lot's happened. The world we're living in seems to be an unbelievable 2020; depending on who you talk to, it's been the craziest year of all time. Fires in California, a crazy presidential election, COVID, the whole nine yards, but the scale of Cloud has just moved unbelievably faster. I was commenting with some of your colleagues about the Snowflake IPO; it's built on Amazon, right? So value has changed, people are shifting, and you're starting to get clear visibility on what these modern apps look like: it's Cloud native, it's legacy integrations, it's beyond lift and shift, as we've been seeing in the business. So I'd love to get, Jason, we'll start with you, the key points you would like people to know about Ansible Fest 2020 this year, because there's a lot going on this year, there's a lot to build on, there's a tailwind for Cloud native, and customers have to move fast. What's your thoughts? >>Yeah so, a lot has happened since last year, and customers are looking to be a lot more selective around their automation technologies. So they're not just looking for another tool.
They're really looking for an automation platform, a platform that they can leverage more of an enterprise strategy and really be able to make sure that they have something that's secure, scalable, and they can use across the enterprise to be able to bring teams together and really drive value and productivity out of their automation platform. >> What's the key points in the customers and our audience around the conversations around the learning, that's the new stuff happening in using Ansible this year? What are the key top things, Jason? Can you comment on what you're seeing the big takeaway for our audience watching? >> Yeah, so a lots change like you said, since last year. We worked with a lot of customers around the world to implement Ansible and automation at scale. So we're using our automation journeys as we talked about last year and really helping customers lay out a more prescriptive approach on how they're going to deliver automation across their enterprise. So customers are really working with us because we're working with the largest customers in the world to implement their strategies. And when we work with new customers we can bring those learnings and that experience to them. So they're not having to learn that for the first time and figure it out on their own, but they're really able to learn and leverage the experience we have through hundreds of customers and at enterprise scale and can take the value that we can bring in and help them through those types of projects much more quickly than they could on their own. >> It's interesting. We were looking at the research numbers and look at the adoption of what Ansible's doing and you guys are with Red Hat it's pretty strong. Could you share on the services side because there's a lot of services going on here? Not just network services and software services, just traditional services. What are the one or two reasons why customer engaged with Red Hat services? What would that be? 
Yeah so, like I said, I mean, we bring that experience. So customers that typically might have to spend weeks troubleshooting and making decisions on how they're going to deliver their implementations, they can work with us and we can bring those best practices in and allow them to make those decisions and implement those best practices within hours instead of weeks, and really be able to accelerate their projects. Another thing is we're a services company as part of a product company. So we're not there just to deliver services. We're really focused on the success of the customer, leveraging our technologies. So we're there to really train and mentor them through the process so that they're really getting up to speed quickly. They're taking advantage of all of the expertise that we have to be able to build their own experience and expertise. So they can really take over once we're gone and be able to support and advance that technology on their own. So they're really looking to us to not only implement those technologies for them, but really with them, and be able to train and mentor them, like I said, and take advantage of those learnings. We also help them — we don't just focus on the technologies but really look at the people and process side of things. So we're bringing in a lot of principles from DevOps and Agile and open practices, and helping customers really transform and be able to do things in a new way, to be much more efficient, a lot more agile, be able to drive a lot more value out of the technology.
I mean, the marketplace — we were looking at the numbers. I was talking to IDC for you guys ahead of Ansible Fest, and they said about five to 10% of enterprises are containerized, which means there's this huge wave of containerization coming. This is about the automation adoption journey, because you start containerizing, (laughs) right? You start looking at the workflows and the pipelining and how the code's being released and everything. This is important stuff. Give us the update on the automation adoption journey and where it is in the portfolio. >> Well, yeah, just as you called it out, last year on main stage at Ansible Fest, almost every customer expressed the need and desire to have a strategy as to how they drive their adoption of automation inside their enterprise. And as we've gone over the past few months of putting this in place with many customers, what we've learned is that many customers have matured into a place where they are now looking at the end-to-end workflow. Instead of just looking at the tactical thing that they want to automate, they are actually looking at the full ribbon, the full workflow, and determining whether there are changes that need to be made and adjusted to be more efficient when it comes to dealing with automation. And then the other piece, as we alluded to already, is the contagious nature of that adoption. We're finding that there are organizations that are picking up the automation adoption journey, and because of the momentum it creates inside of that organization, we're finding other municipalities that are associated with them are now also looking to take on the journey because of that contagious nature. So we can see how it's spreading in a positive way. And we're really looking forward to being able to do more of it as the next quarter and the next year come up. >> Yeah, and that whole sharing thing is a big part of the content theme and the community thing.
So great reference on that — the good thing is word of mouth, and community and collaboration is a good call out there. A quick question for you: you guys recently had a big win with NTT DoCoMo and their engagement with you guys on the automation adoption journey. Walter, what were some of the key takeaways? Jason, you can chime in too, I'd like to get some specifics around where it's been successful. >> To me, that customer experience was one that was really exciting, primarily because we learned very early on that they were completely embodying that open source culture, and they were very excited to jump right in and even went about creating their own community of practice. We call them communities of practice; you may know them as centers of excellence. They wanted to create that in a very early increment, way before we were even ready to introduce it. And that's primarily because they saw how having that community of practice in place created an environment of inclusion across the organization. They had legacy tools in place already — actually, there was a homegrown legacy tool in place. And they very quickly realized that they didn't need to remove that tool; they just needed to figure out how to optimize and streamline how they leverage it, and also be able to integrate it into the Ansible automation platform. Another thing I wanted to very quickly note is that they very quickly jumped onto the idea of taking those large workflows that they had and breaking them up into smaller chunks. And as you already know from last year when we spoke about it, that's a pivotal part of what the automation adoption journey brings to an organization. So to sum it all up, they were all in — an automation-first mindset is what was driving them. And all of those personas, all of those personal and cultural behaviors, are what really helped drive that engagement to be very successful.
>> Jason, we'll get your thoughts on this because, again, Walter brought up last year's reference to breaking things up into modules. Look at this year's key news: it's all about collections. You're seeing content is a big focus — content being not like a blog post or a media asset; this is content, but code is content. It's sharing. If it's being consumed by other people, there's now community. You're seeing the theme of enabling. I mean, you're looking at successes, like you guys are having with NTT DoCoMo and others. Once people realize there's a better way, and success is contagious, as Walter was saying, you are now enabling new ways to do things faster at scale and all that good stuff — go check out the keynotes, you guys talk about it all day long with the execs. But I want to learn, right? So when you enable success, people want to be a part of it. And I could imagine there's a thirst and demand for training and the playbooks and all the business model innovations that are going on. What are you seeing for people that want to learn? Is there training? Are there certifications? Because once you get the magic formula, as Walter pointed out — and we all know once people see what success looks like, they're going to want to duplicate it. So as this wave comes, it's like having the new surfboard: I want to surf that wave. So what's the update on Ansible's training, the tools, how do I learn — certifications and all? Just take a minute to explain what's going on. >> Yeah, so it's been a crazy world, as we've talked about, over the last six, seven months here, and we've really had to adapt ourselves and our training and consulting offerings to be able to support our remote delivery models. So we very, very quickly, back in the March timeframe, were able to move our consultants to a remote workforce and really implement the tools and technologies to be able to still provide the same value to customers remotely as we have in person historically.
And so it's actually been really great. We've been able to make a really seamless transition, and actually our C-SAT and net promoter scores have gone up over the last six months or so. So I think we've done a great job being able to still offer the same consulting capabilities remotely as we have onsite. And that's obviously with a real personal touch, working hand in hand with our customers to deliver these solutions. But from a training perspective, we've actually had to do the same thing, because customers aren't onsite, they can't do in-person training. We've been able to move our training offerings to completely virtual. So we're continuing to train our customers on Ansible and our other technologies through a virtual modality. And we've also been able to take all of our certifications and now offer those remotely. So whereas customers historically would have had to go into a center and get those certifications in person, they can now do those certifications remotely. So all of our training offerings and consulting offerings are now available remotely, as well as they were in person in the past and will be hopefully soon enough, but it's really not-- >> You had to adapt to virtual. >> Excuse me. >> You had to adapt to the virtual model quickly for trainings. >> Exactly. >> What about the community role? What's the role of the community? You guys have a very strong community. Walter pointed out the sharing aspect. Well, I pointed out — he talked about the contagious nature, people are talking. You guys have a very robust community. What's the role of community in all of this? >> Yeah, so as Walter said, we have our communities of practice that we use internally, and we work with customers to build communities of practice, which are very much like centers of excellence, where people can really come together and share ideas and share best practices and be able to then leverage them more broadly.
So, whereas in the past knowledge was really kept in silos, we're really helping customers to build those communities and leverage those communities to share ideas and be able to leverage the best practices that are being adopted more broadly. >> That's awesome. Yeah, break down those silos, of course. Open up the data, good things will happen, a thousand flowers bloom, as we always say. Walter, I want to get your thoughts on collections and what that enables, back to learning and integrations. So if collections are going to be more pervasive and more commonplace, the ability to integrate — we were covering VMworld, there's a VMware module collection, I should say. What are customers doing when you integrate across technology partners? Because now obviously customers are going to have a lot of choice and options. If I'm an integration partner, it's all about Cloud native and the kinds of things we're talking about — you're going to have a lot of integration touch points. What's the most effective way for customers integrating other technology partners into Ansible? >> This is one of the major benefits that came out of the announcement last year of the Ansible automation platform. The Ansible automation platform really enables our customers to not just be able to do automation, but also be able to connect the dots, or be able to connect other tools, such as ITSM tools, or be able to connect into other parts of their workflows. And what we're finding, to break it down really quickly, is two things. Collections, obviously, is a huge aspect. And it's not just necessarily the collections, but the automation service catalog is really where the value is, because that's where we're placing all of these certified collections and certified content — certified by Red Hat now — that we create alongside these vendors, and they're available to customers who are consuming the automation platform.
And then the other component is the fact that we've now moved into a place where we have something called the automation hub, which is very similar to Galaxy, which is the online version of it. But the automation hub now is a focus area that's dedicated to a customer, where they can store their content and store those collections — not just the ones that they pull down that are certified by Red Hat, but the ones that they create themselves. And the availability of this tool, not just as a SaaS product, but now being able to have a local copy of it, which is a brand new, out of the press, out of the truck feature, is huge. That's something that customers have been asking for a very long time, and I'm very happy that we're finally able to supply it. >> Okay, so back up for a second, rewind — fell off the truck. What does that mean? It's downloadable. You're saying that the automation hub is available locally. Is that what-- >> Yes, sir. >> So what does that mean for the customer? What's the impact for them? >> So what that means is that previously, customers would have to connect into the internet. The automation hub was a SaaS product, meaning it was available via the internet. You can go there, you can sync up and pull down content. And some customers prefer to have it in house. They prefer to have it inside of their firewall, within their control, not accessible through the internet. And that's just their preference — sometimes it's for compliance or business risk reasons. And now, because of that, we were able to meet that ask and make a local version of it. So you can actually have automation hub locally in your environment; you can still sync up data that's out on the SaaS version of automation hub, but bring it down locally and have it available inside of your firewall, as well as add the content and collections that you create internally to it as well.
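For readers who want to try the workflow Walter describes, here is a minimal, hedged sketch of pointing the `ansible-galaxy` client at a private, on-prem Automation Hub with a fallback to public Galaxy. The hostname and token below are placeholders, not real endpoints.

```ini
# ansible.cfg -- a sketch of wiring the galaxy client to a private
# Automation Hub; "hub.example.internal" and the token are
# placeholders, not real values.
[galaxy]
server_list = my_hub, release_galaxy

[galaxy_server.my_hub]
# Certified and internally published collections resolve here first.
url = https://hub.example.internal/api/galaxy/
token = <your-hub-api-token>

[galaxy_server.release_galaxy]
# Fall back to the public community Galaxy.
url = https://galaxy.ansible.com/
```

With that in place, `ansible-galaxy collection install <namespace>.<collection>` resolves against the private hub before the public one, which is how certified content can stay inside the firewall.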
So it creates a centralized place for you to store all of your automation goodness. >> Jason, I know you've got a hard stop and I want to get to you on the IBM question. Have you guys started any joint service engagements with IBM? >> Yeah, so we've been delivering a lot of engagements jointly through IBM. We have a lot of joint customers, and they're really looking for us to bring the best of both Red Hat services, Red Hat products, and IBM all together to deliver joint solutions. We've actually also worked with IBM Global Technology Services to integrate Ansible into their service offerings. So they're now really leveraging the power of Ansible to drive lower cost and more innovation with our customers and our joint customers. >> I think that's going to be a nice lift for you guys. We'll get into the IBM machinery. I mean, you guys have a great offering, you've always had great reviews, a great community. I mean, IBM is just going to be moving this pretty quickly through the system, I can imagine. What's some of the feedback so far? >> Yeah, it's been great. I mean, we have so many large joint customers, and they're helping us to get to a lot of customers that we were never able to reach before, with their scale around the world. So it's been great to be able to leverage the IBM scale with the great products and services that Red Hat offers, to really be able to take that more broadly and continue to drive that across customers at an accelerated pace. >> Well, Jason, I know you've got to go. We're going to stay with Walter while you drop off, but I want to ask you one final question. For the folks watching, or asynchronously coming in and out of Ansible Fest 2020 this year, what is the big takeaway that you'd like to share? What is the most important thing people should pay attention to? Well, a couple of things — it doesn't have to be one thing, do the top three things. What should people be paying attention to this year?
And what's the most important stories that you should highlight? >> Yeah, I think there's a lot going on, this technology is moving very quickly. So I think there's a lot of great stories. I definitely take advantage of the customer use cases and hearing how other customers are leveraging Ansible for automation. And again really looking to not use it just as a tool, but really in an enterprise strategy that can really change their business and really drive cost down and increase revenues by leveraging the innovation that Ansible and automation provides. >> Jason, thank you for taking the time. Great insight. Really appreciate the commentary and hopefully we'll see you next year in person Walter. (all talking simultaneously) Walter, let's get back to you. I want to get into this use case and some of the customer feedback, love the stories. And we look, we'd love to get the new data, we'd love to hear about the new products, but again, success is contagious, you mentioned that I want to hear the use cases. So a lot of people have their ear to the ground, they look up the virtual environments, they're learning through new ways, they're looking for signals of success. So I got to ask you what are the things that you're hearing over and over again, as you guys are spinning up engagements? What are some of the patterns that are emerging that are becoming a trend in terms of what customers are consistently doing to overcome some of their challenges around automation? >> Okay, absolutely. So what we're finding is that over time that customers are raising the bar on us. And what I mean by that is that their expectations out of being able to take on tools now has completely changed and specifically when we're talking around automation. Our customers are now leading with the questions of trying to find out, well, how do we reduce our operational costs with this automation tool? Are we able to increase revenue? 
Are we able to really truly drive productivity and efficiency within our organization by leveraging it? And then they dovetail into, "Well, are we able to mitigate business risk, "even associated with leveraging this automation tool?" So as I mentioned, customers are up leveling what their expectations are out of the automation tools. And what I feel very confident about is that with the launch of the Ansible automation platform we're really able to be able to deliver and show our customers how they're able to get a return on their investment, how by taking part and looking at re-working their workflows how we're able to bring productivity, drive that efficiency. And by leveraging it to be able to mitigate risks you do get the benefits that they're looking for. And so that's something that I'm very happy that we were able to rise to the occasion and so far so good. >> Last year I was very motivated and very inspired by the Ansible vision and content product progress. Just the overall vibe was good, community of the product it's always been solid, but one of the things that's happening I want to get your commentary and reaction to this is that, and we've been riffing on this on theCube and inside the community is certainly automation, no brainer, machine learning automation, I mean, you can't go wrong. Who doesn't want automation? That's like saying, "I want to watch more football "and have good food and good wifi. I mean, it's good things, right? Automation is a good thing. So get that. But the business model issues you brought up ROI from the top of the ivory tower and these companies, certainly with COVID, we need to make money and have modern apps. And if you try to make that sound simple, right? X as a service, SaaS everything is a service. That's easy to say, "Hey, Walter, make everything as a service." "Got it, boss." Well, what the hell do you do? I mean, how do you make that happen? You got Amazon, you got Multicloud, you got legacy apps. 
You're talking about going in and re-architecting the application development process. So you need automation for the business model of everything as a service. What's your reaction to that? Because it's very complicated. It's doable. People are getting there, but the Nirvana is everything as a service. This is a huge conversation. I mean, it's really big, but what's your reaction when I bring that up? >> Right. And you're right, it is a huge undertaking. And you would think that with the arrival of COVID into our world, many organizations would probably shy away from making changes. Actually, they're doing the opposite. Like you mentioned, they're running towards automation and trying to figure out how they optimize and scale, based on this new demand that they're having, specifically new virtual demand. I'm happy you mentioned that, because we actually added something to the automation adoption journey to be able to combat, or be able to solve for, that change and take on that large ask of everything as a service, so to speak. In increment zero, at the very beginning of the automation adoption journey, we added something called Navigate. And what Navigate is, is a framework where we come in and not just evaluate what they want to automate and bring that into a new workflow, but we evaluate what they already have in place — what automation they have in place, as well as the manual tasks — and we go through and try to figure out how you take that very complex, large thing and streamline it down into something that can, first off, be offered as a service and made available for your organization to consume, as well as be able to mitigate the business risks and drive your business objectives forward.
And so that exercise that we're now stepping our customers through makes a huge difference and puts it all out in front of you, so that you can make decisions and decide which way you want to go, taking one step at a time. >> And you know, it's interesting — great insight, great comment. I think this is really where the dots are going to connect over the next few years. Everything is as a service. You've got to lay the foundation. But if you really want to get this done, I've got to ask you the question around Ansible's ability to integrate and implement with other products. So could you give an example of how Ansible has integrated and implemented with other Red Hat products or other technology vendors' products? >> Right. So one example that always pops to the top of my head — and I have to give a lot of credit to one of my managing architects, who was leading this effort — is the simple fact that, when you think about a mainframe, right? So now IBM is our new family member. When you think about mainframes, you think about IBM, and it just so happens that there's a huge ask and demand and push around being able to automate the z/OS mainframe. And IBM had already embarked on the path of determining, well, can this be done with Ansible? And as I mentioned before, my managing architect partnered up with the folks on IBM's side, so we're bringing in Red Hat consulting, and now we have IBM and we're working together to move that idea forward of saying, "Hey, you can automate things with the mainframe." So think about it. We're in 2020 now, in the midst of a new normal, and now we're thinking about and talking about automating mainframes. So that just shows how things have evolved in such a great way. And I think that that story is a very interesting one. >> It's so funny, the evolution. I'm old enough to remember — I came out of college in the 80s and I would look at the old mainframe guys who were like "You guys are going to be dinosaurs." They're still around.
I mean, some of the banking apps, I mean some of them are not multi-threaded and all the good stuff, but they are powering, they are managing a workload. But this is the beautiful thing about Cloud, and some of the Cloud activities: you can essentially integrate — you don't have to replace the old to bring in the new. This has been a common pattern. This is where containers, microservices, and Cloud have been a dream state, because you can essentially re-layer and glue it together. This is a big deal. What's your reaction to that? >> No, it's a huge deal. And the reality is that we need all of it. We need the legacy behaviors around infrastructure. So we need the mainframe still, because it has a distinct purpose. And like you mentioned, for a lot of our FSI customers, that is the core where a lot of their data and performance comes out of. And so it's definitely not a pull-out-and-replace. It's more of how they integrate, and how you can streamline them working together to create your end-to-end workflow. And as you mentioned, making it available to your organizations to consume as a service. So definitely a fan of being able to integrate and add to — everything has a purpose, is what we're coming to learn.
Take a minute to share your thoughts. >> Absolutely, absolutely. The key message is that, similar to the message we have when it comes to the other circumstances going on in the world right now, we're all in this together. As an Ansible community, we need to work together, come together, to be able to share what we're doing and break down those silos. So that's the overall theme. I believe we're doing that with what's new. So definitely pay attention to the new features that are coming out with the Ansible automation platform. I alluded to the on-prem automation hub — that's huge. Definitely pay attention to the new content that is being released in the service catalog. There's tons of new content that focuses on ITSM tools, so being able to integrate and leverage those tools in an easier model. There's a bunch of network automation advances that have been made, so definitely pay attention to that. And the last teaser — and I won't go into too much of it, 'cause I don't want to steal the thunder — there are some distinct integrations that are going to go on with OpenShift, around containers and the Ansible automation platform, that you definitely are going to want to pay attention to. If anyone is running OCP in their environment, they're definitely going to want to pay attention to this, 'cause it's going to be huge. >> Private cloud is back, OpenStack is back, OCP — you've got OpenShift, which has done really well. I mean, again, Cloud has been just a great enabler, bringing all this together for developers and certainly creating more glue, more abstractions, more automation — infrastructure as code is here. We're excited for it, Walter. Great insight, great conversation. Thank you for sharing. >> No, it's my pleasure. And thank you for having me. >> I'm John Furrier with theCUBE, your host for theCUBE Virtual's part of Ansible Fest virtual 2020 coverage. Thanks for watching. (gentle upbeat music)

Published Date : Oct 2 2020



Paul Savill, CenturyLink | AWS re:Invent 2019


 

>> Live from Las Vegas, it's theCUBE, covering AWS re:Invent 2019. Brought to you by Amazon Web Services and Intel, along with its ecosystem partners. >> Welcome back inside the Sands. Here's to continuing our coverage, live on theCUBE, of AWS re:Invent 2019. Absolutely jam-packed aisles, great educational sessions, and one of the featured presenters now joins us as well: Dave Vellante, John Walls, with Paul Savill, who's the SVP of core networking technology solutions at CenturyLink. Paul, good to see you again. >> Yeah, good to see you, John. >> So you just finished up. We'll get into that in just a little bit. First off, just give me your impression of what's going on here, and the energy and the vibe that you're getting. >> Yeah, I think it's fantastic. I mean, it's very high energy here, you know, there's a lot of new things that are emerging in terms of the applications that we're seeing, the use cases for the cloud. And of course, exciting stuff happening around edge compute with the announcement of AWS with the Outpost. >> Well, let me jump in. Everybody has a different idea, right? So I mean, how do you define the edge, at least? How do you see it? >> Yeah, it's a very simple definition of how we see the edge. It's putting compute very close to the point of interaction, and the interaction could be with humans, or the interaction could be with devices or other electronics that need to be controlled or that need to communicate. But the point is getting that compute as close as possible to it — from a performance standpoint, that's needed.
I predicted a couple of years ago they'd talk about multi-cloud. Guarantee it, because that's what customers are doing — so they respond to customers. At the same time, I like their edge strategy because it's all about developers, infrastructure as code on the edge. But you guys are about, you know, moving that data, or not necessarily bringing the compute to it. So how do you see the edge evolving? >> Yeah, so the reason this whole trend is happening is because of what's happening with the new technologies that are enabling a whole new set of applications out there. Things like what's going on with artificial intelligence and machine learning and virtual reality, the robotics control — those things are basically driving this need to place compute as close as possible to that point of interaction. The problem is that when you do that, costs go up. And that's the conundrum that we've kind of been in, because when compute gets housed at the customer premise — in a home, in a business, in an enterprise — then that's the most expensive real estate that there is, and you can't get the economies of scale that's there. The only other choice to date has been the public cloud, and that could be hundreds or thousands of miles away. And these new applications that require really tight control and interaction can't operate in that kind of environment. And yet it's too expensive to run those applications at the very edge, at the premise itself. So that's why this middle ground now — a place for compute nearby, where it can serve many locations and can be housed more cost-effectively. >> Okay, so you've got the speed of light problem, right? So you deal with that latency by making the compute proximate to the data, but it doesn't have to be like right next to it. Correct. But what are we talking, distance-wise?
Does it have to be synchronous distance, or... >>When we think of the distance, we think about it in terms of milliseconds of delay from the edge device, the thing that needs to interact with the compute, the application it needs to interact with. And we have not seen any applications, from the customers we've talked to, that really need anything tighter than five milliseconds of delay. So if we get into that range, placing compute within five milliseconds of the edge interaction, the device that it needs to interact with, that is enough to meet some of the tightest requirements that we've seen around robotics control, video analytics and others. >>It's like I could ship code to the data, but the problem is, if it needs to be real time, right, it's still too much latency, right? That's the problem that you're solving. That's right. Okay, >>so that's what you were talking about, why milliseconds matter. That's right. So give me some examples, if you will, then, about why five matters more than ten, or five matters more than eight, or twenty, or whatever, because we're talking about such an infinitesimal difference. But yet it does matter in some respects. It does, >>because, so I'll give you an example of robotics, for example, robotics control. You know, that is one of the things that requires the tightest latency requirement, but it depends upon the robotics itself. If it's a machining tool that's working on a lathe, then that doesn't require as tight a response time to the controller as, say, a scanning device that is real-time pushing things around very fast, doing an optical read on them to make the decision about where it pushes the device next. That type of interaction and control requires a much tighter latency performance, and that's why you start to see these ranges.
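As a rough illustration of why the five-millisecond budget quoted here translates into "nearby, but not necessarily on-premise" placement: signal propagation in optical fiber runs at roughly two-thirds the speed of light, about 200 km per millisecond. The sketch below is a back-of-the-envelope calculation, not anything from the interview; the one-millisecond processing overhead is an assumed placeholder.

```python
# Rough edge-placement distance budget from a round-trip latency target.
# Assumptions: light in fiber travels ~200 km/ms (about 2/3 of c in vacuum),
# plus a fixed overhead for switching/processing at each end (assumed 1 ms).

def max_one_way_km(rtt_budget_ms: float, overhead_ms: float = 1.0,
                   fiber_km_per_ms: float = 200.0) -> float:
    """Max one-way fiber distance that still meets the round-trip budget."""
    propagation_ms = rtt_budget_ms - overhead_ms  # time left for the fiber itself
    one_way_ms = propagation_ms / 2.0             # budget covers the round trip
    return one_way_ms * fiber_km_per_ms

print(max_one_way_km(5.0))  # 5 ms budget -> 400.0 km: a metro-area edge site works
print(max_one_way_km(1.0))  # 1 ms budget -> 0.0 km: compute must sit on premise
```

Under these assumptions, a 5 ms budget still allows a few hundred kilometers of fiber, which is why the guest's "middle ground" edge sites can serve many customer locations, while sub-millisecond requirements force compute onto the premise itself.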
But as I said, we're not seeing anything below that kind of five-millisecond type of range from the customers we talk to. >>The other thing that's changing, and help me understand this, is, okay, you're moving the compute closer to the data, which increases costs, and I want to understand how you're addressing that. Maybe one of the ways you're addressing it is you're bringing the cloud model, the operating model, to the data, so patches, security patches, maintenance, things like that are reduced. Is that how you're addressing costs? >>Yeah, that is part of it. And that's why the AWS Outposts is very interesting, because it is really a complete instance of AWS in a much smaller form factor that you can deploy very close to that point of interaction, close to the customer premise, and that enables customers to leverage pretty much the full power of AWS in engaging with those devices, coding to those devices, and dropping those applications close. >>Now you lose the multi-tenant aspect, is that right, at least initially? >>From our understanding of Outposts, it's a single-tenant device coming out of the gate, but ultimately it's gonna be a multi-tenant device. >>Yeah, okay, so near term it's easier to manage, but it's multi-instance, I guess, and over time maybe you could share that resource. >>The interesting thing is that even though it's a single-tenant device, there are still many great use cases, because even a single-tenant device sitting in one market could serve multiple enterprise locations. So it still has that kind of sense of scale, because you can serve, as long as it's one enterprise, many locations off of that one device. >>Okay, so you don't get the massive economies of scale, but you're opening up use cases that never existed before. >>That's right. >>But what about the data? What do you do with the data, when we're basically talking data at scale, and edge devices are creating that much more data?
All of a sudden speed becomes a little more challenging, taking in a lot more information, trying to process it in different ways, acting off of that. So all of a sudden you have a much more complex challenge, because it's not static, right? This is a very dynamic environment. >>That's right. Yeah, and there's a very big trend that's happening now, which is that data is being created at the edge, and it's staying at the edge, for a whole number of reasons. You know, in the old world you would pretty much collect data and ship it off to the centralized data center or to the public cloud to be housed there, and today that's where 80% of data resides. But there's a big shift happening where that data now needs to reside at the deep edge, because it needs to have that fast interaction with something it's working with, or because of government regulations that are now coming in with much stricter tolerances: you have to know exactly where your data is, it can't cross state lines, it can't, you know, get out of a certain security zone. Things like that are forcing companies now to keep that massive amount of data in a very known, localized position. >>You gotta act on it in real time. Yeah, some of it will go back to the cloud, but do you see folks persist the data at the edge, or not so much? Do people want to store it at the edge as well? >>Uh, people will store it at the edge where it's going to have a lot of interaction. So if you're running a chemical plant, you may not need to have access to a lot of data outside that chemical plant, but you're intensively analyzing that data in the chemical plant, and you don't want to ship it off someplace centrally, 1,000 miles away, to be accessed from there. It needs to be acted on locally, and that's why this movement toward edge compute is really building and becoming stronger. >>Talk about your tech. You know what?
What's the real value of what you do? You're obviously reducing latencies, you've gotta secure all this stuff, but... >>CenturyLink brings a number of tools to help in this whole space. So first of all, the network that we provide can tie it all together, from the enterprise location to the edge location where compute can be housed, all the way back to the public cloud core. We have a network that spans the entire U.S., fiber all over the place, and we can use those low-latency fiber optic connections to chain those areas together in the most optimal fashion, to get the kind of performance that you need to handle these distributed computing environments. We also bring compute technology itself. We have our own variety of edge compute, where we can build custom edge compute solutions for customers that meet their very specific spec requirements and can be dedicated to them. We can incorporate AWS compute technology as well, and we have IT services and skilled people, thousands of employees that are focused on this space, that build these solutions together for customers: tying together the public cloud resources, the edge compute resources, the network resources, the wireless connectivity capabilities needed on the customer premise, and the management solutions to tie it all together in that very mixed environment. >>We were just in a session with Teresa Carlson, who runs public sector for AWS, telling us in that session that Marty Walsh, the mayor of Boston, has got this big smart city initiative going on. I know that's one of the use cases you're working on. Maybe talk about that a little bit, and maybe some of the other interesting use cases. >>Yeah, that's right. Definitely, smart cities are a big use case, and we're actually actively working on a number of them.
I would say that those smart city use cases tend to move very slowly, because you're talking about municipalities and long decision-making cycles, I'll tell you that. We've seen... >>There's a 50-year plan he put forward. >>But the use cases where we're really seeing the most traction, interestingly, robotics is a really big one, and video analytics is another big one. So we're actually deploying edge use case solutions right now in those scenarios. The robotics one is a great one, because those robotic devices need to be controlled within a really tight millisecond tolerance, but the compute needs to be housed in a much more reliable, economical location. The video analytics piece is a really interesting one that we're seeing very, very big demand for, because retailers have now reached the point with the technology where they can do things like figure out, by doing video analytics, whether somebody is acting suspiciously in the store, and we're hearing that they think they can now cut thievery out of retail locations dramatically by using video analytics. And when you talk about big savings to the bottom line of a company, that makes a big difference to them. So those are two good use cases we're seeing that are real today. >>You know, one of the other things you were talking about earlier was the disappearance of the compute divide. Where does that go? >>I like to say that in the old days, if you've been around long enough, like I know you have, because I've been watching you on TV... >>We just got out of college. Does that make you feel better? We just got out of college. >>Everything was in the mainframe, right? Essentially, when you went to work, you had a terminal, and everything was housed centrally. Then we went to distributed, the client-server model, where everybody was working on desktops, a lot of the compute was on the desktops, and very little went back to a mainframe.
Then we made the shift to the cloud, where we pushed as much into the centralized location as we could, so we shifted way back to centralized. That's the compute divide I'm talking about, that big shift from decentralized to centralized to decentralized. Now we're actually moving to a new world where that pendulum swing, that compute divide, is disappearing, because compute isn't most economically housed in any one location, it's everywhere. It's gonna be at the IoT edge, it's gonna be at the premise, it's going to be in market locations where it's centralized, it's gonna be in the public cloud core. It's gonna be all around us. And that's what I mean by the disappearance of the compute divide. >>And, you know, I want to come back on that. You talk about a pendulum, and a lot of people talk about the pendulum swinging between mainframe and distributed. A lot of people say the pendulum is swinging back, but you just described it differently. It's a ubiquitous matrix now; compute is everywhere. >>That's where you hear the term fog computing, the idea of the fog. Now it's not the cloud that you can see off in the distance, it's just everywhere, right, it surrounds you, and that's how we can start to think about it. >>I first heard that, I don't know, eight years ago, and was like, what the heck is this? It was ahead of its time, but now it's really starting to show. This is sort of a new expansion of what we know as cloud. Are we redefining it? Yes, exactly. And edge, 5G, that's, you know, another big piece of it. You know, Amazon's obviously excited about that with Wavelength, right? What do you see for 5G? How can it affect this whole equation? >>Yeah, I think 5G is gonna have a number of edge applications, and it's primarily gonna be around the mobile space.
You know, the advantage of it is that it increases bandwidth and supports mobility, and it allows for a little bit higher resilience, because they can take part of the spectrum and make sure they're carving it out and dedicating it to the particular applications that are there. But I'll tell you, 5G gets a lot of attention in terms of being how edge compute is gonna roll out, and we're not seeing that at all. Edge compute is available today, and we're providing those edge compute solutions through our fiber optic networks. What we're seeing is that every enterprise we're talking to wants fiber into their enterprise location, because once you have fiber there, that's gonna be the most secure, reliable and scalable solution. Fiber can effectively scale as big as any customer could ever consume the bandwidth, and they know that once they get fiber into their location, they're good for the future, because they can totally scale with that. And that's how we're deploying edge solutions today. >>Paul, I know you've got a plane to catch, and you've got to go, but after that edge comment, we're gonna keep you for another hour. No, I think it's great, you're doing all right. All right, hang on, we're about to say goodbye to Paul now. Our AWS re:Invent 2019 coverage continues, right here on theCUBE.

Published Date : Dec 5 2019


Ken Ringdahl, Veeam | VeeamON 2019


 

>>Live from Miami Beach, Florida, it's theCUBE, covering VeeamON 2019, brought to you by Veeam. >>Welcome back to Miami, everybody. This is theCUBE, the leader in live tech coverage. I'm Dave Vellante with my co-host Peter Burris. We're wrapping up day two of VeeamON 2019, and we've been talking about cloud, hybrid cloud, data protection, backup evolving to more of an automated data management environment. Ken Ringdahl is here, and he is in charge of really building out the Veeam ecosystem. He's the vice president of global alliance architecture at Veeam. Ken, great to see you again, thanks for coming on. >>Yeah, thanks, Dave, appreciate it. >>So the ecosystem is evolving. You know, you're in a competitive marketplace, but one of the things that differentiates Veeam is, you know, you're a billion-dollar company, and people want to do business with your customers, so the ecosystem keeps growing and growing, and you guys have some, you know, blue-chip names at the top of your sponsor list. You do a good job, but you're not done yet. >>No, not at all, and I think, Dave, you know, it's really great to see how VeeamON has evolved, and, you know, our partner ecosystem. You know, you talked about us hitting a billion dollars; as Ratmir announced, we hit 350,000 customers. That customer number is a huge asset for us when we talk to our partners. You know, that is something they're all trying to tap into, right? Our customers are really passionate, and we have partners that come to us and say, hey, look, and these are bigger partners than us, and they're saying, hey, will you please work with us, we want to do deeper integration, because our customers are saying, we're Veeam customers, and, you know, mister partner, you have to go work with Veeam so that our solutions will work better together. So it's a great asset to us. >>Yeah, and it's evolved since, you know, certainly the first VeeamON, and I was at the very first one.
I think it was, we were talking, it was at the Aria, whatever it was, five years ago. So, you know, the ecosystem, I think Jason Buffington was quoting Archimedes today, you know, the lever, and that ecosystem is, you know, a huge opportunity for growth. Okay, so let's get into it. Well, first of all, I want to ask you, I found the title interesting: global alliance architecture. So we're not talking technical architecture necessarily, we're talking about the architecture of the ecosystem, or both? >>Yeah, so, you know, my role, my responsibilities, and what my team looks after is everything technical related to our partners. So Veeam, we're a hundred percent ISV, and, you know, Ratmir and Andrei, the co-founders and leaders of the company, that's something they take to heart, and it's something that's actually really valuable when we talk to our partners: we don't really overlap very much, especially with the infrastructure partners that we have. And so my job is to take the great products we have, make them work really well, and go deep with our partners, to create value with these partners. Sometimes they're product integrations, storage snapshot integrations; we announced the 'with Veeam' program two weeks ago, we were together at .NEXT with the rest of your team talking about Nutanix Mine with Veeam, which is a secondary storage integrated solution. So all of that is part of my role: solution architecture and product integrations through our partner ecosystem, which is very broad. It stretches from storage partners to platform partners to other ISVs like Oracle and SAP, even healthcare partners. >>Yeah, Peter, we were excited about the 'with Veeam' stuff. So who does what with 'with Veeam'? What's yours, and what's Veeam's? >>Yeah, so my team is responsible for the overall architecture of 'with Veeam'. It's really a joint collaboration within Veeam. So we have an R&D investment that's building the intellectual property that powers the system under the
covers. My team's responsible for the broader architecture, how we bring it together, how we bring it to market through the channel, and how we bring it to our customers, that whole experience. So my team is intimately involved in that. >>A lot of people talk about inflection points in the industry, and clearly we're in the middle of one. One way of describing it is that the first 50 years were known process, unknown technology. We knew we were gonna do accounting, we knew we were going to do HR, and there was mainframe, client-server, and a lot of other stuff, and the whole notion of backup and restore and data protection grew up out of the complexity in the infrastructure. As we move forward, it's interesting, because it's known technology: it's gonna be cloud, relatively known. But what's interesting is we don't know what the processes are gonna be. We don't know what we're gonna automate, we don't know how we're going to change the business. It's all going to be data-driven, which places an enormous burden on IT, and specifically on how they use data within the business. So I'm gonna ask you a question, and it's a long preamble, but the question is this: as we move forward, as data is used to differentiate a business, it suggests that there's going to be greater specialization in how data is used, which could and should lead to greater specialization in the role that Veeam and related technologies play within the business. And the question then is, is the 'with Veeam' approach a way to allow innovation to bloom, so that specialization can be accommodated and supported within the Veeam ecosystem? >>Yeah, so, Peter, good question, and I'll tell you that the short answer is yes. The longer answer is, it doesn't have to be 'with Veeam', but really our goal, and what we want, is to empower our partners, and
so really the goal of 'with Veeam' is, hey, we're already working across our partner ecosystem, and we work with the likes of NetApp and HPE and Pure and Nutanix, and all the platform providers, as well as the public clouds. Our goal is to make Veeam ubiquitous and drive better value to our customers and through our partners, right? We need partners no matter what. When we're working with a customer, there's always a workload we're protecting and we need a place to land our backups, so no matter what, we're always working with one or two partners in a deal, and sometimes it's multiple, because then you tier out to cloud storage and other places. You know, with 'with Veeam', what we're trying to do is really simplify that process for customers, and make that process, from the buying experience all the way through the delivery, the deployment, the management, and the ongoing day-1 and day-2 operations, all seamless, and give them higher value. Now, one thing we're looking to enable by adding APIs with Veeam is to leverage the strengths of the partners we have. And so I often end up in these discussions, because we have a broad partner ecosystem. We've already announced two 'with Veeam' solutions, we have a third that we did last year with Cisco that's in the market, that's sort of similar in nature, and we're gonna add more. And the question our partners even ask us is, you already got three of them, why are you gonna add another one, how am I going to differentiate? And the answer is, they differentiate with their own technology, and the idea is we have these open APIs so that they can build their own solutions that fit different markets and fit different use cases. Some are small-customer solutions, some are enterprise. But our goal is to enable them to be creative in how they build on top of Veeam, but have Veeam be a core part of
that solution. >>So it is a core part of the solution, applied to a specific customer? >>Absolutely. >>Okay, so the term seamless always, you know, triggers me in a way, because seamless is like open, right? It's evolved over time, and what was seamless, you know, ten years ago isn't really seamless in today's terms. So when you talk about seamless, we're talking about, if I understand it, deep engineering, right? Getting access to primitives through APIs and creating solutions that are differentiable as a function of your partner's core value proposition, and obviously integrating with Veeam, with 350,000 customers, so you're now in the ball game with Veeam customers. So talk about the importance of APIs and how that actually gets done. >>Yeah, and seamless to whom? To the partners, to the customer? >>Ultimately it's to the customer, but there's got to be an ease of integration as well with the partners, and I'd like to understand that better. >>Yeah, absolutely. So I'll give you an example of something we've done in the past that we're trying to model this 'with Veeam' program after. About a year and a half ago, as part of our 9.5 Update 3, we introduced what we call the Universal Storage API. We've talked about our version 10; there were five core features of version 10 when we announced it two years ago in New Orleans, the first time you were with us at a VeeamON, and one of those was the Universal Storage API. What that means is, we help our partners, and we help our customers ultimately by way of our partners, on the primary side, by integrating storage snapshots with VMware vSphere. So when we go to back up a VM, we take a snapshot of that VM, and with our storage snapshot integration we then take a storage snapshot of the volume that VM is on, and we can release that VMware snapshot very quickly, so it's very low-touch and low-impact on the environment. Well, we introduced this API so that we could scale. We had done our
own storage snapshot integrations with, you know, call it five or six storage vendors over the previous seven or eight years, right? In the last year and a half we've added seven, right? And that's the scale we're talking about, by allowing our partners to build the storage snapshot plug-in together with us. So we have a program, we invite them into that program, we collaborate on it, they develop the plug-in, we jointly test it, and we release it. And that's been very successful: as I said, in eight years, five or six storage snapshot vendors; in a year and a half we've done another seven or eight. So it's been very successful, and we have more in the queue, so we'll be talking about more of these as time goes on, in the very near future. With the 'with Veeam' program, we're looking to do something very similar. It's gonna be an invite-only program. Realistically, the secondary storage partner universe is probably 20; the logical universe for us is probably 10 to 12. So it's not going to be huge, but it's gonna be impactful for our partners. We'll invite them into the program, we'll have an agreement on working together, we'll jointly develop and test it, and we'll bring it to market together. At the end of the day, both our partner and Veeam have our names on it, and I'm sure you heard from Ratmir and Danny and others, we have our NPS score, which we really, really value; it's really high, best in the industry. And if we're putting our name on a solution in the market, we also want to make sure that we're working on it together and that it really goes through the rigor of what it takes to bring a Veeam solution to market. >>Actually, you know what, nobody's talked this week about the NPS score; maybe they have in the keynote and I might have missed it. >>Well, I was in the keynotes. >>What is it today? >>Well, yeah, so an NPS score basically measures, will a
customer reference you or recommend you, right? And so ours is 73. The general average in our space is about 28 to 30, so we're about two and a half times that score. >>You know, and as Frank Zubin said to me one time, it's easy to have a high NPS score if you're a one-product company, but you're not a one-product company. >>No, no, we've evolved substantially. I mean, we've added agents to cover physical workloads, we've added cloud support, we've added other applications, we've added Veeam Availability Orchestrator, we've added Veeam Backup for Office 365, we have VAC, which is the Availability Console for our service providers, which has Cloud Connect in it. It's a very broad portfolio. Everything comes back to Veeam Backup & Replication as the flagship foundation, but we have all these other products that now help our customers solve their problems. >>The reason we were so excited about this 'with Veeam' is this notion of cloud and hybrid cloud, and you talk about programmable infrastructure. You really have been pushing bringing the cloud experience to your data, you've been talking about that for a while, and part of that has to be infrastructure as code, and you can't really do that without open APIs and this sort of seamless integration. >>Well, the cloud is testing us as well. The cloud is really an architecture for how you're going to distribute work, as opposed to how you're going to centralize it. I think for a long time we got it wrong; it was all presumed it was all gonna go to the center, when in fact, when you get that level of standardization and common conventions, and the technologies are built to make it that much easier, it allows you to distribute the work a lot more effectively, to get the data closer to where the work's going to be done. And that has enormous implications for how we think about things. But it also means that when we talk about bringing the cloud to the data, the data has to be there, the data
services that make that data part of a broader fabric have to be there, and it all has to be assured, so that the system knows something about where the data is and what services can be applied to it in advance of actually moving the workloads. That suggests, ultimately, that the technology set Veeam is offering is going to evolve relatively rapidly. So the whole notion of 'with Veeam' today for secondary storage, I could see that becoming something that you guys take to new classes of data service providers pretty quickly. I don't want you to pre-announce anything, but what do you think? >>Yeah, Peter, I think you're really onto something, and when we look at the world, right, the infrastructure world we're in, and certainly some of our partners would draw a slightly different picture, we see Veeam as the common thread in the middle. Because at the end of the day, and I think you mentioned it as you were just talking, when we talk about hybrid cloud, we see now that for our customers, especially commercial and enterprise and large enterprise customers, it is a very heterogeneous environment. It's multiple hypervisors, different storage platforms, multiple cloud providers, because they're picking best of breed for the workload. And so they need a platform that's got real breadth and depth of coverage, and the one common thread we weave between there is Veeam. So if we are that data protection layer, as I mentioned before, we're in the middle: we're protecting a primary workload and we're writing our data to a secondary workload, but in the middle is Veeam. And so that workload we're protecting, on-prem, cloud, secondary data centers, Veeam is the thread in between. You can move that data around, and wherever that is, we can make use of it. I'll give you a good example. Today, let's say we're protecting a vSphere workload on-prem, right? We back that up to a system locally, so we
can have fast restore but ultimately we tear that out bean cloud tier capacity tear tear that's AWS so we can we can actually recover workloads in Atos one or two we have directory store which would take a backup from on-prem and directly move it there for DRAM migration purposes or we can simply consume that that backup that's now up in the cloud because Veen backups are self-describing we can lose the system on Prem and recover it so your point about making the data close to your workload with with veeam in the middle we enable that for our customers regardless of where they want to go yeah so we think that that's going to change the mindset from protection to assurance so assure your data is local and then it's the right data it's Integris and all the other things and then ultimately you know move it and back it up to some other site so it's but it's a subtle switch it's gonna be interesting to see how it plays out this is obviously well and as we talked about as you need to begin to protect things like containers like functions that come and go super quickly assurance has more meaning because there's the security threats and if you can help solve those problems through your partners through automation spinning containers up and down making it harder for the bad guys to you know a target a specific container raising essentially the cost so lowers their ROI that is a new game yeah and and I'll call out one thing a rat mayor I thought did a really good job on stage yesterday in his keynote he popped the slide which talked about the universal storage API and with theme and it had all our partners sort of around that you know that that I think he Illustrated our strategy which is hey we're focusing on the core parts of backup and replication and helping the core parts the data protection we're gonna partner with everything else that's adjacent to that we're not going to go solve maybe some of the security problems ourselves we're gonna enable some hooks secure 
restore maybe as an example we've announced you know in the technology keynote yesterday we announced a new API that allows partners to come in and crack open Veen backups and take a look at them one of the things could be deep inspection so you know our strategy and our goal is really to be open to our partners so that they can come in and add value and again our our goal for our customers is give them choice so give them choice of to choose best-of-breed solutions don't go do it and say hey you got to go use partner a you know hey we're gonna we're gonna have an API that others can build to and you go choose your best debris partner or your platform technology choice well and with 350,000 customers you've got a big observation space so guys have always been customer driven can give you the last word on vivant 2019 you're our last guest then we're gonna wrap with a little analysis on our end but give us the bumper sticker yeah I think the bumper sticker is hey you know we've you know from a business perspective you know we hit a billion dollars in bookings we have hit 350,000 customers the Innovation Train is really moving our Veen clouds here that we announced with update four earlier this year has gone way beyond our expectations and and we're looking to continue to build on that momentum so we're just super excited you know we if I'm the closer I'll say thanks to all of our sponsors we have a lot of great sponsors and on the cloud side on the on the Alliance partners side the channel side you know it's just it's it's a testament to where we are as a companies yeah and you're building out a great ecosystem congratulations on that and and good luck going forward and we'll see you around at the shows it's great it's great to have you guys right thank you all right you're welcome all right keep it right there everybody Peter and I went back to wrap right after this short break and watching the cube live from V Mon 2019 from Miami we'll be right back
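The "enable some hooks" idea described above, an API that lets a partner inspect the contents of a backup before it is allowed to restore, can be sketched in a few lines. This is a hypothetical illustration only: the `Backup` class, the `secure_restore` function, and the `scanner` callback are invented names for the sake of the sketch, not Veeam's actual Secure Restore or data integration API.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Backup:
    name: str
    files: Dict[str, bytes] = field(default_factory=dict)  # path -> content

def secure_restore(backup: Backup,
                   inspect: Callable[[str, bytes], bool]) -> List[str]:
    """Run every file past an inspection callback before restoring.

    `inspect` returns True for a clean file; a single flagged file aborts
    the whole restore. Returns the restored paths (a stand-in for real I/O).
    """
    flagged = [p for p, data in backup.files.items() if not inspect(p, data)]
    if flagged:
        raise ValueError(f"restore blocked, flagged files: {flagged}")
    return sorted(backup.files)

# Toy scanner: flag anything containing a known-bad byte signature.
def scanner(path: str, data: bytes) -> bool:
    return b"EVIL" not in data

clean = Backup("vm-42", {"/etc/app.conf": b"ok", "/bin/tool": b"binary"})
print(secure_restore(clean, scanner))      # ['/bin/tool', '/etc/app.conf']

infected = Backup("vm-43", {"/bin/tool": b"EVIL payload"})
try:
    secure_restore(infected, scanner)
except ValueError as err:
    print("blocked:", err)
```

The design point is the one made in the interview: the data protection layer stays generic, and the partner supplies the inspection logic through the hook.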

Published Date: May 22, 2019


Paul Young, Google Cloud Platform | SAP SAPPHIRE NOW 2018


 

>> Announcer: From Orlando, Florida, it's The Cube, covering SAP SAPPHIRE NOW 2018. Brought to you by NetApp. >> Welcome to The Cube. I'm Lisa Martin with Keith Townsend, and we are in Orlando, Florida, at SAP SAPPHIRE NOW 2018, in the NetApp booth. SAPPHIRE is an enormous event; this is something like the 25th year they've been doing it, and it's been really interesting, Keith, to learn about SAP and how they have really transformed. One of the things that's critical is their partner ecosystem, so we're excited to welcome back to The Cube a Cube alumni, Paul Young, who is the director of SAP go-to-market from Google Cloud Platform. Paul, it's nice to see you. >> Thanks. >> So what is the current news with Google and SAP? >> So, you know, I think we're making a major push into this market. Yesterday's announcements: we now have four terabytes available on a single server online, but we also brought capacity all the way up to 20 terabytes, so we really can handle pretty much all of the customer base at this point. So on the one hand that's good. There is, however, a lot of other stuff we're doing in the AI space and the joint engineering space with SAP, and a lot of work to make it a lot easier for SAP customers to adopt the cloud. And that's beyond just what's happening in the market right now, which is that 80 percent of the customers who move SAP systems into the cloud just do a straight lift and shift. There's no real momentum with that; it's just ticking the box: you're in the cloud. We're doing a ton of work in engineering, on our own and with SAP right now, to make that a much more valuable journey for the customers. So, yeah, I don't wake up in the morning at Google and think, what am I going to do today? There's a lot of stuff going on. >> So Paul, let's not be shy: we've had you on The Cube before, and you're an SAP alum. As you look out at the hyperscalers, the big cloud providers, SAP more or less has a reference architecture for how to do cloud, how to do SAP on a hyperscale cloud. But it's not just about that base capability. When I talk to my phone, I love asking Google questions; when I look at capabilities like AI and TensorFlow and machine learning, that gets me excited. Just in general, as you looked out at the hyperscalers, what excited you about Google specifically, coming from SAP? What's so exciting about Google? >> I joke internally: I was a customer of SAP's for seven years, I did 20 years at SAP, and then woke up one morning and decided to go to Google. I get this question a lot, and my answer always is, it wasn't based on the cafeteria food; there were other things that drew me across. Seriously, in my last role at SAP I was working with all three of the hyperscalers, and one of the questions I always got from SAP people is, well, they're all just the same, right? When you actually work with them, you discover they are different, and that's no disrespect to anyone, but they approach the world differently; they all have different business models. And the Google thing that really pulled me is that the kind of engineering and the future focus was just tremendous. The stuff Google could do was immense. So I'll jump forward to the future and then we'll come back: just look at the investment Google is making in AI and machine learning, all the stuff we saw at Google I/O with the custom-built tensor-processing hardware that delivers amazing performance. But it's got to be applied, right? So, here's something we partially built with Deloitte, and Deloitte has a demonstration of it, just to give an example of where we think the future is. We built a model in AI where we took invoices and taught the AI system to do data entry in SAP. That's not an interface; we didn't say, hey, here's an invoice and here's all the fields, and we map them all across, and here's ETL, and here's our interface mapping. We literally said: imagine you're an AP processor, how do you enter an invoice? You set it loose, and it spends a lot of time doing really stupid things, trying to put addresses in the number fields and so on, and then suddenly it works out how to enter an invoice. At that point it knows how to enter an invoice, and then what you do is give it more and more invoices, more and more different structures, and it learns what an invoice is and how to process it, and then suddenly it can do complete data entry. So we built that as a model; this is the sort of thing Google does just to test the limits. Deloitte came along and said, well, that's really cool, could we actually take it and run it as a product? So Deloitte now has that, and there is engineering further out where literally you can give it any invoice. It's not OCR: it will look at the invoice and work out that it is an invoice and where all the bits you need are in it; it will then work out how you would do data entry on that into an SAP system, and it will enter the invoice. That's a future world, and I know SAP's already launched their own AI doing three-way match. We're talking about a future world where your entire accounts payable department is a Gmail inbox: they mail you invoices that you've never seen before, but we're able to understand what a vendor is, guarantee it's a vendor, guarantee it's not fraud, check it, and do the data entry completely automatically. That is the massive new world, and it's just a tiny little bit of what we can do at Google. We also have a demo running in the booth where we have TensorFlow looking at pharmaceuticals, a version of something we're actually running at customers, where we have a camera reading pharmaceutical boxes as they go past (they're actually pink hair-curler boxes in this case). It doesn't just look at the box and say, I count one box; it reads the text on the box, and it knows from SAP what was supposed to be manufactured, and it comes back and says, well, am I putting double-strength pills in single-strength boxes? Is this lot legal? Have I been sent the correct box? Is the packaging correct? It also knows what a good box looks like, and it learns what a damaged box and what nice packaging look like, and it knows how to reject them. Again, that level of technology, where we can monitor all of your production lines and give you guaranteed quality in pharmaceuticals or anywhere else: tell me, six months ago, did anyone even imagine that was possible? We're doing that right now. And that ability to work with SAP, because it's all integrated with SAP, that ability to deliver that sort of capability at the speed we deliver it, is world-changing. >> Well, you know, one of the things that I just kept imagining as you went through the description of invoicing: I'm a small business owner, and these things are troublesome. You get an invoice, and I'm thinking, my wife does the accounts payable and accounts receivable, and there has to be a way to automate it. But then I thought about just those challenges: one person sends an invoice where the invoice number is in the bottom right-hand corner, the amount due, et cetera, et cetera. Really silly problems that AI and machine learning should be able to deal with. Bill McDermott yesterday on stage said that AI should augment human capability, and that's a great example of how AI augments. >> Right, and in the AP example it doesn't get it a hundred percent correct all the time; it knows when it's wrong. In the example I just gave, it comes up and says, the date's wrong here, I need you to fix it. So it's taken the menial work out of the process and it's letting people really add value. But it's also a great example of the cloud at work and what it's supposed to do. Again, if all you do is take your SAP system and drop it in the cloud, you're just running in a different place. If you get to a world where, with Google (and we don't expose your data to everybody else), we understand what the world's invoices look like, and we have that knowledge, and we make the entire world more efficient by having the model know how to work, that's a radically better place. There's just never been that value prop before, and it's a great big exciting thing to wake up in the morning and think, that's what we do. >> So, Lisa, in the industry we have this term that data has gravity. I think it's fairly safe to say this week that compute, processing technology, has gravity too. We had another guest on who said that a process and a technology in a solution works out fine for one customer and not the same for another. It's this complexity, this disparity of technology, that is just not easy to apply across companies. >> So the other part, really quickly, that I want to talk about: this isn't just about AI, it's not just about the future. I said I'm a long-term SAP customer and I work with a lot of customers; everybody wants to get to the cool bit. I always used to joke internally: everybody wants to eat candy, but you have to eat your vegetables first. And whether you eat the candy or the vegetables first, whichever way, you've got to eat both at some point. So look, just getting customers into the cloud becomes one of the challenges, and it's one of the other areas where we're really applying engineering. Three weeks ago we bought Velostrata, as an example. Velostrata is an amazing company. What it does, basically: it's a plug-in to VMware. You drop it into VMware and it watches your SAP systems running; it profiles them and works out what size capacity you're going to need in the cloud. At the point where it's got enough information, it'll basically ping you and say, hey, I now know the machine: do you want exactly the same performance at the lowest price in the cloud, or do you want better performance? Here's two configurations, pick the one you want. Give it your Google user ID and password, and it will build the security, build the application servers, and begin a migration for you automatically. Depending on the timing, the demand, and the size of the box, between 30 minutes and two hours later you will have a running version of your SAP system in the cloud. That's never been done before at that speed and performance. The way it works is a little bit of magic, but it knows the minimum amount of data we need to ship across: it knows where, on the disk, the data sits that SAP needs to run, and it ships that first, and then it fills in the gaps afterwards with a repair mechanism. So from there, on the one hand you could do lift and shift, and frankly our competitors have been using it to do lift and shift in the past, but it opens up a ton of potential. For a bunch of customers we can replicate their production boxes in real time and give them 30-second RPO/RTO and high availability. Beyond that, I can take that replicated image and run operations on it, run tests on it, do QA rebuilds. Because of the Google pricing model, you don't pay me in advance; you pay me in arrears, for only the compute time that you use. So if you have a QA system and you've got two days' worth of work to rebuild it, don't keep your QA system running; pay me for the two days of the rebuild and you're done. Or, we have integrated it directly into the SAP upgrade tools, so you can pipe your system across to us and we will immediately do a test upgrade for you into S/4HANA, or ECC on HANA, or BW on HANA, whatever you want. I have a customer in Canada who jumped from ECC 6 EHP5 to S/4HANA, using an earlier version of the tools, in 72 hours, with a lot of gaps to look at in between. We reckon we're going to crush that down to under 24 hours. So in under 24 hours, you can literally click on an SAP server and we will not just bring you to the cloud but upgrade you all the way to the latest version; we have all the components, we've done it, we're pushing that through. So what we're doing now is taking the hard work and automating it, so we can get to the really cool stuff on the AI side. Again, all of the hyperscalers host SAP systems; we want to do something that's better than that. We want to make it easy to get there, but we know that to justify the move you've got a roadmap beyond it, so we want to make it really easy to do, and incredibly easy to add in AI and all the other technologies along the way, at a pace and a pricing model that nobody will beat. And that's a pretty cool place to be. >> It sounds like a good place to be; I can tell by your energy. So, ease of use: everybody wants that. You talked about the example of invoices and how they can vary so dramatically, and whether you're a small business owner or a large enterprise, there's so much complexity. In fact, that was one of the things that was talked about this morning: Hasso Plattner was even talking about naming conventions and how customers were starting to get confused with all of the different acquisitions SAP has done. So, what Google is doing with AI on SAP sounds like a huge differentiator. Tell us, as we wrap up here, in a nutshell, what makes Google different from the other hyperscalers that SAP partners with, and specifically what excites you about going to market with SAP? >> At the base level, Google's just on a different scale from everybody. We effectively carry 25 percent of the internet. If you look at our own assets, we own dark fiber that's equivalent to about 4 percent of the entire, sorry, four times the entire capacity of the internet. So my ability to deliver to those customers at scale and at performance levels is just unchallenged in this space. Google clearly has excelled in a lot of different areas, and it's incredible to start to bring that to SAP and carry it through. But you're right that the value-add ultimately isn't just, hey, I can run you, and I can run you better. The value-add is this: in March we announced direct integration between HANA and Google BigQuery. When you're talking about BigQuery, that's massive datasets that you can now bridge to HANA. If you're a retailer, here's one last example: I can now join all the ad-tech data Google has, so I can tell you, for all the ads you currently run with Google, what's being viewed and what's being clicked on, anonymized in clusters so you can't tell the original consumers. I can bring that data directly into BigQuery and join it with SAP, so I can now say: your advertising in this area is being clicked on, but I know you don't have the inventory to actually support the advertising, so I want you to move the advertising somewhere else. I can do that manually, and when I add AI to it, the potential is incredible. We've only just started, so next time I'm on The Cube we'll see where we're at, but it's a fun place to be. >> Speaking of next time, you have a conference coming up: Google Next, at the end of July. >> Yeah, and we have a lot of announcements through probably the rest of the year; there's a lot of stuff going on as we come to massive scale in the SAP space. So anyone who's interested in this stuff, even if you're just interested in the AI stuff, Google Next is the place to be. >> Sounds like it. I'm expecting some big things from that, based on what you've talked about and how enthusiastic you are about being at Google. Paul, thanks so much for joining Keith and me back on The Cube, and we look forward to talking to you again. >> Thanks. >> Thank you for watching The Cube. Lisa Martin with Keith Townsend at SAP SAPPHIRE 2018. Thanks for watching.
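The BigQuery-to-HANA retail example above (move ad spend away from regions you cannot serve) boils down to a join and a filter. Below is a minimal sketch with invented toy data, region names, and a made-up stock threshold; a real deployment would run as a federated query between BigQuery (anonymized ad data) and HANA (inventory), not over in-memory dicts.

```python
# Anonymized click counts per region joined against on-hand stock from the
# ERP, flagging regions where ads are being clicked but there is little or
# no inventory to sell. All values here are illustrative.
clicks = {"northeast": 1200, "southwest": 800, "midwest": 950}
stock = {"northeast": 0, "southwest": 140, "midwest": 15}

def misallocated_regions(clicks, stock, min_stock=20):
    """Regions drawing ad clicks while stock sits below a minimum threshold."""
    return sorted(r for r, c in clicks.items()
                  if c > 0 and stock.get(r, 0) < min_stock)

print(misallocated_regions(clicks, stock))  # ['midwest', 'northeast']
```

The output is the list of regions whose advertising should be moved: they are drawing clicks, but the ERP says there is nothing on the shelf to fulfill the demand.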

Published Date: June 9, 2018


Leemon Baird, Hashgraph | Blockchain Unbound 2018


 

>> Announcer: Live from San Juan, Puerto Rico, it's The Cube! Covering Blockchain Unbound. Brought to you by BlockChain Industries. >> Hello and welcome to this special exclusive coverage, in Puerto Rico, for BlockChain Unbound. I'm John Furrier, the host of The Cube. We're here for two days of wall-to-wall coverage. Our next guest is from Hashgraph. He's Leemon Baird, who's CEO? >> CTO, and co-founder. >> CTO, okay that's great. OK, so you're about to go on stage. Hashgraph launched two days ago, there's a lot of buzz, and we talked to a couple of entrepreneurs in your ecosystem, early partners, doing some healthcare stuff. What is Hashgraph, why is it important, and why are you guys excited? >> Oh, yes. This is fantastic. Two days ago we were able to announce the existence of a public ledger, the Hedera Hashgraph Council. The Hedera Hashgraph ledger is going to be a public ledger with a cryptocurrency, a file system, and smart contracts in Solidity. All Solidity contracts run without change. It is built on a consensus algorithm called Hashgraph, and if you want to know what that is, in 12 minutes I'll be speaking on this stage about what it is. >> OK, so everyone here knows what hashing is, but what makes you guys different, if it's going to be that protocol? Is it the speed, is it the performance, reliability? What's the main differentiator for you guys? >> Yes, so it's security and speed and fairness, all at the same time. It's ABFT security, which is very strong. It's hundreds of thousands of transactions per second, with a few seconds of latency, even in just one shard. That's even before you add sharding to get even faster. And then it's fairness of ordering. Three things that are new, and it's because of the Hashgraph protocol, which is different from just hashing. >> Interviewer: Yeah. >> But it uses hashing. >> Yeah. So here's the question I have for you, what's on people's minds, whether they're an investor in a company that's in your ecosystem or not: how can you bet on a company that's only two days old? Why are you guys important? What's the answer to that question? >> The answer to that is, we are not two days old. (laughter) >> Two days launched. >> Two days launched, but first of all, the Hashgraph algorithm was invented in 2015. Swirlds, Incorporated, has been doing permissioned ledgers for a couple of years now, and we have great traction. We have a global presence with CULedger, the credit unions around the world. So we have real traction with the permissioned ledgers, and for years people have been saying, "Yes, but what we really want is a public ledger, could you please, please, please do that?" >> And what are some of the use-case data coming out of your trials before you launched? I mean, what were the key criteria on the product side? What were the key product requirements definitions that you guys focused on? >> So, speed and security, having them both at the same time; usually you have to choose between one and the other. The security we have is very high. It's ABFT, which means that double spends won't happen, and it's hard for someone to shut down the network. But you know what, even the credit unions, I think, were even more interested in the speed. The truth is, at a small number of transactions a second, there are things you can do, but at a large number, there are more things you can do. >> You know, there's a lot of activity on the value creation side, which is really phenomenal. Creating value, capturing value, that is the premise of this revolution. But let's put that aside for a second; the real action is on the decentralized application developer. These are the ones that are looking for a safe harbor, because they just want to build new kinds of apps and then have a reliable set of infrastructure, kind of like how cloud computing had the DevOps movement. That's what's going on in this world. What's your answer to that? What's your pitch to those folks, saying, "Hey developers, Hashgraph is for you"? >> Yes, and by the way, this is not just for new developers. We've got 20,000, I think, now on our Telegram channel. We have had an amazing response from our developer community, and we have a whole team that is working with them to develop really interesting things; we have demonstrations and so on. So my pitch to them is, thank you. And in addition, since we can run Solidity out of the box, all of those Solidity developers have already been developing on us for years without knowing it. Thank you; and for others, there's no limit to what you can do when you have speed and security at the same time. >> So, Solidity, talk about the dynamics of this language. Why is it important? And for someone who might be new to that approach, what's your story? What do you say to them? "Hey, it's great, jump right in"? Is there a community they can come to? Do you have a great community? What's the story for that new developer? >> Yes, so I would tell the new developer, "You know, we'll probably have a new language someday, but right now we're sticking with the standard. We're starting by supporting the standard language." On these ledgers, there are smart contracts, which are programs that run on top of them in a distributed way. You have to write them in some programming language, and Solidity is the most common one right now. >> Is the smart contract the killer app, in terms of demand, what people are looking for? Or is it just the ledger piece of it? What's the main threshold point at this juncture? >> We see cryptocurrency as a killer app in many industries.
Smart contracts are the killer app in other industries. File storage, actually, with certain properties that allow revocation services, is the killer app in certain industries, and we are gaining traction in all three of those. >> OK, talk about the community, which, by the way, is great. There's a new stack that's developing. I know you're going on stage, and I'd love to spend more time with you to talk about those impacts at each level of the stack. But let's talk about your community. What are you guys doing? How did you get here? What's some of the feedback? What are some of the conversations in the community, and where are you going to take it? >> OK, the conversations are amazing. The interest is amazing. There appears to be this enormous pent-up demand for something that can have security and speed at the same time, along with this fairness thing. People are talking about doing whole new kinds of things, like games where every move is a transaction in the ledger. The fairness is important and the speed is important, and you want security in anything involving money, anything involving identity. What we're hearing from people is, "We've been waiting." In fact, literally every big company has a blockchain group, and what we keep hearing is, "We've been excited for years, but we're not doing anything yet, because it just wasn't ready." Now, the technology is ready. >> So, from tire-kicking to actually putting some stuff into action. >> And that's happening now. That's what our customers tell us: "We've been kicking the tires, we've been holding off, we've been waiting for the technology to be mature." Now, it's mature. >> What are some of the low-hanging use cases that you're seeing coming out of the gate? >> So, the credit union industry is going to be using this for keeping information that credit unions share with each other: information about identity, information about threat models, information about contracts they have with each other, all sorts of things like that. We have Machine Zone, a multi-billion-dollar game company, that was on stage with us, talking about how they are going to be using this for doing payments for their system. Their Satori is amazing; watch the video. Gabe did an amazing job there on his stuff. And he said the reason they had to go with us is because we were fast and secure, and no one else is the way we are. >> What are some of the white spaces that you see out there? If you could point to some developers and entrepreneurs out there and say, "Hey, here's some white space. Go take it down," what would you say? >> Exactly, find a place where trust matters. I do hear people saying, "I want to start a company, but, you know, we could run on a single server and be just as good." Well, great, then use a single server and be just as good. (laughter) >> Good luck with that. (laughs) >> No, no. >> Yeah, but that's just their choice. >> Don't use a hammer when a screwdriver is appropriate. >> Yes. >> Not everything is a nail, but you know what? There's a lot of nails out there. What you should do is this: if trust matters, and if no one person is trustworthy, if you want your users to be able to trust that a community is trusting it, then you need to go to a ledger; and if you want speed and security, then go with us, especially if you want fairness. Look at auctions; we've had people build an auction on us. Look at stock markets, look at games. Look at places where fairness matters. Look at us. >> So, I've got to ask about the reputation piece, because with fairness comes data about reputation, and I see reputation not as a single protocol but as a unique instance in each application, so there's no kind of global reputation.
There might be reputation in each application. What's your view on reputation? Is that going to be a unique thing? How do you deal with that with your fairness piece, your consensus? What are your thoughts? >> Reputation is critical, identity is critical, and the two of them come together. Pseudonymity is critical. For reputation, you can have your how-many-stars-did-you-get, how many people have rated you. We're not building that system. We're building the thing that allows you to build that system on top of it. Anybody can build on top of it. What you do need, though, is a revocation service and a shared file service that no one can corrupt. No one can change things they aren't supposed to change. No one can delete things they're not supposed to delete. People say immutable; well, it's not really immutable, it's making sure it mutates only in the right way. >> And also cost, transaction cost, and speed are huge issues on blockchain as we know it today. Ethereum has taken a lot of hits on this. What's your position on ERC-20? People are doing a lot of token work without the smart contract. We're hearing people say it's not ready, there are performance issues; outside of CryptoKitties, what else is there? What are your thoughts? >> Exactly. So, ERC-20: since we do Solidity, we can do ERC-20 if we want, and anyone who wants to can do it. But you talked about the cost of the transactions. If you're going to charge a dollar a transaction, there are absolutely useful things you can do, but if you're going to charge a tiny fraction of a cent per transaction, there are whole new use cases you can do. And that's what we're all about. >> Awesome. Leemon, I know you've got to get up on stage, but I've got to ask you one final question. Where do you guys go from here? What's on your to-do list? Obviously, what's the situation with the funding? 
How many people are in the company? Can you share a quick snapshot of what you've raised, what the status of the firm is, and what your plans are? >> The interest is fantastic. We have raised money or are raising money. We have people working for us and we're hiring very fast. >> Did you raise equity financing, like preferred stock, or are you doing an ICO? >> Hedera is not equity. Hedera is just a simple agreement for future tokens, and we have various things going on. (laughter) You know the whole space, of course. So, there's a lot going on. Swirlds had equity; the first round was led by NEA. We're not, sorry, we're not selling equity right now in Swirlds, but-- >> So, NEA is an investor. >> Oh, yeah. >> Who's the partner on-- >> Sorry, in Swirlds. >> Oh, Swirlds. >> It's confusing. Hedera is the public one, Swirlds is the private one. Both are important to the world, and we continue to do both. I'm CTO of both, I'm co-founder of both. >> It's a corporate structure to get around the new-- >> Not to get around, not to get around. It's because they're two different things. Public and private are really two different things. >> Explain the difference real quick. >> Yes. Private is where you have several companies, like just credit unions, in it, and it's important that no one but a credit union run a node. Public is, I want everyone to run nodes, not just people with mining rigs. Every person can earn money running nodes; that's the goal. >> And having that corporate structure gives some stability to that positioning. >> It's all about stability, and the public ledger has to be run by someone who isn't me. It has to be run by 39 different companies, not a single entity, for trust. >> Great. Well, this is also a great topic we don't have time for, but it's super important: corporate governance, how you structure the company, which relates to the IP and its relationship to communities, is super important. 
>> It's radically different from what we're doing. It's because we started from the principle that it has to be trustworthy. You need to split governance from consensus. We want millions of nodes doing consensus, for transparency, so you know what's going on. We're going to release the code as open review so everyone sees what's going on. That's incredibly important, but you also need governance by people who know what they're doing, and not one person. It's got to be split: 39 Fortune 100-scale companies, but global, across the world, across 18 different industries, different companies running it. Not us running it. >> Interviewer: That's where community matters. >> Them running it. Incredibly important, incredibly important. >> OK, we've got to go. Congratulations, Hashgraph, two days old; a protocol worked on for multiple years, coming out of the closet, doing great work. Congratulations. Thanks for coming on the Cube. >> Thank you very much. >> Good luck on stage. We'll be back with more coverage here in Puerto Rico. This is the Cube. I'm John Furrier. Thanks for watching.
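Baird's fairness claim, that transaction order is decided by when the community as a whole saw a transaction rather than by any single node, can be illustrated with a toy model. This is a hedged sketch of the general idea only (ordering by the median of per-node receipt times), not Hedera's actual hashgraph algorithm; the function and data here are invented for illustration.

```python
from statistics import median

def fair_order(receipt_times):
    """Order transactions by the median time at which nodes received them.

    receipt_times maps a transaction id to a list of per-node receipt
    times. A median is hard for any small subset of nodes to skew,
    which is the intuition behind community-determined ordering.
    """
    consensus_time = {tx: median(times) for tx, times in receipt_times.items()}
    return sorted(consensus_time, key=consensus_time.get)

# Three nodes report when they first saw each transaction.
seen = {
    "bid_a": [1.0, 1.2, 1.1],  # median 1.1
    "bid_b": [0.9, 1.5, 1.0],  # median 1.0, so it is ordered first
}
print(fair_order(seen))  # ['bid_b', 'bid_a']
```

Note how one node reporting "bid_b" late (1.5) does not push it behind "bid_a": the median absorbs the outlier, which is the point of the fairness argument in an auction or exchange setting.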

Published Date : Mar 15 2018



Derek Kerton, Autotech Council | Autotech Council - Innovation in Motion


 

>> Hey, welcome back everybody, Jeff Rick here with The Cube. We're in Milpitas at an interesting event: the Autotech Council's Innovation in Motion mapping and navigation event. There's a lot of talk about autonomous vehicles, and there are a lot of elements to autonomous vehicles; this is just one piece of it, mapping and navigation. We're excited to have as our first guest the man who can give us the background on this whole situation: Derek Kerton, founder and chairman of the Autotech Council. So first off, welcome. >> Thank you very much, good to be here. >> Absolutely. For folks who aren't familiar, what is the Autotech Council? >> The Autotech Council is a sort of club based in Silicon Valley where we've gathered together some of the industry's largest OEMs, meaning carmakers, like Renault from France and a variety of others. They have offices here in Silicon Valley, and their job is to find innovation, find that Silicon Valley spark, and take it back and get it into cars eventually. What we're able to do is gather them up, put them in a club, and parade a whole bunch of Silicon Valley startups, and startups from other places too, in front of them and say: these are some of the interesting technologies of the month. >> So did they reach out to you? Did you see an opportunity? Obviously they've all got innovation centers here; we were at the Ford launch of their innovation center, and they're all here now, in Palo Alto and up and down the Peninsula. Was this something they needed an assist with, or did it come more from the technology side? >> It's certainly true that they came on their own. They spotted Silicon Valley and said: this is now relevant to us. Historically we were able to do our own R&D, build our stuff in Detroit or in Japan or wherever the case may be, and all of a sudden these Silicon Valley technologies are increasingly relevant to us, and in fact disruptive to us, so we'd better get our finger on that pulse. So they came here of their own accord. At the time, we were already running something called the Telecom Council of Silicon Valley, doing a similar thing for phone companies, so we had a structure in place that we could translate into the automotive industry, meet all those guys and say: listen, we can help you, we're going to be a great tool in your toolkit to work the Valley. >> And specifically, what types of activities do you do with them to execute that vision? >> It's interesting. When we launched this about five years ago, we were thinking: well, we have the telecom background, we don't have the automotive skills, but we have the organizational skills. It turned out the carmakers, and the tier-1 vendors that sell to them, aren't coming here to study brake-pad material science and things like that. They're coming to Silicon Valley to find the same stuff the phone companies were looking at two years ago: how does Facebook work in a car? How do all these sensors we have in phones relate to the automotive industry? Accelerometers are now much cheaper because they've reached economies of scale in phones, so how do we use those more effectively? GPS has reached scale economies, so how do we put more GPS in cars? How do we provide mapping solutions? All these things sound very familiar from the smartphone industry. In fact, the thing that disrupts them, the thing that brought them here out of defensive need, is that the smartphone itself was the disruptive factor inside the car. >> Right, so you have events like today's. Give us the story: what's today's event, for the people who aren't here? >> Every now and then we pick a theme that's really relevant or interesting. Today is mapping and navigation; specifically, high-definition mapping and sensors. There's been a battle in the automotive industry over the autonomous-driving space: what will control an autonomous car? Will it use a map stored in memory on board, so the car knows what the world looked like when it was mapped, say six months ago, and follows a pre-programmed route inside that 3D model world? Or is it a car, more like what Tesla is currently doing, with a range of sensors on it, where the sensors don't know anything about the world around the corner, they only know what they're sensing right around them, and the car drives within that environment? So there are two competing ways of modeling a 3D world around an autonomous car. Looking backwards there was a battle over which one would win, and I think the industry has come to terms with the fact that the answer is both, more so every day. So today we're talking about both, and how to fuse those two and make better self-driving vehicles. >> For the outsider looking in: wait, the mapping wars are over, there's Google Maps, what else is there? But then I see TomTom and a bunch of names we've seen since before Google Maps. And shame on me, I said the same thing when Google came out with search: the search wars are over, who's going to compete with that? So it's interesting, there are a lot of different angles to this beyond the Google map you get on your phone. >> Remember MapQuest? You printed it out and you were good to go, right? >> (laughs) Right, we were burning through paper. >> The upshot is this: folding paper maps were the best thing we had in the car. Then we moved to the MapQuest era, and to $5,000 sat-navs in some cars, and then Google came along and offered it for free. That was the disruptive factor, one of the reasons people use their smartphones in the car instead of paying $5,000 for a car sat-nav, and that era is in very recent memory. But when you talk about self-driving cars or autonomous vehicles, you need a much higher level of detail than "turn right in 400 feet." That's great for a human driving the car, but a computer driving the car needs to know "turn right in 400.005 feet and adjust one quarter inch to the left, please." The level of detail required is much higher. So companies like TomTom and a variety of others are making high-definition maps; HERE, formerly part of Nokia, is doing a good job; and now a class of carmakers, lots of startups, and crowdsourced mapping efforts as well. The idea is: how do we get incredibly granular, high-detail maps that we can push into a car, so it has that reference of a 3D world that is extremely accurate? Then the next problem is: how do we keep those things up to date? Because when a car from, say, HERE drives down the street, it makes a very high-resolution map with all the equipment you see on some of these cars, except there was a construction zone when they mapped it, and the construction zone is now gone. You have to update these things. These are very important questions you have to get right, with the answers stored well in the car, for credible self-driving. And once again we get back to something I mentioned two minutes ago: the answer is sensor fusion. It's a mix of the high-definition maps you've got in the car and what the sensors are telling you in real time. The sensors handle what's going on right now, and the maps give you a high level of detail from six months ago, when that road was driven. >> It's interesting, back in the day, when you had the CD for your onboard mapping system, you had to keep that thing updated, and you could actually drive off the edge of the map and it didn't work. Are they covering the optical sensors here too? Because there's the lidar school of thought and the camera school of thought, and again the answer is probably both. >> Yeah, there are all these little battles shaping up in the industry, and that's one of them for sure: lidar versus everything else. Lidar is the gold standard for building, I keep saying, a 3D model. A computer sees the world differently than your eye. You look out a window and build a 3D model of what you're looking at; how does a computer do it? There are a variety of ways. One is lidar sensors, which spin around. The biggest company in this space is called Velodyne, and they've been doing it for years for defense and aviation: pointing lasers and waiting for the signal to come back. You use the reflected signal, and the time difference it takes to bounce back, to build a 3D model of the objects around that sensor. That is the gold standard for precision. The problem is it's also bloody expensive, so the carmakers say: that's really nice, but I can't put four $8,000 sensors on the corners of a car and get it to market at a price a consumer is willing to pay. >> Until every car has one, and you get the mobile-phone economies of scale. >> Yeah, but at eight thousand dollars each, we're looking at it going: that's a tough start. So a lot of startups are now saying: we've got a new version of lidar that's solid-state. It's not a spinning thing; it's a silicon chip with MEMS and such on it, doing this without the moving parts, and we can drop the price down to two hundred dollars, maybe a hundred dollars in the future. At that scale it starts being interesting: that's four hundred dollars if you put one on all four corners of the car. But there are also other people saying: listen, cameras are cheap and readily available. Look at a company like Nvidia, with very fast GPUs, saying: our GPUs can ingest data from up to 12 cameras at a time, and with those different stereoscopic views at different angles we can build a 3D model from cheap cameras. So there are competing ideas on how you build a model of the world. And then you have the likes of Bosch saying: we're strong in in-car radar, and we can refine our radar more and more and get 3D models from radar. It doesn't have the resolution lidar has, lidar being a laser sensor. So there are all these different sensors, and the answer is not all of them, because cost comes into play. A carmaker has to choose: we're going to use cameras and radar, or we're going to use lidar, and so on. They'll pick from these different options to build a high-definition 3D model of the world around the car that is cost-effective and robust, can handle a few of the sensors being covered by snow, hopefully, and still provide a good idea of the world around them, and safety. They'll fuse these together and then let their autonomous-driving intelligence ride on top of that 3D model and drive the car. >> It's interesting you brought up Nvidia. What's really fun about autonomous vehicles and the advances is that they play off Moore's law's impact on three pillars. Compute: massive compute power to take the data from these sensors. Data: massive amounts of it, whether it's in the pre-programmed map or pulled off the sensors, off GPS, off Wi-Fi waypoints, and I'm sure they're pulling all kinds of stuff. And then of course storage, you've got to put that stuff somewhere, and networking, where you worry about latency, whether it's on the edge or not. It's really an interesting combination of technologies brought to bear on how successfully your car navigates that exit ramp. >> You're spot-on, and that's one of the reasons I'm a lot more bullish on self-driving cars than the general industry analyst. You mentioned Moore's law, and Nvidia is taking advantage of that with GPUs. But let's also wrap in big data, more and more data. That's a huge factor in cars. Not only are cars going to take advantage of more and more data; high-definition maps are way more data than the MapQuest maps we printed out, so that's a massive amount of data the car needs to use. On the flip side, the car is producing massive amounts of data: that whole range of sensors I just talked about, lidar, radar, cameras, and so on, plus all the telematics data, how's the car running, how's the engine performing. Carmakers want that data, so massive amounts of data need to flow both ways. You can do that at night over Wi-Fi cheaply, you can do it over LTE, and we're looking at 5G and newer standards enabling more transfer of data between the cars and the cloud. So that's pretty important: cloud data, and then cloud analytics on top of it. Now that we've got all this data from the car, what do we do with it? We know, for example, that Tesla uses the data pulled out of its cars for fleet learning. Instead of teaching the cars how to drive with a programmer saying "if you see this, do that," they take the information out of the cars and ask: what situations did these cars see, how did our autonomous circuitry suggest the car respond, and how did the user override or control the car at that point? Then they can compare human driving with their algorithms and tweak the algorithms based on all that fleet driving. So there's a massive advantage in pulling data out of cars, and a massive advantage in pushing data to cars. And, you know, we're here at Kingston SanDisk today, so storage is interesting as well. Storage in the car is increasingly important with these big amounts of data, and fast storage too. High-definition maps are beefy, beefy maps, so what do you do? Keep them in the cloud and constantly stream them down to the car? What if you drive through a tunnel or go out of cellular coverage? It makes sense to have that map data, at least for the region you're in, stored locally on the car in easily retrievable flash memory, which is dropping in price as well. >> That was a loaded question, by the way. >> And I love it. This is why I'm more bullish than anybody else about the self-driving car space. You mentioned Moore's law; Moore's law used to not be relevant to the automotive industry. We talked briefly about brake-pad technology: material science, what kind of asbestos do we use, how do we dissipate the heat more quickly. That's science and physics, important R&D, but it does not take advantage of Moore's law. So cars have been moving along with the laws of thermodynamics, getting more miles per gallon, great stuff out of Detroit, out of Tokyo, out of Europe, out of Munich, but Moore's law was not really relevant. All of a sudden, very recently, Moore's law is starting to apply to cars. They've always had ECUs, but more compute is being put in the car; Tesla has Nvidia processors built into the car, and many cars are getting stronger central compute systems. So now Moore's law is making cars able to do the things we need them to do. We're talking about autonomous vehicles; they couldn't happen without huge central processing inside cars. With Moore's law applying now where it didn't before, cars will move quicker than we thought. The next important point is that there are other expansion laws in technology, and people should look these up, because they're cool. Kryder's law is about storage: the rapidly expanding performance of storage, how many megabytes or gigabytes you get for $8. It turns out that's also exponential, and your question asked whether data is important. Sure it is; that's why we can put so much into the cloud and so much locally into the car. Huge, Kryder's law. The next one is Metcalfe's law, which is about networking. It states, in its roughest form, that the value of a network is proportional to the square of the number of nodes in the network. If I connect my car, great, that's awesome, but who does it talk to? Nobody. Connect another car, and now two cars can talk together and provide some element of car-to-car communication and some safety features. Connect the whole network, and now I have a smart city, and the value keeps shooting up and up. All of these exponential factors are suddenly at play in the automotive industry. So anybody who looks back at the past and says, well, the pace of innovation here has had a certain slope, I expect the future will carry on like that and in ten years we'll have self-driving cars: you can't look back at the slope of the curve and think that's the slope going forward, especially with these exponential laws at play. The slope ahead is distinctly steeper. >> And you left out my favorite, Amara's law: we overestimate in the short term and underestimate in the long term. It's all about the slope. We could go on for probably an hour, I know I could, but you've got to get to your event, so thanks for taking minutes out of your busy day. Really enjoyed the conversation, and I look forward to our next one. >> My pleasure, thanks. >> All right, Jeff Rick here with The Cube. We're at the Western Digital headquarters in Milpitas at the Autotech Council Innovation in Motion mapping and navigation event. Thanks for watching.
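Kerton's description of how lidar works, pointing a laser and timing the reflected signal, comes down to a one-line time-of-flight calculation. A minimal sketch of that idea (illustrative only, not any vendor's actual signal processing; the function name is invented here):

```python
SPEED_OF_LIGHT_M_S = 299_792_458.0  # speed of light in vacuum, m/s

def lidar_distance_m(round_trip_s: float) -> float:
    """Distance to a reflecting object from a lidar pulse's round-trip time.

    The pulse travels out and back, so the one-way distance is half
    the total path: d = c * t / 2.
    """
    return SPEED_OF_LIGHT_M_S * round_trip_s / 2.0

# A return roughly 66.7 nanoseconds after firing puts the object
# about 10 meters away.
print(round(lidar_distance_m(66.7e-9), 2))  # 10.0
```

The nanosecond timescales involved are part of why precise lidar hardware has been expensive, and why the solid-state designs Kerton mentions aim to do the same timing on a chip.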

Published Date : Jun 15 2017



Caroline Chan & Dan Rodriguez, Intel Corporation - Mobile World Congress 2017 - #MWC17 - #theCUBE


 

>> [Announcer] Live, from Silicon Valley, it's The Cube, covering Mobile World Congress 2017. Brought to you by Intel. >> [John] Welcome back, everyone. We are here live in Palo Alto, California for a special two days of Mobile World Congress. We're on day two of wall-to-wall coverage from eight a.m. to six p.m., really breaking down what's happening in studio and going to our reporters and analysts in the field. We'll have Peter Jarich coming up next, and we're going to get on-the-ground analysis from Current Analysis, now part of GlobalData. But first we have a segment where, early this morning my time, top of the morning Tuesday in Barcelona, which was hours ago, I had a chance to speak with Caroline Chan and Dan Rodriguez. I wanted to get their opinion on what's happening, and I asked Caroline Chan, "What's the biggest story coming out of Mobile World Congress?" This is what she had to say: >> [Caroline Chan] So last year this time, the people coming in asked a lot of questions about 5G technology. Is it real? Can we really pull it off? You know, 3G, 4G, it's a little bit ho-hum. But this year, I would say when I look around, not just at Intel but everybody else as well, it's good. I went to a panel last night with Orange, AT&T, and Telefonica, and I think the conversation has switched from "will there be a 5G" to solutions. So, I look around in our booth, and next door in Verizon's, and there are a lot of cars, autonomous driving. We had a 5G-enabled smart city network, it's in our homes. It's gone from technology to solution. And then, in the latest discussion about this iteration of 5G, there was an announcement about 5G NR and a whole bunch of talk about acceleration. It's really becoming: how can we quickly get out there? And then the other thing is AI. How does AI come in? Because 5G becomes an enabler. 
AI and the cloud: there are all these analytics, and 5G can actually now bring that into the cloud. So AI becomes a buzzword. I just saw a CTO on MWC Live TV at the venue talking about AI and 5G transforming the mobile industry, so it really becomes much more solution-oriented. >> [Dan] I can't agree with Caroline more there. There's a tremendous amount of excitement around 5G as well as network transformation at the show, and the two things are really becoming linked. Caroline mentioned a few of the use cases out there on 5G: again, lots of autonomous driving, lots of smart home, lots of smart city. I personally had a great time hanging out in our smart home demonstration earlier, but I think the key linkage of all those use cases is that the network needs to become more intelligent, more flexible, and definitely more agile to support this wide variety of use cases. And we're seeing that echoed back not only by operators but by a lot of the OEMs and telecommunication equipment vendors, really rallying behind NFV and truly the path to 5G. >> [John] Take a minute, guys, to explain the 5G revolution and why it's not just an evolution from 4G. What's the difference? What is the key enabler of 5G, and what does Intel have that's different now than it was before? 
So what that means is it operates in a way that either gives you way too much, so you're unable to recoup your investment, or gives you not enough, and you wind up with a bad user experience. 5G fundamentally changes this. Why? The change is in the standard itself that's underway in 3GPP. You have different types of scheduling for the different use cases. For example, if you're doing mission-critical IoT versus massive connected IoT, you get a different protocol. You strip out some of the heavy signaling that is typically needed for mission critical when it's something that's just there, like smart city traffic light changes; that kind of information doesn't need to generate a whole bunch of signaling overhead. So you see something natively different in the protocol itself, and that's a fundamental shift from the mindset that we always had. So that is technology-enabled. And the second thing is that the network today, thanks to the network transformation journey that everybody is on, is much softer and more flexible. It moves away from a single purpose-built platform to something that is much more flexible, such that you can enable something like network slicing. So a slice for enhanced mobile broadband for AR and VR would be different from something for autonomous driving. So it makes the network fundamentally different; the interface itself is much more flexible for different types of applications. And then not to mention that we have different types of spectrum, from the traditional 3 GHz to 6 GHz, and now to millimeter wave; we open up a whole swath of the spectrum to allow for much, much bigger bandwidth and things like camera applications. It really changes the game. >> [Dan] Thanks, Caroline. So I think at a high level, what Caroline was pointing out is that the wide variety of use cases with 5G will stretch and pull the network in all sorts of directions.
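The slicing idea Caroline describes, one physical network carved into virtual slices with different latency and bandwidth characteristics, can be sketched in a few lines of Python. This is purely an illustrative model, not Intel or 3GPP code: the slice names (eMBB, URLLC, mMTC) are the standard 5G service categories she alludes to, but the numbers and the `pick_slice` helper are hypothetical.

```python
# Illustrative sketch of 5G network slicing: one shared network,
# several virtual slices, each tuned for a different use case.
# All profile values are made up for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class SliceProfile:
    name: str
    max_latency_ms: float      # worst-case latency the slice targets
    min_bandwidth_mbps: float  # throughput the slice reserves
    signaling: str             # "full" or "lightweight" control-plane signaling

SLICES = {
    "eMBB":  SliceProfile("enhanced mobile broadband", 20.0, 100.0, "full"),
    "URLLC": SliceProfile("ultra-reliable low latency", 1.0, 10.0, "full"),
    # Massive IoT strips out heavy signaling, exactly as described above.
    "mMTC":  SliceProfile("massive machine-type comms", 1000.0, 0.1, "lightweight"),
}

def pick_slice(latency_ms: float, bandwidth_mbps: float) -> str:
    """Return the least over-provisioned slice meeting an app's needs."""
    candidates = [
        key for key, s in SLICES.items()
        if s.max_latency_ms <= latency_ms and s.min_bandwidth_mbps >= bandwidth_mbps
    ]
    # min() raises ValueError if no slice fits; fine for a sketch.
    return min(candidates, key=lambda k: SLICES[k].min_bandwidth_mbps)

print(pick_slice(latency_ms=5.0, bandwidth_mbps=1.0))    # autonomous driving style needs
print(pick_slice(latency_ms=50.0, bandwidth_mbps=50.0))  # AR/VR streaming style needs
```

The point of the model is the contrast with 4G's "best effort, too hot or too cold" behavior: instead of one profile for everyone, each application lands on a slice whose scheduling and signaling match its requirements.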
Essentially, there will be different use cases that require blazing fast network speeds and maximum amounts of bandwidth, but some use cases also require very low latency. So when you think about all the variety of use cases, the best way to truly ensure you're meeting the user experience, and also delivering the right economic value for the industry, is to move to a more intelligent and flexible network. And as Caroline mentioned, it is going to be software-defined. And when you think about some of the products that we're investing in within our networking group, of course you think about our Intel Xeon processors. These processors can be found in a number of servers around the globe, and customers are using them for a variety of virtual network functions, really everything ranging from the core network to the access network to newer use cases such as virtual CPE. At this event, we did announce some additional products that will be made available later in the year: the Atom C3000 series as well as the Xeon D-1500 network series. Both of these are SoCs, and when you think about 5G, you think about the mix of centralized and distributed deployments, and you think about the network edge becoming smarter, so these types of SoCs are very critical because they provide excellent performance density at the right power level, so you can have a very intelligent edge of your network. >> [John] Great point. Just to follow up on that, it's interesting, we had a conversation yesterday on The Cube around millimeter wave, CDMA, all the different types of wireless, and I think what's interesting is you have some use cases where you have a lot of density and some cases where you need low latency, but you also have the internet of things.
A car, for example: we were discussing that a car is essentially going to become a data center on wheels, where mobility is going to be very important. It might not need precise bandwidth per se, but in some cases you'll need more bandwidth along with that mobility. And also as the internet of things comes on, whether they're industrial devices or otherwise, the notion of a phone being provisioned once and then used is not the same use case as, say, IoT, where you could have anything connected to a network; these devices are going to come on and offline all the time, so there's a real need for dynamic networks. What is Intel's approach here? Because this seems to be the conversation most people are having about what's happening under the hood, the true enabler for bringing out the real mobile edge. >> [Caroline] There are a couple of things we're doing. Number one, we use a concept called FlexCore, which is a server-based platform with a variety of technologies applied to it: real-time virtualization, dynamic resource sharing and reconfiguration. With that we're able to support what you just described and provide flexible support for different types of scenarios. And then the other thing that builds into 5G support is network slicing, which allows you to slice up the network resources for a variety of use cases, including the core part of it. So for example, HP here in this room is demonstrating what looks like a server, walks like a server, and is a server, and it has the RAN, virtual EPC, orchestration, and mobile edge computing; it's really become a network in a box. So that gives the ultimate freedom to support the service providers and enterprises and to apply 5G to different scenarios. >> [John] The final question, guys, is market readiness through partners and collaboration.
Intel obviously is the leader; Intel Inside has been the main story we've been hearing at Mobile World Congress, end to end. We saw a great piece with the Intel CEO talking about the end-to-end value in the underlying architecture: it all runs on Intel, it works better. It brings up the notion of market readiness in the ecosystem. What are you guys doing to make the ecosystem robust and vibrant? Because Intel can't do it alone; you're going to need partners. Thoughts on how you guys are accelerating it, and really the market readiness for 5G, and just timing in your mind for when all the fruit comes off the 5G tree, if you will. >> [Caroline] We started with the trials this year, so in 2017 we're working closely with partners like Ericsson, Nokia, and Cisco, and we should be seeing early performance coming up. I really think widespread commercial deployment is more like the 2019, 2020 timeframe, because of some of the standardization. Would you say, Dan? >> [Dan] Yeah, that's a great summary, Caroline. I think the key thing that we're really seeing at Mobile World Congress, and the things we're investing in, are diverse, as you mentioned. It definitely takes a village to pull off this network transformation and the movement to 5G, and I think the great thing about the network side is that the network is becoming much more pliable, more software-defined, more resilient, more agile. You can really invest in many of these innovations we've been discussing today, now. So we're seeing a lot of folks start investing in FlexCore, network in a box, mobile edge computing, et cetera; you transform your network now, utilizing network function virtualization, and then you have a sturdy foundation when all the 5G use cases come online in the coming years. >> [John] Guys, final question. What are the power demos you're showing? You guys usually have great demos on the floor at Mobile World Congress; lot of glam, lot of flair at the show.
>> [Dan] Great question. We have a number of super demos here. We have a smart and connected home, which showcases all sorts of Intel wireless technology, from the gateway to the other devices. We're showing a smart city; as you know, with 5G and its lightning-fast speeds, plus the lower latencies, it's truly going to change the urban landscape. And we're also showing augmented and virtual reality in a few different demonstrations, and one definitely caught my eye and got me pretty excited: in our FlexRAN demo, we were showcasing augmented virtual reality, actually viewing a skier going downhill, and it was pretty exciting. I had a great time. I can't wait until, in a few years when 5G is out there, I can use augmented and virtual reality to watch a number of sporting events, ranging from college football to my favorite sport, which is surfing. >> [John] What's next for 5G? How are you guys going to roll this out? What are the big plans post Mobile World Congress? >> [Caroline] Like I mentioned, we have trial plans with our partners through 2017, and then we're also participating in the Winter Olympics showcase, again through our customers. There are activities happening in China now, so I think we'll be in a lot of places. You can see us in 5G. >> [John] Winter Olympics: expect to get the downloads and all the video in real time on 4K screens, thank you very much. (laughs) We expect to see some good bandwidth at the Olympics, I'm sure. >> [Dan] Hey thanks, John, this was great. >> [Caroline] Thanks, bye! >> [John] Thank you. Caroline Chan and Dan Rodriguez, from Barcelona, calling in with all the details. I'm John Furrier; we'll be back with more live coverage from Mobile World Congress after this short break.

Published Date : Feb 28 2017

