Sam Pierson & Monte Denehie, Talend | AWS re:Invent 2022


 

(upbeat music) (air whooshing) >> Good afternoon, cloud nerds, and welcome back to beautiful Las Vegas, Nevada. We are at AWS re:invent day four. Afternoon of day four here on theCUBE. I'm Savannah Peterson, joined by my fabulous cohost, Paul Gillin. Paul, you look sharp today. How you doing? >> Oh, you're just as fabulous, Savannah. You always look sharp. >> I appreciate that. They pay you enough to keep me buttered up over here at- (Paul laughing) It's wonderful. >> You're holding up well. >> Yeah, thank you. I am excited about our next conversation. Two fabulous gentlemen. Please welcome Sam and Monty, welcome to the show. >> Thank you. >> And it was great. Of the PR 2%, the most interesting man alive. (Paul and Savannah laughing) >> In person. Yeah, yeah. >> In the flesh. Our favorite guests so far. So how's the show been for you guys? >> Sam: It's been phenomenal. >> Just spending a lot of time with customers and partners and AWS. It's been great. It's been great. >> It is great. It's really about the community. It feels good to be back. >> Monty: Eating good food, getting my steps in above goals. >> I feel like the balance is good. We walk enough of these convention centers that you can enjoy the libations and the delicious food that's in Las Vegas and still not go home feeling like a cow. It is awesome. It's a win-win. >> To Sam's point though, meeting with customers, meeting with other technology providers that we may be able to partner with. And most importantly, in my role especially, meeting with all of our AWS key stakeholders in the partnership. So yeah, it's been great. >> Everyone's here. It's just different having a conversation in person. Even like us right now. So just in case folks aren't familiar, tell me about Talend. >> Yeah. Well, Talend is a data integration company. We've been around for a while. 
We have tons of different ways to get data from point A to point B, lots of different sources, lots of different connectors, and it's all about creating accessibility to that data. And then on top of that, we also have a number of solutions around governance, data health, data quality, data observability, which I think is really taking off. And so that's kind of how we're changing the business here. >> Casual change, data and governance. I don't know if anyone's talking about that at all on the show floor. >> It's been a big topic here. We've had a lot of conversations with the customers about that. >> So governance, what new dynamics has the cloud introduced into data governance? >> Well, I think historically, customers have been able to have their data on-prem. They put it into things like data lakes. And now having the flexibility to be able to bring that data to the clouds, it opens up a lot of doors, but it also opens up a lot of risks. So if you think about the chief data officer role, where you have, okay, I want to be able to bring my data to the users. I want to be able to do that at scale, operationally. But at the same time you have a tension then between the governance and the rules that really restrict the way that you can do that. Very strong tension between those two things. >> It really is a delicate balance. And especially as people are trying to accelerate and streamline their cloud projects, a lot to consider. How do you all help them do that? Monty, let's go to you. >> Yeah, we keep saying data, data, what is it really? It's ones and zeros. In this day and age, everything we see, we touch, we do, we either use data, or we create data, and then that... >> Savannah: We are data quite literally. >> We literally are data. And so then what you end up with is all these disparate data silos and different applications with different data, and how do you bring all that together? And that's where customers really struggle. 
And what we do is we bring it all together, and we make it actionable for the customer. We make it very simple for them to take the data, use it for the outcomes that they're looking for in their business initiatives. >> Expand on that. What do you mean make it actionable? Do you tag it? Do you organize it in some way? What's different about your approach? >> I mean, it's a really flexible platform. And I think we're part of a broader ecosystem. Even internally, we are a data driven company. Coming into the company in April, I was able to come in and get this realtime view of like, "Hey, here's where our teams are." And it's all in front of me in a Tableau dashboard that's populated from Talend integration, bringing data out of our different systems, different systems like Workday where we're giving offers out to people. And so everything from managing headcount to where our AWS spend is, all of that stuff. >> Now, we've heard a lot of talk about data and in fact the keynote yesterday that was focused mainly on data and getting data out of silos. How do you play with AWS in that role? Because AWS has other data integration partners. >> Sam: For sure. >> What's different about your relationship? Yeah. >> Go ahead. >> Yeah, we've had a strong relationship with AWS for many years now. We've got more than 80 connectors into the different AWS services. So we're not new to the AWS game. We align with the sales teams, we align with the partner teams, and then of course, we align with all the different business units and verticals so that we can enact that co-sell motion together with AWS. >> Sam: Yeah. And I think from our product standpoint, again, just being a hyper flexible platform, being able to put, again, any different type of source of data, to any type of different destination, so things like Redshift, being able to bring data into those cloud data warehouses is really how we do that. 
And then I think, between bringing data from A to B, we're also able to do that along a number of different dimensions. Whether that's just like, "Hey, we just need to do this once a day in batch," all the way down to event-driven things, streaming and the like. >> That customization must be really valuable for your customers as well. So one of the big themes of the show has been cost reduction. Obviously with the economic times we're potentially dipping our toes into, as well as just, in general, always wanting to increase margins. How do you help customers cut costs? >> Well, it's cost cutting, but it's also speed to market. The faster you can get a product to market, the faster you can help your customers. Let's say healthcare, life sciences, pharmaceutical companies, patient outcomes. >> Great and timely example there. >> Patient outcomes, how do they get drugs to market quicker? Well, AstraZeneca leveraged our platform along with AWS. And they even said >> Cool. >> for every dollar that they spend on data initiatives, they get $40 back. That's a billion dollars >> Wow. >> in savings by getting a drug to market one month faster. >> Everybody wins. >> How do you accelerate that process? >> Well, by giving them the right data, taking all the massive data that I mentioned, siloed everywhere, and making it so that the data scientists can take all of this data, make use of it, make sense of it, and move their drug production along much quicker. >> Yeah. And I think there's other things too, like being very flexible in the way that it's deployed. Again, I think you have this historical story of, like, it takes forever for data to get updated, to get put together. >> Savannah: I need it now. And in context. >> And I think where we're coming from is almost more of a developer focus, where your jobs are able to be deployed in any way you want. If you want to containerize those, you want to scale them, you need to schedule them that way. 
We plug into a lot of different ecosystems. I think that's a differentiation as well. >> I want to hang out on this one just for a second 'cause it's such a great customer success story and so powerful. I mean, in VC land, if you can take a dollar and make two, they'll give you a 10x valuation, 40. That is so compelling. I mean, do you think other customers could expect that kind of savings? A billion dollars is nothing to laugh at, especially when we're talking about developing a vaccine. Yeah, go for it, Sam. >> It really depends on the use case. I think what we're trying to do is being able to say, "Hey, it's not just about cost cutting, but it's about tailoring the offerings." We have other customers like major fast food vendors. They have mobile apps and when you pull up that mobile app and you're going to do a delivery, they want to be able to have a customized offering. And it's not like mass market, 20% off. It's like, they want to have a very tailored offer to that customer or to that person that's pulling open that app. And so we're able to help them architect and bring that data together so that it's immediately available and reliable to be able to give those promotions. >> We had AARP on the show yesterday. We're talking about 50 million subscribers and how they customize each one of their experiences. We all want it to be about us. We don't want that generic at... Yeah, go for it, Paul. >> Oh, okay. >> Yeah. >> Well, I don't want to break the rhythm here, but one area where you have differentiated, about two years ago you introduced something called the trust score. >> Sam: Yeah. >> Can you explain what that is and how that has resonated with your customers? >> Yeah, let's talk about this. >> Yeah, the thing about the trust score is, how many times have you gotten a set of data? And you look at it and you say, "Where did you get this data? Something doesn't look right here." 
And with the trust score, what we're able to do is quantify and value the different attributes of the data. Whether it's how much this is being used. We can profile the data, and we have a trust score that runs over time where you can actually then look at each of these data sets. You can look at aggregates of data sets to then say... If you're the data engineer, you can say, "Oh my, something has gone wrong with this particular dataset." Go in, quickly pull up the data. You can see if some third party integration has polluted your data source. I mean, this happens all the time. And I think if you sort of compare this to the engineering world, you're always looking to solve those problems sooner, earlier in the chain. You don't want your consumer calling you saying, "Hey, I've got a problem with the data, or I've got a problem- >> You don't want them to know there was ever a problem in theory. >> Yeah, the trust score helps those data engineers and those people that are taking care of the data address those problems sooner. >> How much data does somebody need to be able to get to the point where they can have a trust score? If you know what I'm trying to say. How do we train that? >> I mean, it can be all the way from just like a single data source that's getting updated, all the way to very large complex ones. That's where we've introduced this hierarchy of data sets. So it's not just like, "Hey, you've got a billion data sources here and here are the trust scores." But it's like, you can actually architect this to say like, "Okay, well, I have these data sets that belong to finance." And then finance will actually get, "Here's the trust score for these data sets that they rely on." >> What causes datasets to become untrustworthy? >> Yeah. Yeah. I mean, it happens all the time. >> A lot of different things, right? 
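The mechanics described above (profile a set of quality attributes, roll them up into one score, and watch that score over time so a polluted upstream source shows up as a sudden drop) can be sketched in a few lines. The attribute names, weights, and drop threshold below are hypothetical illustrations, not Talend's actual trust-score formula:

```python
# Toy trust score: weighted roll-up of quality attributes, tracked over time.
# Attribute names, weights, and the drop threshold are illustrative assumptions.
WEIGHTS = {"completeness": 0.4, "validity": 0.4, "usage": 0.2}

def trust_score(attrs: dict[str, float]) -> float:
    """Aggregate per-attribute quality (each 0.0-1.0) into a 0-100 score."""
    return 100 * sum(WEIGHTS[name] * attrs[name] for name in WEIGHTS)

def flag_drops(history: list[float], threshold: float = 10.0) -> list[int]:
    """Indices where the score fell sharply versus the previous run --
    the signature of a third-party integration polluting the source."""
    return [i for i in range(1, len(history))
            if history[i - 1] - history[i] > threshold]

# Daily profiling runs; on the third day an upstream change pollutes the data.
history = [trust_score(day) for day in [
    {"completeness": 0.98, "validity": 0.97, "usage": 0.90},
    {"completeness": 0.97, "validity": 0.96, "usage": 0.90},
    {"completeness": 0.60, "validity": 0.55, "usage": 0.90},  # polluted
]]
print(flag_drops(history))  # -> [2]
```

A real product would presumably profile far more attributes and aggregate across the hierarchies of data sets mentioned above; the point is only that a sudden score drop localizes the problem before a consumer calls.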
>> In my history, in the different companies that I've been at, on the product side, we have seen different integrations where maybe somebody changes something upstream. Some of those integrations can actually be quite brittle. And as a consumer of that data, it's not necessarily your fault, but that data ends up getting put into your production database. All of a sudden your data engineering team is spending two days unwinding those transactions, fixing the data that's in there. And all the while, that bad data that's in your production system is causing a problem for somebody that is ultimately relying on that. >> Is that usually a governance problem? >> I think governance is probably a separate set of constraints. This is sort of the tension between wanting to get all of the data available to your consumers versus wanting to have the quality around it as well. >> It's a tough balance. And I think that it's really interesting. Everybody wants great data, and you could be making decisions that affect people's wellness, quite frankly. >> For sure. >> Very dramatically if you're ill-informed. So that's very exciting. >> To your point, we are all data. So if the data is bad, we're not going to get the outcomes that we want ultimately. >> I know. We certainly want the best outcomes for ourselves. >> We track that data health for its entire life cycle throughout the process. >> That's cool. And that probably increases your confidence in the trust score as well 'cause you're looking at so much data all the time. You got a smart thing going on over here. I like it. I like it a lot. >> We believe in it and so does AWS because they are a strong partner of ours, and so do customers. I think we mentioned we've had some phenomenal customer conversations along with- >> What a success story and case study. I want to dust your shoulders off right now if I wasn't tethered in. That's super impressive. So what's next for you all? 
>> Yeah, so I think we're going to continue down this path of data health and data governance. Again, I kind of talked about the... you're talking about data health being this differentiator on top of just moving the data around and being really good at that. I think you're also going to have different things around country level or state level governance, literal laws that you need to comply with. And so like- >> Savannah: CCPA- >> I mean, a long list- >> Oodles. Yeah. Yeah, yeah, yeah. >> I think we're going to be doing some interesting things there. We are continuing to proliferate the sources of data that we connect to. We're always looking for the latest and greatest things to put the data into. I think you're going to see some interesting things come out of that too. >> And we continue to grow our relationship with AWS, our already strong relationship. So you can procure Talend products through the AWS Marketplace. We just announced Redshift serverless support for Talend. >> All their age. >> Which sounds amazing, but because we've been doing this for so long with AWS, dirty little secret, that was easy for us to do because we're already doing all this stuff. So we made the announcement and everyone was like, "Congratulations." Like, "Thanks." >> Look at you all. Full of the humble brags. I love it. >> Talend has gone through some twists and turns over the last couple of years. Company went private, was purchased by Thoma Bravo about a year and a half ago. At that time, your CEO said that it was a chance to really refocus the company on some core strategic initiatives and move forward. Both of you joined obviously after that happened. But what did you see about sort of the new Talend that attracted you, made you want to come over here? >> For sure. Yeah. I think, when I got a chance to talk to the board and talk to Chris, our chair, we talked about there being the growth thesis behind it. So I think Thoma's been a great partner to Talend. 
I think we're able to do some things internally that would be, I think, fairly challenging for companies that are in the public markets right now. I think especially, just a lot of pressure on different prices and the cost of capital and all of that. >> Right now. >> That was a really casual way of stating that. But yeah, just a little pressure. >> Little bit of pressure. And who knows? Who knows how long that's going to last, right? But I think we've got a great board in place. They've been a very strong strategic partner for us, talking about all the different ways that we can grow. I think it's been a good partner for us- >> One of the strengths of Thoma's strategy is synergy between the companies they've acquired. >> Oh, for sure. >> They've acquired about 40 software companies. Are you seeing synergy? You talk to those other companies a lot? >> Yeah, so I have an operating partner. I talk with him on a weekly, sometimes daily basis. If we have questions or like, "Hey, what are you seeing in this space?" We can get plugged in to advisors very quickly. I think it's been a very helpful thing where... otherwise, you're relying on your personal network or things like that. >> This is why Monty was saying it was easy for you guys to go serverless. >> And we keep talking about trust, but in this case, Thoma Bravo really trusts our senior leadership team to make the right decisions that Sam and I are here making as we move forward. It's a great relationship. >> Sam: A good team. >> It sounds like it. All the love. I can feel the love even from you guys talking about it, it's genuine. You're not just getting paid to say this. That's fantastic. >> Are we getting paid for this or... >> Yeah. (Savannah giggling) (Paul laughing) I mean, some folks in the audience are probably going to want your autograph after this, although you get that a lot- >> Pictures are available after- >> Yeah, selfies are 10 bucks. That's how I get my booze budget. So last question for you. 
We have a challenge here on theCUBE at re:Invent. We're looking for your 30-second hot take. Think of it as your thought leadership sizzle reel. Biggest takeaway, key themes from the show or looking forward into 2023? Sam, you're ready to rock, go. >> Yeah, totally. >> I think you're going to continue to hear the tension between being able to bring the data to the masses versus the simplicity and being able to do that in a way that is compliant with all the different laws, and then clean data. It's like a lot of different challenges that arise when you do this at scale. And so I think if you look at the things that AWS is announcing, I think you look at what any sort of vendor in the data space is announcing, you see them sort of coming around to that set of ideas. Gives me a lot of confidence in the direction that we're going, that we're doing the right stuff, and we're meeting customers and prospects and partners, and everybody is like... We kind of get into this conversation and I'll say, "Yeah, that's it. We want to get involved in that." >> You can really feel the momentum. Yeah, it's true. It's great. What about you, Monty? >> I mean, I don't need 30 seconds. I mentioned it. >> Great. >> Between Talend and AWS, we're aligned from the sales teams to the product teams, the partner teams and the alliances. We're just moving forward and growing this relationship. >> I love it. That was perfect. And on that note, Sam, Monty, thank you so much for joining us. >> Yeah, thanks for having us. >> I'm sure your careers are going to continue to be rad at Talend and I can't wait to continue the conversation. >> Sam: Yeah, it's a great team. >> Yeah, clearly. I mean, look at you two. If you're any representation of the culture over there, they're doing something great. (Monty laughing) I thank all of you for tuning in to our nearly... Well, shoot. I think now over 100 interviews at AWS re:Invent in Sin City. We are hanging out here. Paul and I've got a couple more for you. 
So we hope to see you tuning in with Paul Gillin. I'm Savannah Peterson. You're watching theCUBE, the leader in high tech coverage. (upbeat music)

Published Date : Dec 1 2022


Breaking Analysis: We Have the Data…What Private Tech Companies Don’t Tell you About Their Business


 

>> From The Cube Studios in Palo Alto and Boston, bringing you data driven insights from The Cube at ETR. This is "Breaking Analysis" with Dave Vellante. >> The reverse momentum in tech stocks caused by rising interest rates, less attractive discounted cash flow models, and more tepid forward guidance, can be easily measured by public market valuations. And while there's lots of discussion about the impact on private companies and cash runway and 409A valuations, measuring the performance of non-public companies isn't as easy. IPOs have dried up and public statements by private companies, of course, they accentuate the good and they kind of hide the bad. Real data, unless you're an insider, is hard to find. Hello and welcome to this week's "Wikibon Cube Insights" powered by ETR. In this "Breaking Analysis", we unlock some of the secrets that non-public, emerging tech companies may or may not be sharing. And we do this by introducing you to a capability from ETR that we've not exposed you to over the past couple of years, it's called the Emerging Technologies Survey, and it is packed with sentiment data and performance data based on surveys of more than a thousand CIOs and IT buyers covering more than 400 companies. And we've invited back our colleague, Erik Bradley of ETR to help explain the survey and the data that we're going to cover today. Erik, this survey is something that I've not personally spent much time on, but I'm blown away at the data. It's really unique and detailed. First of all, welcome. Good to see you again. >> Great to see you too, Dave, and I'm really happy to be talking about the ETS or the Emerging Technology Survey. Even our own clients of constituents probably don't spend as much time in here as they should. >> Yeah, because there's so much in the mainstream, but let's pull up a slide to bring out the survey composition. Tell us about the study. How often do you run it? What's the background and the methodology? 
Yeah, you were just spot on the way you were talking about the private tech companies out there. So what we did is we decided to take all the vendors that we track that are not yet public and move 'em over to the ETS. And there isn't a lot of information out there. If you're not in Silicon (indistinct), you're not going to get this stuff. So PitchBook and TechCrunch are two out there that give some data on these guys. But what we really wanted to do was go out to our community. We have 6,000 ITDMs in our community. We wanted to ask them, "Are you aware of these companies? And if so, are you allocating any resources to them? Are you planning to evaluate them," and really just kind of figure out what we can do. So this particular survey, as you can see, 1,000-plus responses, over 450 vendors that we track. And essentially what we're trying to do here is talk about your evaluation and awareness of these companies and also your utilization. And also if you're not utilizing 'em, then we can also figure out your sales conversion or churn. So this is interesting, not only for the ITDMs themselves to figure out what their peers are evaluating and what they should put in POCs against the big guys when contracts come up. But it's also really interesting for the tech vendors themselves to see how they're performing. >> And you can see 2/3 of the respondents are director level or above. You got 28% is C-suite. There is of course a North America bias, 70, 75% is North America. But these smaller companies, you know, that's when they start doing business. So, okay. We're going to do a couple of things here today. First, we're going to give you the big picture across the sectors that ETR covers within the ETS survey. And then we're going to look at the high and low sentiment for the larger private companies. And then we're going to do the same for the smaller private companies, the ones that don't have as much mindshare. 
And then I'm going to put those two groups together and we're going to look at two dimensions, actually three dimensions. First, which companies are being evaluated the most. Second, which companies are getting the most usage and adoption of their offerings. And then third, which companies are seeing the highest churn rates, which of course is a silent killer of companies. And then finally, we're going to look at the sentiment and mindshare for two key areas that we like to cover often here on "Breaking Analysis", security and data. And data comprises database, including data warehousing, and then big data analytics is the second part of data. And then machine learning and AI is the third section within data that we're going to look at. Now, one other thing before we get into it, ETR very often will include open source offerings in the mix, even though they're not companies, like TensorFlow or Kubernetes, for example. And we'll call that out during this discussion. The reason this is done is for context, because everyone is using open source. It is the heart of innovation and many business models are superglued to an open source offering. Take MariaDB, for example: there's the foundation with the open source code, and then, of course, the company that sells services around the offering. Okay, so let's first look at the highest and lowest sentiment among these private firms, the ones that have the highest mindshare. So they're naturally going to be somewhat larger. And we do this on two dimensions, sentiment on the vertical axis and mindshare on the horizontal axis, and note the open source tools: Kubernetes, Postgres, Kafka, TensorFlow, Jenkins, Grafana, et cetera. So Erik, please explain what we're looking at here, how it's derived and what the data tells us. >> Certainly, so there is a lot here, so we're going to break it down first of all by explaining just what mindshare and net sentiment is. You explained the axes. 
We have so many evaluation metrics, but we need to aggregate them into one so that way we can rank against each other. Net sentiment is really the aggregation of all the positives, subtracting out the negatives. So the net sentiment is a very quick way of looking at where these companies stand versus their peers in their sectors and sub sectors. Mindshare is basically the awareness of them, which is good for very early stage companies. And you'll see some names on here that have obviously been around for a very long time, and they're clearly the bigger ones, on the outside of the axis. Kubernetes, for instance, as you mentioned, is open source. It's the de facto standard for all container orchestration, and it should be that far up into the right, because that's what everyone's using. In fact, the open source leaders are so prevalent in the emerging technology survey that we break them out later in our analysis, 'cause it's really not fair to include them and compare them to the actual companies that are providing the support and the security around that open source technology. But no survey, no analysis, no research would be complete without including this open source tech. So what we're looking at here, if I can just get away from the open source names, we see other things like Databricks and OneTrust. They're repeating as top net sentiment performers here. And then also the design vendors. People don't spend a lot of time on 'em, but Miro and Figma. This is their third survey in a row where they're just dominating that sentiment overall. And Adobe should probably take note of that because they're really coming after them. But Databricks, we all know, probably would've been a public company by now if the market hadn't turned, but you can see just how dominant they are in a survey of nothing but private companies. And we'll see that again when we talk about the database later. 
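To make the two chart axes concrete, here is one plausible way net sentiment (positives minus negatives among respondents aware of a vendor) and mindshare (share of the whole panel that is aware) could be computed from raw responses. The response categories and the equal weighting are assumptions for illustration; ETR's exact weighting is not spelled out here:

```python
# Sketch of the two chart axes: net sentiment and mindshare.
# Response categories and weighting are illustrative assumptions,
# not ETR's published methodology.
POSITIVE = {"allocating resources", "planning to evaluate"}
NEGATIVE = {"evaluated and declined", "churned"}

def axes(responses: list[str], panel_size: int) -> tuple[float, float]:
    """Return (net_sentiment, mindshare) as percentages.

    net_sentiment: positive minus negative responses, over everyone
    aware of the vendor; mindshare: share of the full panel that is aware."""
    aware = len(responses)
    pos = sum(r in POSITIVE for r in responses)
    neg = sum(r in NEGATIVE for r in responses)
    net = 100 * (pos - neg) / aware if aware else 0.0
    return net, 100 * aware / panel_size

net, mindshare = axes(
    ["allocating resources", "planning to evaluate",
     "planning to evaluate", "churned"],
    panel_size=1000,
)
print(net, mindshare)  # -> 50.0 0.4
```

A vendor everyone has heard of plots far right; one whose aware respondents are mostly positive plots high, which matches how the dots are described above.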
>> And I'll just add, so you see Automation Anywhere on there, the big UiPath competitor that was not able to get to the public markets. They've been trying. Snyk, Peter McKay's company, they've raised a bunch of money, big security player. They're doing some really interesting things in developer security, helping developers secure the data flow. H2O.ai and Dataiku, AI companies. We saw them at the Snowflake Summit. Redis Labs, Netskope in security. So a lot of names that we know that ultimately we think are probably going to be hitting the public market. Okay, here's the same view for private companies with less mindshare, Erik. Take us through this one. >> On the previous slide too real quickly, I wanted to pull up that SecurityScorecard one, and we'll get back into it. But this is a newcomer, and I couldn't believe how strong their data was, but we'll bring that up in a second. Now, when we go to the ones of lower mindshare, it's interesting to talk about open source, right? Kubernetes was all the way on the top right. Everyone uses containers. Here we see Istio up there. Not everyone is using service mesh as much. And that's why Istio is in the smaller breakout. But still, when you talk about net sentiment, it's about the leader, it's the highest one there is. So really interesting to point out. Then we see other names like Collibra on the data side really performing well. And again, as always, security is very well represented here. We have Aqua, Wiz, Armis, which is a standout in this survey this time around. They do IoT security. I hadn't even heard of them until I started digging into the data here. And I couldn't believe how well they were doing. And then of course you have AnyScale, which is doing second best in this, and the best name in the survey, Hugging Face, which is a machine learning AI tool. Also doing really well on net sentiment, but they're not as far along on that axis of mindshare just yet. 
So these are, again, emerging companies that might not be as well represented in the enterprise as they will be in a couple of years. >> Hugging Face sounds like something you do with your two year old. Like you said, you see high performers. Anyscale doing machine learning, and you mentioned them. They came out of Berkeley. Collibra, governance. InfluxData is on there. InfluxDB's a time series database. And yeah, of course, Alex, if you bring that back up, you get a big group of red dots, right? That's the bad zone, I guess. Sisense does viz, Yellowbrick Data is an MPP database. How should we interpret the red dots, Erik? I mean, is it necessarily a bad thing? Could it be misinterpreted? What's your take on that? >> Sure, well, let me just explain the definition of it first from a data science perspective, right? We're a data company first. So the gray dots that you're seeing that aren't named, that's the mean, that's the average. So in order for you to be on this chart, you have to be at least one standard deviation above or below that average. So that gray is where we're saying, "Hey, this is where the lump of average comes in. This is where everyone normally stands." So you either have to be an outperformer or an underperformer to even show up in this analysis. So by definition, yes, the red dots are bad. You're at least one standard deviation below the average of your peers. It's not where you want to be. And if you're on the lower left, not only are you not performing well from a utilization or an actual usage rate, but people don't even know who you are. So that's a problem, obviously. And the VCs and the PEs out there that are backing these companies, they're the ones who mostly are interested in this data. >> Yeah. Oh, that's a great explanation. Thank you for that. Nice benchmarking there, and yeah, you don't want to be in the red. All right, let's get into the next segment here. We're going to look at evaluation rates, adoption and the all-important churn. 
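The one-standard-deviation cutoff Erik describes is a standard z-score style filter. A minimal sketch of the inclusion rule, assuming each vendor reduces to a single score (the labels and data shape are assumptions):

```python
import statistics

# Sketch of the chart's inclusion rule: a vendor is only named on the
# chart if its score sits at least one standard deviation above or
# below the peer-group mean; everyone else stays an unnamed gray dot.
def classify(scores):
    """Map vendor name -> 'outperformer' / 'underperformer' / 'average'."""
    mean = statistics.mean(scores.values())
    stdev = statistics.pstdev(scores.values())
    labels = {}
    for name, score in scores.items():
        if score >= mean + stdev:
            labels[name] = "outperformer"    # named, green dot
        elif score <= mean - stdev:
            labels[name] = "underperformer"  # named, red dot
        else:
            labels[name] = "average"         # unnamed gray dot
    return labels
```

So "being in the red" is not just a low score; it is a score at least one standard deviation below the peer mean.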
First, new evaluations. Let's bring up that slide. And Erik, take us through this. >> So essentially, I just want to explain what evaluation means: people will cite that they either plan to evaluate the company or they're currently evaluating. So that means we're aware of 'em and we are choosing to do a POC of them. And then we'll see later how that turns into utilization, which is what a company wants to see: awareness, evaluation, and then actually utilizing them. That's sort of the life cycle for these emerging companies. So what we're seeing here, again, very high evaluation rates. H2O, we mentioned. SecurityScorecard jumped up again. Chargebee, Snyk, Salt Security, Armis. A lot of security names are up here. Aqua, Netskope, which, gosh, has been around forever. I still can't believe it's in an Emerging Technology Survey. But so many of these names fall in data and security again, which is why we decided to pick those out, Dave. And on the lower side, Vena, Act-On, those unfortunately took the dubious award of the lowest evaluations in our survey, but I prefer to focus on the positive. So SecurityScorecard, again, a real standout in this one. They're in the security assessment space, basically. They'll come in and assess for you how your security hygiene is. And it's an area of real interest right now amongst our ITDM community. >> Yeah, I mean, I think those, and then Arctic Wolf is up there too. They're doing managed services. You had mentioned Netskope. Yeah, okay. All right, let's look at now adoption. These are the companies whose offerings are being used the most and are above that standard deviation in the green. Take us through this, Erik. >> Sure, yet again, what we're looking at is, okay, we went from awareness, we went to evaluation. Now it's about utilization, which means a survey respondent's going to state "Yes, we evaluated and we plan to utilize it" or "It's already in our enterprise and we're actually allocating further resources to it." 
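The awareness → evaluation → utilization life cycle is effectively a funnel, and each slide in this segment measures one stage of it. A hypothetical sketch of the conversion math (the stage names and example counts are illustrative, not ETR's methodology):

```python
# Hypothetical funnel for an emerging vendor: respondents progress
# from awareness, to evaluation (a POC), to actual utilization.
def funnel_rates(aware, evaluating, utilizing):
    return {
        "evaluation_rate": evaluating / aware if aware else 0.0,
        "adoption_rate": utilizing / evaluating if evaluating else 0.0,
        "overall_conversion": utilizing / aware if aware else 0.0,
    }

# Example: 200 respondents aware of a vendor, 80 evaluating, 20 using.
rates = funnel_rates(200, 80, 20)
```

A vendor wants all three ratios to rise survey over survey; churn, covered next, is the leak out of the bottom of this funnel.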
Not surprising, again, a lot of open source, the reason why, it's free. So it's really easy to grow your utilization on something that's free. But as you and I both know, as Red Hat proved, there's a lot of money to be made once the open source is adopted, right? You need the governance, you need the security, you need the support wrapped around it. So here we're seeing Kubernetes, Postgres, Apache Kafka, Jenkins, Grafana. These are all open source based names. But if we're looking at names that are non open source, we're going to see Databricks, Automation Anywhere, Rubrik all have the highest mindshare. So these are the names, not surprisingly, all names that probably should have been public by now. Everyone's expecting an IPO imminently. These are the names that have the highest mindshare. If we talk about the highest utilization rates, again, Miro and Figma pop up, and I know they're not household names, but they are just dominant in this survey. These are applications that are meant for design software and, again, they're going after an Autodesk or a CAD or Adobe type of thing. It is just dominant how high the utilization rates are here, which again is something Adobe should be paying attention to. And then you'll see a little bit lower, but also interesting, we see Collibra again, we see Hugging Face again. And these are names that are obviously in the data governance, ML, AI side. So we're seeing a ton of data, a ton of security and Rubrik was interesting in this one, too, high utilization and high mindshare. We know how pervasive they are in the enterprise already. >> Erik, Alex, keep that up for a second, if you would. So yeah, you mentioned Rubrik. Cohesity's not on there. They're sort of the big one. We're going to talk about them in a moment. Puppet is interesting to me because you remember the early days of that sort of space, you had Puppet and Chef and then you had Ansible. Red Hat bought Ansible and then Ansible really took off. 
So it's interesting to see Puppet on there as well. Okay. So now let's look at the churn, because this one is where you don't want to be. It's, of course, all red, 'cause churn is bad. Take us through this, Erik. >> Yeah, definitely don't want to be here, and I don't love to dwell on the negative. So we won't spend as much time. But to your point, there's one thing I want to point out that I think is important. So you see Rubrik in the same spot, but Rubrik has so many citations in our survey that it actually would make sense that they're both high on utilization and churn, just because they're so well represented. They have such a high overall representation in our survey. And the reason I call that out is Cohesity. Cohesity has an extremely high churn rate here, about 17%, and unlike Rubrik, they were not on the utilization side. So Rubrik is seeing both, Cohesity is not. It's not being utilized, but it's seeing a high churn. So that's the way you can look at this data and say, "Hm." Same thing with Puppet. You noticed that it was on the other slide. It's also on this one. So basically what it means is a lot of people are giving Puppet a shot, but it's starting to churn, which means it's not as sticky as we would like. One that was surprising on here for me was Tanium. It's kind of jumbled in there. It's hard to see in the middle, but Tanium, I was very surprised to see as high of a churn, because what I do hear from our end user community is that people that use it, like it. It really kind of spreads into not only vulnerability management, but also that endpoint detection and response side. So I was surprised by that one, mostly to see Tanium in here. Mural, again, was another one of those application design softwares that's seeing a very high churn as well. 
They're both green in the previous one and red here; that's not as bad. You mentioned Rubrik is going to be in both. Cohesity is a bit of a concern. Cohesity just brought on Sanjay Poonen. So this could be a go to market issue, right? I mean, 'cause Cohesity has got a great product and they got really happy customers. So they're just maybe having to figure out, okay, what's the right ideal customer profile, and Sanjay Poonen, I guarantee, is going to have that company cranking. I mean, they had been doing very well on the surveys and had fallen off a bit. The other interesting thing: in the previous survey I saw Cvent, which is an event platform. The only reason I pay attention to that is 'cause we actually have an event platform. We don't sell it separately. We bundle it as part of our offerings. And you see Hopin on here. Hopin raised a billion dollars during the pandemic. And we were like, "Wow, that's going to blow up." And so you see Hopin on the churn, and you didn't see 'em in the previous chart, but that's sort of interesting. Like you said, let's not kind of dwell on the negative, but you really can't ignore it. You know, churn is a real big concern. Okay, now we're going to drill down into two sectors, security and data. Where data comprises three areas: database and data warehousing, machine learning and AI, and big data analytics. So first let's take a look at the security sector. Now this is interesting, because not only is it a sector drill down, but it also gives an indicator of how much money the firm has raised, which is the size of that bubble, and tells us if a company is punching above its weight and efficiently using its venture capital. Erik, take us through this slide. Explain the dots, the size of the dots. Set this up please. >> Yeah. 
So again, the axis is still the same, net sentiment and mindshare, but what we've done this time is we've taken publicly available information on how much capital a company has raised, and that'll be the size of the circle you see around the name. And then whether it's green or red is basically saying, relative to the amount of money they've raised, how are they doing in our data? So when you see a Netskope, which has been around forever, raised a lot of money, that's why you're going to see them leaning more towards red, 'cause it's just been around forever and you kind of would expect it. Versus a name like SecurityScorecard, which has only raised a little bit of money and is actually performing just as well, if not better, than a name like Netskope. OneTrust, doing absolutely incredible right now. BeyondTrust. We've seen the issues with Okta, right? So those are two names that play in that space that obviously are probably getting some looks about what's going on right now. Wiz, we've all heard about, right? So raised a ton of money. It's doing well on net sentiment, but the mindshare isn't as high as you'd want, which is why you're going to see a little bit of that red, versus a name like Aqua, which is doing container and application security and hasn't raised as much money, but is really neck and neck with a name like Wiz. So that is why, on a relative basis, you'll see that more green. As we all know, information security is never going away. But as we'll get to later in the program, Dave, I'm not sure in this current market environment if people are as willing to do POCs and switch away from their security provider, right? There's a little bit of tepidness out there, a little trepidation. So right now we're seeing overall a slight pause, a slight cooling in overall evaluations on the security side versus historical levels a year ago. >> Now let's stay on here for a second. So a couple things I want to point out. So it's interesting. 
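The green/red coloring Erik describes is performance relative to capital raised. One hypothetical way to express that "punching above your weight" idea as a single ratio; the formula and numbers below are my assumption for illustration only, and ETR colors dots relative to peers rather than with any ratio like this:

```python
# Hypothetical "capital efficiency" metric: net sentiment points
# per $100M of capital raised. A small raise with strong sentiment
# then outranks a huge raise with merely-good sentiment.
def capital_efficiency(net_sentiment_pts, raised_musd):
    """Sentiment points per $100M raised (illustrative only)."""
    if raised_musd <= 0:
        return 0.0
    return net_sentiment_pts / (raised_musd / 100.0)

vendors = {
    "BigRaiseCo": capital_efficiency(30.0, 1000.0),  # huge raise
    "LeanCo": capital_efficiency(25.0, 100.0),       # small raise
}
best = max(vendors, key=vendors.get)  # the "small green circle" case
```

This is the shape of the argument in the segment: a slightly lower absolute score on far less capital is the more efficient investment.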
Now Snyk has raised, I think, over $800 million, but you can see them, they're high on the vertical and the horizontal. But now compare that to Lacework. It's hard to see, but they're kind of buried in the middle there. That's the biggest dot in this whole thing. I think I'm interpreting this correctly. They've raised over a billion dollars. It's a Mike Speiser company. He was the founding investor in Snowflake. So people watch that very closely, but that's an example of where they're not punching above their weight. They recently had a layoff and they got to fine tune things, but I'm still confident they're going to do well, 'cause they're approaching security as a data problem, which probably means people are having trouble getting their arms around that. And then again, I see Arctic Wolf. They're not red, they're not green, but they've raised a fair amount of money, and it's showing up to the right at a decent level there. And a couple of the other ones that you mentioned, Netskope. Yeah, they've raised a lot of money, but they're actually performing where you want. What you don't want is where Lacework is, right? They've got some work to do to really take advantage of the money that they raised last November and prior to that. >> Yeah, if you're seeing that more neutral color, like you're calling out with an Arctic Wolf, that means relative to their peers, this is where they should be. It's when you're seeing that red on a Lacework where we all know, wow, you raised a ton of money and your mindshare isn't where it should be. Your net sentiment is not where it should be comparatively. And then you see these great standouts, like Salt Security and SecurityScorecard and Abnormal. You know they haven't raised that much money yet, but their net sentiment's higher and their mindshare's doing well. So basically, in a nutshell, if you're a PE or a VC and you see a small green circle, then you're doing well; it means you made a good investment. 
>> Some of these guys, I don't know, but you see these small green circles. Those are the ones you want to start digging into, and maybe help them catch a wave. Okay, let's get into the data discussion. And again, three areas: database slash data warehousing, big data analytics and ML AI. First, we're going to look at the database sector. So Alex, thank you for bringing that up. Alright, take us through this, Erik. Actually, let me just say, PostgreSQL. I got to ask you about this. It shows some funding, but that actually could be a mix of EDB, the company that commercializes Postgres, and Postgres the open source database, which is a transaction system and kind of an open source Oracle. You see MariaDB, an open source database. But the company, they've raised over $200 million and they filed an S-4. So Erik, looks like this might be a little bit of a mashup of companies and open source products. Help us understand this. >> Yeah, it's tough when you start dealing with the open source side, and I'll be honest with you, there is a little bit of a mashup here. There are certain names here that are a hundred percent for-profit companies. And then there are others that are obviously open source based. Like Redis is open source, but Redis Labs is the one trying to monetize the support around it. So you're a hundred percent accurate on this slide. I think one of the things here that's important to note, though, is just how important open source is to data. If you're going to be going into any of these areas, it's going to be open source based to begin with. And Neo4j is one I want to call out here. It's not one everyone's familiar with, but it's basically a graph database, which is a name that we're seeing on the net sentiment side actually really, really high. When you think about it, it's the third overall net sentiment for a niche database play. 
It's not as big on the mindshare, 'cause its use cases aren't as common, but it's the third biggest play on net sentiment, which I found really interesting on this slide. >> And again, so MariaDB, as I said, they filed an S-4, I think $50 million in revenue, that might even be ARR. So they're not huge, but they're getting there. And by the way, MariaDB, if you don't know, was the company that was formed the day that Oracle bought Sun, in which they got MySQL, and MariaDB has done a really good job of replacing a lot of MySQL instances. Oracle has responded with MySQL HeatWave, which was kind of the Oracle version of MySQL. So there's some interesting battles going on there. If you think about the LAMP stack, the M in the LAMP stack was MySQL. And so now it's all MariaDB replacing that MySQL for a large part. And then you see, again, the red, you know, you got to have some concerns there. Aerospike's been around for a long time. SingleStore changed their name a couple years ago, last year. Yellowbrick Data, Firebolt was kind of going after Snowflake for a while, but yeah, you want to get out of that red zone. So they got some work to do. >> And Dave, real quick, for the people that aren't aware, I just want to let them know that we can cut this data with the public company data as well. So we can cross over this with that, because some of these names are competing with the larger public company names as well. So we can go ahead and cross reference, like a MariaDB with a Mongo, for instance, or something of that nature. So it's not in this slide, but at another point we can certainly explain on a relative basis how these private names are doing compared to the other ones as well. >> All right, let's take a quick look at analytics. Alex, bring that up if you would. Go ahead, Erik. >> Yeah, I mean, essentially here, I can't see it on my screen, my apologies. I just kind of went blank on that. So gimme one second to catch up. >> So I could set it up while you're doing that. 
You got Grafana up and to the right. I mean, this is huge, right? >> Got it, thank you. I lost my screen there for a second. Yep. Again, open source name Grafana, absolutely up and to the right. But as we know, Grafana Labs is actually picking up a lot of speed based on Grafana, of course. And I think we might actually hear some noise from them coming this year. The names that are actually a little bit more disappointing that I want to call out are names like ThoughtSpot. It's been around forever. Their mindshare, of course, is second best here, but based on the amount of time they've been around and the amount of money they've raised, it's not actually outperforming the way it should be. We're seeing Moogsoft obviously make some waves. That's very high net sentiment for that company. It's, you know, what, third, fourth position overall in this entire area. Other names, like Fivetran and Matillion, are doing well. Fivetran, even though it's got a high net sentiment, again, it's raised so much money that we would've expected a little bit more at this point. I know you know this space extremely well, but basically what we're looking at here, and to the bottom left, you're going to see some names with a lot of red, large circles that really just aren't performing that well. InfluxData, however, second highest net sentiment. And it's really pretty early on in this stage, and the feedback we're getting on this name is the use cases are great, the efficacy's great. And I think it's one to watch out for. >> InfluxData, time series database. The other interesting things I just noticed here, you got Tamr on here, which is that little small green one. Those are the ones we were saying before, look for those guys. They might be some of the interesting companies out there. And then Observe, Jeremy Burton's company. They do observability on top of Snowflake. Not green, but kind of in that gray. So that's kind of cool. Monte Carlo is another one, they're sort of slightly green. 
They are doing some really interesting things in data and data mesh. So yeah, okay. So I can spend all day on this stuff, Erik, phenomenal data. I got to get back and really dig in. Let's end with machine learning and AI. Now this chart, it's similar in its dimensions, of course, except for the money raised; we're not showing that as the size of the bubble. But AI is so hot, we wanted to cover that here. Erik, explain this please. Why TensorFlow is highlighted, and walk us through this chart. >> Yeah, it's funny yet again, right? Another open source name, TensorFlow, being up there. And I just want to explain, we do break out machine learning, AI as its own sector. A lot of this of course really is intertwined with the data side, but it is its own area. And one of the things I think that's most important here to break out is Databricks. We started to cover Databricks in machine learning, AI. That company has grown into much, much more than that. So I do want to state to you, Dave, and also the audience out there, that moving forward, we're going to be moving Databricks out of only the ML/AI into other sectors, so we can kind of value them against their peers a little bit better. But in this instance, you could just see how dominant they are in this area. And one thing that's not here, but I do want to point out, is that we have the ability to break this down by industry vertical, organization size. And when I break this down into Fortune 500 and Fortune 1000, both Databricks and TensorFlow are even better than you see here. So it's quite interesting to see that the names that are succeeding are also succeeding with the largest organizations in the world. And as we know, large organizations means large budgets. So this is one area that I just thought was really interesting to point out: as we break down the data by vertical, these two names still are the outstanding players. >> I also just want to call out H2O.ai. 
They're getting a lot of buzz in the marketplace, and I'm seeing them a lot more. Anaconda, another one. Dataiku consistently popping up. DataRobot is also interesting, because of all the kerfuffle that's going on there. The Cube guy, Cube alum, Chris Lynch stepped down as executive chairman. All this stuff came out about how the executives were taking money off the table and didn't allow the employees to participate in that money raising deal. So that's pissed a lot of people off. And so they're now going through some kind of uncomfortable things, which is unfortunate, because DataRobot, I noticed, we haven't covered them that much in "Breaking Analysis", but I've noticed them oftentimes, Erik, in the surveys doing really well. So you would think that company has a lot of potential. But yeah, it's an important space that we're going to continue to watch. Let me ask you, Erik, can you contextualize this from a time series standpoint? I mean, how has this changed over time? >> Yeah, again, not shown here, but in the data. I'm sorry, go ahead. >> No, I'm sorry. What I meant, I should have interjected. In other words, you would think in a downturn that these emerging companies would be less interesting to buyers, 'cause they're more risky. What have you seen? >> Yeah, and it was interesting. Before we went live, you and I were having this conversation about "Is the downturn stopping people from evaluating these private companies or not," right? In a larger sense, that's really what we're doing here. How are these private companies doing when it comes down to the actual practitioners? The people with the budget, the people with the decision making. And so what I did is, we have historical data, as you know. I went back to the Emerging Technology Survey we did in November of '21, right at the crest, right before the market started to really fall and everything kind of started to fall apart there. 
And what I noticed is on the security side, very much so, we're seeing less evaluations than we were in November 21. So I broke it down. On cloud security, net sentiment went from 21% to 16% from November '21. That's a pretty big drop. And again, that sentiment is our one aggregate metric for overall positivity, meaning utilization and actual evaluation of the name. Again in database, we saw it drop a little bit from 19% to 13%. However, in analytics we actually saw it stay steady. So it's pretty interesting that yes, cloud security and security in general is always going to be important. But right now we're seeing less overall net sentiment in that space. But within analytics, we're seeing steady with growing mindshare. And also to your point earlier in machine learning, AI, we're seeing steady net sentiment and mindshare has grown a whopping 25% to 30%. So despite the downturn, we're seeing more awareness of these companies in analytics and machine learning and a steady, actual utilization of them. I can't say the same in security and database. They're actually shrinking a little bit since the end of last year. >> You know it's interesting, we were on a round table, Erik does these round tables with CISOs and CIOs, and I remember one time you had asked the question, "How do you think about some of these emerging tech companies?" And one of the executives said, "I always include somebody in the bottom left of the Gartner Magic Quadrant in my RFPs. I think he said, "That's how I found," I don't know, it was Zscaler or something like that years before anybody ever knew of them "Because they're going to help me get to the next level." So it's interesting to see Erik in these sectors, how they're holding up in many cases. >> Yeah. It's a very important part for the actual IT practitioners themselves. There's always contracts coming up and you always have to worry about your next round of negotiations. And that's one of the roles these guys play. 
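The wave-over-wave comparison Erik just walked through is a per-sector delta between two survey snapshots. A small sketch using the percentages quoted above; note the analytics value is an assumption, since the transcript only says it held steady:

```python
# Net sentiment per sector across two survey waves, using the
# figures quoted in the discussion (cloud security 21% -> 16%,
# database 19% -> 13%; the 15% analytics level is assumed).
nov_21 = {"cloud_security": 0.21, "database": 0.19, "analytics": 0.15}
current = {"cloud_security": 0.16, "database": 0.13, "analytics": 0.15}

def sector_deltas(before, after):
    """Percentage-point change per sector between survey waves."""
    return {k: round(after[k] - before[k], 4) for k in before}

deltas = sector_deltas(nov_21, current)  # negative = cooling sector
```

Reading the output the way Erik does: security and database cooled since November '21, while analytics held flat even as its mindshare grew.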
You have to do a POC when contracts come up, but it's also their job to stay on top of the new technology. You can't fall behind. Like everyone's a software company. Now everyone's a tech company, no matter what you're doing. So these guys have to stay on top of it. And that's what this ETS can do. You can go in here and look and say, "All right, I'm going to evaluate their technology," and it could be twofold. It might be that you're ready to upgrade your technology and they're actually pushing the envelope, or it simply might be, I'm using them as a negotiation ploy. So when I go back to the big guy who I have full intentions of writing that contract to, at least I have some negotiation leverage. >> Erik, we got to leave it there. I could spend all day. I'm going to definitely dig into this on my own time. Thank you for introducing this, really appreciate your time today. >> I always enjoy it, Dave, and I hope everyone out there has a great holiday weekend. Enjoy the rest of the summer. And, you know, I love to talk data. So anytime you want, just point the camera on me and I'll start talking data. >> You got it. I also want to thank the team at ETR, not only Erik, but Darren Bramen, who's a data scientist who really helped prepare this data, and the entire team over at ETR. I cannot tell you how much additional data there is. We are just scratching the surface in this "Breaking Analysis". So great job, guys. I want to thank Alex Myerson, who's on production and manages the podcast. Ken Shifman as well, who's just coming back from VMware Explore. Kristen Martin and Cheryl Knight help get the word out on social media and in our newsletters. And Rob Hof is our editor in chief over at SiliconANGLE. He does some great editing for us. Thank you, all of you guys. Remember, these episodes are all available as podcasts, wherever you listen. All you got to do is search "Breaking Analysis" podcast. I publish each week on wikibon.com and siliconangle.com. 
Or you can email me to get in touch david.vellante@siliconangle.com. You can DM me at dvellante or comment on my LinkedIn posts and please do check out etr.ai for the best survey data in the enterprise tech business. This is Dave Vellante for Erik Bradley and The Cube Insights powered by ETR. Thanks for watching. Be well. And we'll see you next time on "Breaking Analysis". (upbeat music)

Published Date : Sep 7 2022



Natasha | DigitalBits VIP Gala Dinner Monaco
(upbeat music) >> Hello, everyone. Welcome back to theCUBE's extended coverage. I'm John Furrier, host of theCUBE. We are here in Monaco at the Yacht Club, part of the VIP Gala with Prince Albert, DigitalBits, theCUBE. theCUBE and Prince Albert celebrating Monaco leaning into crypto. I'm here with Natasha Mahfar, who's our guest. She just came on theCUBE. Great story. Great to see you. Thanks for coming on. >> Thank you so much for having me. >> Tell the folks what you do real quick. >> Sure. So I actually started my career in Silicon Valley, like you have. And I had the idea of creating a startup in mental health that was voice based only. So it was peer to peer support groups via voice. So I created this startup, pretended to be a student at Stanford and built out a whole team, and unfortunately, at that time, no one was in the space of mental health and voice. Now, as you know, it's a $30 billion industry that's one of the biggest in Silicon Valley. So my career really started from there. And due to that startup, I got involved in the World XR Forum. Now, the World XR Forum is kind of like a mini Davos, but a little bit more exclusive, where we host entrepreneurs, people in blockchain, crypto, and we have a five day event covering all sorts of topics. So- >> When you host them, you mean like host them and they hang out and sleep over? It's a hotel? Is it an event? A workshop? >> There's workshops. We arrange hotels. We pretty much arrange everything that there is. >> It's a group get together. >> It's a group get together. Pretty much like Davos. >> And so Natasha, I wanted to talk to you about what we're passionate about which is theCUBE is bringing people up to have a voice and give them a voice. Give people a platform. You don't have to be famous. If you have something to say and share, we found that right now in this environment with media, we go out to an event, we stream as many stories, but we also have the virtual version of our studio. 
And I could tell you, I've found that internationally now as we bring people together, there are so many great stories. >> Absolutely. >> Out there that need to be told. And the bottleneck isn't the media, it's the fact that it's open now. >> Yes. >> So why aren't the stories coming out? So our mission is to get the stories. >> Wow. >> Scale stories. The more stories that are scaled, the more people can feel it. More people are impacted by it, and it changes the world. It gets people serendipity with data 'cause we're, you know, you shared some data about what you're working on. >> Yeah, of course. It's all about data these days. And the fact that you're doing it so openly is great because there is a need for that today, so. >> What do you see right now in the market for media? I mean, we got emerging markets, a lot of misinformation. Trust is a big problem. >> Right. >> Bullying, harassing. Smear campaigns. What's news, what's not news. I mean, how do you get your news? I mean, how do people figure out what's going on? >> No, absolutely. And this is such a pure format and a way of doing it. How did you come up with the idea, and how did you start? >> Well, I started... I realized after Web 2.0, when social media started taking over and ruining the democratization. Blogging, podcasting, which I started in 2004, one of the first podcasts in Silicon Valley. >> Wow. >> I saw the network of that. I saw the value that people had when normal people, they call it user generated content, shared information. And I discovered something amazing that a nobody like me can have a really top podcast. >> Well, you're definitely not a nobody, but... >> Well, I was back then. And nobody knew me back then. But what it is is that even... If you put your voice out there, people will connect to it. And if you have the ability to bring other people in, you start to see a social dynamic. 
And what social media ruined, Facebook, Twitter, not so much Twitter 'cause Twitter's more smeary, but it's still got to open the API, LinkedIn, they're all terrible. They're all gardens. They don't really bring people together, so I think that stalled for about almost eight years or nine years. Now, with crypto and decentralization, you start to see the same thing come back. Democratization, level the playing field, remove the middle man and person, disintermediate the middle bottlenecks. So with media, we found that live streaming and going to events was what the community wants. And then interviewing people, and getting their ideas out there. Not promotional, not getting paid to say stuff. Yeah, they get the plug in for the company that they're working on, that's good for everybody. But more share something that you're passionate about, data. And it works. And people like it. And we've been doing it for 12 years, and it creates a great brand of openness, community, and network effect. So we scaled up the brand to be- 
People are talking. They get connections. But every person that's connecting has a social graph behind them that's online too, and immediately available. And with Instagram, direct messaging, Telegram, Signal, all there. >> It's brilliant. Honestly, it was a brilliant idea and a brilliant pivot. >> Thank you for interviewing me. >> Yeah, of course. (Natasha and John laugh) >> Any other questions? >> That should do it. >> Okay. Are you going to have fun tonight? >> Absolutely. >> What is your take on the Monaco scene here? What's it like? >> You know, I think it's a really interesting scene. I think there's a lot of potential because this is such an international place so it draws a very eclectic crowd, and I think there's a lot that could be done here. And you have a lot of people from Europe that are starting to get into this whole crypto, leaving kind of the traditional banks and finance behind. So I think the potential is very strong. >> Very progressive. Well, Natasha, thank you for sharing. >> Thank you so much. >> Here on theCUBE. We're the extended edition CUBE here in Monaco with Prince Albert and DigitalBits' Al Burgio, a great market here for them. And just an amazing time. And thanks for watching. Natasha, thanks for coming on. Thanks for watching theCUBE. We'll be back with more after this break. (upbeat music)

Published Date : Aug 22 2022



Breaking Analysis: Snowflake Summit 2022...All About Apps & Monetization


 

>> From theCUBE studios in Palo Alto and Boston, bringing you data driven insights from theCUBE and ETR. This is "Breaking Analysis" with Dave Vellante. >> Snowflake Summit 2022 underscored that the ecosystem excitement which was once forming around Hadoop is being reborn, escalated and coalescing around Snowflake's data cloud. What was once seen as a simpler cloud data warehouse and good marketing with the data cloud is evolving rapidly with new workloads of vertical industry focus, data applications, monetization, and more. The question is, will the promise of data be fulfilled this time around, or is it same wine, new bottle? Hello, and welcome to this week's Wikibon CUBE Insights powered by ETR. In this "Breaking Analysis," we'll talk about the event, the announcements that Snowflake made that are of greatest interest, the major themes of the show, what was hype and what was real, the competition, and some concerns that remain in many parts of the ecosystem and pockets of customers. First let's look at the overall event. It was held at Caesars Forum. Not my favorite venue, but I'll tell you it was packed. Fire Marshall Full, as we sometimes say. Nearly 10,000 people attended the event. Here's Snowflake's CMO Denise Persson on theCUBE describing how this event has evolved. >> Yeah, two, three years ago, we were about 1800 people at a Hilton in San Francisco. We had about 40 partners attending. This week we're close to 10,000 attendees here. Almost 10,000 people online as well, and over 200 partners here on the show floor.
Snowflake has filled the void created by O'Reilly when it first killed Hadoop World, and killed the name and then killed Strata. Now, ironically, the momentum and excitement from Hadoop's early days, it probably could have stayed with Cloudera but the beginning of the end was when they gave the conference over to O'Reilly. We can't imagine Frank Slootman handing the keys to the kingdom to a third party. Serious business was done at this event. I'm talking substantive deals. Salespeople from a host sponsor and the ecosystems that support these events, they love physical. They really don't like virtual because physical belly to belly means relationship building, pipeline, and deals. And that was blatantly obvious at this show. And in fairness, all theCUBE events that we've done this year, but this one was more vibrant because of its attendance and the action in the ecosystem. Ecosystem is a hallmark of a cloud company, and that's what Snowflake is. We asked Frank Slootman on theCUBE, was this ecosystem evolution by design or did Snowflake just kind of stumble into it? Here's what he said. >> Well, when you are a data cloud, you have data, people want to do things with that data. They don't want to just run data operations, populate dashboards, run reports. Pretty soon they want to build applications and after they build applications, they want to build businesses on it. So it goes on and on and on. So it drives your development to enable more and more functionality on that data cloud. Didn't start out that way, you know, we were very, very much focused on data operations. Then it becomes application development and then it becomes, hey, we're developing whole businesses on this platform. So similar to what happened to Facebook in many ways. >> So it sounds like it was maybe a little bit of both. 
The Facebook analogy is interesting because Facebook is a walled garden, as is Snowflake, but when you come into that garden, you have assurances that things are going to work in a very specific way because a set of standards and protocols is being enforced by a steward, i.e. Snowflake. This means things run better inside of Snowflake than if you try to do all the integration yourself. Now, maybe over time, an open source version of that will come out but if you wait for that, you're going to be left behind. That said, Snowflake has made moves to make its platform more accommodating to open source tooling in many of its announcements this week. Now, I'm not going to do a deep dive on the announcements. Matt Sulkins from Monte Carlo wrote a decent summary of the keynotes and a number of analysts like Sanjeev Mohan, Tony Bear and others are posting some deeper analysis on these innovations, and so we'll point to those. I'll say a few things though. Unistore extends the type of data that can live in the Snowflake data cloud. It's enabled by a new feature called hybrid tables, a new table type in Snowflake. One of the big knocks against Snowflake was it couldn't handle transaction data. Several database companies are creating this notion of a hybrid where both analytic and transactional workloads can live in the same data store. Oracle's doing this for example, with MySQL HeatWave and there are many others. We saw Mongo earlier this month add an analytics capability to its transaction system. Mongo also added SQL, which was kind of interesting. Here's what Constellation Research analyst Doug Henschen said about Snowflake's moves into transaction data. Play the clip. >> Well with Unistore, they're reaching out and trying to bring transactional data in. 
Hey, don't limit this to analytical information and there's other ways to do that like CDC and streaming but they're very closely tying that again to that marketplace, with the idea of bring your data over here and you can monetize it. Don't just leave it in that transactional database. So another reach to a broader play across a big community that they're building. >> And you're also seeing Snowflake expand its workload types in its unique way and through Snowpark and its Streamlit acquisition, enabling Python so that native apps can be built in the data cloud and benefit from all that structure and the features that Snowflake has built in. Hence that Facebook analogy, or maybe the App Store, the Apple App Store as I propose as well. Python support also widens the aperture for machine intelligence workloads. We asked Snowflake senior VP of product, Christian Kleinerman which announcements he thought were the most impactful. And despite the who's your favorite child nature of the question, he did answer. Here's what he said. >> I think the native applications is the one that looks like, eh, I don't know about it on the surface but it has the biggest potential to change everything. That can create an entire ecosystem of solutions for within a company or across companies that I don't know that we know what's possible. >> Snowflake also announced support for Apache Iceberg, which is a new open table format standard that's emerging. So you're seeing Snowflake respond to these concerns about its lack of openness, and they're building optionality into their cloud. 
They also showed some cost optimization tools both from Snowflake itself and from the ecosystem, notably Capital One which launched a software business on top of Snowflake focused on optimizing cost and eventually the rollout of data management capabilities, and all kinds of features that Snowflake announced at the show around governance, cross cloud, what we call super cloud, a new security workload, and they reemphasized their ability to read non-native on-prem data into Snowflake through partnerships with Dell and Pure and a lot more. Let's hear from some of the analysts that came on theCUBE this week at Snowflake Summit to see what they said about the announcements and their takeaways from the event. This is Dave Menninger, Sanjeev Mohan, and Tony Bear, roll the clip. >> Our research shows that the majority of organizations, the majority of people do not have access to analytics. And so a couple of the things they've announced I think address those or help to address those issues very directly. So Snowpark and support for Python and other languages is a way for organizations to embed analytics into different business processes. And so I think that'll be really beneficial to try and get analytics into more people's hands. And I also think that the native applications as part of the marketplace is another way to get applications into people's hands rather than just analytical tools. Because most people in the organization are not analysts. They're doing some line of business function. They're HR managers, they're marketing people, they're sales people, they're finance people, right? They're not sitting there mucking around in the data, they're doing a job and they need analytics in that job. >> Primarily, I think it is to counteract this whole notion that once you move data into Snowflake, it's a proprietary format. 
So I think that's how it started but it's usually beneficial to the customers, to the users because now if you have a large amount of data in Parquet files you can leave it on S3, but then, using the Apache Iceberg table format in Snowflake, you get all the benefits of Snowflake's optimizer. So for example, you get the micro partitioning, you get the metadata. And in a single query, you can join, you can do select from a Snowflake table union and select from an Iceberg table and you can do stored procedures, user-defined functions. So I think what they've done is extremely interesting. Iceberg by itself still does not have multi-table transactional capabilities. So if I'm running a workload, I might be touching 10 different tables. So if I use Apache Iceberg in a raw format, they don't have it, but Snowflake does. So the way I see it is Snowflake is adding more and more capabilities right into the database. So for example, they've gone ahead and added security and privacy. So you can now create policies and do even cell level masking, dynamic masking, but most organizations have more than Snowflake. So what we are starting to see all around here is that there's a whole series of data catalog companies, a bunch of companies that are doing dynamic data masking, security and governance, data observability which is not a space Snowflake has gone into. So there's a whole ecosystem of companies that is mushrooming. Although, you know, so they're using the native capabilities of Snowflake but they are at a level higher. So if you have a data lake and a cloud data warehouse and you have other like relational databases, you can run these cross platform capabilities in that layer. So that way, you know, Snowflake's done a great job of enabling that ecosystem. >> I think it's like the last mile, essentially. 
In other words, it's like, okay, you have folks that are basically very comfortable with Tableau but you do have developers who don't want to have to shell out to a separate tool. And so this is where Snowflake is essentially working to address that constituency. To Sanjeev's point, and I think part of it, this kind of plays into it is what makes this different from the Hadoop era is the fact that all these capabilities, you know, a lot of vendors are taking it very seriously to put this native. Now, obviously Snowflake acquired Streamlit. So we can expect that the Streamlit capabilities are going to be native. >> I want to share a little bit about the higher level thinking at Snowflake, here's a chart from Frank Slootman's keynote. It's his version of the modern data stack, if you will. Now, Snowflake of course, was built on the public cloud. If there were no AWS, there would be no Snowflake. Now, they're all about bringing data and live data and expanding the types of data, including structured, we just heard about that, unstructured, geospatial, and the list is going to continue on and on. Eventually I think it's going to bleed into the edge if we can figure out what to do with that edge data. Executing on new workloads is a big deal. They started with data sharing and they recently added security and they've essentially created a PaaS layer. We call it a SuperPaaS layer, if you will, to attract application developers. Snowflake has a developer-focused event coming up in November and they've extended the marketplace with 1300 native apps listings. And at the top, that's the holy grail, monetization. We always talk about building data products and we saw a lot of that at this event, very, very impressive and unique. Now here's the thing. There's a lot of talk in the press, on Wall Street and in the broader community about consumption-based pricing and concerns over Snowflake's visibility and its forecast and how analytics may be discretionary. 
But if you're a company building apps in Snowflake and monetizing like Capital One intends to do, and you're now selling in the marketplace, that is not discretionary, unless of course your costs are greater than your revenue for that service, in which case it's going to fail anyway. But the point is we're entering a new era where data apps and data products are beginning to be built and Snowflake is attempting to make the data cloud the de facto place as to where you're going to build them. In our view they're well ahead in that journey. Okay, let's talk about some of the bigger themes that we heard at the event. Bringing apps to the data instead of moving the data to the apps, this was a constant refrain and one that certainly makes sense from a physics point of view. But having a single source of data that is discoverable, sharable and governed with increasingly robust ecosystem options, it doesn't have to be moved. Sometimes it may have to be moved if you're going across regions, but that's unique and a differentiator for Snowflake in our view. I mean, I'm yet to see a data ecosystem that is as rich and growing as fast as the Snowflake ecosystem. Monetization, we talked about that, industry clouds, financial services, healthcare, retail, and media, all front and center at the event. My understanding is that Frank Slootman was a major force behind this shift, this development and go to market focus on verticals. It's really an attempt, and he talked about this in his keynote to align with the customer mission ultimately align with their objectives which not surprisingly, are increasingly monetizing with data as a differentiating ingredient. We heard a ton about data mesh, there were numerous presentations about the topic. 
And I'll say this, if you map the seven pillars Snowflake talks about, Benoit Dageville talked about this in his keynote, but if you map those into Zhamak Dehghani's data mesh framework and the four principles, they align better than most of the data mesh washing that I've seen. The seven pillars, all data, all workloads, global architecture, self-managed, programmable, marketplace and governance. Those are the seven pillars that he talked about in his keynote. All data, well, maybe with hybrid tables that becomes more of a reality. Global architecture means the data is globally distributed. It's not necessarily physically in one place. Self-managed is key. Self-service infrastructure is one of Zhamak's four principles. And then inherent governance. Zhamak talks about computational, what I'll call automated governance, built in. And with all the talk about monetization, that aligns with the second principle which is data as product. So while it's not a pure hit and to its credit, by the way, Snowflake doesn't use data mesh in its messaging anymore. But by the way, its customers do, several customers talked about it. Geico, JPMC, and a number of other customers and partners are using the term and using it pretty closely to the concepts put forth by Zhamak Dehghani. But back to the point, they essentially, Snowflake that is, is building a proprietary system that substantially addresses some, if not many of the goals of data mesh. Okay, back to the list, supercloud, that's our term. We saw lots of examples of clouds on top of clouds that are architected to span multiple clouds, not just run on individual clouds as separate services. And this includes Snowflake's data cloud itself but a number of ecosystem partners that are headed in a very similar direction. Snowflake still talks about data sharing but now it uses the term collaboration in its high level messaging, which is I think smart. Data sharing is kind of a geeky term. 
And also this is an attempt by Snowflake to differentiate from everyone else that's saying, hey, we do data sharing too. And finally Snowflake doesn't say data marketplace anymore. It's now marketplace, accounting for its application market. Okay, let's take a quick look at the competitive landscape via this ETR X-Y graph. The vertical axis measures net score or spending momentum and the x-axis is penetration, pervasiveness in the data center. That's what ETR calls overlap. Snowflake continues to lead on the vertical axis. They guided conservatively last quarter, remember, so I wouldn't be surprised if that lofty height, even though it's well down from its earlier levels, ticks down again a bit in the July survey, which will be in the field shortly. Databricks is a key competitor obviously with strong spending momentum, as you can see. We didn't draw it here but we usually draw that 40% line or red line at 40%, anything above that is considered elevated. So you can see Databricks is quite elevated. But it doesn't have the market presence of Snowflake. It didn't get to IPO during the bubble and it doesn't have nearly as deep and capable go-to market machinery. Now, they're getting better and they're getting some attention in the market, nonetheless. But as a private company, you just naturally, more people are aware of Snowflake. Some analysts, Tony Bear in particular, believe Mongo and Snowflake are on a bit of a collision course long term. I actually can see his point. You know, I mean, they're both platforms, they're both about data. It's a long ways off, but you can see them sort of on a similar path. They talk about kind of similar aspirations and visions even though they're quite in different markets today but they're definitely participating in a similar TAM. The cloud players are probably the biggest or definitely the biggest partners and probably the biggest competitors to Snowflake. And then there's always Oracle. 
Doesn't have the spending velocity of the others but it's got strong market presence. It owns a cloud and it knows a thing about data and it definitely is a go-to market machine. Okay, we're going to end on some of the things that we heard in the ecosystem. 'Cause look, we've heard before how particular technologies, enterprise data warehouses, data hubs, MDM, data lakes, Hadoop, et cetera, were going to solve all of our data problems and of course they didn't. And in fact, sometimes they create more problems that allow vendors to push more incremental technology to solve the problems that they created. Like tools and platforms to clean up the no schema on write nature of data lakes or data swamps. But here are some of the things that I heard firsthand from some customers and partners. First thing is, they said to me that they're having a hard time keeping up sometimes with the pace of Snowflake. It reminds me of AWS in the 2014, 2015 timeframe. You remember that fire hose of announcements which causes increased complexity for customers and partners. I talked to several customers that said, well, yeah this is all well and good but I still need skilled people to understand all these tools that are integrated in the ecosystem, the catalogs, the machine learning observability. A number of customers said, I just can't use one governance tool, I need multiple governance tools and a lot of other technologies as well, and they're concerned that that's going to drive up their cost and their complexity. I heard other concerns from the ecosystem that it used to be sort of clear as to where they could add value you know, when Snowflake was just a better data warehouse. But to point number one, they're either concerned that they'll be left behind or they're concerned that they'll be subsumed. Look, I mean, just like we tell AWS customers and partners, you got to move fast, you got to keep innovating. If you don't, you're going to be left. 
Either, if you're a customer, you're going to be left behind by your competitor, or if you're a partner, somebody else is going to get there or AWS is going to solve the problem for you. Okay, and there were a number of skeptical practitioners, really thoughtful and experienced data pros that suggested that they've seen this movie before. Hence the same wine, new bottle. Well, this time around I certainly hope not given all the energy and investment that is going into this ecosystem. And the fact is Snowflake is unquestionably making it easier to put data to work. They built on AWS so you didn't have to worry about provisioning, compute and storage and networking and scaling. Snowflake is optimizing its platform to take advantage of things like Graviton so you don't have to, and they're doing some of their own optimization tools. The ecosystem is building optimization tools so that's all good. And our firm belief is the less expensive it is, the more data will get brought into the data cloud. And they're building a data platform on which their ecosystem can build and run data applications, aka data products without having to worry about all the hard work that needs to get done to make data discoverable, shareable, and governed. And unlike the last 10 years, you don't have to be a zookeeper and integrate all the animals in the Hadoop zoo. Okay, that's it for today, thanks for watching. Thanks to my colleague, Stephanie Chan who helps research "Breaking Analysis" topics. Sometimes Alex Myerson is on production and manages the podcasts. Kristin Martin and Cheryl Knight help get the word out on social and in our newsletters, and Rob Hof is our editor in chief over at SiliconANGLE, and Hailey does some wonderful editing, thanks to all. Remember, all these episodes are available as podcasts wherever you listen. All you got to do is search Breaking Analysis Podcasts. 
I publish each week on wikibon.com and siliconangle.com and you can email me at David.Vellante@siliconangle.com or DM me @DVellante. If you got something interesting, I'll respond. If you don't, I'm sorry I won't. Or comment on my LinkedIn post. Please check out etr.ai for the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching, and we'll see you next time. (upbeat music)

Published Date : Jun 18 2022



Jon Loyens, data.world | Snowflake Summit 2022


 

>>Good morning, everyone. Welcome back to theCUBE's coverage of Snowflake Summit 22, live from Caesars Forum in Las Vegas. Lisa Martin, here with Dave Vellante. This is day three of our coverage. We've had an amazing, amazing time. Great conversations talking with Snowflake executives, partners, customers. We're gonna be digging into data mesh with data.world. Please welcome Jon Loyens, the chief product officer. Great to have you on the program, Jon. >>Thank you so much for, for having me here. I mean, the summit, like you said, has been incredible, so many great people, such a good time, really, really nice to be back in person with folks. >>It is fabulous to be back in person. The fact that we're on day four for, for them, and this, the solution showcase, is as packed as it is at 10:11 in the morning, yeah, is saying something. >>Yeah. Usually... >>Chomping at the bit to hear what they're doing and innovating. >>Absolutely. Usually those last days of conferences, everybody starts getting a little tired, but we're not seeing that at all here, especially >>In Vegas. This is impressive. Talk to the audience a little bit about data.world, what you guys do, and talk about the Snowflake relationship. >>Absolutely. data.world is the only true cloud-native enterprise data catalog. We've been an incredible Snowflake partner, and Snowflake's been an incredible partner to us, really since 2018, when we became the first data catalog in the Snowflake Partner Connect experience. You know, Snowflake and the data cloud make it so possible.
And it's changed so much in terms of being able to, you know, very easily transition data into the cloud to break down those silos and to have a platform that enables folks to be incredibly agile with data from an engineering and infrastructure standpoint. data.world is able to provide a layer of discovery and governance that matches that agility, and the ability for a lot of different stakeholders to really participate in the process of data management and data governance. >> So data mesh. Basically, Zhamak Dehghani lays out, first of all, the, the fault domains of existing data and big data initiatives. And she boils it down to the fact that it's just this monolithic architecture with hyper-specialized teams that you have to go through, and it just slows everything down and it doesn't scale. They don't have domain context. So she came up with four principles, if I may. Yep. Domain ownership. So push it out to the businesses. They have the context; they should own the data. The second is data as product. We're certainly hearing a lot about that today this week. The third is, so that all sounds good, push out the, the data, great, but it creates two problems: self-serve infrastructure. Okay. But her premise is infrastructure should be an operational detail. And then the fourth is computational governance. So you talked about data catalogs, where do you fit in those four principles? >> You know, honestly, we are able to help teams realize the data mesh architecture. And we know that data mesh is really, it's, it's both a process and a culture change. But then when you want to enact a process and a culture change like this, you also need to select the appropriate tools to match the culture that you're trying to build, the process and the architecture that you're trying to build. And the data.world data catalog can really help along all four of those axes. When you start thinking first about, let's say like, let's take the first one, you know, data as a product, right?
It's even, like, very meta of us; we're a metadata management platform at the end of the day. But, very meta of us, when you talk about data as a product, we track adoption and usage of all your data assets within your organization and provide program teams and, you know, offices of the CDO with incredible evented analytics, very detailed, that gives them the right audit trail, that enables them to direct very scarce data engineering, data architecture resources to make sure that their data assets are getting adopted and used properly. >> On the, on the domain-driven side, we are entirely knowledge graph and open standards based, enabling those different domains. We have, you know, incredible joint Snowflake customers like Prologis. And we chatted a lot about this in our session here yesterday, where, because of our knowledge graph underpinnings, because of the flexibility of our metadata model, it enables those domains to actually model their assets uniquely from, from group to group, without having to, to relaunch or run different environments. Like, you can do that all within one data catalog platform, without having to have separate environments for each of those domains. Federated governance, again, the amount of, like, data exhaust that we create really enables ambient governance and participatory governance as well. We call it agile data governance, really the adoption of agile and open principles applied to governance to make it more inclusive and transparent. And we provide that in a way that can federate across those domains and make it consistent. >> Okay. So you facilitate across that whole spectrum of, of principles. And so, in the, in the early examples of data mesh that I've studied and actually collaborated with, like with JPMC, who I don't think is, who's not using your data catalog, but HelloFresh, who may or may not be, but I mean, there, there are numbers and I wanna get to that.
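Jon's data-as-product point, tracking adoption and usage per asset and keeping an audit trail so scarce engineering effort goes where it matters, can be sketched in a few lines. This is an illustrative sketch only, not data.world's actual implementation; the event fields and metric names are assumptions.

```python
from collections import Counter, defaultdict
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class UsageEvent:
    asset: str      # data asset identifier, e.g. "sales.orders"
    user: str
    action: str     # "query", "preview", "download", ...
    at: datetime

@dataclass
class AdoptionTracker:
    events: List[UsageEvent] = field(default_factory=list)

    def record(self, asset, user, action, at=None):
        self.events.append(UsageEvent(asset, user, action, at or datetime.utcnow()))

    def adoption_report(self):
        """Per-asset rollup: total events and distinct users, the kind of
        signal a program team could use to steer scarce engineering effort."""
        users = defaultdict(set)
        hits = Counter()
        for e in self.events:
            hits[e.asset] += 1
            users[e.asset].add(e.user)
        return {a: {"events": hits[a], "distinct_users": len(users[a])} for a in hits}

    def audit_trail(self, asset):
        """Chronological trail of who did what to one asset."""
        return sorted((e.at, e.user, e.action) for e in self.events if e.asset == asset)
```

Recording a handful of events and calling `adoption_report()` yields exactly the kind of per-asset rollup described above; `audit_trail()` gives the who-did-what history for one asset.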
But what they've done is they've enabled the domains to spin up their own, whatever, data lakes, data warehouses, data hubs. At least in, in concept; most of 'em are data lakes on AWS. But still, in concept, they wanna be inclusive, and they've created a master data catalog, and then each domain has its sub-catalogue, which feeds into the master, and that's how they get consistency and governance and everything else. Is, is that the right way to think about it? Or do you have a different spin on that? >> Yeah, I, I, you know, I have a slightly different spin on it. I think organizationally it's the right way to think about it. And in absence of a catalog that can truly have multiple federated metadata models, multiple graphs in one platform, that is really kind of the, the, the only way to do it. With data.world, you don't have to do that. You can have one platform, one environment, one instance of data.world that spans all of your domains, enable them to operate independently, and then federate across. >> So you just answered my question as to why I should use data.world versus AWS Glue. >> Oh, absolutely. >> And that's, that's awesome. So how have you done that? What, what's your secret sauce? >> The, the secret sauce here is really all credit to our CTO, one of my closest friends, who is a true student of knowledge graph practices and principles, and really felt that the right way to manage metadata and knowledge about the data analytics ecosystem that companies were building was through federated linked data, right? So we use standards, and we've built a, an open and extensible metadata model that we call costs, that really takes the best parts of existing open standards in the semantics space, things like schema.org, DCAT, Dublin Core, brings them together, and models out the most typical enterprise data assets, providing you with an ontology that's ready to go.
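The federated linked data idea here, each domain describing its own assets as triples against shared vocabulary terms, can be sketched minimally. This is a hedged illustration: the term strings echo DCAT and Dublin Core style, but the structure is deliberately simplified and is not data.world's actual metadata model.

```python
# Linked-data sketch: each domain keeps its own graph of
# (subject, predicate, object) triples, using shared vocabulary terms
# in the spirit of DCAT / Dublin Core. A query can then federate a
# pattern match across all the domain graphs without merging storage.

def describe(graph, subject, props):
    """Record one resource's properties as triples in a domain graph."""
    for predicate, obj in props.items():
        graph.append((subject, predicate, obj))

def federated_query(graphs, predicate=None, obj=None):
    """Match a (?, predicate, obj) pattern across many independent graphs."""
    return [
        triple
        for graph in graphs
        for triple in graph
        if (predicate is None or triple[1] == predicate)
        and (obj is None or triple[2] == obj)
    ]

# Two domains model their own assets, each in its own graph.
sales, hr = [], []
describe(sales, "ex:orders", {"dct:title": "Orders", "dcat:theme": "finance"})
describe(hr, "ex:people", {"dct:title": "People", "dcat:theme": "hr"})
```

`federated_query([sales, hr], predicate="dct:title")` then returns both datasets' titles in one pass, without either domain giving up ownership of its own graph, which is the "federate across" move described above.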
But because of the graph nature of what we do, it's instantly accessible, without having to rebuild environments, without having to do a lot of management against it. It's, it's really quite something, and it's something all of our customers are, are very impressed with and, and, you know, are getting a lot of leverage out of. >> And, and we have a lot of time today, so we're not gonna shortchange this topic. So one last question, then I'll shut up and let you jump in. This is an open standard. It's not open source. >> No, it's open, built on open standards. We also fundamentally believe in extensibility and openness. We do not want to vertically, like, lock you into our platform. So everything that we have is API-driven, API-available. Your metadata belongs to you. If you need to export your graph, you know, it's instantly available in open, machine-readable formats. That's really, we come from the open data community. That was a lot of the founding of data.world. We, we worked a lot with the open data community, and we, we fundamentally believe in that. And that's enabled a lot of our customers as well to truly take data.world and not have it be a data catalog application, but really an entire metadata management platform, and extend it even further into their enterprise to, to really catalog all of their assets, but also to build incredible integrations to things like corporate search. You know, having data assets show up in corporate wiki search, along with all the, the descriptive metadata that people need, has been incredibly powerful, and an incredible extension of our platform that I'm so happy to see our customers lean into. >> So it's not exclusive to, to Snowflake. It's not exclusive to AWS. You can bring it anywhere. Azure, GCP? >> Anytime. Yeah. You know where we are; we love Snowflake. Look, we're at the Snowflake summit.
And we've always had a great relationship with Snowflake, though, and really leaned in there, because we really believe Snowflake's principles, particularly around cloud and being cloud-native, and the operating advantages that it affords companies, that that's really aligned with what we do. And so Snowflake was really the first of the cloud data catalogs that we ultimately, or say the cloud data warehouses, that we integrated with, and to see them transition to building out really the data cloud has been awesome. >> Talk about how data.world and Snowflake enable companies like Prologis to be data companies. These days, every company has to be a data company, but they, they have to be able to do so quickly to be competitive and to, to really win. How do you help them, if we, like, up-level the conversation, to really impact the overall business? >> That's a great question, especially right now. Everybody knows, and Prologis is a great example, they're a logistics and supply chain company at the end of the day. And we know how important logistics and supply chain is nowadays, for them and for a lot of our customers. I think one of the advantages of having a data catalog is the ability to build trust, transparency and inclusivity into their data analytics practice. By adopting agile principles, by adopting a data mesh, you're able to extend your data analytics practice to a much broader set of stakeholders and to involve them in the process while the work is getting done. One of the greatest things about agile software development, when it became a thing in the early two thousands, was how inclusive it was. And that inclusivity led to a much faster ROI on software projects. And we see the same thing happening in data analytics. People, you know, we have amazing data scientists and data analysts coming up with these insights that could be business-changing, that could make their company significantly more resilient, especially in the face of economic uncertainty.
>>But if you have to sit there and argue with your business stakeholders about the validity of the data, about the, the techniques that were used to do the analysis, and it takes you three months to get people to trust what you've done, that opportunity's passed. So how do we shorten those cycles? How do we bring them closer? And that's, that's really a huge benefit that Prologis has, has realized: just tightening that cycle time, building trust, building inclusion, and making sure ultimately humans learn by doing. And if you can be inclusive, it even increases things like, and we all want to, to help, 'cause Lord knows the world needs it, things like data literacy. Yeah. Right. >> So data.world can inform me as to where on the spectrum of data quality my data set lives. So I can say, okay, this is usable, shareable, you know, gold standard, versus fix this. Right. Okay. Yep. >> Yep. >> That's, yeah. Okay. And you could do that with one data catalog, not a bunch of them. >> Yeah. And trust, trust is really a multifaceted and multi-angle idea, right? It's not just necessarily data quality or data observability. And we have incredible partnerships in that space, like our partnership with, with Monte Carlo, where we can ingest all their, like, amazing observability information and display that in a really, a really consumable way in our data catalog. But it also includes things like the lineage, who touched it, who is involved in the process, can I get a, a question answered quickly about this data? What's it been used for previously? And do I understand it? It's so multifaceted that you have to be able to really model and present that in a way that's unique to any given organization, even unique within domains within a single organization. >> That's not to suggest you're a data quality supplier. >> No, no. >> But you partner with them, and then you become the, the master catalog. >> That's brilliant.
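Jon's point that trust is multifaceted, spanning quality, observability, lineage, and usage history, can be made concrete with a toy rollup. The facets, weights, and labels below are invented for illustration; they are not data.world's scoring model, just a sketch of the "gold standard versus fix this" spectrum mentioned above.

```python
# Toy multifaceted trust rollup: each facet of an asset is scored 0..1
# (e.g. from quality checks, observability incidents, lineage coverage,
# recent usage), and the catalog summarizes them into one spectrum label.
def trust_summary(facets, weights=None):
    """Combine per-facet scores into a weighted score, label, and weakest facet."""
    weights = weights or {name: 1.0 for name in facets}
    total = sum(weights[name] for name in facets)
    score = sum(facets[name] * weights[name] for name in facets) / total
    if score >= 0.9:
        label = "gold standard"
    elif score >= 0.6:
        label = "usable"
    else:
        label = "fix this"
    return {
        "score": round(score, 2),
        "label": label,
        "weakest": min(facets, key=facets.get),  # where to direct scarce effort
    }
```

For example, `trust_summary({"quality": 0.5, "observability": 0.9, "lineage": 0.4, "usage": 0.6})` lands in the "usable" band and points at lineage as the weakest facet, which is the kind of directional signal, not a verdict, a catalog can surface.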
I love it. Exactly. >> And you, you just raised your Series C, 15 million. >> We did, yeah. So, you know, we're really lucky to have incredible investors like Goldman Sachs, who, who led our Series C. It really, I think, communicates the trust that they have in our vision and what we're doing, and the impact that we can have on an organization's ability to be agile and resilient around data analytics. >> Enabling customers to have that single source of truth is so critical. You talked about trust. That is absolutely, it's no joke. >> Absolutely. >> That is critical. And there's a tremendous amount of business impact, positive business impact, that can come from that. What are some of the things that are next for data.world that we're gonna see? >> Oh, you know, I love this. We have such an incredibly innovative team that's so dedicated to this space and the mission of what we're doing. We're out there trying to fundamentally change how people get data analytics work done together. One of the big reasons I founded the company is I, I really, truly believe that data analytics needs to be a team sport. It needs to go from, you know, single-player mode to team mode, and everything that we've worked on in the last six years has leaned into that. Our architecture being cloud-native, we do, we've done over a thousand releases a year that nobody has to manage. You don't have to worry about upgrading your environment. It's a lot of the same story that's made Snowflake so great. We are really excited to have announced in March, at our own summit, and we're rolling this suite of features out over the course of the year, a new package of features that we call data.world Eureka, which is a suite of automations and, you know, knowledge-driven functionality that really helps you leverage a knowledge graph to make decisions faster and to operationalize your data in, in a DataOps way with significantly less effort. >> Big, big impact there.
Jon, thank you so much for joining Dave and me, unpacking what data.world is doing, the data mesh, the opportunities that you're giving to customers in every industry. We appreciate your time, and congratulations on the news and the funding. >> Ah, thank you. It's been a, a true pleasure. Thank you for having me on, and I hope, I hope you guys enjoy the rest of, of the day and, and your other guests that you have. Thank you. >> We will. All right. For our guest and Dave Vellante, I'm Lisa Martin. You're watching theCUBE's third day of coverage of Snowflake Summit 22, live from Vegas. Dave and I will be right back with our next guest. So stick around.

Published Date : Jun 16 2022


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
David | PERSON | 0.99+
Lisa Martin | PERSON | 0.99+
Dave Valante | PERSON | 0.99+
Dave | PERSON | 0.99+
John | PERSON | 0.99+
Jon Loyens | PERSON | 0.99+
Monte Carlo | ORGANIZATION | 0.99+
John loins | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
March | DATE | 0.99+
Las Vegas | LOCATION | 0.99+
Vegas | LOCATION | 0.99+
Goldman Sachs | ORGANIZATION | 0.99+
yesterday | DATE | 0.99+
three months | QUANTITY | 0.99+
AWS | ORGANIZATION | 0.99+
one platform | QUANTITY | 0.99+
one day | QUANTITY | 0.99+
third | QUANTITY | 0.99+
two problems | QUANTITY | 0.99+
fourth | QUANTITY | 0.99+
One | QUANTITY | 0.99+
2018 | DATE | 0.99+
15 million | QUANTITY | 0.98+
Dani | PERSON | 0.98+
second | QUANTITY | 0.98+
first | QUANTITY | 0.98+
third day | QUANTITY | 0.98+
first one | QUANTITY | 0.98+
Snowflake | ORGANIZATION | 0.98+
DCA | ORGANIZATION | 0.98+
one last question | QUANTITY | 0.98+
data.world. | ORGANIZATION | 0.97+
Prologis | ORGANIZATION | 0.97+
JPMC | ORGANIZATION | 0.97+
each domain | QUANTITY | 0.97+
today this week | DATE | 0.97+
Jamma | PERSON | 0.97+
both | QUANTITY | 0.97+
first data catalog | QUANTITY | 0.95+
Snowflake Summit 2022 | EVENT | 0.95+
each | QUANTITY | 0.94+
today | DATE | 0.94+
single | QUANTITY | 0.94+
data.world | ORGANIZATION | 0.93+
day three | QUANTITY | 0.93+
one | QUANTITY | 0.93+
one instance | QUANTITY | 0.92+
over a thousand releases a year | QUANTITY | 0.92+
day four | QUANTITY | 0.91+
Snowflake | TITLE | 0.91+
four | QUANTITY | 0.91+
10 11 in the morning | DATE | 0.9+
22 | QUANTITY | 0.9+
one environment | QUANTITY | 0.9+
single organization | QUANTITY | 0.88+
four principles | QUANTITY | 0.86+
agile | TITLE | 0.85+
last six years | DATE | 0.84+
one data catalog | QUANTITY | 0.84+
Eureka | ORGANIZATION | 0.83+
Azure GCP | TITLE | 0.82+
Caesar | PERSON | 0.82+
series C | OTHER | 0.8+
Cube | ORGANIZATION | 0.8+
data.world | OTHER | 0.78+
Lord | PERSON | 0.75+
thousands | QUANTITY | 0.74+
single source | QUANTITY | 0.74+
Dublin | ORGANIZATION | 0.73+
snowflake summit 22 | EVENT | 0.7+
Wiki | TITLE | 0.68+
schema.org | ORGANIZATION | 0.67+
early two | DATE | 0.63+
CDO | TITLE | 0.48+

Breaking Analysis: Technology & Architectural Considerations for Data Mesh


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is Breaking Analysis with Dave Vellante. >> The introduction and socialization of data mesh has caused practitioners, business technology executives, and technologists to pause and ask some probing questions about the organization of their data teams, their data strategies, future investments, and their current architectural approaches. Some in the technology community have embraced the concept, others have twisted the definition, while still others remain oblivious to the momentum building around data mesh. Here we are in the early days of data mesh adoption. Organizations that have taken the plunge will tell you that aligning stakeholders is a non-trivial effort, but necessary to break through the limitations that monolithic data architectures and highly specialized teams have imposed over frustrated business and domain leaders. However, practical data mesh examples often lie in the eyes of the implementer, and may not strictly adhere to the principles of data mesh. Now, part of the problem is a lack of open technologies and standards that can accelerate adoption and reduce friction, and that's what we're going to talk about today: some of the key technology and architecture questions around data mesh. Hello, and welcome to this week's Wikibon CUBE Insights powered by ETR, and in this Breaking Analysis, we welcome back the founder of data mesh and director of Emerging Technologies at Thoughtworks, Zhamak Dehghani. Hello, Zhamak. Thanks for being here today. >> Hi Dave, thank you for having me back. It's always a delight to connect and have a conversation. Thank you. >> Great, looking forward to it. Okay, so before we get into the technology details, I just want to quickly share some data from our friends at ETR.
You know, despite the importance of data initiatives since the pandemic, CIOs and IT organizations have had to juggle, of course, a few other priorities. This is why, in the survey data, cyber and cloud computing are rated as the two most important priorities. Analytics and machine learning, and AI, which are kind of data topics, still make the top of the list, well ahead of many other categories. And look, a sound data architecture and strategy is fundamental to digital transformations, and much of the past two years, as we've often said, has been like a forced march into digital. So while organizations are moving forward, they really have to think hard about the data architecture decisions that they make, because it's going to impact them, Zhamak, for years to come, isn't it? >> Yes, absolutely. I mean, we are moving, really, slowly moving, from reason-based, logical, algorithmic computation and decision-making to model-based computation and decision-making, where we exploit the patterns and signals within the data. So data becomes a very important ingredient, not only of decision-making, analytics, and discovering trends, but also of the features and applications that we build for the future. So we can't really ignore it. And as we see, the existing challenge around getting value from data is no longer access to computation; it's actually access to trustworthy, reliable data at scale. >> Yeah, and you see these domains coming together with the cloud, and obviously it has to be secure and trusted, and that's why we're here today talking about data mesh. So let's get into it. Zhamak, first, your new book is out, 'Data Mesh: Delivering Data-Driven Value at Scale,' just recently published, so congratulations on getting that done, awesome. Now, in a recent presentation, you pulled excerpts from the book, and we're going to talk through some of the technology and architectural considerations. Just quickly for the audience, the four principles of data mesh:
Domain-driven ownership, data as product, self-serve data platform, and federated computational governance. So I want to start with the self-serve platform and some of the data that you shared recently. You say that, "Data mesh serves autonomous domain-oriented teams, versus existing platforms, which serve a centralized team." Can you elaborate? >> Sure. I mean, the role of the platform is to lower the cognitive load for domain teams, for people who are focusing on the business outcomes, the technologists that are building the applications, to really lower the cognitive load for them to be able to work with data, whether they are building analytics, automated decision-making, intelligent modeling. They need to be able to get access to data and use it. So the role of the platform, I guess, just stepping back for a moment, is to empower and enable these teams. Data mesh, by definition, is a scale-out model. It's a decentralized model that wants to give autonomy to cross-functional teams. So it at its core requires a set of tools that work really well in that decentralized model. When we look at the existing platforms, they try to achieve a similar outcome, right? Lower the cognitive load, give the tools to data practitioners to manage data at scale. Because today, centralized teams, really their job, the centralized data teams, their job isn't really directly aligned with one or two or, you know, different business units and business outcomes in terms of getting value from data. Their job is to manage the data and make the data available for those cross-functional teams or business units to use the data. So the platforms they've been given are really centralized around, or tuned to work with, this team structure, the structure of a centralized team. Although on the surface it seems, why not? Why can't I use my, you know, cloud storage or computation or data warehouse in a decentralized way? You should be able to, but some changes need to happen to those underlying platforms.
As an example, some cloud providers simply have hard limits on the number of, like, storage accounts that you can have, because they never envisaged you'd have hundreds of lakes. They envisaged one or two, maybe 10 lakes, right? They envisaged really centralizing data, not decentralizing data. So I think we see a shift in thinking about enabling autonomous, independent teams versus a centralized team. >> So just a follow-up, if I may. We could be here for a while. But so this assumes that you've sorted out the organizational considerations? That you've defined what a data product is and a sub-product. And people will say, of course, we use the term monolithic as a pejorative, let's face it. But the data warehouse crowd will say, "Well, that's what data marts did. So we got that covered." But your... the premise of data mesh, if I understand it, is whether it's a data mart or a data warehouse, or a data lake or whatever, a Snowflake warehouse, it's a node on the mesh. Okay. So don't build your organization around the technology; let the technology serve the organization. Is that-- >> That's a perfect way of putting it, exactly. I mean, for a very long time, when we look at decomposition of complexity, we've looked at decomposition of complexity around technology, right? So we have technology, and that's maybe a good segue to actually the next item on that list that we looked at. Oh, I need to decompose based on whether I want to have access to raw data and put it on the lake, whether I want to have access to modeled data and put it on the warehouse. You know, I need to have a team in the middle to move the data around. And then try to fit the organization into that model. So data mesh really inverses that, and as you said, it's: look at the organizational structure first, then scale boundaries around which your organization and operation can scale, and then, at the second layer, look at the technology and how you decompose it. >> Okay.
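Zhamak's scale-out point, a platform sized for one central team's "one or two lakes" versus hundreds of autonomous domains, suggests a self-serve provisioning surface. The sketch below is hypothetical: the class, the method names, and the `s3://mesh-...` naming convention are invented for illustration and do not correspond to any vendor's API.

```python
# Hypothetical self-serve platform surface: each domain team provisions
# its own isolated workspace with one call, instead of queueing behind a
# central data team. The workspace cap models the hard platform limits
# mentioned above (accounts envisaged for one or two lakes, not hundreds).
class PlatformLimitError(Exception):
    pass

class SelfServePlatform:
    def __init__(self, max_workspaces):
        self.max_workspaces = max_workspaces
        self.workspaces = {}

    def provision(self, domain):
        """Create (or return) an isolated storage namespace for one domain team."""
        if domain in self.workspaces:
            return self.workspaces[domain]
        if len(self.workspaces) >= self.max_workspaces:
            raise PlatformLimitError("hard limit on storage accounts reached")
        self.workspaces[domain] = {"storage": f"s3://mesh-{domain}", "products": []}
        return self.workspaces[domain]

centralized = SelfServePlatform(max_workspaces=2)    # tuned for a central team
mesh_ready = SelfServePlatform(max_workspaces=500)   # tuned for scale-out
```

Provisioning a third domain on the `centralized` instance raises `PlatformLimitError`, while the `mesh_ready` instance keeps going, which is the gap between platforms tuned for centralization and ones tuned for decentralization.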
So let's go to that next point and talk about how you serve and manage autonomous, interoperable data products, where code, data, and policy, you say, are treated as one unit, whereas your contention is existing platforms, of course, have independent management and dashboards for catalogs or storage, et cetera. Maybe we double-click on that a bit. >> Yeah. So if you think about that functional or technical decomposition, right, of concerns, that's one way, that's a very valid way, of decomposing complexity and concerns, and then building independent solutions to address them. That's what we see in the technology landscape today. We see technologies that take care of your management of data, bringing your data under some sort of control and modeling. You see technology that moves that data around, performs various transformations and computations on it. And then you see technology that tries to overlay some level of meaning: metadata, understandability, discovery, and policy, right? So that's where your data processing, kind of, pipeline technologies, versus data warehouse, storage, lake technologies, and then the governance, come to play. And over time, we decompose and recompose, right? Deconstruct and reconstruct this back together. But right now, that's where we stand. I think for data mesh really to become a reality, as in, independent sources of data, and teams that can responsibly share data in a way that can be understood right then and there, that can impose policies right then, when the data gets accessed, in that source, and in a resilient manner, as in, in a way that changes to the structure of the data, or changes to the schema of the data, don't cause those downstream downtimes, we've got to think about this new nucleus, or new unit, of data sharing. And we need to really bring transformation and governing of data, and the data itself, back together around these decentralized nodes on the mesh.
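The "one unit" idea above, data, its transformation code, and its policy traveling and being enforced together at the point of access, can be sketched like this. The policy model (mask certain columns per consumer role) is a deliberate simplification for illustration, not a reference design.

```python
from dataclasses import dataclass, field

# Sketch of a data product as one unit: the rows and an access policy
# are bundled together, and the policy is applied at read time, right
# where the data gets accessed, rather than in a separate governance
# dashboard that manages the storage independently.
@dataclass
class DataProduct:
    name: str
    rows: list                                   # list of dict records
    masked: dict = field(default_factory=dict)   # consumer role -> columns to hide

    def read(self, role):
        """Serve rows with the product's own policy enforced for this role."""
        hidden = set(self.masked.get(role, ()))
        return [
            {col: ("***" if col in hidden else val) for col, val in row.items()}
            for row in self.rows
        ]

customers = DataProduct(
    name="customers",
    rows=[{"id": 1, "email": "ana@example.com"}],
    masked={"analyst": ["email"]},
)
```

`customers.read("analyst")` masks the email column, while a role without a masking rule sees the raw value: the policy is imposed right when the data is accessed, at the source.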
So that's another, I guess, deconstruction and reconstruction that needs to happen around the technology, to formulate ourselves around the domains, and again, the data and the logic of the data itself, the meaning of the data itself. >> Great. Got it. And we're going to talk more about the importance of data sharing and the implications. But the third point deals with how operational and analytical technologies are constructed. You've got an app dev stack, you've got a data stack. You've made the point many times, actually, that we've contextualized our operational systems, but not our data systems; they remain separate. Maybe you could elaborate on this point. >> Yes. I think this, again, has a historical background and beginning. For a really long time, applications have dealt with features and the logic of running the business, and encapsulating the data and the state that they need to run that feature or run that business function. And then, for anything analytically driven, which required access to data across these applications, and across the longer dimension of time, around different subjects within the organization, this analytical data, we had made a decision that, "Okay, let's leave those applications aside. Let's leave those databases aside. We'll extract the data out, and we'll load it, or we'll transform it, and put it under the analytical kind of data stack." And then, downstream from it, we will have the analytical data users, the data analysts, the data scientists, and, you know, the portfolio of users that are growing, use that data stack. And that led to this real separation of a dual stack with point-to-point integration. So applications went down the path of transactional databases or, you know, document stores, using APIs for communicating, and then we've gone to, you know, lake storage or data warehouses on the other side. And that, again, enforces the silo of data versus app, right?
So if we are moving to the world where our ambitions are around making applications more intelligent, making them data driven, these two worlds need to come closer. As in, ML analytics gets embedded into those applications themselves, and data sharing, as a very essential ingredient of that, gets embedded and becomes closer to those applications. So if you are looking at this now cross-functional, app-data-based team, right, business team, then the technology stacks can't be so segregated, right? There has to be a continuum of experience from app delivery, to sharing of the data, to using that data, to embedding models back into those applications. And that continuum of experience requires well-integrated technologies. I'll give you an example, which actually, in some sense, we are somewhat moving in that direction. If we are talking about data sharing, or data modeling, applications use one set of APIs, you know, HTTP-compliant GraphQL or REST APIs. And on the other hand, you have proprietary SQL, like, connect to my database and run SQL. Those are two very different models of representing and accessing data. So we kind of have to harmonize or integrate those two worlds a bit more closely to achieve that domain-oriented, cross-functional team. >> Yeah. We are going to talk about some of the gaps later, and actually you look at them as opportunities more than barriers. They are barriers, but they're opportunities for more innovation. Let's go on to the fourth one. The next point deals with the roles that the platform serves. Data mesh proposes that domain experts own the data and take responsibility for it, end to end, and are served by the technology. Kind of, we referenced that before. Whereas your contention is that today, data systems are really designed for specialists. I think you use the term hyper-specialists a lot. I love that term.
And the generalists are kind of passive bystanders, waiting in line for the technical teams to serve them. >> Yes. I mean, if you think about, again, the intention behind data mesh: it was creating a responsible data sharing model that scales out. And I challenge any organization that has scaled ambitions around data, or usage of data, that relies on small pockets of very expensive specialist resources, right? So we have no choice but upskilling, cross-skilling the majority population of our technologists. We often call them generalists, right? That's a shorthand for people that can really move from one technology to another technology. Sometimes we call them paint-drip people, sometimes we call them T-shaped people. But regardless, we need the ability to really mobilize our generalists. And we had to do that at Thoughtworks. We serve a lot of our clients, and like many other organizations, we are also challenged with hiring specialists. So we have tested the model of having a few specialists really conveying and translating the knowledge to generalists, and bringing them forward. And of course, platform is a big enabler of that. Like, what is the language of using the technology? What are the APIs that delight that generalist experience? This doesn't mean no code, low code. We don't have to throw away good engineering practices. And I think good software engineering practices remain to exist. Of course, they get adapted to the world of data to build resilient, you know, sustainable solutions. But specialty, especially around kind of proprietary technology, is going to be a hard one to scale. >> Okay. I'm definitely going to come back and pick your brain on that one. And, you know, your point about scale out, in the examples, the practical examples of companies that have implemented data mesh that I've talked to.
I think in all cases, you know, there's only a handful that I've really gone deep with, but it was their Hadoop instances, their clusters, wouldn't scale; they couldn't scale the business around it. So that's really a key point of a common pattern that we've seen. Now, I think in all cases, they went to like the data lake model on AWS. And so that maybe has some violation of the principles, but we'll come back to that. But so let me go on to the next one. Of course, data mesh leans heavily toward this concept of decentralization, to support domain ownership, over the centralized approaches. And we certainly see the public cloud players, database companies, as key actors here, with very large install bases, pushing a centralized approach. So I guess my question is, how realistic is this next point, where you have decentralized technologies ruling the roost? >> I think if you look at the history of places in our industry where decentralization has succeeded, they heavily relied on standardization of connectivity, you know, across different components of technology. And I think right now, you are right: the way we get value from data relies on collection. At the end of the day, collection of data. Whether you have a deep learning machinery model that you're training, or you have, you know, reports to generate, regardless, the model is: bring your data to a place where you can collect it, so that we can use it. And that leads naturally to a set of technologies that try to operate as a full-stack, integrated, proprietary solution, with no intention of, you know, opening data for sharing. Now, conversely, if you think about the internet itself, the web itself, microservices, even at the enterprise level, not at the planetary level, they succeeded as decentralized technologies to a large degree because of their emphasis on openness and sharing, right? API sharing.
We don't talk about, in the API world... like, we don't say, you know, "I will build a platform to manage your monolithic applications." Maybe to a degree, but we actually moved away from that. We say, "I'll build a platform that opens up your applications, to manage your APIs, manage your interfaces." Right? Give you access to the API. So I think the shift needs to... That definition of decentralized there means really composable, open pieces of technology that can play nicely with each other, rather than a full stack that has all the control of your data yet is somewhat decentralized within the boundary of my platform. That's simply not going to scale if data needs to come from different platforms, different locations, different geographical locations. We need to rethink. >> Okay, thank you. And then the final point is, data mesh favors technologies that are domain agnostic versus those that are domain aware. And I wonder if you could help me square the circle, because it's nuanced, and I'm kind of a 100-level student of your work. But you have said, for example, that the data teams lack context of the domain. So help us understand what you mean here in this case. >> Sure. Absolutely. So as you said, data mesh tries to give autonomy and decision-making power and responsibility to people that have the context of those domains, right? The people that are really familiar with different business domains, and naturally the data that that domain needs, or naturally the data that the domain shares. So if the intention of the platform is really to give the power to people with the most relevant and timely context, the platform itself naturally becomes, as a shared component, domain agnostic to a large degree. Of course, those domains can still... The platform is a (chuckles) fairly overloaded word.
As in, if you think about it as a set of technology that abstracts complexity and allows building the next-level solutions on top, those domains may have their own set of platforms that are very much domain specific. But as a generalized, shareable set of technologies or tools that allows us to share data, that piece of technology needs to relinquish the knowledge of the context to the domain teams, and actually becomes domain agnostic. >> Got it. Okay. Makes sense. All right. Let's shift gears here. Talk about some of the gaps and some of the standards that are needed. You and I have talked about this a little bit before, but this digs deeper. What types of standards are needed? Maybe you could walk us through this graphic, please. >> Sure. So what I'm trying to depict here is that, if we imagine a world where data can be shared from many different locations, for a variety of analytical use cases, naturally the boundary of what we call a node on the mesh encapsulates internally a fair few pieces. It's not just the data itself that it's controlling and updating and maintaining; it's of course the computation and the code that's responsible for that data, and then the policies that continue to govern that data as long as that data exists. So if that's the boundary, then if we shift the focus from implementation details (we can leave that for later), what becomes really important is the seam, or the APIs and interfaces, that this node exposes. And I think that's where the work needs to be done and the standards are missing. And we want the seam and those interfaces to be open, because that allows, you know, different organizations with different boundaries of trust to share data. Not only to share data to kind of move that data to, yes, another location; to share the data in a way that distributed workloads, distributed analytics, distributed machine learning models can happen on the data where it is.
So if you follow that line of thinking, around decentralization and connection of data versus collection of data, I think the very, very important piece of it, that needs really deep thinking, and I don't claim that I have done that, is: how do we share data responsibly and sustainably, right? In a way that is not brittle. If you think about it today, the way we share data, one of the very common ways, is: I'll give you a JDBC endpoint, or I'll give you an endpoint to your, you know, database of choice. And now, as a user, you actually have access to the schema of the underlying data, and you can run various queries, or SQL queries, on it. That's very simple and easy to get started with. That's why SQL is an evergreen, you know, standard, or semi-standard, pseudo-standard, that we all use. But it's also very brittle, because we are dependent on an underlying schema and formatting of the data that's been designed to tell the computer how to store and manage the data. So I think the data sharing APIs of the future really need to think about removing these brittle dependencies; think about sharing not only the data, but what we call metadata, I suppose: an additional set of characteristics that is always shared along with the data to make the data usage, I suppose, ethical, and also friendly for the users. And also, I think, the other element of that data sharing API is to allow computation to run where the data exists. So if you think about SQL again, as a simple, primitive example of computation: when we select, and when we filter, and when we join, the computation is happening on that data. So maybe there is a next level of articulating distributed computation on data that simply trains models, right? Your language primitives change in a way to allow sophisticated analytical workloads to run on the data more responsibly, with policies and access control enforced.
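The brittleness being described here, where every consumer is coupled to the physical schema behind a database endpoint, can be sketched in plain Python. This is a hedged illustration; the table name, column names, and the `OutputPort` class are all invented for the example, not a real API.

```python
# Sketch of schema coupling vs. an "output port" style of data sharing.
# Names (t_ord_v3, cust_nm, OutputPort) are illustrative only.

class FakeDB:
    """Stand-in for a JDBC-style connection: just echoes the SQL it receives."""
    def execute(self, sql: str) -> str:
        return sql

# Brittle: the consumer hardcodes the physical schema. Any rename of
# t_ord_v3 or its columns breaks every downstream consumer.
def consumer_via_raw_sql(db):
    return db.execute("SELECT cust_nm, ord_ttl FROM t_ord_v3")

# More resilient: consumers ask a port for semantic fields; the port owns
# the mapping to today's physical layout and carries the metadata that
# should always travel with the data.
class OutputPort:
    def __init__(self, db):
        self._db = db
        self._mapping = {"customer_name": "cust_nm", "order_total": "ord_ttl"}
        self.metadata = {"owner": "orders-domain", "freshness": "hourly",
                         "pii_fields": ["customer_name"]}

    def read(self, fields):
        cols = ", ".join(self._mapping[f] for f in fields)
        return self._db.execute(f"SELECT {cols} FROM t_ord_v3")

port = OutputPort(FakeDB())
print(port.read(["customer_name", "order_total"]))
# → SELECT cust_nm, ord_ttl FROM t_ord_v3
```

When the physical schema changes, only the port's internal mapping changes; consumers keep asking for `customer_name` and never see the rename.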
So I think that output port that I mentioned is simply about next-generation, responsible data sharing APIs, suitable for decentralized analytical workloads. >> So I'm not trying to bait you here, but I have a follow up as well. So, your schema, for all its good, creates constraints. No schema on write, that didn't work, 'cause it was just a free-for-all and it created the data swamps. But now you have technology companies trying to solve that problem. Take Snowflake, for example, you know, enabling data sharing, but it is within its proprietary environment. Certainly Databricks is doing something, you know, trying to come at it from its angle, bringing some of the best of the data warehouse together with data science. Is your contention that those remain sort of proprietary, de facto standards, and that what we need is more open standards? Maybe you could comment. >> Sure. I think there are two points. One is, as you mentioned, open standards that allow... actually make the underlying platform invisible. I mean, my litmus test for a technology provider to say "I'm data mesh (laughs) compliant" is: is your platform invisible? As in, can I replace it with another and yet get the similar data sharing experience that I need? So part of it is that. Part of it is open standards; they're not really proprietary. The other angle, for kind of sharing data across different platforms, so that, you know, we don't get stuck with one technology or another, is around APIs. It is around code that is protecting that internal schema. So, where we are on the curve of evolution of technology, right now we are exposing the internal structure of the data, which is designed to optimize certain modes of access. We're exposing that to the end client and application APIs, right? So the APIs that use the data today are very much aware that this database was optimized for machine learning workloads.
Hence you will deal with a columnar storage of the file, versus this other API that is optimized for a very different, report-type access, relational access, and is optimized around rows. I think that should become irrelevant in the API sharing of the future, because as a user, I shouldn't care how this data is internally optimized, right? The language primitives that I'm using should be really agnostic to the machine optimization underneath that. And if we did that, perhaps this war between warehouse or lake or the other will become actually irrelevant. So we're optimizing for the best human experience, as opposed to the best machine experience. We still have to do that, but we have to make that invisible, make that an implementation concern. So that's another angle of what should... if we daydream together, the best experience, and a resilient experience, in terms of data usage, would be these APIs that are agnostic to the internal storage structure. >> Great, thank you for that. We've wrapped our ankles now on the controversy, so we might as well wade all the way in; I can't let you go without addressing some of this, which you've catalyzed, which I, by the way, see as a sign of progress. So this gentleman, Paul Andrew, is an architect, and he gave a presentation, I think last night. And he teased it as, quote, "The theory from Zhamak Dehghani versus the practical experience of a technical architect, AKA me," meaning him. And Zhamak, you were quick to shoot back that data mesh is not theory, it's based on practice, and some practices are experimental, some are more baked, and data mesh really avoids, by design, the specificity of vendor or technology. "Perhaps you intend to frame your post as a technology- or vendor-specific implementation." So, touché, that was excellent. (Zhamak laughs) Now, you don't need me to defend you, but I will anyway.
You spent 14-plus years as a software engineer and the better part of a decade consulting with some of the most technically advanced companies in the world. But I'm going to push you a little bit here and say, some of this tension is of your own making, because you purposefully don't talk about technologies and vendors. Sometimes doing so is instructive for us neophytes. So, why don't you ever, like, use specific examples of technology for frames of reference? >> Yes. My role is to push us to the next level. So, you know, everybody picks their fights, picks their battles. My role in this battle is to push us to think beyond what's available today. Of course, that's my public persona. On a day-to-day basis, actually, I work with clients and existing technology, and I think at Thoughtworks, we gave a case study talk with a colleague of mine, and I intentionally got him to talk about (indistinct). I want to talk about the technology that we used to implement data mesh. And the reason I haven't really embraced, in my conversations, the specific technology... One is, I feel the technology solutions we're using today are still not ready for the vision. I mean, we have to be in this transitional step, no matter what. We have to be pragmatic, of course, and practical, I suppose, and use the existing vendors that exist, and I wholeheartedly embrace that, but that's just not my role, to show that. I've gone through this transformation once before in my life. When microservices happened, we were building microservices-like architectures with technology that wasn't ready for it: big web application servers that were designed to run these giant monolithic applications, and now we're trying to run little microservices on them. And the tail was wagging the dog; the environmental complexity of running these services was consuming so much of our effort that we couldn't really pay attention to the business logic, the business value.
And that's where we are today. The complexity of integrating existing technologies is really overwhelming, capturing a lot of our attention and cost and effort, money and effort, as opposed to really focusing on the data products themselves. So that's just the role I have. But it doesn't mean that, you know, we have to rebuild the world. We've got to do with what we have in this transitional phase, until the new generation, I guess, of technologies come around and reshape our landscape of tools. >> Well, impressive public discipline. Your point about microservices is interesting, because a lot of those early microservices weren't so micro, and for the naysayers, look, past is not prologue. Thoughtworks was really early on in the whole concept of microservices, so I'll be very excited to see how this plays out. But now, there were some other good comments. There was one from a gentleman who said the most interesting aspects of data mesh are organizational. And that's how my colleague Sanjeev Mohan frames data mesh versus data fabric. You know, I'm not sure; I think we've sort of scratched the surface today. Data mesh is more, and I still think data fabric is what NetApp defined as software-defined storage infrastructure that can serve on-prem and public cloud workloads, back in whatever, 2016. But the point you make in the thread that we're showing you here is your warning, and you referenced this earlier, that segregating different modes of access will lead to fragmentation, and we don't want to repeat the mistakes of the past. >> Yes, there are comments around... again, going back to that original conversation, that we've got this at a macro level. We've got this tendency to decompose complexity based on technical solutions. And, you know, the conversation could be, "Oh, I do batch, or you do a stream, and we are different." They create these bifurcations in our decisions based on the technology, where I do events and you do tables, right?
So that sort of segregation of modes of access causes accidental complexity that we keep dealing with, because every time in this tree you create a new branch, you create a new set of tools that then somehow need to be point-to-point integrated. You create new specialization around that. So, the fewer branches we have, the better; think really about the continuum of experiences that we need to create, and technologies that simplify that continuum of experience. I'll give you an example from past experience. I was really excited around the papers and the work that came out around Apache Beam, and generally flow-based programming and stream processing, because basically they were saying, whether you are doing batch or whether you're doing streaming, it's all one stream. Sometimes the window of time over which you're computing narrows, and sometimes it widens, and at the end of the day, you're just doing stream processing. So it's those sorts of notions that simplify and create a continuum of experience that, I think, resonate with me personally, more than creating these tribal fights of this type versus that mode of access. So that's why data mesh naturally selects kind of this multimodal access to support end users, right? The persona of end users. >> Okay. So the last topic I want to hit: this whole discussion, the topic of data mesh, it's highly nuanced, it's new, and people are going to shoehorn data mesh into their respective views of the world. And we talked about lakehouses, and there's three buckets. And of course, the gentleman from LinkedIn, with Azure; Microsoft has a data mesh community. See, you're going to have to enlist some serious army of enforcers to adjudicate. And I wrote some of the stuff down. I mean, it's interesting: Monte Carlo has a data mesh calculator. Starburst is leaning in. ChaosSearch sees themselves as an enabler. Oracle and Snowflake both use the term data mesh.
And then of course you've got big practitioners: JPMC; we've talked to Intuit; Orlando; HelloFresh has been on; Netflix has this event-based, sort of streaming implementation. So my question is, how realistic is it that the clarity of your vision can be implemented and not polluted by really rich technology companies and others? (Zhamak laughs) >> Is it even possible, right? Is it even possible? That's... yes. That's why I practice. This is why I should practice these things. 'Cause I think it's going to be hard. What I'm hopeful is that the socio-technical level, like we mentioned, that this is a socio-technical concern or solution, not just a technology solution, hopefully always brings us back to, you know, the reality that vendors try to sell you snake oil that solves all of your problems. (chuckles) All of your data mesh problems. It's just going to cause more problems down the track. So we'll see; time will tell, Dave, and I count on you as one of those members of, (laughs) you know, folks that will continue to share their platform. To go back to the roots: as in, why in the first place? I mean, I dedicated a whole part of the book to 'Why?', because, as you said, we get carried away with vendors and technology solutions trying to ride a wave, and in that story, we forget the reason for which we're even making this change and spending all of these resources. So hopefully we can always come back to that. >> Yeah. And I think we can. I think you have really given this some deep thought, and as we pointed out, this was based on practical knowledge and experience. And look, we've been trying to solve this data problem for a long, long time. You've not only articulated it well, but you've come up with solutions. So Zhamak, thank you so much. We're going to leave it there, and I'd love to have you back. >> Thank you for the conversation. I really enjoyed it. And thank you for sharing your platform to talk about data mesh. >> Yeah, you bet. All right.
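The Apache Beam point from a moment ago, that batch is just stream processing over a very wide window, can be sketched in plain Python. This is an illustration of the concept only, not the Beam API; the function name and event format are invented.

```python
from collections import defaultdict

# "Batch is just a wide window": the same windowed-sum logic serves a
# streaming view (narrow windows) and a batch view (one window spanning
# everything) purely by varying the window width.

def windowed_sum(events, window_seconds):
    """events: iterable of (timestamp, value); returns {window_start: sum}."""
    out = defaultdict(int)
    for ts, value in events:
        window_start = ts - (ts % window_seconds)  # bucket into fixed windows
        out[window_start] += value
    return dict(out)

events = [(0, 1), (30, 2), (65, 3), (120, 4)]
print(windowed_sum(events, 60))     # streaming-style view, 60s windows
# → {0: 3, 60: 3, 120: 4}
print(windowed_sum(events, 10**9))  # batch-style view, one giant window
# → {0: 10}
```

One code path, two "modes": the bifurcation between batch and stream collapses into a parameter, which is the continuum-of-experience argument made above.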
And I want to thank my colleague, Stephanie Chan, who helps research topics for us. Alex Myerson is on production and Kristen Martin, Cheryl Knight and Rob Hoff on editorial. Remember all these episodes are available as podcasts, wherever you listen. And all you got to do is search Breaking Analysis Podcast. Check out ETR's website at etr.ai for all the data. And we publish a full report every week on wikibon.com, siliconangle.com. You can reach me by email david.vellante@siliconangle.com or DM me @dvellante. Hit us up on our LinkedIn post. This is Dave Vellante for theCUBE Insights powered by ETR. Have a great week, stay safe, be well. And we'll see you next time. (bright music)

Published Date : Apr 20 2022



AWS Startup Showcase Opening


 

>> Hello, and welcome to today's CUBE presentation of the AWS Startup Showcase. I'm John Furrier, your host, highlighting the hottest companies in devops, data analytics, and cloud management. Lisa Martin and Dave Vellante are here to kick it off. We've got a great program for you again. This is our new community event model, where every quarter we have a new episode. This is quarter three this year, or episode three, season one, of the hottest cloud startups that are gonna be featured. Then we're gonna do a keynote package, and then 15 companies will present their story. Go check them out. And then we'll have a closing keynote with a practitioner, and we've got some great lineups. Lisa, Dave, great to see you. Thanks for joining me. >> Hey guys. >> Great to be here. So Dave, got to ask you, you know, we're back in events; last night we were at the Fortinet event, where they had the golf PGA championship, with theCUBE. Now we've got the hybrid model. This is the new normal. We're in, we've got these great companies, we're showcasing them. What's your take? >> Well, you're right. I mean, I think there's a combination of things. We're seeing some live shows. We saw what we did at Mobile World Congress. We did the show with AWS Storage Day, where we were at the Spheres; there was a live audience, but they weren't there physically, it was just virtual. And, yeah, I just got pinged about re:Invent: hey Dave, you gotta make your flights. So I'm making my flights. >> We're gonna be at the Amazon Web Services Public Sector Summit next week. Lisa, a lot of cloud convergence going on here. We've got many companies being featured here; we spoke with their CEOs and their top people: cloud management, devops, data analytics, security. Really cutting edge companies. >> Yes, cutting edge companies who are all focused on acceleration.
We've talked about the acceleration of digital transformation the last 18 months, and we've seen a tremendous amount of acceleration in innovation with what these startups are doing. Like you said, we've talked to the C-suite; we've also talked to their customers about how they are innovating so quickly with this hybrid environment, this remote work. And we've talked a lot about security in the last week or so; you mentioned we were at Fortinet, the cybersecurity skills gap, and what some of these companies are doing with automation, for example, to help shorten that gap, which is a big opportunity >> for the job market. Great stuff. Dave, so the format of this event: you're going to have a fireside chat with a practitioner. We like to end these programs with a great, experienced practitioner, cutting edge in data. At the beginning, Lisa and I are gonna be kicking off with, of course, Jeff Barr to give us the update on what's going on at AWS, and then a special presentation from Emily Freeman, who is the author of DevOps for Dummies. She's introducing new content: the revolution in devops, devops two point oh. And of course Jerry Chen from Greylock, cube alumni, is going to come on and talk about his new thesis, castles in the cloud: creating moats at cloud scale. We've got a great lineup of people, so the front end's going to be great. Dave, give us a little preview of what people can expect with the fireside chat at the end. >> Well, at the highest level, John, I've always said we're entering that sort of third great wave of cloud. The first wave was experimentation. The second big wave was migration. The third wave is integration, deep business integration. And what you're >> going to hear from HelloFresh today is how they, like many companies, got started early last decade.
They started with an on-prem Hadoop system, and then of course we all know what happened: S3 essentially took the knees out from the on-prem Hadoop market, lowered costs, brought things into the cloud. And what HelloFresh is doing is transforming from that legacy Hadoop system into, it's running on AWS, but into a data mesh, you know, a passionate topic of mine. HelloFresh was scaling, they realized that they couldn't keep up, so they had to rethink their entire data architecture, and they built it around data mesh. Clemens and Christoph Soewandi are gonna explain how they actually did that; they're on a journey of decentralized data >> mesh. And your posts have been awesome on data mesh. We get a lot of traction. Certainly your Breaking Analysis; for the folks watching, check out Dave Vellante's Breaking Analysis every week, highlighting the cutting edge trends in tech. Dave, we're gonna see you later. Lisa and I are gonna be here in the morning, talking with Emily. We've got Jeff Barr teed up. Dave, thanks for coming on. Looking forward to the fireside chat. Lisa, we'll see you when Emily comes back on. But we're gonna go to Jeff Barr right now; Dave and I are gonna interview Jeff. >> Hey Jeff, >> here he is. Hey, how are you? How's it going? Really well. So I gotta ask you, re:Invent is on; everyone wants to know, that's happening, right? We're good with re:Invent? >> Re:Invent is happening. I've got my hotel, and actually, listening today, I just remembered I still need to actually book my flights. I've got my to-do list on my desk, and I do need to get my >> flights. >> Really looking forward >> to it. I can't wait to see all the announcements and blog posts. We're gonna hear from Jerry Chen later; I love that. For our next event, get your reaction to this castles in the cloud thesis, where competitive advantages can be built in the cloud. We're seeing examples of that.
But first I gotta ask you, give us an update of what's going on. The AWS partner ecosystem has been an incredible, uh, celebration these past couple weeks. >> So, a lot of different things happening, and the interesting thing to me is that, as part of my job, I often think that I'm effectively living in the future, because I get to see all this really cool stuff that we're building just a little bit before our customers get to. And so I'm always thinking, okay, here I am now, and what's the world going to be like in a couple of weeks to a month or two, when these launches I'm working on actually get out the door? That's always really, really fun, just kind of getting that little edge into where we're going. But this year was a little interesting, because we had two really significant birthdays: the 15-year anniversary of both EC2 and S3. And we're so focused on innovating and moving forward that it's actually pretty rare for us at AWS to look back and say, wow, we've actually done all these amazing things in the last 15 years. >> You know, it's kind of cool, Jeff, if I may. Of course, in the early days everybody said, well, the place for startups is AWS, and now, the great thing about the startup showcases is we're seeing the startups that are very near, or some of them have even reached, escape velocity. So they're not tiny little companies anymore; they're transforming their respective industries. >> They really are, and I think that as the startups grow, they really start to lean into the power of the cloud. As they start to think, okay, we've got our basic infrastructure in place, we're serving data, we're serving up a few customers, everything is actually working pretty well for us, we've got our fundamental model proven out, now we can invest in publicity and marketing and scaling. But they don't have to think about what's happening behind the scenes.
If they've got their auto scaling, or if they're serverless, the infrastructure simply grows to meet their demand, and it's just a lot fewer things that they have to worry about. They can focus on the fun part of their business, which is actually listening to customers and building up an awesome business. >> Jeff, as you guys are putting together all the big pre-re:Invent news, there's a lot of stuff that goes on prior as well, and they save all the big good stuff for re:Invent. But you start to see some themes emerge this year. One of them is modernization of applications, the speed of application development in the cloud, with the cloud-scale DevOps personas, whatever persona you want to talk about, but basically the speed of the app developers, where other departments have been slowing things down. I won't name names, but the security group and IT. I mean, I shouldn't have said that, only kidding. But no, seriously, people want it in minutes and seconds now, not days or weeks, whether it's policy or anything else. What are some of the trends that you're seeing around this this year, as we get into some of the new stuff coming out? >> So Dave, customers really do want speed, and we've actually encapsulated this for a long time at Amazon in what we call the bias for action leadership principle, where we just need to jump in and move forward and make things happen. A lot of customers look at that and they say, yes, this is great, we need to have the same bias for action. Some do. Some are still trying to figure out exactly how to put it into play. And they absolutely, for sure, need to pay attention to security. They need to respect the past and make sure that whatever they're doing is in line with IT, but they do want to move forward. And the interesting thing that I see time and time again is that it's not simply about let's adopt a new technology. It's: how do we keep our workforce engaged? How do we make sure that they've got the right training?
How do we bring our IT team along for this hopefully new and fun and exciting journey, where they get to learn some interesting new technologies? They've got all this accumulated business knowledge they still want to put to use. Maybe they're a little bit apprehensive about something brand new when they hear about the cloud, but by and large, they really want to move forward. They just need a little bit of help to make it happen. >> Real good, guys. One of the things you're gonna hear today, we're talking about speed. Traditionally, going fast oftentimes meant you had to sacrifice some things on quality, and what you're going to hear from some of the startups today is how they're addressing that through automation and modern DevOps technologies, and sort of rethinking that whole application development approach. That's something I'm really excited to see organizations beginning to adopt, so they don't have to make that tradeoff anymore. >> Yeah, I would never want to see someone sacrifice quality. But I do think that iterating very quickly, using the best of DevOps principles to iterate incredibly quickly, getting that first launch out there, and then listening with both ears as much as you can to everything you hear, lets you iterate really quickly to meet those needs in hours and days, not months, quarters or years. >> Great stuff. Jeff, a lot of the companies we're featuring here in the startup showcase represent that new kind of thinking, um, systems thinking, as well as, you know, the cloud scale. And again, it's finally here: the revolution of DevOps is going to the next generation, and we're excited to have Emily Freeman, who's going to come on and give a little preview of her new talk on this revolution. So Jeff, thank you for coming on. Appreciate you sharing the update here on theCUBE. >> Happy to be here. I'm actually really looking forward to hearing from Emily. >> Yeah, it's great. Looking forward to the talk.
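To make the speed-without-sacrificing-quality point concrete: the mechanism the segment keeps circling is automated tests that run on every change, so fast iteration doesn't erode quality. A minimal sketch in Python; the function and its checks are invented for illustration and aren't taken from any company mentioned here:

```python
def checkout_total(prices, discount=0.0):
    """Compute an order total; a stand-in for real business logic."""
    if not 0.0 <= discount <= 1.0:
        raise ValueError("discount must be between 0 and 1")
    return round(sum(prices) * (1.0 - discount), 2)

def run_checks():
    """The kind of fast, automated checks a CI pipeline would run on
    every commit, so shipping in hours doesn't mean shipping bugs."""
    assert checkout_total([10.0, 5.0]) == 15.0
    assert checkout_total([10.0, 5.0], discount=0.2) == 12.0
    try:
        checkout_total([10.0], discount=2.0)
        raise AssertionError("expected ValueError for bad discount")
    except ValueError:
        pass  # invalid input is rejected, as intended
    return "all checks passed"

print(run_checks())
```

In a real pipeline, a CI service would run checks like these on every commit and block the merge when one fails; that gate is what removes the old speed-versus-quality tradeoff.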
Brand new premiere. Okay, uh, Lisa Martin, Emily Freeman is here. She's ready to come in, and we're going to preview her lightning talk. Emily, um, thanks for coming on, we really appreciate it. This talk is around DevOps next gen, and I think, Lisa, this is one of those things we've been discussing with all the companies. It's a new kind of thinking, it's a revolution, it's a systems mindset. You're starting to see the connections. There she is. Emily, thanks for coming. I appreciate it. >> Thank you for having me. >> So, your teaser video was amazing. Um, you know, that little secret: radical idea, something completely different. Um, you've got a talk coming up. What's the premise behind this revolution, you know, this tying together of architecture, development, automation, deployment, operating, all together? >> Yes, well, we have traditionally always used the SDLC, which is the software delivery life cycle. Um, and it is a straight linear process that has actually been around since the sixties, which is wild to me, um, and really originated in manufacturing. Um, and as much as I love the Toyota production system, and how much it has shown up in DevOps as a sort of inspiration on how to run things better, we are not making cars. We are making software, and I think we have to use different approaches and create a sort of model that better reflects our modern software development process. >> It's a bold idea, and I'm looking forward to the talk. And as motivation, I went into my basement and dusted off all my books from college in the 80s, and there it was: the SDLC, waterfall, the software development life cycle. They trained us to think this way, and it came from the mainframe people. It's old school, like really, really old, and it really hasn't been updated. Where's the motivation? Actually, cloud is kind of converging everything together. We see that, but you kind of hit on this persona thing. Where did that come from, this persona?
Because, you know, people want to put people in buckets: release engineer. I mean, where's that motivation coming from? >> Yes, you're absolutely right that it came from the mainframes. I think, you know, waterfall is necessary when you're using a punch card or mag tape to load things onto a mainframe, but we don't exist in that world anymore, thank goodness. And um, yes, so we use personas all the time in tech, you know, even to register, well, not actually to register for this event, but at a lot of events, you have to click that dropdown, right? Are you a developer? Are you a manager? Whatever. And the thing is, personas are immutable, in my opinion. I was a developer; I will always identify as a developer, despite playing a lot of different roles and doing a lot of different jobs. Uh, and this can vary throughout the day, right? You might have someone who has the title of software architect who ends up helping someone pair program or develop or test or deploy. Um, and so we wear a lot of hats day to day, and I think our discussions around roles would be a better, um, certainly a better approach than personas. >> Lisa and I have been discussing with many of these companies around the roles, and we're hearing from them directly, and they're finding out that people are mixing and matching on teams. So you're an SRE on one team, and you're doing something on another team, where the workflows and the workloads define the team formation. So this is a cultural discussion. >> It absolutely is, yes. I think it is a cultural discussion, and it really comes to the heart of DevOps, right? It's people, process, and then tools. DevOps has always been about culture, and making sure that developers have all the tools they need to be productive and, honestly, happy. What good is all of this if developing software isn't a joyful experience?
Well, I got to ask you while I've got you here. Obviously, with serverless and functions, you're just starting to see this kind of next gen. And we're gonna hear from Jerry Chen, who's a Greylock VC, who's going to talk about castles in the clouds, where he's discussing the moats that can be created as a competitive advantage at cloud scale. And I think he points to the Snowflakes of the world. You're starting to see this new thing happening. This is DevOps 2.0, this is the revolution. Is this kind of where you see the same vision in your talk? >> Yes. So DevOps was created in 2008, 2009, in a totally different ecosystem and world we were living in. You know, we didn't have things like serverless and containers, we didn't have this sort of default distributed nature, certainly not the cloud. Uh, and so I'm very excited for Jerry's talk. I'm curious to hear more about these moats; I think it's fascinating. Um, but yeah, you're seeing different companies use different tools and processes to accelerate their delivery, and that is the competitive advantage: how can we figure out how to utilize these tools in the most efficient way possible? >> Thank you for coming on and giving us a preview. Let's now go to your lightning keynote talk, fresh content, the premiere of this revolution in DevOps and Freeman's talk. We'll go there now. >> Hi, I'm Emily Freeman. I'm the author of DevOps for Dummies and the curator of 97 Things Every Cloud Engineer Should Know. I am thrilled to be here with you all today. I am really excited to share with you kind of a wild idea, a complete reimagining of the SDLC, and I want to be clear: I need your feedback. I want to know what you think of this. You can always find me on Twitter, @editingemily. Most of my work centers around DevOps, and I really can't overstate what an impact the concept of DevOps has had on this industry. In many ways, it built on the foundation of Agile to become a default, a standard we all reach for in our everyday work.
When DevOps surfaced as an idea in 2008, the tech industry was in a vastly different space. AWS was in its infancy, offering only a handful of services. Azure and GCP didn't exist yet. The majority of companies maintained their own infrastructure. Developers wrote code and relied on sysadmins to deploy new code at scheduled intervals, sometimes months apart. Container technology hadn't been invented, applications adhered to a monolithic architecture, databases were almost exclusively relational, and serverless wasn't even a concept. Everything from the application to the engineers was centralized. Our current ecosystem couldn't be more different. Software is still hard, don't get me wrong, but we continue to find novel solutions to consistently difficult, persistent problems. Now, some of these end up being a sort of rebranding of old ideas, but others are a unique and clever take on abstracting complexity, or automating toil, or, perhaps most important, rethinking, challenging the very premises we have accepted as canon for years, if not decades. In the years since DevOps attempted to answer the critical conflict between developers and operations engineers, DevOps has become a catch-all term, and there have been a number of derivative works. DevOps has come to mean 5000 different things to 5000 different people. For some, it can be distilled to continuous integration and continuous delivery, or CI/CD. For others, it's simply deploying code more frequently, perhaps adding a smattering of tests. For others still, it's organizational: they've added a platform team, perhaps even a questionably named DevOps team, or have created an engineering structure that focuses on a separation of concerns, leaving feature teams to manage the development, deployment, security and maintenance of their siloed services. Whatever the interpretation, what's important is that there isn't a universally accepted standard
of what DevOps is, or what it looks like in execution. It's a philosophy more than anything else, a framework people can utilize to configure and customize their specific circumstances to modern development practices. The characteristic of DevOps that I think we can all agree on, though, is that it attempted to capture the challenges of the entire software development process. It's that broad umbrella, that holistic view, that I think we need to breathe life into again. The challenge we face is that DevOps is an increasingly outmoded solution to a previous problem. Developers now face cultural and technical challenges far greater than how to more quickly deploy a monolithic application. Cloud native is the future, the next collection of default development decisions, and one the DevOps story can't absorb in its current form. I believe the era of DevOps is waning, and in this moment, as the sun sets on DevOps, we have a unique opportunity to rethink, rebuild, replatform. Even now, I don't have a crystal ball. That would be very handy. I'm not completely certain what the next decade of tech looks like, and I can't write this story alone. I need you, but I have some ideas that can get the conversation started. I believe that to build on what was, we have to throw away assumptions we've taken for granted all this time. In order to move forward, we must first step back. The software or systems development life cycle, what we call the SDLC, has been in use since the 1960s, and it's remained more or less the same since before color television and the touch-tone phone. Over the last 60 or so odd years, we've made tweaks, slight adjustments, massaged it. The stages or steps are always a little different. With agile and DevOps, we sort of looped it into a circle, and then an infinity loop. We've added pretty colors. But the SDLC is more or less the same, and it has become an assumption.
We don't even think about it anymore. Universally adopted constructs like the SDLC have an unspoken permanence. They feel as if they have always been and always will be. I think the impact of that is even more potent if you were born after a construct was popularized. Nearly everything around us is a construct, a model, an artifact of a human idea: the chair you're sitting in, the desk you work at, the mug from which you drink coffee, or sometimes wine, buildings, toilets, plumbing, roads, cars, art, computers, everything. The SDLC is a remnant, an artifact of a previous era, and I think we should throw it away. Or, perhaps more accurately, replace it. Replace it with something that better reflects the actual nature of our work. A linear, single-threaded model designed for the manufacture of material goods cannot possibly capture the distributed complexity of modern sociotechnical systems. It just can't. And these two ideas aren't mutually exclusive: that the SDLC was industry-changing, valuable and extraordinarily impactful, and that it's time for something new. I believe we are strong enough to hold these two ideas at the same time, showing respect for the past while envisioning the future. Now, I don't know about you, but I've never had a software project go smoothly in one go, no matter how small, even if I'm the only person working on it and committing directly to master. Software development is chaos. It's a study in entropy, and it is not getting any more simple. The model with which we think and talk about software development must capture the multithreaded, non-sequential nature of our work. It should embody the roles engineers take on and the considerations they make along the way. It should build on the foundations of agile and DevOps and represent the iterative nature of continuous innovation. Now, when I was thinking about this, I was inspired by ideas like extreme programming and the spiral model.
I wanted something that would have layers, threads, even a way of visually representing multiple processes happening in parallel. And what I settled on is the revolution model. I believe the visualization of revolution is capable of capturing the pivotal moments of any software scenario, and I'm going to dive into all the discrete elements, but I want to give you a moment to have a first impression, to absorb my idea. I call it revolution because, well, for one, it revolves. Its circular shape reflects the continuous and iterative nature of our work. But also because it is revolutionary: I am challenging a 60-year-old model that is embedded into our daily language. I don't expect Gartner to build a magic quadrant around this tomorrow, but that would be super cool, and you should call me. My mission with this is to challenge the status quo, to create a model that I think more accurately reflects the complexity of modern cloud native software development. The revolution model is constructed of five concentric circles describing the critical roles of software development: architecting, developing, automating, deploying and operating. Intersecting each loop are six spokes that describe the production considerations every engineer has to weigh throughout any engineering work, and those are testability, securability, reliability, observability, flexibility and scalability. The considerations listed are not all-encompassing. There are, of course, things not explicitly included. I figured if I put 20 spokes, some of us, including myself, might feel a little overwhelmed. So let's dive into each element in this model. We have long used personas as the default way to divide audiences and tailor messages to groups of people. Every company in the world right now is repeating the mantra of developers, developers, developers, but personas have always bugged me a bit, because this approach typically either oversimplifies someone's career or needlessly complicates it.
Few people fit cleanly and completely into persona-based buckets like developer and operations anymore. The lines have gotten fuzzy. On the other hand, I don't think we need to specifically tailor messages so as to call out the difference between a DevOps engineer and a release engineer, or a security administrator versus a security engineer. But, perhaps most critically, I believe personas are immutable. A persona is wholly dependent on how someone identifies themselves. It's intrinsic, not extrinsic. Their titles may change, their jobs may differ, but they're probably still selecting the same persona on that ubiquitous dropdown we all have to choose from when registering for an event. Probably this one too. I was a developer, and I will always identify as a developer, despite doing a ton of work in areas like DevOps and AIOps and DevRel. In my heart, I'm a developer. I think about problems from that perspective first; it influences my thinking and my approach. Roles are very different. Roles are temporary, inconsistent, constantly fluctuating. If I were an actress, the parts I would play would be lengthy and varied, but the persona I would identify as would remain an actress, an artist, a thespian. Your work isn't confined to a single set of skills. It may have been a decade ago, but it is not today. In any given week or sprint, you may play the role of an architect, thinking about how to design a feature or service; a developer, building out code or fixing a bug; an automation engineer, looking at how to improve the manual processes we often refer to as toil; a release engineer, deploying code to different environments or releasing it to customers; or an operations engineer, ensuring an application functions in consistent, expected ways. And no matter what role we play, we have to consider a number of issues. The first is testability.
All software systems require testing to assure architects that designs work, developers that code works, operators that infrastructure is running as expected, and engineers of all disciplines that code changes won't bring down the whole system. Testing in its many forms is what enables systems to be durable and have longevity. It's what reassures engineers that changes won't impact current functionality. A system without tests is a disaster waiting to happen, which is why testability is first among equals at this particular roundtable. Security is everyone's responsibility, but not everyone understands how to design and execute secure systems. I struggle with this one. Security incidents, for the most part, are high-impact, low-probability events. The really big disasters, the ones that end up on the news and get us all free credit reporting for a year, don't happen super frequently, and thank goodness, because you know that there are endless small vulnerabilities lurking in our systems. Security is something we all know we should dedicate time to, but often don't make time for. And let's be honest, it's hard and complicated and a little scary. DevSecOps, the first derivative of DevOps, asked engineers to move security left. This approach meant security was a consideration early in the process, not something that would block release at the last moment. This is also the consideration under which I'm putting compliance and governance. While not perfectly aligned, I figure all the things you have to call lawyers for should just live together. I'm kidding. But in all seriousness, these three concepts are really about risk management: identity, data, authorization. It doesn't really matter what specific issue you're speaking about; the question is who has access to what, when and how, and that is everyone's responsibility at every stage. Site reliability engineering, or SRE, is a discipline, a job and an approach, for good reason.
It is absolutely critical that applications and services work as expected most of the time. That said, availability is often mistakenly treated as a synonym for reliability. Instead, it's a single aspect of the concept. If a system is available but customer data is inaccurate or out of sync, the system is not reliable. Reliability has five key components: availability, latency, throughput, fidelity and durability. Reliability is the end result, but resiliency, for me, is the journey, the actions engineers can take to improve reliability. Observability is the ability to have insight into an application or system. It's the combination of telemetry, monitoring and alerting available to engineers and leadership. There's an aspect of observability that overlaps with reliability, but the purpose of observability isn't just to maintain a reliable system, though that is of course important. It is the capacity for engineers working on a system to have visibility into the inner workings of that system. The concept of observability actually originates in linear dynamic systems: it's defined as how well the internal states of a system can be understood based on information about its external outputs. It is critical, when companies move systems to the cloud or utilize managed services, that they don't lose visibility and confidence in their systems. The shared responsibility model of cloud storage, compute and managed services requires that engineering teams be able to be quickly alerted, to identify and remediate issues as they arise. Flexible systems are capable of adapting to meet the ever-changing needs of the customer and the market segment. Flexible code bases absorb new code smoothly, embody a clean separation of concerns, are partitioned into small components or classes, and are architected to enable the now as well as the next. In flexible systems, change dependencies are reduced or eliminated,
database schemas accommodate change well, and components communicate via a standardized and well-documented API. The only thing constant in our industry is change, and in every role we play, creating flexibility, solutions that can flex and grow as the applications grow, is absolutely critical. Finally, scalability. Scalability refers to more than a system's ability to scale for additional load; it implies growth. Scalability in the revolution model carries the continuous innovation of a team and the byproducts of that growth within a system. For me, scalability is the most human of the considerations. It requires each of us, in our various roles, to consider everyone around us: our customers who use the system or rely on its services, our colleagues, current and future, with whom we collaborate, and even our future selves. Software development isn't a straight line, nor is it a perfect loop. It is an ever-changing, complex dance. There are twirls and pivots and difficult spins, forward and backward. Engineers move in parallel, creating truly magnificent pieces of art. We need a modern model for this modern era, and I believe this is just the revolution to get us started. Thank you so much for having me. >> Hey, we're back here, live in the keynote studio. I'm John Furrier, your host, here with Lisa Martin. Dave Vellante is getting ready for the fireside chat, the ending keynote with the practitioner from HelloFresh on data mesh. Lisa, Emily is amazing, the funky artwork there. She's amazing with the talk. I was mesmerized. It was impressive. >> The revolution of DevOps, and the creative element was a really nice surprise there. But I love what she's doing. She's challenging the status quo. If we've learned nothing in the last year and a half, we need to challenge the status quo. A model from the 1960s that is no longer linear: what she's doing is revolutionary. >> And we hear this all the time.
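For readers who want the shape of Emily's revolution model on the page: five role loops crossed by six consideration spokes. The sketch below simply enumerates the names used in the talk; the model itself is a visual, not code, so this structure is purely illustrative:

```python
from itertools import product

# The five concentric roles and six spokes of the revolution model,
# as named in the talk.
ROLES = ["architecting", "developing", "automating", "deploying", "operating"]
CONSIDERATIONS = ["testability", "securability", "reliability",
                  "observability", "flexibility", "scalability"]

def intersections():
    """Every spoke crosses every loop: 5 roles x 6 considerations = 30 cells."""
    return list(product(ROLES, CONSIDERATIONS))

cells = intersections()
print(len(cells))   # 30
print(cells[0])     # ('architecting', 'testability')
```

The point of the spokes crossing all five loops is that every consideration applies in every role; no cell in that 5-by-6 grid is someone else's problem.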
Across all the CUBE interviews we do, you're seeing the leaders, the SVPs of engineering, or these departments where new people coming in are engineers or developers, and they're playing multiple roles. It's almost a multidisciplinary aspect where, you know, it's like going into In-N-Out Burger: you're on the fryer, later you're doing the grill, then you're the cashier. People are changing roles: architect, test, release, all in one. No longer departmental, slow, siloed groups. >> She brought up a great point about personas: that we no longer fit into these buckets, and that the changing roles are really the driver of how we should be looking at this. >> I'm really impressed. Really bold idea, a no-brainer as far as I'm concerned. I think one of the things, and the comments were off the charts: a lot of young people coming in from Discord servers, we had good traction over there, and they're all learning. Then you have the experienced people saying this definitely has happened and is happening. The dominoes are falling, and they're falling in the direction of modernization. That's the key trend: speed. >> Absolutely, with speed. But the way that Emily is presenting it is not brash or bold; it's in a way that makes great sense. The way that she creatively, visually laid out what she was talking about is amenable to the folks that have been doing this since the 60s and to the new folks now, to really look at this from a different lens. >> And I think she's a great setup for that lightning round of the 15 companies we've got, because you think about Sysdig, Harness, WhiteSource, HackerOne, ThoughtSpot, Rockset, OpsRamp, MontyCloud and the others: all are doing modern stuff, and we talked to them, and they're all on this new wave, this monster wave coming. What's your observation when you talk to these companies? >> They are. It was great.
I got to talk with eight of the 15, and the amount of acceleration of innovation that they've done in the last 18 months is phenomenal, obviously with the power and the fuel and the brand reputation of AWS. But really, what they're all facilitating is a cultural shift. When we think of DevOps and the security folks, um, there's a lot of work going on with AI and automation to really enable the DevOps folks to be in control of the process and not have to be security experts, while ensuring that the security is baked in, shifting left. >> We saw that the chat room was really active on the security side, and one of the things I noticed was not just shift left but the other groups, the security groups, and the theme of cultural, I won't say war, but collision; the cultural shift that's happening between the groups is interesting. Because you have this new DevOps persona, which has been around, Emily put it out, for a while, but now it's going to the next level. There are new revolutions about a mindset, a systems mindset. It's a way of thinking, and you start to see the new young companies coming out, being funded by the Greylocks of the world, who are not going to accept the 'we lost, the top three clouds won everything' line. There are new business models and new technical architectures in the cloud, and that's gonna be Jerry Chen's talk coming up next: castles in the clouds. Jerry Chen has always talked about moats, competitive advantage, and how moats are key to success, to guard the castle. And then we always joked there are no more moats, because the cloud has killed all the moats. But now the moats are in the cloud; the castles are in the cloud, not on the ground. So very interesting, thought provoking. And he's got data. If you look at the successful companies, like the Snowflakes of the world, you're starting to see these new formations, this new layer of innovation where companies are growing rapidly: 98 unicorns now in the cloud. Unbelievable. >> Wow, that's a lot.
One of the things you mentioned: there's competitive advantage, and these startups are all fueled by that. They know that there are other companies in the rearview mirror right behind them. If they're not able to work as quickly and as flexibly as a competitor, they lose, so they have to have that speed, that time to market, that time to value. It's absolutely critical. And that's one of the things I think, thematically, that I saw among the eight that I talked to: that time to value is absolutely table stakes. >> Well, I'm looking forward to talking to Jerry Chen, because we've talked on theCUBE before about this whole idea of what happens when winner takes most among the top three or four cloud players. What happens? And we were talking about that and saying, if you have a model where an ecosystem can develop, what does that look like? And back in 2013, 2014, 2015, no one really had an answer. Jerry was the only VC who really nailed it with this castles in the cloud. He nailed the idea that this was going to happen, and so I think, you know, we'll look back at the tape, or the videos from theCUBE, and we'll find those cuts. But we were talking about this then; we were pontificating and riffing on the fact that there were going to be new winners, and they were gonna look different. As Andy Jassy always says on theCUBE, you have to be misunderstood if you're really going to make something happen. Most of the most successful companies are misunderstood. Not anymore. The cloud scale is there, and that's what's exciting about all this.
>>One of the things that's come up, and just real quick before we bring Jerry in: automation. Security has been in every conversation, but automation is now so hot in the sense that it's real and it's becoming part of all the design decisions. How can we automate? Can we automate faster? What are the keys to automation? Is it having the right data? What data is available? So I think the ideas of automation and AI are driving all the change, and to me that's what these new companies represent: this modern era where AI is built into the outcome, the apps, and all that infrastructure. So it's super exciting. Let's check in; we've got Jerry Chen on the line. Lisa, great. We're going to come back after Jerry and then kick off the day. Let's bring in Jerry Chen from Greylock. Is he here? Let's bring him in. There he is. >>Hey John, good to see you. >>Hey, congratulations on an amazing talk and thesis on the castles in the cloud. Thanks for coming on. >>All right, well, thanks for reading it. Whenever you put a piece of work out, you're never sure what the response will be, but it seemed to resonate with a bunch of developers, founders, investors, and folks like yourself. Smart people seem to gravitate to it, so thank you very much. >>Well, one of the benefits of doing theCUBE for 11 years, Jerry, is we have videotape of many, many people talking about what the future will hold. You were on this early. It wasn't called Castles in the Cloud yet, but we had many conversations, kind of connecting the dots in real time. You've been on this for a while. It's great to see the work; I really think you nailed this, and I think you're absolutely on point here. So let's get into it. What is Castles in the Cloud? It's new research out of Greylock that you spearheaded, a collaborative effort, but you've got data behind it.
Give us a quick overview: what is Castles in the Cloud, the new moats of competitive advantage for companies? >>Yeah, it's a group project that our team put together, but basically, John, the question is, how do you win in the cloud? Remember the conversation we had eight years ago, when Amazon re:Invent was, holy cow, can you compete with them? Is it winner take all? Winner take most? And if it is winner take most, where are the white spaces for startups to emerge? Clearly, over the past eight years of this cloud journey, we've seen big companies: Databricks, Snowflake, Elastic, Mongo, DataRobot. So the question is, why are the castles in the cloud, the big three cloud providers, Amazon, Google, and Azure, winning? What advantage do they have? And then, given their moats of scale and network effects, how can you as a startup win? Look, there are 500-plus services between all three cloud vendors, but there are also 500-plus startups competing against the cloud vendors, and almost 100 unicorns, private companies, competing successfully against the cloud vendors, including public companies like Elastic, Mongo, and Snowflake; Databricks, not public yet; HashiCorp, not public yet. These are some examples of the names that I think are winning, and watch this space, because you'll see more of these guys storm the castle, if you will. >>Yeah. And you know, it's a funny metaphor because it has many different implications. One, as we talk about security: the perimeter, the gates, the moats being on land. But now you're in the cloud; you have a different security paradigm, and you have new kinds of services coming on board faster than ever before, not just from the cloud players but from companies contributing into the ecosystem. So you have the combination of the big three making the main markets, I think you counted 31 markets that we know of, and probably more.
And then you have this notion of a submarket, which means there's, well, we used to call it white space back in the day, remember? How much white space? Where's the white space? If you're in the cloud, there are a zillion white spaces. So talk about this submarket dynamic between the markets being enabled by the cloud players and how these submarkets play into it. >>Sure. So first, the first problem was what we did: we downloaded all the services for the big three clouds. And you know, what AWS calls a database service, like DocumentDB in Amazon, is like Cosmos DB in Azure. So first things first, we had to look at all three cloud providers and recategorize all the services, almost 500 of them, apples to apples to apples. Number two, you look at all these markets, or submarkets, and say, okay, how can we cluster these services into things that you and I can grok? Because the way Amazon, Azure, and Google think about it is very different. And the beauty of the cloud is this kind of fat, long tail of services for developers. So instead of Oracle as a single database for all your needs, there are 20 or 30 different databases, from time series to analytics databases, and we're talking to Rockset later today; document databases like Mongo; search databases like Elastic. And so what happens is there's not one giant market like databases; there's a database market and 30 or 40 submarkets that serve the needs of developers. So the great news is the cloud has reduced the cost and created something new for developers. And the good news for a startup is you can find plenty of white space solving a pain point very specific to a particular type of problem. >>And you can map a power law onto this. I love the power-law metaphor: it used to be a very thin neck, no torso, and then a long tail.
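The "apples to apples" recategorization Jerry describes, lining up each cloud's equivalent service under one submarket label, can be pictured with a toy mapping like the one below. This is an illustrative Python sketch: the handful of service names shown are real products, but the table is a tiny made-up subset, not Greylock's actual dataset.

```python
# Hypothetical illustration of cross-cloud service recategorization.
# Each submarket groups the roughly equivalent offering from each provider;
# this subset is for illustration only, not Greylock's real taxonomy.

SUBMARKETS = {
    "document database": {
        "aws": "DocumentDB",
        "azure": "Cosmos DB",
        "gcp": "Firestore",
    },
    "data warehouse": {
        "aws": "Redshift",
        "azure": "Synapse Analytics",
        "gcp": "BigQuery",
    },
}

def submarket_of(provider, service):
    """Return the submarket label a given cloud service falls into, if known."""
    for market, offerings in SUBMARKETS.items():
        if offerings.get(provider) == service:
            return market
    return None
```

The point of the sketch is just that the clustering is a manual, cross-cloud normalization; the providers' own marketing taxonomies don't line up on their own, which is why the real exercise had to recategorize almost 500 services by hand.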
But now, as you're pointing out, there's this expansion of the fat tail of services, and there are also big TAMs and markets available at the top of the power law, where you see a company like Snowflake essentially take on the data warehousing market by sitting on Amazon, refactoring with new services, getting a flywheel, and completely changing the unit economics, the consumption model, and the value proposition. >>Literally. >>You get it: Snowflake has created that moat, that castle wall, against Redshift. Then companies like Rockset, doing real-time analytics, are rushing right behind Snowflake, saying, hey, Snowflake is great for data warehousing, but it's not fast enough for real-time analytics; let me give you something new. So to your parallel argument, even the big anchors like Snowflake have created a wake behind them that creates even more white space for guys like Rockset. So that's exciting for guys like me and >>you. And then also, as we were talking about on our last episode, quarter two of our showcase, a VC came on and said it's like the old shelfware days, where you didn't know if a company was successful until they had to return the inventory. Now with cloud, if you're not successful, you know it right away. There's no debate: you're either winning or not. It's so instrumented that a company with a better mousetrap can win, fill the white space, and then move up. >>It goes both ways. The cloud vendors, the big three, Amazon, Google, and Azure, for sure instrument their own clouds. They know, John, which ecosystem partners are doing well and which are doing poorly, and they hear from the customers exactly what they want. So it goes both ways: they can weaponize that info just as well as you can. >>And that's the big argument, that Snowflake still pays the Amazon bills. They're still there.
So again, repatriation comes back; that's a big conversation that's come up. What's your quick take on that? Because if you're going to have a castle in the cloud, do you then bring it back to land? What's that dynamic? Where do you see that competing? On one hand there's innovation; on the other, maybe cost efficiency. Is it a growth indicator or a slowdown? What's your view on the movement to and from the cloud? >>I think there are probably three forces you're finding here. One is the cost advantage and the scale advantage of the cloud, and that has been going on for the past eight years. There is a repatriation movement for a certain subset of customers, which I think for cost purposes makes sense, but that's a tiny handful who believe they can actually run things better than a cloud. The third thing we're seeing around repatriation is not necessarily against cloud; you're going to see more decentralized clouds and things pushed to the edge. Look at companies like Cloudflare and Fastly, or a company we're investing in, Cato Networks, all focused on secure access at the edge. So that's not repatriation to my own data center; it's a disaggregation of the cloud, from one giant monolithic cloud, like AWS East or a Google region in Europe, to multiple smaller clouds, for governance purposes, security purposes, or legacy purposes. >>I'm looking at my notes here, looking down at the screen, because I cut and pasted this from your thesis. Of the $38 billion invested this quarter: AI and ML number one, analytics number two, security number three; actually, security number one, but you can see the bubbles here. All of those are data problems, so I need to ask you: I see data is hot, data as intellectual property. How do you look at that? Because we've been reporting on this, and we just started theCUBE conversation around workflows as intellectual property.
If you have scale and your moat is in the cloud, you could argue that data, and the workflows around those data streams, are intellectual property, a protocol. >>I believe both are, and they go hand in hand like peanut butter and jelly. Data, for sure, is IP. People talk about data as the new oil, the new resource, and that's largely true because it powers so much. But the workflow, to your point, John, is sticky, because every company is a unique snowflake. The process you use to run theCUBE and your business is different from how we run our business. So if you can build a workflow that leverages the data, that's super sticky in terms of switching costs. If my workflow is very bespoke to your business, then I think that's competitive advantage. >>Well, certainly your workflow is a lot different than theCUBE's; you guys have billions of dollars in capital. We're talking to all the people out here, Jerry. Great to have you on. Final thought on your thesis: where does it go from here? What's been the reaction now that you've put it out there? Great, love the research; I think you're on point on this one. Where do we go from here? >>We have two follow-up pieces in the near term. One is a deep dive on open source, so look out for that pretty soon, and how that's been a powerful strategy. The second is this kind of disaggregation of the cloud, be it blockchain and decentralized apps, be it edge applications. So those are the two more deep dives we're doing. And then the goal is to update this on a quarterly and annual basis. We're getting submissions from founders wanting to say, hey, you missed us, or you screwed up here; we've got the big cloud vendors saying, hey Jerry, we just launched these new things. So our goal is to update this every single year, and then probably do a look-back saying, okay, where were we wrong, where were we right? And then, let's say, Castles in the Cloud 2022:
We'll see the difference: were there more unicorns, were there more services, were the IPOs happening? So look for some short-term work from us on analytics, around open source and clouds, and then next year we hope to roll all of this forward, saying, hey, year over year, what's happening? What's changing? >>Great stuff, and congratulations on the funding news. You guys put another half a billion dollars into early, early stage, which is your roots, and you're still doing a lot of great investments in a lot of unicorns. Congratulations on that, and good luck to the team. Thanks for coming on, and congratulations: you nailed this one. I think we'll look back and say this is a pretty seminal piece of work. Thanks for sharing. >>Thanks, John. Thanks for having us. >>Okay, this is theCUBE here at the AWS Startup Showcase. We're about to get going on all the hot companies, closing out the keynote. Lisa, you see, Jerry Chen, CUBE alumni, he was right from day one. We've been riffing on this, but he nails it here. I think Greylock is lucky to have him as a general partner. He's done great deals, but I think he's hitting the next wave big. This is huge. >>I was listening to you guys talking, thinking: if you had a crystal ball back in 2013, some of the things Jerry is saying now, his narrative now; did he have a crystal >>ball? He did. I mean, he could be a CUBE host and I could be a venture capitalist; we were both right, so we could have been doing that together. No, in all seriousness, he was right. We talked off camera about who's going to challenge Amazon, and Andy Jassy was quoted many times on theCUBE saying he was surprised it took so long for people to figure out what they were doing. Jerry was at VMware; he had visibility into the cloud. He saw Amazon right away, like we did: this is a winning formula. So he was really out front on this one.
>>Well, the investments they're making in these unicorns are exciting. They have this lens that lets them see the opportunities almost before anybody else can, and find more white space where we didn't even know there was any. >>Yeah. And what's interesting about the report, which I want to dig into, and I wanted to get to it while he was on camera because it's a great report: he says it's like 500 services, and I think Amazon alone has 5,000. So how you define services is an interesting thing, and a lot of Amazon services Azure doesn't have, and vice versa; they do call that out. So I find the report interesting. It's going to be a feature game in the future between the big three clouds; they're going to say, we do this. You're starting to see the formation. Google is much more developer oriented. Amazon is much stronger in the governance area with data; obviously, as he pointed out, they have such experience. Microsoft, not so much on the developer cloud, more Office, and not so much on the governance side. So that's an indicator of, in my opinion, kind of where they rank: number one is still Amazon Web Services, Azure a solid second place, and Google way behind, right behind Azure. So we'll see how the horses come in. >>Right. And it also speaks to the hybrid, multi-cloud world in which many companies are living. To not just survive the last year and a half but to thrive, companies have really had to become data companies and leverage that data as a competitive advantage, to unlock the value of it. And a lot of the startups we talked to in the showcase are talking about how they're helping organizations unlock that data value. As Jerry said, it's the new oil, the new gold; but only if you can unlock that value faster than your competition. >>Yeah, well, I'm just super excited. We've got a great day ahead of us with all the hot startups.
And then at the end of the day, Dave Vellante is going to interview HelloFresh practitioners. We're going to close out every episode now with a closing practitioner. We tried to get JPMorgan Chase; data mesh is the hottest area right now in the enterprise. Data is the new competitive advantage. We know that data workflows are now intellectual property, and you're starting to see data really factor into these applications as a key aspect of the competitive advantage and the value creation. Companies that are smart are investing heavily in that, and the ones that are slow on the uptake are lagging the market and just trying to figure it out. So you start to see that transition, and you're starting to see people fall away, the ones that aren't going to make it. You can look at any app and ask how much AI is really in there, real AI, and what's their data strategy, and you can almost squint through that and go, okay, that's going to be a losing application. >>Well, the winners are making it a board-level conversation. >>And security is built in. Great to have you on this morning kicking it off. Thanks, John. Okay, we're going to go into the next set of the program; at 10:00 we're going to move into the breakouts. Check out the companies; there are three tracks: an awesome track on pure DevOps, data and analytics, and cloud management. To run down the DevOps track real quick, check out Sysdig and Harness.io: Sysdig is doing great securing DevOps, and Harness.io is a modern software delivery platform. WhiteSource is preventing and remediating open source risk for companies, which is really interesting, and Lumigo offers effortless monitoring of serverless functions; serverless is super hot.
And of course HackerOne is always great, doing a lot of great missions and bounties; you see their success continue. Ascend.io is there, in Palo Alto, changing the game on data engineering and data pipelining. There's another new data platform, horizontally scalable, and of course ThoughtSpot, with its AI-driven search paradigm, and Rockset, Jerry Chen's company, is here; they're all doing great in analytics. And on the cloud management and cost side: AIOps, day-two operations, OpsRamp, and multi-cloud ops are all there, all going to present. So check them out. This is theCUBE's AWS Startup Showcase, episode three.

Published Date : Sep 23 2021



PUBLIC SECTOR Speed to Insight


 

>>Hi, this is Cindy Maike, vice president of industry solutions at Cloudera. Joining me today is Shev, our solution engineer for the public sector. Today we're going to talk about speed to insight: why use machine learning in the public sector, specifically around fraud, waste, and abuse. Our topics for today: we'll discuss machine learning and why the public sector uses it to target fraud, waste, and abuse; the challenges; how we enhance your data and analytical approaches; the data landscape and analytical methods; and Shev will go over a reference architecture and a case study. By definition, per the Government Accountability Office, fraud is an attempt to obtain something of value through unwelcome misrepresentation; waste is about squandering money or resources; and abuse is about behaving improperly or unreasonably to obtain something of value for your personal benefit. As we look at fraud across all industries, it's a top-of-mind area within the public sector. >>The types of fraud that we see are specifically around cybercrime; accounting fraud, whether from an individual perspective or within organizations; financial statement fraud; and bribery and corruption. Fraud really hits us from all angles, whether from external perpetrators or internal perpetrators, and in research by PwC, we see over half of fraud coming through some form of internal or external perpetrator; again, key topics. And in a recent report by the Association of Certified Fraud Examiners, within the public sector, it was identified that in 2017 roughly $148 billion of US government spending was attributable to fraud, waste, and abuse.
Specifically, of that, $57 billion was in reported monetary losses, and another $91 billion was in areas where the monetary impact had not yet been measured. >>Breaking those areas down, we see several different categories of improper payments: within the health system, over $65 billion; within social services, over $51 billion; plus procurement fraud; fraud, waste, and abuse happening in the grants and loan processes; payroll fraud; and other areas. Quite a few different topical areas. So those are the broad-stroke areas; what are the actual use cases agencies are pursuing, what does the data landscape look like, and what analytical methods can we use to help curtail and prevent some of this fraud, waste, and abuse? As we look at analytical use cases in the public sector, from taxation to social services to public safety and other agency missions, we're going to focus specifically on some of the use cases around fraud within the tax area. >>We'll briefly look at some aspects of unemployment insurance fraud and benefit fraud, as well as payment integrity. Fraud has its underpinnings in quite a few different government agencies, with different analytical methods and usage of different data. One of the key elements is that you can look at your data landscape as the specific data sources you need, but it's really about bringing together different data sources across a variety of types and velocities. Data has different dimensions.
So we'll look at structured and semi-structured data, as well as behavioral data. With predictive models, we're typically looking at historical information; but if we're actually trying to prevent fraud before it happens, or while a case is in flight, which is specifically a use case Shev is going to talk about later, it's about how I work with that real-time, streaming information. >>How do I take advantage of data, whether it be financial transactions, asset verification, tax records, or corporate filings? We can also look at more advanced data sources when we're dealing with investigation-type information, perhaps applying deep-learning models to behavioral, unstructured data, such as camera analysis and so forth. So there's quite a variety of data, and the breadth of the opportunity really comes when you can integrate and analyze data across all the different sources; in essence, a more extensive data landscape. Specifically, I want to focus on some of the methods, data sources, and analytical techniques we're seeing used in government agencies, as well as opportunities to apply new methods. >>When we're doing audit planning, or assessing the likelihood of non-compliance, we'll see data sources where we're looking at a constituent's profile; we might investigate the forms they provided, compare that data against internal data sources, possibly look at net worth, compare it against other financial data, and also compare across other constituent groups.
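The peer-group comparison described above, checking a constituent's reported figures against "people like them", can be sketched as a simple outlier test. This is an illustrative Python sketch, not Cloudera tooling; the constituent IDs, dollar figures, and the two-sigma cutoff are assumptions made up for the example.

```python
# Hypothetical peer-group outlier check: flag constituents whose reported
# value sits far from the mean of their peer group.
from statistics import mean, stdev

def peer_outliers(reports, sigma=2.0):
    """Return ids whose reported value deviates more than `sigma`
    standard deviations from the peer-group mean."""
    values = [v for _, v in reports]
    mu, sd = mean(values), stdev(values)
    return [cid for cid, v in reports if sd > 0 and abs(v - mu) / sd > sigma]
```

A real system would of course define the peer group carefully (industry, income band, filing type) and use more robust statistics than a plain z-score, but the shape of the comparison is the same.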
Some of the techniques we use are basic natural language processing; maybe we'll do some text mining; we might do some probabilistic modeling, where we're looking at information within the agency and comparing it against tax forms. Historically, a lot of this has been done in batch, on both structured and semi-structured information, and typically the data volumes can be low; but we're also seeing those data volumes increase exponentially based on the types of events we're dealing with and the number of transactions. >>So getting the throughput matters, and Shev is going to talk specifically about that in a moment. The other aspect, as we look at other areas of opportunity, is building on this: how do I actually do compliance, how do I conduct audits or investigate potential fraud, and how do I look at areas of under-reported tax information? There you might be pulling in other types of data sources, whether it be property records, data supplied by the constituents themselves or by vendors, social media information, geographical information, or photos. Techniques we're seeing used include sentiment analysis and link analysis: how do we blend those data sources together with natural language processing? What's important here is also the method and the data velocity, whether it be batch or near real time, again across all types of data, structured, semi-structured, or unstructured. And the key value behind this is, how do we increase recovery of potential or under-reported revenue? >>And how do we stop fraudulent payments before they actually occur?
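The in-flight idea, scoring each payment as it arrives rather than in a nightly batch, can be sketched with an online running-statistics check. This is a hypothetical Python illustration, not part of any Cloudera product; the three-sigma threshold and the warm-up of ten observations are arbitrary choices for the example.

```python
# Hypothetical streaming score: flag a payment that deviates sharply from the
# running norm, using Welford's online algorithm so no history is stored.

class RunningStats:
    """Maintain running mean/variance incrementally (Welford's algorithm)."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def std(self):
        return (self.m2 / (self.n - 1)) ** 0.5 if self.n > 1 else 0.0

def flag_in_flight(amounts, z_threshold=3.0):
    """Yield (amount, flagged) pairs; flag payments far from the running norm.
    The first ten payments are treated as warm-up and never flagged."""
    stats = RunningStats()
    for amt in amounts:
        std = stats.std()
        flagged = stats.n > 10 and std > 0 and abs(amt - stats.mean) / std > z_threshold
        yield amt, flagged
        stats.update(amt)
```

In a production pipeline the same scoring logic would sit behind a stream processor consuming payment events, with per-payee or per-program baselines rather than one global one.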
Um, also looking at increasing the amount of, uh, the level of compliance, um, and also looking at the potential of prosecution of fraud cases. And additionally, other areas of opportunity could be looking at, um, economic planning. How do we actually perform some link analysis? How do we bring some more of those things that we saw in the data landscape on customer, or, you know, constituent interaction, bringing in social media, bringing in, uh, potentially police records, property records, um, other tax department, database information. Um, and then also looking at comparing one individual to other individuals, looking at people like a specific constituent, are there areas where we're seeing, uh, um, other aspects of a fraud potentially being occurring. Um, and also as we move forward, some of the more advanced techniques that we're seeing around deep learning is looking at computer vision, um, leveraging geospatial information, looking at social network entity analysis, uh, also looking at, um, agent-based modeling techniques, where we're looking at, uh, simulation Monte Carlo type techniques that we typically see in the financial services industry, actually applying that to fraud, waste, and abuse within the, uh, the public sector. >>Um, and again, that really lends itself to a new opportunities. And on that, I'm going to turn it over to Shev to talk about, uh, the reference architecture for, uh, doing these baskets. >>Thanks, Cindy. Um, so I'm going to walk you through an example, reference architecture for fraud detection using, uh, Cloudera underlying technology. Um, and you know, before I get into the technical details, uh, I want to talk about how this would be implemented at a much higher level. So with fraud detection, what we're trying to do is identify anomalies or novelists behavior within our data sets. Um, now in order to understand what aspects of our incoming data represents anomalous behavior, we first need to understand what normal behavior is. 
So in essence, once we understand normal behavior, anything that deviates from it can be thought of as an anomaly, right? So in order to understand what normal behavior is, we're going to need to be able to collect store and process a very large amount of historical data. And so then comes clutter's platform and this reference architecture that needs to before you, so, uh, let's start on the left-hand side of this reference architecture with the collect phase. >>So fraud detection will always begin with data collection. Uh, we need to collect large amounts of information from systems that could be in the cloud. It could be in the data center or even on edge devices, and this data needs to be collected so we can create our normal behavior profiles. And these normal behavioral profiles would then in turn, be used to create our predictive models for fraudulent activity. Now, uh, uh, to the data collection side, one of the main challenges that many organizations face, uh, in this phase, uh, involves using a single technology that can handle, uh, data that's coming in all different types of formats and protocols and standards with different porosities and velocities. Um, let me give you an example. Uh, we could be collecting data from a database that gets updated daily, uh, and maybe that data is being collected in Agra format. >>At the same time, we can be collecting data from an edge device that's streaming in every second, and that data may be coming in Jason or a binary format, right? So this is a data collection challenge that can be solved with clutter data flow, which is a suite of technologies built on Apache NIFA and mini five, allowing us to ingest all of this data, do a drag and drop interface. So now we're collecting all of this data, that's required to map out normal behavior. The next thing that we need to do is enrich it, transform it and distribute it to, uh, you know, downstream systems for further process. 
Uh, so let's, let's walk through how that would work first. Let's taking Richmond for, uh, for enrichment, think of adding additional information to your incoming data, right? Let's take, uh, financial transactions, for example, uh, because Cindy mentioned it earlier, right? >>You can store known locations of an individual in an operational database, uh, with Cloudera that would be HBase. And as an individual makes a new transaction, their geo location that's in that transaction data, it can be enriched with previously known locations of that very same individual and all of that enriched data. It can be later used downstream for predictive analysis, predictable. So the data has been enrich. Uh, now it needs to be transformed. We want the data that's coming in, uh, you know, Avro and Jason and binary and whatever other format to be transformed into a single common format. So it can be used downstream for stream processing. Uh, again, this is going to be done through clutter and data flow, which is backed by NIFA, right? So the transformed semantic data is then going to be stimulated to Kafka and coffin. It's going to serve as that central repository of syndicated services or a buffer zone, right? >>So cough is, you know, pretty much provides you with, uh, extremely fast resilient and fault tolerance storage. And it's also going to give you the consumer APIs that you need that are going to enable a wide variety of applications to leverage that enriched and transformed data within your buffer zone. Uh, I'll add that, you know, 17, so you can store that data, uh, in a distributed file system, give you that historical context that you're going to need later on for machine learning, right? So the next step in the architecture is to leverage a cluttered SQL string builder, which enables us to write, uh, streaming sequel jobs on top of Apache Flink. So we can, uh, filter, analyze and, uh, understand the data that's in the Kafka buffer zone in real time. 
Uh I'll you know, I'll also add like, you know, if you have time series data, or if you need a lab type of cubing, you can leverage kudu, uh, while EDA or exploratory data analysis and visualization, uh, can all be enabled through clever visual patient technology. >>All right, so we've filtered, we've analyzed and we've explored our incoming data. We can now proceed to train our machine learning models, uh, which will detect anomalous behavior in our historically collected data set, uh, to do this, we can use a combination of supervised unsupervised, uh, even deep learning techniques with neural networks and these models can be tested on new incoming streaming data. And once we've gone ahead and obtain the accuracy of the performance, the scores that we want, we can then take these models and deploy them into production. And once the models are productionalized or operationalized, they can be leveraged within our streaming pipeline. So as new data is ingested in real-time knife, I can query these models to detect if the activity is anomalous or fraudulent. And if it is, they can alert downstream users and systems, right? So this in essence is how fraudulent activity detection works. >>Uh, and this entire pipeline is powered by clutter's technology, right? And so, uh, the IRS is one of, uh, clutters customers. That's leveraging our platform today and implementing, uh, a very similar architecture, uh, to detect fraud, waste, and abuse across a very large set of, uh, historical facts, data. Um, and one of the neat things with the IRS is that they've actually, uh, recently leveraged the partnership between Cloudera and Nvidia to accelerate their Spark-based analytics and their machine learning. Uh, and the results have been nothing short of amazing, right? 
And in fact, we have a quote here from Joe and salty who's, uh, you know, the technical branch chief for the research analytics and statistics division group within the IRS with zero changes to our fraud detection workflow, we're able to obtain eight times to performance simply by adding GPS to our mainstream big data servers. This improvement translates to half the cost of ownership for the same workloads, right? So embedding GPU's into the reference architecture I covered earlier has enabled the IRS to improve their time to insights by as much as eight X while simultaneously reducing their underlying infrastructure costs by half, uh, Cindy back to you >>Chef. Thank you. Um, and I hope that you found, uh, some of the, the analysis, the information that Sheva and I have provided, um, to give you some insights on how cloud era is actually helping, uh, with the fraud waste and abuse challenges within the, uh, the public sector, um, specifically looking at any and all types of data, how the clutter a platform is bringing together and analyzing information, whether it be you're structured you're semi-structured to unstructured data, both in a fast or in a real time perspective, looking at anomalies, being able to do some of those on detection methods, uh, looking at neural network analysis, time series information. So next steps we'd love to have an additional conversation with you. You can also find on some additional information around, uh, how quad areas working in the federal government by going to cloudera.com solutions slash public sector. And we welcome scheduling a meeting with you again, thank you for joining Chevy and I today, we greatly appreciate your time and look forward to future >>Conversation..

Published Date : Aug 5 2021



PUBLIC SECTOR V1 | CLOUDERA


 

>>Hi, this is Cindy Maike, vice president of industry solutions at Cloudera. Joining me today is Shev, our solutions engineer for the public sector. Today we're going to talk about speed to insight: why the public sector is using machine learning, specifically around fraud, waste, and abuse. For today's topics, we'll discuss machine learning and why the public sector uses it to target fraud, waste, and abuse; the challenges; how to enhance your data and analytical approaches; the data landscape and analytical methods; and then Shev will go over a reference architecture and a case study. By definition, per the Government Accountability Office, fraud is an attempt to obtain something of value through willful misrepresentation; waste is about squandering money or resources; and abuse is about behaving improperly or unreasonably to obtain something of value for your personal benefit. As we look at fraud across all industries, it's a top-of-mind area within the public sector. >>The types of fraud we see center on cybercrime; accounting fraud, whether by individuals or within organizations; financial statement fraud; and bribery and corruption. Fraud really hits us from all angles, whether from external perpetrators or internal perpetrators, and research by PwC shows over half of fraud comes through some form of internal or external perpetrator. And a recent report by the Association of Certified Fraud Examiners identified that within the US government in 2017, roughly $148 billion was attributable to fraud, waste, and abuse.
Specifically, about $57 billion was tied to reported monetary losses, and another $91 billion to areas where the monetary impact had not yet been measured. >>Breaking those areas down from an improper payment perspective: over $65 billion within the health system and over $51 billion within social services, plus procurement fraud, fraud, waste, and abuse in the grants and loan processes, payroll fraud, and other categories; quite a few different topical areas. So as we look at those areas, where do we see additional focus? What are the actual use cases our agencies are pursuing, and what data and analytical methods can we use to help curtail and prevent some of this fraud, waste, and abuse? As we look at the analytical use cases in the public sector, from taxation to social services to public safety and other agency missions, we're going to focus specifically on some of the use cases around fraud within the tax area. >>We'll briefly look at aspects of unemployment insurance fraud and benefit fraud, as well as payment integrity. Fraud has its underpinnings across many different government agencies, calling for different analytical methods and the usage of different data. One of the key elements is that you can look at your data landscape and the specific data sources you need, but it's really about bringing together different data sources of different variety and different velocity. Data has different dimensions.
So we'll look at structured and semi-structured data, and behavioral data. With predictive models we're typically looking at historical information, but if we're actually trying to prevent fraud before it happens, or while a case is in flight (which is specifically a use case Shev is going to talk about later), how do I look at more of that real-time, streaming information? How do I take advantage of data, whether it's financial transactions, asset verification, tax records, or corporate filings? We can also look at more advanced data sources as we get into investigation-type information: perhaps deep learning models over behavioral, unstructured data, such as camera analysis. So there's quite a variety of data, and the breadth and the opportunity really come about when you can integrate and analyze data across all of those different sources; in essence, a more extensive data landscape. Specifically, I want to focus on some of the methods, some of the data sources, and some of the analytical techniques we're seeing used in government agencies, as well as opportunities to look at new methods. >>From an audit-planning standpoint, or when looking at the likelihood of non-compliance, we'll see data sources where we're looking at a constituent's profile. We might be investigating the forms they've provided, comparing that data against internal data sources, possibly looking at net worth, comparing it against other financial data, and also comparing across other constituent groups.
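As a toy illustration of that last point, comparing a constituent's filing against a peer group, here is a minimal sketch in Python; the deduction amounts, the peer grouping, and the 40% deviation threshold are all invented for the example, not figures from the talk.

```python
import statistics

# Hypothetical reported deductions for a peer group (e.g., filers in the
# same occupation and income band); every value here is invented.
peer_group = [4200.0, 3900.0, 4500.0, 4100.0, 4300.0, 4000.0]

def deviates_from_peers(amount, peers, tolerance=0.40):
    """Flag a filing that deviates from the peer-group median by more
    than `tolerance`, expressed as a fraction of that median."""
    median = statistics.median(peers)
    return abs(amount - median) / median > tolerance

print(deviates_from_peers(4400.0, peer_group))   # near the median -> False
print(deviates_from_peers(9800.0, peer_group))   # far above peers -> True
```

In practice the peer groups would be built from the blended internal and external data sources Cindy describes, and the threshold tuned to an acceptable false-positive rate.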
Some of the techniques we use are basic natural language processing, maybe some text mining, and probabilistic modeling, where we're comparing information within the agency against, say, tax forms. Historically, a lot of this has been done in batch, over both structured and semi-structured information. The data volumes can be low, but we're also seeing those volumes increase exponentially based on the types of events we're dealing with and the number of transactions. >>So getting the throughput matters, and Shev is going to talk about that specifically in a moment. The other area of opportunity builds on this: how do I actually do compliance, conduct audits, investigate potential fraud, and find under-reported tax information? There you might be pulling in other types of data sources: property records, data supplied by the constituents themselves or by vendors, social media information, geographical information, even photos. Techniques we're seeing used include sentiment analysis and link analysis, blending those data sources together with natural language processing. What's important here is also the method and the data velocity, whether batch or near real time, across all types of data: structured, semi-structured, or unstructured. And the key and the value behind this is how we increase the potential revenue recovered, or surface the under-reported revenue. >>How do we stop fraudulent payments before they actually occur?
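A minimal sketch of the link-analysis idea mentioned above: records that share an identifying attribute (an address, a bank account) get linked, and connected clusters surface for review. The claim records and attribute names are hypothetical; real blending would draw on the property, vendor, and constituent sources just listed.

```python
from collections import defaultdict, deque

# Hypothetical claim records; in practice these would be blended from
# tax forms, property records, and vendor files.
claims = [
    {"id": "C1", "address": "12 Oak St", "bank": "B-100"},
    {"id": "C2", "address": "12 Oak St", "bank": "B-200"},
    {"id": "C3", "address": "99 Elm Rd", "bank": "B-200"},
    {"id": "C4", "address": "7 Pine Ave", "bank": "B-300"},
]

def link_clusters(records, keys=("address", "bank")):
    """Group records that share any identifying attribute (link analysis)."""
    graph = defaultdict(set)
    by_value = defaultdict(list)
    for r in records:
        for k in keys:
            by_value[(k, r[k])].append(r["id"])
    for ids in by_value.values():          # records sharing a value are linked
        for a in ids:
            graph[a].update(i for i in ids if i != a)
    seen, clusters = set(), []
    for r in records:                      # BFS over the link graph
        if r["id"] in seen:
            continue
        comp, queue = [], deque([r["id"]])
        seen.add(r["id"])
        while queue:
            node = queue.popleft()
            comp.append(node)
            for nb in graph[node]:
                if nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        clusters.append(sorted(comp))
    return clusters

print(link_clusters(claims))  # C1, C2, C3 chain together; C4 stands alone
```

A cluster of superficially unrelated claimants sharing an address and a bank account is exactly the kind of pattern an investigator would want queued for review.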
And also: increasing the level of compliance, and improving the prosecution of fraud cases. Additional areas of opportunity include economic planning. How do we perform link analysis, and bring in more of what we saw in the data landscape around constituent interaction: social media, potentially police records, property records, and other tax department database information? And then comparing one individual to other individuals, looking at people like a specific constituent: are there areas where we're seeing other aspects of fraud potentially occurring? >>As we move forward, some of the more advanced techniques we're seeing around deep learning include computer vision, leveraging geospatial information, social network entity analysis, and agent-based modeling techniques: simulation and Monte Carlo methods that we typically see in the financial services industry, applied to fraud, waste, and abuse within the public sector. Again, that really lends itself to new opportunities. And on that, I'm going to turn it over to Shev to talk about the reference architecture for doing this. >>Thanks, Cindy. I'm going to walk you through an example reference architecture for fraud detection using Cloudera's underlying technology. Before I get into the technical details, I want to talk about how this would be implemented at a much higher level. With fraud detection, what we're trying to do is identify anomalies, or novel behavior, within our data sets.
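The simulation and Monte Carlo techniques mentioned above can be sketched as a simple loss-estimation exercise: repeatedly simulate a year of claims under assumed rates and read off the expected loss and a tail estimate. The claim count, fraud rate, and exponential loss distribution are all assumptions for illustration, not figures from the talk.

```python
import random
import statistics

def simulate_annual_loss(rng, n_claims=2_000, fraud_rate=0.02, mean_loss=5_000.0):
    """One Monte Carlo trial: draw which claims are fraudulent, then their losses."""
    n_fraud = sum(1 for _ in range(n_claims) if rng.random() < fraud_rate)
    # Exponentially distributed loss sizes: a simple, common severity assumption.
    return sum(rng.expovariate(1.0 / mean_loss) for _ in range(n_fraud))

def monte_carlo(trials=300, seed=7):
    rng = random.Random(seed)
    losses = [simulate_annual_loss(rng) for _ in range(trials)]
    # Mean loss across trials, plus the 95th percentile as a tail estimate.
    return statistics.mean(losses), statistics.quantiles(losses, n=20)[-1]

expected, p95 = monte_carlo()
print(f"expected annual fraud loss: {expected:,.0f}; 95th percentile: {p95:,.0f}")
```

With these assumptions the expected loss centers near 2,000 × 0.02 × 5,000 = 200,000; an agent-based variant would replace the independent draws with interacting claimant agents.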
Now, in order to understand what aspects of our incoming data represent anomalous behavior, we first need to understand what normal behavior is. In essence, once we understand normal behavior, anything that deviates from it can be thought of as an anomaly, right? And in order to understand what normal behavior is, we're going to need to collect, store, and process a very large amount of historical data. And so in comes Cloudera's platform and the reference architecture you see before you. >>Let's start on the left-hand side of this reference architecture, with the collect phase. Fraud detection will always begin with data collection. We need to collect large amounts of information from systems that could be in the cloud, in the data center, or even on edge devices, and this data needs to be collected so we can create our normal-behavior profiles. These profiles will then, in turn, be used to create our predictive models for fraudulent activity. Now, on the data collection side, one of the main challenges many organizations face in this phase involves using a single technology that can handle data coming in all different formats, protocols, and standards, with different varieties and velocities. Let me give you an example: we could be collecting data from a database that gets updated daily, and maybe that data is being collected in Avro format.
At the same time, we could be collecting data from an edge device that's streaming in every second, and that data may be coming in JSON or a binary format. So this is a data collection challenge, and it can be solved with Cloudera DataFlow, which is a suite of technologies built on Apache NiFi and MiNiFi, allowing us to ingest all of this data through a drag-and-drop interface. So now we're collecting all of the data that's required to map out normal behavior. The next thing we need to do is enrich it, transform it, and distribute it to downstream systems for further processing. Let's walk through how that would work. First, enrichment: for enrichment, think of adding additional information to your incoming data. Let's take financial transactions, for example, because Cindy mentioned them earlier. >>You can store known locations of an individual in an operational database; with Cloudera, that would be HBase. And as an individual makes a new transaction, the geolocation in that transaction data can be enriched with previously known locations of that very same individual, and all of that enriched data can later be used downstream for predictive analysis. So the data has been enriched; now it needs to be transformed. We want the data coming in as Avro, JSON, binary, and whatever other formats to be transformed into a single common format, so it can be used downstream for stream processing. Again, this is going to be done through Cloudera DataFlow, which is backed by NiFi. The transformed, enriched data is then streamed to Kafka, and Kafka is going to serve as a central repository, or buffer zone, for downstream services. >>Kafka pretty much provides you with extremely fast, resilient, and fault-tolerant storage, and it also gives you the consumer APIs you need to enable a wide variety of applications to leverage that enriched and transformed data within your buffer zone. I'll add that you can also store that data in a distributed file system, to give you the historical context you're going to need later on for machine learning. The next step in the architecture is to leverage Cloudera SQL Stream Builder, which enables us to write streaming SQL jobs on top of Apache Flink.
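In miniature, the enrichment lookup described above (known locations kept in an operational store, joined onto incoming transactions) might look like this; a plain dict stands in for the HBase table, and the account IDs and location codes are invented.

```python
# Hypothetical in-memory stand-in for the operational store (HBase in the
# talk): previously seen locations per account.
known_locations = {
    "acct-42": {"NYC", "BOS"},
}

def enrich(txn):
    """Attach previously seen locations and a novelty flag to a transaction."""
    seen = known_locations.get(txn["account"], set())
    enriched = dict(txn)                    # leave the original record intact
    enriched["known_locations"] = sorted(seen)
    enriched["new_location"] = txn["geo"] not in seen
    return enriched

txn = {"account": "acct-42", "amount": 129.99, "geo": "LAS"}
print(enrich(txn))  # new_location is True: LAS was not previously seen
```

In the architecture Shev describes, this lookup would happen inside the NiFi flow against HBase, with the enriched record then published to Kafka for downstream scoring.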
So we can filter, analyze, and understand the data that's in the Kafka buffer in real time. I'll also add that if you have time-series data, or if you need OLAP-type cubing, you can leverage Kudu, while EDA (exploratory data analysis) and visualization can all be enabled through Cloudera's visualization technology. >>All right: so we've filtered, we've analyzed, and we've explored our incoming data. We can now proceed to train our machine learning models, which will detect anomalous behavior in our historically collected data set. To do this, we can use a combination of supervised, unsupervised, and even deep learning techniques with neural networks, and these models can be tested on new incoming streaming data. Once we've obtained the accuracy and performance scores we want, we can take these models and deploy them into production. And once the models are productionalized, or operationalized, they can be leveraged within our streaming pipeline. As new data is ingested in real time, NiFi can query these models to detect whether the activity is anomalous or fraudulent, and if it is, alert downstream users and systems. So this, in essence, is how fraudulent activity detection works. >>And this entire pipeline is powered by Cloudera's technology. The IRS is one of the Cloudera customers leveraging our platform today and implementing a very similar architecture to detect fraud, waste, and abuse across a very large set of historical fact data. And one of the neat things with the IRS is that they've recently leveraged the partnership between Cloudera and NVIDIA to accelerate their Spark-based analytics and their machine learning, and the results have been nothing short of amazing.
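The train-then-score loop just described can be reduced to a toy sketch: fit a profile of normal behavior on historical data, then flag streaming records that deviate from it. A z-score stands in for the real supervised or deep learning models, and every number here is invented.

```python
import statistics

# "Historical" daily transaction amounts used to learn normal behavior.
history = [48.0, 52.0, 50.0, 47.0, 53.0, 49.0, 51.0, 50.0]
mu, sigma = statistics.mean(history), statistics.stdev(history)

def score(txn, z_threshold=3.0):
    """Query the trained profile for one streaming record; True means alert."""
    return abs(txn["amount"] - mu) / sigma > z_threshold

# Stand-in for records arriving off the Kafka buffer.
stream = [{"id": 1, "amount": 51.0}, {"id": 2, "amount": 400.0}]
alerts = [t["id"] for t in stream if score(t)]
print(alerts)  # only the out-of-profile transaction is flagged
```

In the full architecture, the `score` call is the point where NiFi queries the deployed model for each ingested record and routes alerts to downstream users and systems.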
In fact, we have a quote here from Joe Ansaldi, the technical branch chief for the research, analytics and statistics division within the IRS: "With zero changes to our fraud detection workflow, we were able to obtain eight times the performance simply by adding GPUs to our mainstream big data servers. This improvement translates to half the cost of ownership for the same workloads." So embedding GPUs into the reference architecture I covered earlier has enabled the IRS to improve their time to insight by as much as 8x while simultaneously reducing their underlying infrastructure costs by half. Cindy, back to you. >>Thank you, Shev. I hope you found the analysis and information that Shev and I have provided useful, and that it gives you some insight into how Cloudera is actually helping with the fraud, waste, and abuse challenges within the public sector: looking at any and all types of data; bringing information together and analyzing it on the Cloudera platform, whether structured, semi-structured, or unstructured, in batch or in real time; looking at anomalies and detection methods; and using neural network analysis and time-series information. As a next step, we'd love to have an additional conversation with you. You can also find additional information on how Cloudera works in the federal government by going to cloudera.com/solutions/public-sector. And we welcome scheduling a meeting with you. Again, thank you for joining Shev and me today; we greatly appreciate your time and look forward to future conversations. >>Good day, everyone, and thank you for joining me. I'm Cindy Maike, joined by Rick Taylor of Cloudera. We're here to talk about predictive maintenance for the public sector and how to increase asset service reliability.
We'll talk specifically about how to optimize your equipment maintenance and how to reduce costs and asset failures with data and analytics. We'll go into a little more depth on the types of data and the analytical methods we typically see used, and we'll go over a case study as well as a reference architecture. By basic definition, predictive maintenance is about determining when an asset should be maintained and what specific maintenance activities need to be performed, based on the asset's actual condition or state. It's also about predicting and preventing failures, and performing maintenance on your time and your schedule, to avoid costly unplanned downtime. >>McKinsey has analyzed predictive maintenance costs across multiple industries and identified an opportunity to reduce overall maintenance costs by roughly 50% with different types of analytical methods. So let's look at three types of maintenance models. First, there's the traditional method, corrective maintenance: performing maintenance on an asset after the equipment fails. The challenge is that we end up with unplanned downtime, disruptions to our schedules, and reduced quality in the performance of the asset. Then we have preventive maintenance, where we perform maintenance on a set schedule. The challenge there is that we're typically doing it regardless of the actual condition of the asset, which has resulted in unnecessary downtime and expense. And now the focus is really on condition-based maintenance: leveraging predictive maintenance techniques based on actual conditions and real-time events and processes.
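The difference between the three models can be made concrete with a tiny condition-based trigger: service when measured condition demands it, not when the calendar says so. The sensor names and limits below are illustrative assumptions, not engineering guidance.

```python
def maintenance_due(vibration_mm_s, temp_c, hours_since_service,
                    vib_limit=7.1, temp_limit=85.0, max_hours=2000):
    """Condition-based trigger: schedule service when any measured condition
    goes out of band, with a run-hours backstop. All limits are illustrative."""
    return (vibration_mm_s > vib_limit
            or temp_c > temp_limit
            or hours_since_service > max_hours)

print(maintenance_due(3.2, 70.0, 500))    # healthy readings: no service yet
print(maintenance_due(9.8, 70.0, 500))    # vibration out of band: service now
```

A purely preventive policy would ignore the sensor arguments entirely and key off `hours_since_service` alone; a corrective policy would wait for failure. The condition-based version acts on the earliest out-of-band signal.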
Within that, organizations (again, per McKinsey) have seen a 50% reduction in downtime, as well as an overall 40% reduction in maintenance costs. Again, that's looking across multiple industries, but let's put it in the context of the public sector, based on work by the Department of Energy several years ago. >>They looked at what predictive maintenance means to the public sector and what the benefits are: increasing the return on investment of assets, reducing downtime, and lowering overall maintenance costs. Corrective, or reactive, maintenance is performed once there's been a failure; the movement toward preventive maintenance is based on a set schedule; and predictive maintenance means monitoring real-time conditions and, most importantly, now actually leveraging IoT, data, and analytics to further reduce overall downtime. There's a research report by the Department of Energy that goes into more specifics on the opportunity within the public sector. So, Rick, let's talk a little bit about some of the challenges regarding data for predictive maintenance.
And so the challenge is to bring all these data assets together and begin to extract intelligence and insights from them, which in turn fuels machine learning and what we call artificial intelligence, which enables predictive maintenance. >>Let's look specifically at the types of use cases. Rick and I are going to focus on use cases where we see predictive maintenance coming into procurement, facilities, supply chain, operations, and logistics. We've got various levels of maturity: from monitoring a connected asset or vehicle, to leveraging predictive maintenance on data from connected warehouses, facilities, and buildings. All of these bring an opportunity to increase the quality and effectiveness of the missions within the agencies, to improve cost efficiency, and to address risk and safety. As for the types of data Rick mentioned, one of the data elements we typically see is failure history: when has an asset, a machine, or a component within a machine failed in the past? We also look at bringing together maintenance history for a specific machine: are we getting error codes off of a machine or asset, and when have we replaced certain components? And we look at how the assets are actually being used: what were the operating conditions, as captured by data pulled from sensors on the asset? 
We also look at the features of an asset, whether it's engine size, make and model, or where the asset is located, as well as who has operated the asset: their certifications, their experience, and how they're using it. And then we bring in pattern analysis: what are the operating limits? Are we getting service reliability or product recall information from the actual manufacturer? So, Rick, I know the data landscape has really changed. Let's go over some of those components. >>Sure. This slide depicts some of the inputs that inform a predictive maintenance program. As we've talked about, there are silos of information: the ERP system of record, perhaps the spares and the service history. What we want to do is combine that information with sensor data, whether from facility and equipment sensors or temperature and humidity readings, for example. All of this is combined together and then used to develop machine learning models that better inform predictive maintenance, because we do need to take into account the environmental factors that may cause additional wear and tear on the asset we're monitoring. Here are some examples of private sector maintenance use cases that also have broad applicability across the government. For example, one of the busiest airports in Europe is running Cloudera on Azure to capture, secure, and correlate sensor data collected from equipment within the airport, more specifically the people-moving equipment: the escalators, the elevators, and the baggage carousels. >>The objective here is to prevent breakdowns and improve airport efficiency and passenger safety. Another example is a container shipping port. 
In this case, IoT data and machine learning help customers recognize how their cargo handling equipment is performing in different weather conditions, understand how usage relates to failure rates, and detect anomalies in transport systems. Another example is Navistar, a leading manufacturer of commercial trucks, buses, and military vehicles. Typically, vehicle maintenance, as Cindy mentioned, is based on miles traveled or on a schedule, the time since the last service. But these are only two of the thousands of data points that can signal the need for maintenance. And as it turns out, unscheduled maintenance and vehicle breakdowns account for a large share of the total cost for a vehicle owner. So to help fleet owners move from a reactive approach to a more predictive model, Navistar built an IoT-enabled remote diagnostics platform called OnCommand. >>The platform brings in over 70 sensor data feeds from more than 375,000 connected vehicles, including engine performance, truck speed, acceleration, coolant temperature, and brake wear. This data is then correlated with other Navistar and third-party data sources, including weather, geolocation, vehicle usage, traffic, warranty, and parts inventory information. The platform then uses machine learning and advanced analytics to automatically detect problems early and predict maintenance requirements. So how does a fleet operator use this information? They can monitor truck health and performance from smartphones or tablets and prioritize needed repairs. They can also identify the nearest service location that has the relevant parts, the trained technicians, and the available service space. So, wrapping up the benefits: Navistar has helped fleet owners reduce maintenance costs by more than 30%. The same platform is also used to help school buses run safely and on time. 
For example, one school district with 110 buses that travel over a million miles annually reduced the number of PTOs needed year over year, thanks to predictive insights delivered by this platform. 

>>I'd like to take a moment and walk through the data lifecycle depicted in this diagram. Data ingested from the edge may include feeds from the factory floor or things like connected vehicles, whether they're trucks, aircraft, heavy equipment, cargo vessels, et cetera. Next, the data lands on a secure and governed data platform, where it is combined with data from existing systems of record to provide additional insights. This platform supports multiple analytic functions working together on the same data while maintaining strict security, governance, and control measures. Once processed, the data is used to train machine learning models, which are then deployed into production, monitored, and retrained as needed to maintain accuracy. The processed data is also typically placed in a data warehouse and used to support business intelligence, analytics, and dashboards. And in fact, this data lifecycle is representative of one of our government customers doing condition-based maintenance across a variety of aircraft. 

And the benefits they've discovered include less unscheduled maintenance, a reduction in mean man-hours to repair, increased maintenance efficiencies, improved aircraft availability, and the ability to avoid cascading component failures, which typically cost more in repair cost and downtime. They're also able to better forecast the requirements for replacement parts and consumables, and last, and certainly very importantly, this leads to enhanced safety. This chart overlays the secure open source Cloudera platform used in support of the data lifecycle we've been discussing: Cloudera DataFlow provides the data ingest, data movement, and real-time streaming data query capabilities. 
So DataFlow gives us the capability to bring data in from the asset of interest, from the Internet of Things, while the data platform provides a secure, governed data lake and visibility across the full machine learning lifecycle, eliminating silos and streamlining workflows across teams. The platform includes an integrated suite of secure analytic applications, and two that we're specifically calling out here are Cloudera Machine Learning, which supports the collaborative data science and machine learning environment that facilitates machine learning and AI, and the Cloudera Data Warehouse, which supports the analytics and business intelligence, including those dashboards for leadership. Cindy, over to you. >>Thank you, Rick. I hope that Rick and I have provided you some insights on how predictive, condition-based maintenance is being used, and can be used, within your respective agency: bringing together data sources that you may be having challenges with today, bringing in more real-time information from a streaming perspective, and blending that industrial IoT data and historical information together to help optimize maintenance and reduce costs within each of your agencies. To learn a little bit more about Cloudera and what we're doing around predictive maintenance, please visit cloudera.com/solutions/public-sector. We look forward to scheduling a meeting with you, and on that, we appreciate your time today and thank you very much.
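The condition-based approach described in this session, learning normal behavior from historical sensor data and flagging deviations, can be sketched minimally as follows. This is an illustrative sketch, not part of any Cloudera product; the readings, function names, and the three-sigma threshold are all assumptions for demonstration:

```python
from statistics import mean, stdev

def learn_baseline(history):
    """Summarize normal operating behavior from historical sensor readings."""
    return mean(history), stdev(history)

def needs_maintenance(recent, baseline_mean, baseline_std, k=3.0):
    """Flag the asset when the recent average deviates more than k standard
    deviations from the learned baseline (a simple condition-based rule)."""
    return abs(mean(recent) - baseline_mean) > k * baseline_std

# Vibration readings collected while the asset was known to run normally.
history = [1.0, 1.1, 0.9, 1.05, 0.95, 1.0, 1.1, 0.9]
m, s = learn_baseline(history)

print(needs_maintenance([1.0, 1.05, 0.98], m, s))  # False: within the normal band
print(needs_maintenance([1.9, 2.1, 2.0], m, s))    # True: drifted, schedule maintenance
```

In practice the baseline would be learned per asset and per sensor from the governed data platform, and the trigger would feed a work-order system rather than a print statement, but the shape of the decision is the same: maintain on condition, not on the calendar.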

Published Date : Aug 4 2021



Cindy Maike & Nasheb Ismaily | Cloudera


 

>>Hi, this is Cindy Maike, vice president of industry solutions at Cloudera. Joining me today is Nasheb Ismaily, our solution engineer for the public sector. Today we're going to talk about speed to insight, and why the public sector uses machine learning, specifically around fraud, waste, and abuse. So, our topics for today: we'll discuss machine learning and why the public sector uses it to target fraud, waste, and abuse, the challenges, how to enhance your data and analytical approaches, the data landscape and analytical methods, and Nasheb will go over a reference architecture and a case study. By definition, per the Government Accountability Office, fraud is an attempt to obtain something of value through willful misrepresentation; waste is about squandering money or resources; and abuse is about behaving improperly or unreasonably to obtain something of value for your personal benefit. So as we look at fraud across all industries, it's a top-of-mind area within the public sector. 

>>The types of fraud that we see are specifically around cybercrime, accounting fraud, whether from an individual perspective or within organizations, financial statement fraud, and bribery and corruption. Fraud really hits us from all angles, from both external and internal perpetrators, and per the research by PwC, over half of fraud is actually through some form of internal or external perpetrator. As we also look at a recent report by the Association of Certified Fraud Examiners, within the U.S. government in 2017, roughly $148 billion was identified as attributable to fraud, waste, and abuse. 
Specifically, of that, $57 billion was focused on reported monetary losses, and another $91 billion was in areas where the monetary impact had not yet been measured. 

Breaking those areas down, we see several different topics from an outpayment perspective: within the health system, over $65 billion; within social services, over $51 billion; procurement fraud; fraud, waste, and abuse happening in the grants and loan process; payroll fraud; and other aspects. Quite a few different topical areas. So as we look at those areas, what are the actual use cases our agencies are pursuing, what does the data landscape look like, and what analytical methods can we use to actually help curtail and prevent some of the fraud, waste, and abuse? Looking at the analytical processes and use cases in the public sector, from taxation to social services to public safety and other agency functions, we're going to focus specifically on some of the use cases around fraud within the tax area. 

>>We'll briefly look at some aspects of unemployment insurance fraud and benefit fraud, as well as payment integrity. Fraud has its underpinnings in quite a few different government agencies, with different analytical methods and usage of different data. I think one of the key elements is that you can look at your data landscape and the specific data sources that you need, but it's really about bringing together different data sources across different varieties and velocities. Data has different dimensions. 
So we'll look at structured data, semi-structured data, and behavioral data. With predictive models, we're typically looking at historical information, but if we're actually trying to prevent fraud before it happens, or while a case is in flight, which is specifically a use case Nasheb is going to talk about later, how do I look at more of that real-time, streaming information? How do I take advantage of data, whether it be financial transactions, asset verification, tax records, or corporate filings? We can also look at more advanced data sources, such as investigation information, where we might apply deep learning models to behavioral or unstructured data, whether it be camera analysis and so forth. So there's quite a variety of data, and the breadth and the opportunity really come about when you can integrate and look at data across all the different data sources; in a sense, a more extensive data landscape. Specifically, I want to focus on some of the methods, data sources, and analytical techniques we're seeing used in the government agencies, as well as opportunities to look at new methods. 

>>For audit planning, or for assessing the likelihood of non-compliance, we'll typically see data sources where we're looking at a constituent's profile. We might be investigating the forms they've provided, comparing that data or leveraging internal data sources, possibly looking at net worth, comparing it against other financial data, and also comparing across other constituent groups. 
Some of the techniques we use include basic natural language processing, maybe some text mining, and probabilistic modeling, where we're looking at information within the agency and comparing it against tax forms. Historically, a lot of this has been done in batch, on both structured and semi-structured information, and typically the data volumes have been low, but we're seeing those volumes increase exponentially based upon the types of events we're dealing with and the number of transactions. 

>>So getting the throughput matters, and Nasheb is going to talk specifically about that in a moment. The other area of opportunity builds on how we actually handle compliance: how do we conduct audits, investigate potential fraud, or look at under-reported tax information? There you might be pulling in other types of data sources, whether property records, data supplied by the actual constituents or by vendors, social media information, geographical information, or photos. Techniques we're seeing used include sentiment analysis and link analysis. How do we blend those data sources together with natural language processing? What's important here is also the data velocity, whether batch or near real time, again looking at all types of data, whether structured, semi-structured, or unstructured. The key and the value behind this is how we actually increase the potential revenue, or capture the under-reported revenue, and how we stop fraudulent payments before they actually occur. 
We're also looking at increasing the level of compliance and the potential for prosecution of fraud cases. Additional areas of opportunity could include economic planning: how do we perform link analysis, and how do we bring in more of what we saw in the data landscape around constituent interaction, social media, potentially police records, property records, and other tax department database information? We also compare one individual to other individuals, people like a specific constituent, to see whether other instances of fraud may be occurring. And as we move forward, some of the more advanced techniques we're seeing around deep learning include computer vision, leveraging geospatial information, social network entity analysis, and agent-based modeling techniques, where we're looking at the simulation and Monte Carlo techniques we typically see in the financial services industry and actually applying them to fraud, waste, and abuse within the public sector. And again, that really lends itself to new opportunities. On that, I'm going to turn it over to Nasheb to talk about the reference architecture. 

>>Sure. Yeah. Thanks, Cindy. So I'm going to walk you through an example reference architecture for fraud detection using Cloudera's underlying technology. Before I get into the technical details, I want to talk about how this would be implemented at a much higher level. With fraud detection, what we're trying to do is identify anomalies, or anomalous behavior, within our datasets. 
Now, in order to understand what aspects of our incoming data represent anomalous behavior, we first need to understand what normal behavior is. In essence, once we understand normal behavior, anything that deviates from it can be thought of as an anomaly, right? And in order to understand what normal behavior is, we need to be able to collect, store, and process a very large amount of historical data. And so in comes Cloudera's platform and this reference architecture. 

>>Let's start on the left-hand side of this reference architecture with the collect phase. Fraud detection will always begin with data collection. We need to collect large amounts of information from systems that could be in the cloud, in the data center, or even on edge devices, and this data needs to be collected so we can create normal behavior profiles, which are then in turn used to create our predictive models for fraudulent activity. Now, on the data collection side, one of the main challenges many organizations face in this phase involves using a single technology that can handle data coming in with all different types of formats, protocols, and standards, and with different velocities and volumes. Let me give you an example. We could be collecting data from a database that gets updated daily, and maybe that data is being collected in Avro format. 
The next thing that we need to do is enrich it, transform it and distribute it to know downstream systems for further process. Uh, so let's, let's walk through how that would work first. Let's taking Richmond for, uh, for enrichment, think of adding additional information to your incoming data, right? Let's take, uh, financial transactions, for example, uh, because Cindy mentioned it earlier, right? >>You can store known locations of an individual in an operational database, uh, with Cloudera that would be HBase. And as an individual makes a new transaction, their geo location that's in that transaction data, it can be enriched with previously known locations of that very same individual and all of that enriched data. It can be later used downstream for predictive analysis, predictable. So the data has been enrich. Uh, now it needs to be transformed. We want the data that's coming in, uh, you know, Avro and Jason and binary and whatever other format to be transformed into a single common format. So it can be used downstream for stream processing. Uh, again, this is going to be done through clutter and data flow, which is backed by NIFA, right? So the transformed semantic data is then going to be stimulated to Kafka and coffin is going to serve as that central repository of syndicated services or a buffer zone, right? >>So cough is, you know, pretty much provides you with, uh, extremely fast resilient and fault tolerance storage. And it's also going to give you the consumer API APIs that you need that are going to enable a wide variety of applications to leverage that enriched and transform data within your buffer zone. Uh, I'll add that, you know, 17, so you can store that data, uh, in a distributed file system, give you that historical context that you're going to need later on from machine learning, right? 
So the next step in the architecture is to leverage, uh, clutter SQL stream builder, which enables us to write, uh, streaming sequel jobs on top of Apache Flink. So we can, uh, filter, analyze and, uh, understand the data that's in the Kafka buffer zone in real-time. Uh, I'll, you know, I'll also add like, you know, if you have time series data, or if you need a lab type of cubing, you can leverage Q2, uh, while EDA or, you know, exploratory data analysis and visualization, uh, can all be enabled through clever visualization technology. >>All right, so we've filtered, we've analyzed, and we've our incoming data. We can now proceed to train our machine learning models, uh, which will detect anomalous behavior in our historically collected data set, uh, to do this, we can use a combination of supervised unsupervised, even deep learning techniques with neural networks. Uh, and these models can be tested on new incoming streaming data. And once we've gone ahead and obtain the accuracy of the performance, the X one, uh, scores that we want, we can then take these models and deploy them into production. And once the models are productionalized or operationalized, they can be leveraged within our streaming pipeline. So as new data is ingested in real time knife, I can query these models to detect if the activity is anomalous or fraudulent. And if it is, they can alert downstream users and systems, right? So this in essence is how fraudulent activity detection works. Uh, and this entire pipeline is powered by clutters technology. Uh, Cindy, next slide please. >>Right. And so, uh, the IRS is one of, uh, clutter as customers. That's leveraging our platform today and implementing a very similar architecture, uh, to detect fraud, waste, and abuse across a very large set of, uh, historical facts, data. 
Um, and one of the neat things with the IRS is that they've actually recently leveraged the partnership between Cloudera and Nvidia to accelerate their Spark-based analytics and their machine learning. Uh, and the results have been nothing short of amazing, right? And in fact, we have a quote here from Joe and salty who's, uh, you know, the technical branch chief for the research analytics and statistics division group within the IRS with zero changes to our fraud detection workflow, we're able to obtain eight times to performance simply by adding GPS to our mainstream big data servers. This improvement translates to half the cost of ownership for the same workloads, right? So embedding GPU's into the reference architecture I covered earlier has enabled the IRS to improve their time to insights by as much as eight X while simultaneously reducing their underlying infrastructure costs by half, uh, Cindy back to you >>Chef. Thank you. Um, and I hope that you found, uh, some of the, the analysis, the information that Sheva and I have provided, uh, to give you some insights on how cloud era is actually helping, uh, with the fraud waste and abuse challenges within the, uh, the public sector, um, specifically looking at any and all types of data, how the clutter a platform is bringing together and analyzing information, whether it be you're structured you're semi-structured to unstructured data, both in a fast or in a real-time perspective, looking at anomalies, being able to do some of those on detection methods, uh, looking at neural network analysis, time series information. So next steps we'd love to have an additional conversation with you. You can also find on some additional information around how called areas working in federal government, by going to cloudera.com solutions slash public sector. And we welcome scheduling a meeting with you again, thank you for joining us today. Uh, we greatly appreciate your time and look forward to future conversations. Thank you.

Published Date : Jul 22 2021



F1 Racing at the Edge of Real-Time Data: Omer Asad, HPE & Matt Cadieux, Red Bull Racing


 

>> Edge computing is projected to be a multi-trillion dollar business. You know, it's hard to really pinpoint the size of this market, let alone fathom the potential of bringing software, compute, storage, AI, and automation to the edge and connecting all that to clouds and on-prem systems. But what is the edge? Is it factories? Is it oil rigs, airplanes, windmills, shipping containers, buildings, homes, race cars? Well, yes, and so much more. And what about the data? For decades we've talked about the data explosion. It's mind-boggling, but guess what: we're going to look back in 10 years and laugh at what we thought was a lot of data in 2020. Perhaps the best way to think about edge is not as a place, but as when is the most logical opportunity to process the data, and maybe it's the first opportunity to do so, where it can be decrypted and analyzed at very low latencies. That defines the edge. And so by locating compute as close as possible to the sources of data, to reduce latency and maximize your ability to get insights and return them to users quickly, maybe that's where the value lies. Hello everyone, and welcome to this Cube conversation. My name is Dave Vellante, and with me to noodle on these topics is Omer Asad, VP and GM of primary storage and data management services at HPE. Hello, Omer. Welcome to the program. >> Hey, Dave. Thank you so much. Pleasure to be here. >> Yeah, great to see you again. So how do you see the edge in the broader market shaping up? >> David, I think that's a super important question, and your ideas are quite aligned with how we think about it. I personally think, you know, as enterprises accelerate their digitization and their asset and data collection, they are, especially in a distributed enterprise, trying to get to their customers. They're trying to minimize the latency to their customers.
So especially if you look across industries: manufacturing, which is distributed factories all over the place, is going through a lot of factory transformation, where they're digitizing their factories. That means a lot more data is now being generated within their factories. A lot of robot automation is going on that requires a lot of compute power to go out to those particular factories, which are going to generate their data out there. We've got insurance companies and banks that are reaching and engaging more and more customers out at the edge. They need a lot more distributed processing out at the edge. What this is requiring, and what we've seen across analysts, is a common consensus that more than 50% of an enterprise's data, especially if they operate globally around the world, is going to be generated out at the edge. What does that mean? New data is generated at the edge, but it needs to be stored and it needs to be processed. Data that is not required needs to be thrown away or classified as not important, and then data needs to be moved, for DR purposes, either to a central data center or just to another site. So overall, in order to give the best possible experience for manufacturing, retail, and especially distributed enterprises, people are generating more and more data-centric assets out at the edge. And that's what we see in the industry. >> Yeah, we're definitely aligned on that. Some great points. And so now, you think about all this diversity: what's the right architecture for these multi-site deployments, ROBO, edge? How do you look at that? >> Excellent question. So, you know, obviously every customer that we talk to wants simplicity, and, no pun intended, SimpliVity resonates because it is a simple, edge-centric architecture, right? So let's take a few examples.
You've got large global retailers; they have hundreds of retail stores around the world that are generating data, producing data. Then you've got insurance companies, then you've got banks. So when you look at a distributed enterprise, how do you deploy equipment out at the edge in a very simple, easy-to-deploy manner that is also easy to lifecycle and easy to mobilize? What are some of the challenges these customers deal with? You don't want to send a lot of IT staff out there, because that adds cost. You don't want islands of data and islands of storage in remote sites, because that adds a lot of state outside of the data center that needs to be protected. And then, last but not least, how do you push lifecycle-based applications, new applications, out to the edge in a very simple-to-deploy manner? And how do you protect all this data at the edge? So the right architecture, in my opinion, needs to be extremely simple to deploy: storage, compute, and networking out towards the edge in a hyperconverged environment. We agree upon that; it's a very simple-to-deploy model. But then comes: how do you deploy applications on top of that? How do you manage those applications? How do you back those applications up towards the data center? All of this keeping in mind that it has to be as zero-touch as possible. We at HPE believe it needs to be extremely simple: just give me two cables, a network cable and a power cable, tie it up, connect it to the network, push the state out from the data center, and back the state up from the edge back into the data center. Extremely simple. >> It's got to be simple, because you've got so many challenges. You've got physics to deal with, latency to deal with. You've got RPO and RTO: what happens if something goes wrong, you've got to be able to recover quickly. So that's great. Thank you for that. Now, you guys have hard news.
What is new from HPE in this space? >> From a deployment perspective, you know, HPE SimpliVity is just exploding, especially as distributed enterprises adopt it as their standardized edge architecture, right? It's an HCI box; it's got storage, compute, and networking all in one. But now, on top of being able to deploy applications all from your standard vCenter interface from a data center, what we have added is the ability to back up to the cloud, right from the edge. You can also back up all the way back to your core data center. All of the backup policies are fully automated and implemented in the distributed file system that is the heart and soul of the SimpliVity installation. In addition to that, customers now do not have to buy any third-party software: the backup is fully integrated in the architecture, and it's WAN-efficient. You can back up straight to the cloud, or you can back up to a central, high-end backup repository in your data center. And last but not least, we have a lot of customers that are pushing the limit in their application transformation. So where previously we were running VMware deployments out at the edge sites, we have now also added both stateful and stateless container orchestration, as well as data protection capabilities for containerized applications out at the edge. We have a lot of customers now deploying containers to rapidly process manufacturing data out at remote sites, and that allows us to not only protect those stateful applications but also back them up into the central data center. >> I saw in that chart a line on no egress fees. That's a pain point for a lot of CIOs that I talk to; they grit their teeth at those fees. Can you comment on that? >> Excellent, excellent question.
I'm so glad you brought that up; let me pick that up. So, along with SimpliVity, you know, we have the whole GreenLake as-a-service offering as well, right? What that means, Dave, is that we can literally provide our customers edge as a service. And when you complement that with Aruba wired and wireless infrastructure at the edge, and the hyperconverged infrastructure of SimpliVity at the edge, you know, one of the things that was missing with cloud backups is that every time you back up to the cloud, which is a great thing by the way, anytime you restore from the cloud, there is that egress fee, right? So as part of the GreenLake offering, we now have a cloud backup service natively offered by HPE, which is included in your HPE SimpliVity edge-as-a-service offering. So now not only can you back up into the cloud from your edge sites, you can also restore back without any egress fees from HPE's data protection service. You can restore back onto your data center, or you can restore back towards the edge site. And because the infrastructure is so easy to deploy and centrally lifecycle-managed, it's very mobile; if you want to deploy and recover to a different site, you can also do that. >> Nice. Hey, Omer, can you double-click a little bit on some of the use cases that customers are choosing SimpliVity for, particularly at the edge, and maybe talk about why they're choosing HPE? >> What are the major use cases that we see, Dave? Obviously, easy to deploy and easy to manage in a standardized form factor. A lot of these customers, like, for example, a large retailer with hundreds of stores across the US: you cannot send service staff to each of these stores, and their data center is essentially just a closet for these guys, right? So how do you have a standardized deployment?
So: standardized deployment from the data center, which you can literally push out, where you connect a network cable and a power cable and you're up and running; and then automated backup, elimination of backup state at the edge, and DR from the edge sites into the data center. So that's one of the big use cases: to rapidly deploy new stores, bring them up in a standardized configuration, both from a hardware and a software perspective, and be able to back up and recover them instantly. That's one large use case. The second use case that we see actually refers to a comment that you made in your opener, Dave, where a lot of these customers are generating a lot of the data at the edge. This is robotics automation going up in manufacturing sites. This is racing teams out at the edge doing post-processing of their cars' data. At the same time, there are disaster recovery use cases where you have, you know, camp sites and local agencies that go out there for humanity's benefit and move from one site to the other. It's a very, very mobile architecture that they need. So those are just a few cases where we are deployed. There is a lot of data collection, and there's a lot of mobility involved in these environments, so you need to be quick to set up, quick to back up, quick to recover, and then you're off to your next move. >> You seem pretty pumped up about this new innovation, and why not? >> It is, it is, you know, especially because it has been thought through with edge in mind, and edge has to be mobile, it has to be simple. And especially as we have lived through this pandemic, which I hope we see the tail end of in at least 2021, or at least 2022, one of the most common use cases that we saw, and this was an accidental discovery...
A lot of the retail sites could not go out to service their stores because, you know, mobility has been limited in these strange times that we live in. So from a central data center, you're able to deploy applications and you're able to recover applications. And a lot of our customers said, hey, I don't have enough space in my data center to back up to; do you have another option? So then we rolled out this update release to SimpliVity where, from the edge site, you can now directly back up to our backup service, which is offered on a consumption basis to the customers, and they can recover that anywhere they want. >> Fantastic. Omer, thanks so much for coming on the program today. >> It's a pleasure, Dave. Thank you. >> All right. Awesome to see you. Now let's hear from Red Bull Racing, an HPE customer that's actually using SimpliVity at the edge. >> The countdown really begins when the checkered flag drops on a Sunday. It's always about this race to manufacture the next designs, to make it more adapted to the next circuit we run at. Of course, if we can't manufacture the next component in time, all that will be wasted. >> Okay. We're back with Matt Cadieux, who is the CIO of Red Bull Racing. Matt, it's good to see you again. >> Great to see you. >> Hey, we're going to dig into a real-world example of using data at the edge, in near real time, to gain insights that really lead to competitive advantage. But first, Matt, tell us a little bit about Red Bull Racing and your role there. >> Sure. So I'm the CIO at Red Bull Racing, and we're based in Milton Keynes in the UK. The main job for us is to design a race car, to manufacture the race car, and then to race it around the world. As CIO, the IT group needs to develop the applications used in design, manufacturing, and racing. We also need to supply all the underlying infrastructure and also manage security. So it's a really interesting environment that's all about speed.
So this season we have 23 races, and we need to tear the car apart and rebuild it to a unique configuration for every individual race. We're also designing and making components targeted for each race. So there are 23 immovable deadlines and this big evolving prototype to manage with our car. But we're also improving all of our tools and methods and software that we use to design and make and race the car. So we have a big can-do attitude at the company around continuous improvement, and the expectations are that we continuously make the car faster, that we're winning races, and that we improve our methods in the factory and our tools. And so for IT, it's really unique in that we can be part of that journey and provide a better service. It's also a big challenge to provide that service and to give the business the agility it needs. So my job is really to make sure we have the right staff, the right partners, and the right technical platforms, so we can live up to expectations. >> That teardown and rebuild for 23 races: is that because each track has its own unique signature that you have to tune to, or are there other factors involved? >> Yeah, exactly. Every track has a different shape. Some have lots of straights, some have lots of curves, and lots are in between. The track surface is very different, and that has an impact on tires; the temperature and the climate are very different; some are hilly, some have big curves that affect the dynamics of the car. So in order to win, you need to micromanage everything and optimize it for any given race track. >> Talk about some of the key drivers in your business and some of the key apps that give you a competitive advantage to help you win races. >> Yeah. So in our business, everything is all about speed. The car obviously needs to be fast, but all of our business operations need to be fast as well.
We need to be able to design a car, and it's all done in the virtual world, but the virtual simulations and designs need to correlate to what happens in the real world. So all of that requires a lot of expertise to develop the simulations and the algorithms, and all the underlying infrastructure that runs them quickly and reliably. In manufacturing, we have cost caps and financial controls by regulation, so we need to be super efficient and control material and resources; ERP and MES systems are running and helping us do that. And at the race track itself, again speed: we have hundreds of decisions to make on a Friday and Saturday as we're fine-tuning the final configuration of the car, and here again we rely on simulations and analytics to help do that. And then during the race, we have split seconds, literally seconds, to alter our race strategy if an event happens. So if there's an accident and the safety car comes out, or the weather changes, we revise our tactics, and we're running Monte Carlo simulations, for example, arming experienced engineers with simulations to make a data-driven decision, hopefully a better one, and faster than our competitors. All of that needs IT to work at a very high level. >> It's interesting. As a layperson, historically, when I think about technology and car racing, I think about the mechanical aspects of a self-propelled vehicle, the electronics and the like, but not necessarily the data. But the data's always been there, hasn't it? Maybe in the form of tribal knowledge, somebody who knows the track and where the hills are, experience and gut feel. But today you're digitizing it and processing it close to real time. >> Exactly right. The car's instrumented with sensors, we do post-processing, video, image analysis, and we're looking at our car and our competitors' cars.
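A strategy call like the safety-car decision Matt mentions can be framed as a small Monte Carlo experiment: simulate the remaining laps many times under uncertainty and compare the expected outcome of "pit now" against "stay out". The sketch below is purely illustrative; every number (lap time, pit loss, tire degradation, safety-car probability) is invented, and real racing models are vastly richer.

```python
import random

def expected_remaining_time(pit_now: bool, laps_left: int = 20,
                            trials: int = 20_000, seed: int = 7) -> float:
    """Toy Monte Carlo estimate of expected remaining race time (seconds)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        t = 22.0 if pit_now else 0.0          # pit-lane time loss (invented)
        wear = 0.0 if pit_now else 0.3        # sec/lap degradation on old tires
        for lap in range(laps_left):
            t += 90.0 + wear * lap            # base lap time plus degradation
            if rng.random() < 0.02:           # 2% chance of a safety car per lap
                t += 15.0                     # time lost running behind it
        total += t
    return total / trials

pit = expected_remaining_time(pit_now=True)
stay = expected_remaining_time(pit_now=False)
print("pit now" if pit < stay else "stay out")
```

With these invented numbers, pitting costs 22 seconds once while staying out accumulates about 57 seconds of degradation over 20 laps, so the simulation recommends pitting. Changing the assumptions flips the call, which is exactly why the inputs get re-estimated live during a race.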
So there's a huge amount of very complicated models that we're using to optimize our performance and to continuously improve our car. The data and the applications that can leverage it are really key, and that's a critical success factor for us. >> So let's talk about your data center at the track, if I can call it that. Paint a picture for us: what does that look like? >> We have to send a lot of equipment to the track, at the edge. Even though we have a really good wide-area network link back to the factory, and there are cloud resources, a lot of the tracks are very old. You don't have hardened infrastructure, you don't have ducts that protect cabling, for example, and you can lose connectivity at remote locations. So the applications we need to operate the car and to make really critical decisions all need to be at the edge where the car operates. Historically, we had three racks of legacy equipment, and it was really hard to manage and to make changes. It was too inflexible. There were multiple panes of glass, and it was too slow; it didn't run our applications quickly. It was also too heavy and took up too much space when you're cramped into a garage with lots of environmental constraints. So we had introduced hyperconvergence into the factory and seen a lot of great benefits, and when the time came to refresh our infrastructure at the track, we stepped back and said there's a much smarter way of operating: we can get rid of all the slow, inflexible, expensive legacy and introduce hyperconvergence. And we saw really excellent benefits from doing that. We saw a 3x speed-up for a lot of our applications. So here, where we're post-processing data and we have to make decisions about race strategy, time is of the essence, and a 3x reduction in processing time really matters.
We were also able to go from three racks of equipment down to two racks, and the storage efficiency of the HPE SimpliVity platform, with 20-to-1 ratios, allowed us to eliminate a rack. That actually saved a hundred thousand dollars a year in freight costs by shipping less equipment. Then there are things like backup: mistakes happen, sometimes a user makes a mistake. So, for example, a race engineer could load the wrong data map into one of our simulations, and we can restore that VDI through SimpliVity backup in 90 seconds. That lets engineers focus on the car and make better decisions without having downtime. And we send two IT guys to every race; they're managing 60 users and a really diverse environment, juggling a lot of balls, and having a simple management platform like HPE SimpliVity allows them to be very effective and to work quickly. So all of those benefits were a huge step forward relative to the legacy infrastructure that we used to run at the edge. >> Yeah. So you had a nice Petri dish in the factory. It sounds like your number one KPI is speed, to help shave seconds of time, but also cost, and just the simplicity of setting up the infrastructure. >> Yeah, it's speed, speed, speed. We want applications to absolutely fly, to get to actionable results quicker, to get answers from our simulations quicker. The other area where speed is really critical is that our applications are also evolving prototypes: the models are getting bigger, the simulations are getting bigger, and they need more and more resource. Being able to spin up resource and provision things without being a bottleneck is a big challenge, and SimpliVity gives us the means of doing that. >> So did you consider any other options, or, because of the factory knowledge, was HCI very clearly the option? What did you look at?
>> Yeah, so we have over five years of experience in the factory, and we eliminated all of our legacy infrastructure five years ago. The benefits I've described at the track, we saw in the factory first. At the track we have a three-year operational life cycle for our equipment; 2017 was the last year we had legacy, and as we were building for 2018 it was obvious that hyperconverged was the right technology to introduce, and we'd had years of experience with it in the factory already. The benefits that we see with hyperconverged actually matter even more at the edge, because our operations are so much more pressurized and time is even more of the essence. So speeding everything up at the really pointy end of our business was really critical. It was an obvious choice. >> Why SimpliVity? Why did you choose HPE SimpliVity? >> Yeah. So when we first heard about hyperconverged, way back, we had a legacy infrastructure in the factory: overly complicated, too slow, too inflexible, too expensive. We stepped back and said there has to be a smarter way of operating. We went out and challenged our technology partners, and we learned about hyperconvergence and whether the hype was real or not. So we ran some POCs and benchmarking, and the POCs were really impressive, with all these speed and agility benefits; HPE, for our use cases, was the clear winner in the benchmarks. So based on that, we made an initial investment in the factory: we moved about 150 VMs and 150 VDI onto it. Then, as we saw the benefits, we successively invested, and we now have an estate in the factory of about 800 VMs and about 400 VDI. So it's been a great platform, and it's allowed us to really push boundaries and give the business the service it expects.
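The logistics figures Matt quotes a little earlier are easy to sanity-check. At a 20:1 data-efficiency ratio, a given logical footprint needs one twentieth of the physical capacity, which is what let the team drop from three trackside racks to two; and with 23 flyaway events a year, one rack fewer at the stated $100,000 of annual freight savings works out to roughly $4,300 saved per race. A hedged sketch (the logical-capacity figure is invented; the ratio, race count, and savings come from the interview):

```python
# Illustrative check of the trackside storage and freight numbers.
# The 20:1 efficiency ratio, 23 races, and $100k/year saving come from the
# interview above; the logical data footprint is hypothetical.
logical_tb = 400.0
efficiency_ratio = 20.0
physical_tb = logical_tb / efficiency_ratio      # capacity actually shipped

races_per_year = 23
annual_freight_saving = 100_000                  # from shipping one less rack
saving_per_race = annual_freight_saving / races_per_year

print(physical_tb, round(saving_per_race))  # 20.0 4348
```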
>> So the time in which you were able to go from data to insight to recommendation, or edict, was that compressed? You kind of indicated that, but... >> So we gather telemetry from the car and we post-process it, and that reprocessing time is very time-consuming. You know, we went from eight or nine minutes for some of the simulations down to just two minutes, so we saw big, big reductions in time. Ultimately, that meant an engineer could understand what the car was doing during a practice session, recommend a tweak to the configuration or setup, and just get more actionable insight quicker. And it ultimately helps get a better car quicker. >> Such a great example. How are you guys feeling about the season, Matt? What's the team's sentiment? >> Yeah, I think we're optimistic. We have a new driver
Uh, so, you know, obviously, uh, one of the biggest use cases as you saw for red bull racing is Trackside deployments. There are now 22 races in a season. These guys are jumping from one city to the next, they've got to pack up, move to the next city, set up, set up the infrastructure very, very quickly and average formula. One car is running the thousand plus sensors on that is generating a ton of data on track side that needs to be collected very quickly. It needs to be processed very quickly, and then sometimes believe it or not, snapshots of this data needs to be sent to the red bull back factory back at the data center. What does this all need? It needs reliability. >>It needs compute power in a very short form factor. And it needs agility quick to set up quick, to go quick, to recover. And then in post processing, they need to have CPU density so they can pack more VMs out at the edge to be able to do that processing now. And we accomplished that for, for the red bull racing guys in basically two are you have two SimpliVity nodes that are running track side and moving with them from one, one race to the next race, to the next race. And every time those SimpliVity nodes connect up to the data center collector to a satellite, they're backing up back to their data center. They're sending snapshots of data back to the data center, essentially making their job a whole lot easier, where they can focus on racing and not on troubleshooting virtual machines, >>Red bull racing and HPE SimpliVity. Great example. It's agile, it's it's cost efficient, and it shows a real impact. Thank you very much. I really appreciate those summary comments. Thank you, Dave. Really appreciate it. All right. And thank you for watching. This is Dave Volante. >>You.

Published Date : Mar 30 2021



Venkat Krishnamachari and Kandice Hendricks | CUBE Conversation, March 2021


 

>> Hello, and welcome to this special CUBE Conversation. I'm John Furrier, host of theCUBE, here in Palo Alto, California. Got a great deep-dive conversation with MontyCloud, who we featured on our AWS showcase of cloud startups. Venkat Krishnamachari, who's the CEO and co-founder, great to see you again, and Kandice Hendricks, delivery architect at GreenPages, a partner and customer. Great to see you. Thanks for coming on. As always, CUBE Conversations are fun for getting into the deep dive. Good to see you. >> Oh, great to have this opportunity, John. Thank you so much. Kandice, thank you for joining us. It's been a pleasure working with GreenPages, John. We're looking forward to this conversation today. >> Yeah. One of the things I'm really excited about that came out of our CUBE on Cloud startups showcase was you guys talking about day-two operations. The term has been kicked around, but you guys drilled into it and put some quantification around the value proposition. Every company has a day-two problem, and an opportunity: most people see problems, but they're really opportunities to create a value proposition around something that's now going to be an operational standard, table stakes. So let's get into it. Take us through what you guys have with day two. Do a deep dive on this. Take it away. >> Thanks, John. We'll do a little bit of an involved conversation today. We'll switch between a few slides, and we're happy to show a quick demo as well, so our customers get a what-you-see-is-what-you-get kind of demo. To give a quick background and context: day-two operations in the cloud are important for customers who are trying to get self-service provisioning going, get standardization going, and have a way to help their developers move fast on innovation. 
What we are seeing now is that developers increasingly have a seat at the table, and they would like their infrastructure architects and infrastructure solution providers to enable them to do the things they want to do with fewer friction points. What the day-two platform we built does is upskill IT teams so they can deliver what their developers need, so the sandbox environments developers want come to life quickly. 
And on top of that, developers can move fast on innovation with guardrails in place: the guardrails that IT administrators and IT leaders are able to set for developers, including cost guardrails, governance guardrails, and security and compliance guardrails. It's a bot-based approach to getting out of the way of developers so they can move fast, while the technology gives them the freedom to innovate without running into the common cloud problems, such as cost overruns or security and compliance challenges. Today I'll show and tell a little bit of all of this, and then we'll bring in our partner, Kandice, as well, so she can talk about how we help the Fortune 200 innovate faster with our platform. >>
Awesome. Well, let's get into it. As you know, I think day-two operations is really going to be a cloud lingua franca, part of everyone's operational standard. And it's not just about making sure you've got cost-effectiveness; innovation strategies that rely on cloud need new things in place. So take us through the show and tell. >>
Great. Let's switch to the slide deck here. I'll give a quick background and then go from there. MontyCloud is an intelligent cloud management platform company. We help customers of all sizes. We are an AWS partner with the Cloud Management Tools competency, and we're super happy to be innovating on the AWS platform for AWS customers. 
Our platform is an autonomous cloud operations platform. Our mission is to empower IT teams to deliver for their developers and become cloud powerhouses. I'm going to go through three quick sections of the MontyCloud platform that deliver value to our customers. First, with our platform, and without needing additional skill sets, hiring hard-to-find talent, or using third-party tools, our customers can use AWS-native solutions to achieve full visibility into their cloud environments. 
They can enable consistent, simplified self-service deployments, and they can reduce the total cost of cloud operations, all in just a few clicks. I'm going to show and tell what customers get. Moving into the slide: customers get visibility into their footprint, plus comprehensive security posture management and compliance posture management; they can click away and solve these problems. They can enable their innovation teams with operations-ready environments that can provision anything from server-based workloads to serverless workloads to containerized environments, all readily available in the platform. And of course, all of this can be done with a few clicks and no code. That's our platform in a nutshell. I'm happy to switch to a demo from here, John. How does that sound? >> Great. Sounds awesome. Let's get the demo. Thanks for the overview. By the way, we cover that at a high level in a great video in our new startup showcase; people can check that out online. But let's get into the demo. >> Sounds good. I'm going to switch to my laptop here to show the browser window and go into the demo environment. This is montycloud.com. Customers can go to app.montycloud.com. I'm going to move fast in this demo environment. Customers simply log in, assuming they have signed up for the platform. It's free to sign up. 
The platform activates immediately. This is the full first-run experience. Customers can get started in a couple of clicks. There's a welcome screen here they can walk through. It provides a guided experience for customers to gain visibility, security, and compliance, and to set up their cloud operations environment in just a couple of clicks. In this case, customers get continuous resource visibility. They click next. From a security point of view, we assess against 2,220-plus security best practices, and customers can indicate that they would like to remediate the issues. 
We'll help do that; it's a bot-based approach that does it. Click next: compliance is a similar situation. We do compliance assessments in the platform, and customers can remediate. Click next: we have provisioning templates, John; we had a really good conversation yesterday about this. There's a whole set of well-architected templates that customers can click and provision, anything from basic core networking all the way up to high-performance computing environments, all available in the platform. Click next again: customers can manage servers, Windows or Linux, running on any cloud, including hybrid cloud, Azure, AWS, and GCP, and we can manage them in a single interface. Last but not least, application management: IT operators and leaders want a view of how their cloud applications are performing so they can react quickly, and the platform provides that. That's it; they've selected all the features, much of which is free in the platform. Some features are available in the free trial; customers can click to try them for 14 days. That's all. Click next, and the platform sets itself up. This is how quickly we can get to helping customers understand what they need to do. 
I'm going to show you, if I can go to the next screen here, and say: this is my company name. >>
So I'm going to enter some details here that capture basic information about our customer's departments. Let's say this is a demo account, or say there's a human resources department whose cloud environment I'm trying to connect and manage. Click next. >>
And that's it. They connect the AWS account. We now take our customers to the AWS console, a familiar interface, where they click next on this CloudFormation stack, which automatically starts creating what we need in the customer's account. They click a button here, and it runs in the background. What our platform, the DAY2 view here, does is instantly receive a notification back from the customer's account. As you can see, DAY2 has recognized that the customer is trying to connect the cloud account, and it asks: do you want to manage these regions? We can manage 15-plus regions. Click next, and that is pretty much it. I'm going to skip this step so we can get to the dashboard, and I'll skip this one as well, where you can invite your team members and get weekly reports. Long story short, that's it: about 10 clicks, and we are already in a cloud environment where customers can begin to manage, operate, and take control of their cloud footprint. >> Got it. And real quick, you skipped over the collaboration feature. That's for what team members do, so they see the same dashboard? >> Great question. Our customers can invite additional team members. It could be an executive who wants to look at the total cost of cloud operations, or another team member who should be enabled only for certain parts of the platform. Very simple, and we have SSO integration in the platform as well. 
So invite additional users and start using DAY2 in less than 10 minutes, with no additional configuration required. >> You know, Amazon's got that slogan, "always day one." You guys are always day two. >> It's all about ensuring day two is taken care of. >> Awesome. Great stuff. Kandice, what's your take on this? How do you fit in here? Talk about what it's like to work with these guys, and what's your perspective on this new MontyCloud day-two operations dashboard? >> Hi, thank you, John. Hi, Venkat. Thank you very much for the introduction. Basically, our interaction is collaborative; we're great team partners. We work well with MontyCloud, and we have been partners for quite some time, solutioning products for our clients. >> Great. Venkat, you want to chime in as well and share some color commentary on your partner's value? >> Sure. Thanks, John. GreenPages offers cloud services and a whole suite of solutions to their customers, ranging from Fortune 100 enterprises to a wide variety of others. Perhaps we can switch over to a slide deck here. Kandice, if you're up for it, maybe we can walk through a little bit of GreenPages and the solutions you've implemented. We can talk from the customer point of view, which we think will be more beneficial to our audience. >> Yes, thank you. That's very helpful. Again, my name is Kandice Hendricks, and I'm a delivery architect here at GreenPages Technology Solutions. What I'd like to do is share a few examples of collaboration that we have achieved through our partnership with MontyCloud. First, to give a bit of history: GreenPages has been in business since 1992, and we maintain a wide customer base, approximately 500 customers across different verticals, from insurance to government to manufacturing and such. 
We've also made the CRN Tech Elite 250 list since its inception in 2011; that list recognizes the top 250 companies in the U.S. and Canada for depth of experience and the highest levels of training and certifications. We also offer managed services support, professional services, cloud readiness assessments and migrations, and a growing CSP, or cloud service provider, practice. Today I would like to highlight a few innovative projects that we've executed with MontyCloud as our partner, for AWS compliance needs as well as AWS DR. 
This slide first outlines a business scenario we dealt with for one of our clients: addressing cost, security, compliance, and standardization across a global AWS environment. The challenge was the complexity and size of the cloud environment: how could they stay compliant, optimize costs, and scale? The outcome, through the teamwork of MontyCloud and GreenPages, was that we achieved every facet of the challenge by creating what we coined the "compliance bot." It provides a platform to easily parameterize options such as configurable schedules; configurable target servers and departments; a choice between automated and manual remediation processes; the option of auto-reboots versus approval-based reboots of resources; integration with a Slack channel for the manual remediation approval process; and daily non-compliance reporting. The compliance bot can also ensure proper patching, required agents, and required software versions on resources, so that they maintain compliance, through the use of tagging, Lambda functions, AWS Systems Manager Fleet Manager, AWS Config, and AWS CloudWatch.
Another opportunity we've had to work on with MontyCloud: in this use case, the scenario the GreenPages customer needed to solve was the automation of DR to address an entire AWS regional failure. The requirements were an RTO of four hours and an RPO of less than one minute on certain EC2 instances. The challenge was to build the solution using only AWS-native services, meeting the required RTO and RPO with no custom tooling integration. With MontyCloud's assistance and teamwork, what we achieved is what we now refer to as the "DR bot." We built the automation to replicate everything from the production environment in AWS to the DR region in AWS: subnets, IP CIDR ranges, LAN IP addresses, security groups, load balancers, and all associated configuration settings. 
With pilot-light scripting that runs daily through a Lambda function, we manage the delta copies from production into the DR region and pick up any changes that occur in the production environment. To meet the RPO we used CloudEndure, AWS's disaster recovery replication service, and we used AWS Backup for the more static instances. We then created an integration to send any health alerts, in the event of an AWS outage, to the customer's Slack channel. Upon approval, through a manual approval process, they can kick off an end-to-end failover from production to their DR region in AWS. Both the compliance-bot and DR-bot automations can be ported and parameterized for any AWS environment. We welcome the opportunity to discuss this further and assist you in your cloud journey. I hope this explains some of the great innovation we've been able to work on with MontyCloud. Thanks, Venkat, for allowing me to speak, and back to you. >> Thank you, Kandice. This is fantastic. John, as you saw, right?
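The daily pilot-light sync at the heart of the DR bot boils down to a diff: compare the production region's resource inventory against the DR region and apply only the delta. Here is a minimal, illustrative sketch of that diff step in plain Python; the resource names are hypothetical, and the real implementation builds both inventories from AWS API calls inside Lambda.

```python
# Minimal sketch of the daily pilot-light delta computation described above.
# Inventories map resource name -> configuration; in the real DR bot these
# would be assembled from AWS API calls (subnets, security groups, and so on).

def compute_delta(prod: dict, dr: dict) -> dict:
    """Work out what the DR region must create, update, or delete to mirror prod."""
    return {
        "create": sorted(set(prod) - set(dr)),
        "update": sorted(k for k in set(prod) & set(dr) if prod[k] != dr[k]),
        "delete": sorted(set(dr) - set(prod)),
    }

prod_inventory = {
    "subnet-app": {"cidr": "10.0.1.0/24"},
    "sg-web":    {"ports": [443]},          # changed in prod today
    "lb-public": {"listeners": [443]},      # new in prod today
}
dr_inventory = {
    "subnet-app": {"cidr": "10.0.1.0/24"},
    "sg-web":    {"ports": [80, 443]},
    "sg-old":    {"ports": [22]},           # removed from prod
}

delta = compute_delta(prod_inventory, dr_inventory)
```

Applying only this delta each day keeps the DR region in lockstep with production at minimal cost, which is what makes the pilot-light pattern cheaper than running a full hot standby.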
The challenge with cloud operations is that there are a lot of moving parts: visibility, compliance, security, all of that. Typically, customers have to write custom code or integrate ten-plus tools. Suddenly they're spinning up their own cloud operations teams and their own homegrown cloud operations model, which invariably results in more maintenance tasks and tech debt. Our platform can do all of this: abstract the complexity and put this kind of automation within reach of customers who are trying to transform their IT departments, just by clicking away. That's the stack that we built on top. >> Yeah, I think that's a great example. Kandice highlights some of the things we were talking about last time around intelligent applications meeting intelligent infrastructure. And to your point about operations, this comes up all the time in every conversation we're in, and we're seeing it in the marketplace: there's a new operational model developing in real time. You're seeing homegrown ops, transforming ops; new roles and responsibilities are emerging, and that's just the nature of the beast right now. This is the new normal. It's not your traditional ops model; it's transitioning to a new way. This is a great example. Do you see it the same way? >> That's a great description, John, you're right. That is the model that is evolving, one that demands more from IT teams while the runway to transform shrinks and the cloud surface area grows. That's exactly where we come in to help. And we did do a little bit of a deep dive into what the platform does today for our audience, so they can get this value. Thank you for that.
Having gone deep, I'm happy to chat a little more, if you'd like, about where customers can go to get started. >> Yeah, looking forward to it, Venkat. Thanks for coming on. Kandice, thank you very much for sharing; GreenPages, congratulations. Love the DR bot. I want a CUBE bot to do these interviews for us. I'm looking forward to a follow-on conversation, Venkat. We're going to certainly see you out on the internet, on Twitter, and maybe get you on our Clubhouse chats; there's a lot of action out there, a lot of people talking about this. You're seeing everything from observability to new kinds of monitoring to modern application development techniques evolving in real time. So day two is here. Thanks for sharing. >> Looking forward to it, John. Where customers can go is montycloud.com today. They can get started in just a few clicks. We have a free version of the platform; they can activate an account, and in about 10 minutes they have the power of the automation we've built and can start taking control of their cloud operations. So we encourage folks to sign up for free at montycloud.com. And thank you, Kandice, for taking the time; it's fantastic that we're able to go solve some problems together. >> MontyCloud, turning teams into cloud powerhouses: that's their slogan. Check them out. I'm John Furrier with theCUBE. Thanks for watching.

Published Date : Mar 30 2021



Breaking Analysis: Unpacking Oracle’s Autonomous Data Warehouse Announcement


 

(upbeat music) >> On February 19th of this year, Barron's dropped an article declaring Oracle a cloud giant, and the article explained why the stock was a buy. Investors took notice, and the stock ran up 18% over the next nine trading days, peaking on March 9th, the day before Oracle announced its latest earnings. The company beat consensus on both the top line and EPS last quarter, but investors did not like Oracle's tepid guidance, and the stock pulled back. It's still, as you can see, well above its price before the Barron's article. What does all this mean? Is Oracle a cloud giant? What are its growth prospects? Now, many parts of Oracle's business are growing, including Fusion ERP, Fusion HCM, and NetSuite; we're talking deep into the double digits, 20-plus percent growth. Its on-prem legacy license business, however, continues to decline, and that moderates the overall company growth because that on-prem business is so large. So overall, Oracle is growing in the low single digits. Now, what stands out about Oracle is its recurring revenue model. That figure, the company says, now represents 73% of its revenue, and it's going to continue to grow. Two other things stood out on the earnings call to us. First, Oracle plans on increasing its capex by 50% in the coming quarter; that's a lot. It's still far less than what AWS, Google, or Microsoft spend on capital, but it's a meaningful data point. Second, Oracle's consumption revenue for Autonomous Database and Oracle Cloud Infrastructure, or OCI, grew at 64% and 139% respectively. These two factors, combined with the capex spend, suggest that the company has real momentum. I mean, look, it's possible the capex announcement is just optics and they're front-loading some spend to show the street that Oracle is a player in cloud, but I don't think so; Oracle's Safra Catz is usually pretty disciplined when it comes to spending. 
Now, today, on March 17th, Oracle announced updates to its Autonomous Data Warehouse, and with me is David Floyer, who has extensively researched Oracle over the years. Today we're going to unpack the Oracle Autonomous Data Warehouse, or ADW, announcement and what it means to customers, but we also want to dig into Oracle's strategy. We want to compare it to some other prominent database vendors, specifically AWS and Snowflake. David Floyer, welcome back to theCUBE. Thanks for making some time for me. >>
Now today on March 17th, Oracle announced updates towards Autonomous Data Warehouse and with me is David Floyer who has extensively researched Oracle over the years and today we're going to unpack the Oracle Autonomous Data Warehouse, ADW announcement. What it means to customers but we also want to dig into Oracle's strategy. We want to compare it to some other prominent database vendors specifically, AWS and Snowflake. David Floyer, Welcome back to The Cube, thanks for making some time for me. >> Thank you Vellante, great pleasure to be here. >> All right, I want to get into the news but I want to start with this idea of the autonomous database which Oracle's announcement today is building on. Oracle uses the analogy of a self-driving car. It's obviously powerful metaphor as they call it the self-driving database and my takeaway is that, this means that the system automatically provisions, it upgrades, it does all the patching for you, it tunes itself. Oracle claims that all reduces labor costs or admin costs by 90%. So I ask you, is this the right interpretation of what Oracle means by autonomous database? And is it real? >> Is that the right interpretation? It's a nice analogy. It's a test to that analogy, isn't it? I would put it as the first stage of the Autonomous Data Warehouse was to do the things that you talked about, which was the tuning, the provisioning, all of that sort of thing. The second stage is actually, I think more interesting in that what they're focusing on is making it easy to use for the end user. Eliminating the requirement for IT, staff to be there to help in the actual using of it and that is a very big step for them but an absolutely vital step because all of the competition focusing on ease of use, ease of use, ease of use and cheapness of being able to manage and deploy. But, so I think that is the really important area that Oracle has focused on and it seemed to have done so very well. 
>> So in your view, is this unique? You don't really hear a lot of other companies talking about this analogy of the self-driving database. Is it differentiable for Oracle? If so, why? Maybe you could help us understand that a little bit better. >> Well, the whole strategy is unique in its breadth. It has really brought a whole number of things together and made it, of its type, the best. It supports a whole number of data sources and database types, so it's got a very broad range of different ways that you can look at the data. The second thing that is also excellent is that it's a platform. It is fully self-provisioned, and its functionality is very broad indeed. The quality of the SQL and the query languages and so on is very good indeed, and its ability to do joins, for example, is excellent. So all of the building blocks are there, together with the sharing of the same data with OLTP, inference, and in-memory databases as well. Altogether, the breadth of what they have is unique and very, very powerful. >> I want to come back to this, but let's get into the news a little bit and the announcement. It seems like what's new in the Autonomous Data Warehouse piece is new tooling around four areas. Andy Mendelsohn, the head of this group, the guy whose baby this is, talked about four things. My takeaways: faster, simpler loads; simplified transforms; autonomous machine learning models, which facilitate what you'd call citizen data science; and faster time to insights. So, tooling to make those four things happen. What's your take, and what are your takeaways on the news? >> I think those are all correct. I would add the ease of use in terms of being able to drag and drop; the user interface has been dramatically improved.
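David's point about one platform holding many data types, with good joins across them, is the crux of the converged-database argument. As a neutral illustration of the principle (using SQLite's built-in JSON functions rather than Oracle, so this is a sketch of the idea, not of ADW itself), a single engine can join relational rows against JSON documents without moving data between specialized stores:

```python
# Neutral illustration of the "converged database" idea discussed above:
# one engine querying relational and JSON data together, with no ETL
# between specialized stores. SQLite stands in here; a converged warehouse
# applies the same principle at scale.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (customer_id INTEGER, doc TEXT);  -- JSON documents
    INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO orders VALUES
        (1, '{"item": "widget", "qty": 3}'),
        (1, '{"item": "gear",   "qty": 5}'),
        (2, '{"item": "sprocket", "qty": 2}');
""")

# A single SQL join across the relational table and the JSON documents.
rows = db.execute("""
    SELECT c.name, SUM(json_extract(o.doc, '$.qty')) AS total_qty
    FROM customers c JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name ORDER BY c.name
""").fetchall()
```

The same query against a purpose-built split, with relational data in one store and documents in another, would require an export, a transfer, and a load before the join could even run.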
Again, those are all useful and good components, but strategically I think the ease of use, the use of APEX for example, matters more. >> Why is that more important strategically? >> Because it focuses on the end user's capability. For example, one of the other things they've started to introduce is Python together with their spatial databases. It is really important that you reach developers where they are, with the tools they want to use. Those types of ease-of-use choices respect what end users actually use. For example, Oracle hasn't come out with anything like Qlik or Tableau; they've left that space to the marketplace, for the end user to use what they like best. >> You mean they're not trying to compete with those two tools. They did have a laundry list of stuff they support: Talend, Tableau, Looker, Qlik, Informatica, IBM. So their claim was, hey, we're open. That's smart; they realize that people use these tools. >> They're trying to include other people, to be a platform and an ecosystem for the end users. >> Okay, so Mendelsohn, who made the announcement, said that Oracle is the smartphone of databases, and I actually think Ellison used that line, or maybe that was us playing it back, when he announced Exadata way back, with the integrated hardware and software. But is that how you see it? Is Oracle the smartphone of databases? >> It is. I mean, they are trying to own the complete stack, from the hardware with Exadata all the way up to the databases: the data warehouses, the OLTP databases, the inference databases. They're trying to own the complete stack from top to bottom, and that's what makes the autonomous approach possible. You can make it autonomous when you control all of that.
It takes away all of the requirements for IT in the business itself. So it's democratizing the use of data warehouses, pushing them out to the lines of business and simplifying them, so that those teams can own their own data and manage their own data without needing an IT person from headquarters to help them. >> Let's stay on this a little bit more, and then I want to go into some of the competitive stuff, because Mendelsohn mentioned AWS several times. One of the things that struck me: he said, hey, we're basically one API, because we're doing analytics in the cloud, data in the cloud, and integration in the cloud, and that's a big part of the value proposition. He made some comparisons to Redshift. Of course, I would say that if you can't find a workload where you beat your big competitor, then you shouldn't be in this business, so I take those things with a grain of salt. But one of the other things that caught me is that migrating from on-prem to Oracle Cloud is very simple, and I think he made some comparisons to other platforms there too. This to me is important, because he also brought in that Gartner data. We looked at that Gartner data when it came out: in the operational database class, Oracle smoked everybody; they were way ahead. The reason I think that's important is because, let's face it, the mission-critical workloads, the high-performance, high-criticality OLTP stuff, are not moving into AWS in droves. And you've made the point often that with companies that have their own cloud, Oracle in particular, and you've said this about IBM and Db2 for instance, there should be a lower-risk path moving from on-prem to their cloud, because of what you can run there. I don't think you can get Oracle RAC on AWS.
For example, I don't think Exadata is running in AWS data centers, and so that like-for-like component is going to facilitate migration. What's your take on all that spiel? >> I think that's absolutely right. Your crown jewels, the most expensive and the most valuable applications, the mission-critical applications, the ones that have got to take a beating and keep on ticking. Those types of applications are where Oracle really shines. They own a very large, high percentage of those mission critical workloads, and you have the choice, if you're going to AWS, for example, of either migrating to Oracle on AWS, and that is frankly not a good fit at all. There are a lot of constraints to running large systems on AWS, large mission critical systems. So that's not an option. And then the option, of course, that AWS will push is move to Aurora, change your way of writing applications, make them tiny little pieces and stitch them all together with microservices. And that's okay if you're a small organization, but that has got a lot of problems in its own right, because then you, the user, have to stitch all those pieces together, and you're responsible for testing it and you're responsible for looking after it. And that, as you grow, becomes a bigger and bigger overhead. So AWS, in my opinion, needs to move towards a tier-one database of its own, and it's not in that position at the moment. >> Interesting, okay. So, let's talk about the competitive landscape and the choices that customers have. As I said, Mendelsohn mentioned AWS many times. Larry on the calls often takes shots; it's a compliment, to me. When Larry Ellison calls you out, that means you've made it, you're doing well.
We've seen it over the years, whether it's IBM or Workday or Salesforce, even though Salesforce is a big Oracle customer, 'cause AWS, as we know, are an Oracle customer as well, even though AWS tells us they've moved off Oracle, when you peel the onion >> For five years they've been saying that, some of the workloads >> Well, as I said, I believe they're still using Oracle for certain workloads. Anyway, we digress. So AWS, though, they take a different approach, and I want to push on this a little bit with database. It's got more than a dozen, I think, purpose-built databases. They take this kind of right tool for the right job approach, whereas Oracle is converging all this function into a single database: SQL, JSON, graph databases, machine learning, blockchain. I'd love to talk more about blockchain if we have time. But it seems to me that the right tool for the right job, purpose-built, very granular down to the primitives and APIs, that seems to be a pretty viable approach versus kind of a Swiss Army approach. How do you compare the two? >> Yes, and it is to many individual programmers, who are very interested, for example, in graph databases or in time series databases. They are looking for a cheap database that will do the job for a particular project, and that makes, for the programmer or for that individual piece of work, a very sensible way of doing it, and they pay for it as they go with clear cloud economics. The challenge, as you have more and more data and as you're building up your data warehouses and your data lakes, is that you do not want to have to move data from one place to another place. So for example, if you've got Aurora, you have to move the database, and it's a pretty complicated thing to do, to move it to Redshift. It's five or six steps to do that, and each of those costs money and each of those takes time. More importantly, they take time.
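As an editorial aside, the cost of hopping between purpose-built stores versus querying everything in one engine can be illustrated with a hedged sketch. SQLite stands in here for any converged SQL database, and the schema is invented for illustration; it assumes a SQLite build with the JSON1 functions, which recent builds include by default:

```python
import sqlite3

# One engine holds both models: relational columns and a JSON
# document column with semi-structured attributes per order.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer TEXT,
        total REAL,
        attrs TEXT  -- JSON document stored alongside relational columns
    )""")
conn.executemany(
    "INSERT INTO orders (customer, total, attrs) VALUES (?, ?, ?)",
    [
        ("acme", 120.0, '{"channel": "web", "tags": ["priority"]}'),
        ("acme",  80.0, '{"channel": "store", "tags": []}'),
        ("zenith", 45.0, '{"channel": "web", "tags": []}'),
    ],
)

# A single SQL query mixes relational aggregation with JSON extraction;
# no export step, no second database, no multi-step migration.
rows = conn.execute("""
    SELECT customer, SUM(total)
    FROM orders
    WHERE json_extract(attrs, '$.channel') = 'web'
    GROUP BY customer
    ORDER BY customer
""").fetchall()
print(rows)  # [('acme', 120.0), ('zenith', 45.0)]
```

The point is not SQLite itself but the shape of the query: relational aggregation and document extraction happen in one engine, with no export, load, or five-or-six-step migration in between.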
The Oracle approach is a single database. In terms of all the pieces, obviously you have multiple databases, you have different OLTP databases and data warehouse databases, but it's a single architecture and a single design, which means that all of the work in terms of moving stuff from one place to another place is within Oracle itself. It's Oracle that's doing that work for you, and as you grow, that becomes very, very important, to me a very, very important cost saving versus the overhead of all those different ones. And the databases themselves, they originated as open source, and AWS has done very well with them, and there's a large revenue stream behind them >> The AWS ones, you mean? >> Yes, the original databases in AWS, and they've done a lot of work in terms of making them enterprise-ready, etc. But for a larger organization, especially very large ones, and certainly if they want to combine, for example, the data warehouse with the OLTP and the inference, which in my opinion is a very good thing that they should be trying to do, then that is incredibly difficult to do with AWS, and in my opinion, AWS has to invest enormously to make the whole ecosystem much better. >> Okay, so innovation required there, maybe as part of the TAM expansion strategy, but just to digress for a second. So it seems like, and by the way, there are others that are taking this converged approach. It seems like that is a trend. I mean, you certainly see it with SingleStore; the name sort of implies that, formerly MemSQL. I think Monte Zweben of Splice Machine is probably headed in a similar direction, embedding AI. And Microsoft's kind of interesting; it seems like Microsoft is willing to build this abstraction layer that hides that complexity of the different tooling. AWS thus far has not taken that approach. And then, sort of looking at Snowflake, Snowflake's got a completely different, I think Snowflake's trying to do something completely different.
I don't think they're necessarily trying to take Oracle head-on. I mean, they're certainly trying to, I guess, let's talk about this. Snowflake simplified EDW, that's clear. Zero to Snowflake in 90 minutes. It's got this data cloud vision. So you sign on to this Snowflake, speaking of layers, they're abstracting the complexity of the underlying cloud. That's what the data cloud vision is all about. They talk about this global mesh, but they've not done a good job of explaining what the heck it is. We've been pushing them on that, but we got, >> Aspirational at the moment >> Well, I guess, yeah, it seems that way. And so, but conceptually, it's I think very powerful, but in reality, what Snowflake is doing with data sharing, a lot of it is probably mostly read-only, and I say mostly read-only, oh, there you go, you'll get better, but it's mostly read. And so you're able to share the data, it's governed. I mean, it's exactly, quite genius how they've implemented this, with its simplicity. It is a caching architecture. We've talked about that, we can geek out about that. There's good, there's bad, there's ugly, but generally speaking, I guess my premise here, I would love your thoughts: is Snowflake trying to do something different? It's trying to be not just another data warehouse. It's not just trying to compete with data lakes. It's trying to create this data cloud to facilitate data sharing, put data in the hands of business owners, in terms of data product builders. That's a different vision than anything I've seen thus far, your thoughts? >> I agree, and even more, going further, being a place where people can sell data, put it up and make it available to whoever needs it, and making it so simple that it can be shared across the country and across the world. I think it's a very powerful vision indeed.
The challenge they have is that the pieces at the moment are very, very easy to use, but the quality in terms of, for example, joins, I mentioned the joins were very powerful in Oracle. They don't try and do joins. They, they say >> They being Snowflake. >> Yeah, they don't even write it. They would say use another Postgres >> Yeah. >> Database to do that. >> Yeah, so then they have a long way to go. >> Complex joins anyway, maybe simple joins, yeah. >> Complex joins, so they have a long way to go in terms of the functionality of their product, and also, in my opinion, they surely are going to have more types of databases inside it, including OLTP, and they can do that. They have obviously got a great market cap, and they can do that by acquisition as well. >> They've started. I think, I think they support JSON, right? Do they support JSON? And graph, I think there's a graph database that's either coming or it's there, I can't keep all that stuff in my head, but there's no reason they can't go in that direction. I mean, in speaking to the founders of Snowflake, they were like, look, we're kind of new, we would focus on simple. A lot of them came from Oracle, so they know databases and they know how hard it is to do things like facilitate complex joins and do complex workload management, and so they said, let's just simplify, we'll put it in the cloud, and it will spin up a separate data warehouse, a virtual data warehouse, every time you want one. So that's how they handle those things. So, different philosophy, but again, coming back to some of the mission critical work and some of the larger Oracle customers, they said they have a thousand autonomous database customers. I think it was Autonomous Database, not ADW, but anyway, a few stood out: Aon, Lyft, I think Deloitte stood out, and obviously hundreds more. So we have people who misunderstand Oracle, I think. They've got a big install base.
They invest in R and D, and they talk about lock-in, sure, but the CIOs that I talk to, and that you talk to, David, they're looking for business value. I would say that 75 to 80% of them will gravitate toward business value over the fear of lock-in, and I think at the end of the day, they feel like, you know what? If our business is performing, it's a better business decision, it's a better business case. >> I fully agree. They've been very difficult to do business with in the past. Everybody's in dread of the >> The audit. >> The knock on the door from the auditor. >> Right. >> And that, from a purchasing point of view, has been a really bad experience for many, many customers. The users of the database itself are very happy indeed. I mean, you talk to them and they understand why, what they're paying for. They understand the value, in terms of availability and all of the tools for complex multi-dimensional types of applications. It's pretty well the only game in town. It's only DB2 and SQL Server that had any hope of doing >> Doing Microsoft, Microsoft SQL, right. >> Okay, SQL Server >> Which, okay, yeah, definitely competitive for sure. DB2, no. IBM, look, IBM lost its dominant position in database. They kind of ceded that. Oracle had to fight hard to win it. It wasn't obvious in the 80s who was going to be the database king; they all had to fight. And to me, I always tell people, the difference is that the chairman of Oracle is also the CTO. They spend money on R and D, and they throw off a ton of cash. I want to say something about, >> I was just going to make one extra point. The simplicity and the capability of their cloud versions of all of this is incredibly good. They are better in terms of spending what you need, or what you use, much better than AWS, for example, or anybody else. So they have really come full circle in terms of attractiveness in a cloud environment. >> You mean charging you for what you consume. Yeah, Mendelsohn talked about that.
He made a big point about the granularity: you pay for only what you need. If you need 33 CPUs, fine; with the other databases you've got to buy a shape, so if you need 33, you've got to go to 64. I know that's true for others; I'm not sure if that's true for Snowflake. It may be, I've got to dig into that a little bit, but maybe >> Yes, Snowflake has got a front end hiding that. >> Right, and I did want to push on that a little bit, because I want to go look at their pricing strategies, because I still think they make you buy, I may be wrong, I thought they still make you do a one-year or two-year or three-year term. I don't know if you can just turn it off at any time. They might allow it, I should hold off, I'll do some more research on that. But I wanted to make a point about the audits, you mentioned audits before. A big mistake that a lot of Oracle customers have made many times, and we've written about this: negotiating with Oracle, you've got to bring your best and your brightest when you negotiate with Oracle. Some of the things that people didn't pay attention to, and I think they've sort of caught onto this, is that Oracle's SOW takes precedence over the MSA. A lot of legal departments and procurement departments say, oh, do we have an MSA? Well, yes, you do, okay, great, and because they have an MSA they think they can just rubber stamp it. But the SOW really dictates, and Oracle's got you there, and they're really smart about that. So you've got to bring your best and brightest, and you've got to really negotiate hard with Oracle, or you can get in trouble. >> Sure. >> So it is what it is, but coming back to Oracle, let's sort of wrap on this. Dominant position in mission critical, we saw that from the Gartner research, especially for operational; giant customer base; this cloud-first notion; they're investing in R and D; open, we'll put a question mark around that, but hey, they're doing some cool stuff with machine learning as well.
>> Ecosystem, I put that in there; ecosystem, they're promoting their ecosystem. >> Yeah, and look, I mean, for a lot of their customers, we've talked to many, they say, look, there's actually a tailwind at the end of the day: this saves us money and we don't have to migrate. >> Yeah. So interesting. So I'll give you the last word. We started sort of focusing on the announcement, so what do you want to leave us with? >> My last word is that there are platforms, with a certain key application or key parts of the infrastructure, which I think can differentiate themselves from the Azures or the AWSes, and Oracle owns one of those. SAP might be another one. But there are certain platforms which are big enough and important enough that they will, in my opinion, succeed in their cloud strategy with this. >> Great, David, thanks so much, appreciate your insights. >> Good to be here. >> Thank you for watching everybody, this is Dave Vellante for theCUBE. We'll see you next time. (upbeat music)

Published Date : Mar 17 2021



Omer Asad, HPE ft Matt Cadieux, Red Bull Racing full v1 (UNLISTED)


 

(upbeat music) >> Edge computing is projected to be a multi-trillion dollar business. It's hard to really pinpoint the size of this market, let alone fathom the potential of bringing software, compute, storage, AI and automation to the edge and connecting all that to clouds and on-prem systems. But what is the edge? Is it factories? Is it oil rigs, airplanes, windmills, shipping containers, buildings, homes, race cars? Well, yes, and so much more. And what about the data? For decades we've talked about the data explosion. I mean, it's mind-boggling, but guess what, we're going to look back in 10 years and laugh at what we thought was a lot of data in 2020. Perhaps the best way to think about the edge is not as a place, but as a question of when is the most logical opportunity to process the data, and maybe it's the first opportunity to do so, where it can be decrypted and analyzed at very low latencies. That defines the edge. And so, by locating compute as close as possible to the sources of data, to reduce latency and maximize your ability to get insights and return them to users quickly, maybe that's where the value lies. Hello everyone, and welcome to this CUBE conversation. My name is Dave Vellante, and with me to noodle on these topics is Omer Asad, VP and GM of Primary Storage and Data Management Services at HPE. Hello Omer, welcome to the program. >> Thanks Dave. Thank you so much. Pleasure to be here. >> Yeah. Great to see you again. So how do you see the edge in the broader market shaping up? >> Dave, I think that's a super important question. I think your ideas are quite aligned with how we think about it. I personally think enterprises are accelerating their digitization and asset collection and data collection. Typically, especially in a distributed enterprise, they're trying to get to their customers. They're trying to minimize the latency to their customers.
So especially if you look across industries: manufacturing, which has distributed factories all over the place, they are going through a lot of factory transformations where they're digitizing their factories. That means a lot more data is now being generated within their factories. A lot of robot automation is going on; that requires a lot of compute power to go out to those particular factories, which are going to generate their data out there. We've got insurance companies and banks that are acquiring, interviewing and gathering more customers out at the edge. They need a lot more distributed processing out at the edge. What this is requiring, and a common consensus across analysts is this, is that more than 50% of an enterprise's data, especially if they operate globally around the world, is going to be generated out at the edge. What does that mean? New data is generated at the edge that needs to be stored and processed. Data which is not required needs to be thrown away or classified as not important. And then it needs to be moved, for DR purposes, either to a central data center or just to another site. So overall, in order to give the best possible experience for manufacturing, retail, especially in distributed enterprises, people are generating more and more data centric assets out at the edge. And that's what we see in the industry. >> Yeah. We're definitely aligned on that. Those are some great points. And so now, okay, you think about all this diversity, what's the right architecture for these multi-site deployments, ROBO, edge? How do you look at that? >> Oh, excellent question, Dave. Every customer that we talk to wants simplicity, and no pun intended, because SimpliVity resonates with a simple, edge-centric architecture, right? Let's take a few examples. You've got large global retailers; they have hundreds of retail stores around the world that are generating data, that are producing data.
Then you've got insurance companies, then you've got banks. So when you look at a distributed enterprise, how do you deploy in a very simple and easy manner, easy to lifecycle, easy to mobilize, easy to lifecycle equipment out at the edge? What are some of the challenges that these customers deal with? These customers, you don't want to send a lot of IT staff out there, because that adds cost. You don't want to have islands of data and islands of storage in remote sites, because that adds a lot of state outside of the data center that needs to be protected. And then, last but not least, how do you push lifecycle-based applications, new applications, out at the edge in a very simple to deploy manner? And how do you protect all this data at the edge? So the right architecture, in my opinion, needs to be extremely simple to deploy: storage, compute and networking out towards the edge in a hyperconverged environment. So we agree upon that, it's a very simple to deploy model. But then comes: how do you deploy applications on top of that? How do you manage these applications on top of that? How do you back up these applications back towards the data center? All of this keeping in mind that it has to be as zero touch as possible. We at HPE believe that it needs to be extremely simple: just give me two cables, a network cable and a power cable, fire it up, connect it to the network, push its state from the data center, and back up its state from the edge back into the data center. Extremely simple. >> It's got to be simple, 'cause you've got so many challenges. You've got physics that you have to deal with, you have latency to deal with. You've got RPO and RTO. What happens if something goes wrong? You've got to be able to recover quickly. So that's great. Thank you for that. Now, you guys have some news. What is new from HPE in this space? >> Excellent question.
So from a deployment perspective, HPE SimpliVity is just exploding like crazy, especially as distributed enterprises adopt it as their standardized edge architecture, right? It's an HCI box; it's got storage, compute and networking all in one. But now, not only can you deploy applications all from your standard vCenter interface from a data center, what we have now added is the ability to back up to the cloud right from the edge. You can also back up all the way back to your core data center. All of the backup policies are fully automated and implemented in the distributed file system that is the heart and soul of the SimpliVity installation. In addition to that, customers now do not have to buy any third-party software; backup is fully integrated into the architecture, and it's very efficient. In addition to that, you can now back up straight to the cloud, or you can back up to a central high-end backup repository in your data center. And last but not least, we have a lot of customers that are pushing the limit in their application transformation. So where previously we were only doing VMware deployments out at the edge sites, we have now also added both stateful and stateless container orchestration, as well as data protection capabilities for containerized applications out at the edge. So we have a lot of customers that are now deploying containers, rapidly provisioned containers, to process data out at remote sites. And that allows us to not only protect those stateful applications but back them up into the central data center. >> I saw in that chart there was a line, no egress fees. That's a pain point for a lot of CIOs that I talk to. They grit their teeth at those fees. So can you comment on that? >> Excellent question. I'm so glad you brought that up and picked up on that point. So along with SimpliVity, we have the whole GreenLake as-a-service offering as well, right?
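The backup efficiency described here comes from deduplication in the distributed file system. As an editorial aside, the content-addressed idea can be sketched in a few lines of Python; the fixed 4 KB chunking and SHA-256 hashing are illustrative assumptions, not SimpliVity's actual implementation:

```python
import hashlib

def dedupe_backup(data: bytes, store: dict, chunk_size: int = 4096) -> list:
    """Split data into fixed-size chunks, store each unique chunk once
    (keyed by its SHA-256 digest), and return the recipe of digests."""
    recipe = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # only new chunks consume space
        recipe.append(digest)
    return recipe

def restore(recipe: list, store: dict) -> bytes:
    """Rebuild the original data from its recipe of chunk digests."""
    return b"".join(store[d] for d in recipe)

# Two "backups" that share most of their content: the second one adds
# almost nothing to the store because its chunks already exist.
store = {}
backup1 = b"A" * 4096 * 10               # ten identical chunks
backup2 = b"A" * 4096 * 9 + b"B" * 4096  # nine shared chunks, one new
r1 = dedupe_backup(backup1, store)
r2 = dedupe_backup(backup2, store)
print(len(store))                         # 2 unique chunks stored
print(restore(r2, store) == backup2)      # True
```

Because the second backup shares nine of its ten chunks with the first, it adds only one new chunk to the store; that is why repeated backups of mostly-unchanged edge data can stay small and fast.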
So what that means, Dave, is that we can literally provide our customers edge as a service, when you complement that with Aruba wired and wireless infrastructure that goes at the edge, and the hyperconverged infrastructure as part of SimpliVity that goes at the edge. One of the things that was missing with cloud backups is that every time you back up to the cloud, which is a great thing by the way, anytime you restore from the cloud, there is that egress fee, right? So as a result of that, as part of the GreenLake offering, we now have a cloud backup service natively offered as part of HPE, which is included in your HPE SimpliVity edge as a service offering. So now not only can you back up into the cloud from your edge sites, you can also restore back without any egress fees from HPE's data protection service. You can restore it back onto your data center, or you can restore it back towards the edge site, and because the infrastructure is so easy to deploy and centrally lifecycle manage, it's very mobile. So if you want to deploy and recover to a different site, you could also do that. >> Nice. Hey, Omer, can you double click a little bit on some of the use cases that customers are choosing SimpliVity for, particularly at the edge, and maybe talk about why they're choosing HPE? >> Excellent question. So one of the major use cases that we see, Dave, is obviously easy to deploy and easy to manage in a standardized form factor, right? A lot of these customers, for example, we have a large retailer across the US with hundreds of stores, right? Now, you cannot send service staff to each of these stores, and their data center is essentially just a closet for these guys, right? So now, how do you have a standardized deployment?
So, standardized deployment from the data center, which you can literally push out, and you can connect a network cable and a power cable and you're up and running; and then automated backup, elimination of backup state and DR from the edge sites into the data center. So that's one of the big use cases: to rapidly deploy new stores, bring them up in a standardized configuration, both from a hardware and a software perspective, and the ability to back up and recover that instantly. That's one large use case. The second use case that we see actually refers to a comment that you made in your opener, Dave, which is that a lot of these customers are generating a lot of data at the edge. This is robotics automation that is going up in manufacturing sites. These are racing teams that are out at the edge doing post-processing of their cars' data. At the same time, there are disaster recovery use cases, where you have campsites and local agencies that go out there for humanitarian benefit, and they move from one site to the other. It's a very, very mobile architecture that they need. So those are just a few cases where we're deployed. There was a lot of data collection and a lot of mobility involved in these environments, so you need to be quick to set up, quick to back up, quick to recover, and essentially you're off to your next move. >> You seem pretty pumped up about this new innovation, and why not? >> It is, especially because it has been thought through with edge in mind, and edge has to be mobile. It has to be simple. And especially as we have lived through this pandemic, which I hope we see the tail end of in at least 2021 or 2022. One of the most common use cases that we saw, and this was an accidental discovery: a lot of the retail sites could not go out to service their stores, because mobility was limited in these strange times that we live in. So from a central data center you're able to deploy applications, you're able to recover applications.
And a lot of our customers said, hey, I don't have enough space in my data center to back up to. Do you have another option? So then we rolled out this update release to SimpliVity where, from the edge site, you can now directly back up to our backup service, which is offered on a consumption basis to the customers, and they can recover that anywhere they want. >> Fantastic. Omer, thanks so much for coming on the program today. >> It's a pleasure, Dave. Thank you. >> All right. Awesome to see you. Now, let's hear from Red Bull Racing, an HPE customer that's actually using SimpliVity at the edge. (engine revving) >> Narrator: Formula One is a constant race against time, chasing tenths of seconds. (upbeat music) >> Okay. We're back with Matt Cadieux, who is the CIO of Red Bull Racing. Matt, it's good to see you again. >> Great to see you, Dave. >> Hey, we're going to dig in to a real world example of using data at the edge, in near real time, to gain insights that really lead to competitive advantage. But first, Matt, tell us a little bit about Red Bull Racing and your role there. >> Sure. So I'm the CIO at Red Bull Racing, and at Red Bull Racing we're based in Milton Keynes in the UK. The main job for us is to design a race car, to manufacture the race car, and then to race it around the world. So as CIO, the IT group needs to develop the applications used in design, manufacturing and racing. We also need to supply all the underlying infrastructure and also manage security. So it's a really interesting environment that's all about speed. This season we have 23 races, and we need to tear the car apart and rebuild it to a unique configuration for every individual race. And we're also designing and making components targeted for each race. So, 23 immovable deadlines, this big evolving prototype to manage with our car, but we're also improving all of our tools and methods and software that we use to design, make and race the car.
So we have a big can-do attitude in the company around continuous improvement. And the expectations are that we continue to, say, make the car faster, that we're winning races, that we improve our methods in the factory and our tools. And so for IT it's really unique in that we can be part of that journey and provide a better service. It's also a big challenge to provide that service and to give the business the agility it needs. So my job is really to make sure we have the right staff, the right partners, the right technical platforms, so we can live up to expectations. >> And Matt, that tear down and rebuild for 23 races, is that because each track has its own unique signature that you have to tune to, or are there other factors involved? >> Yeah, exactly. Every track has a different shape. Some have lots of straights, some have lots of curves, and lots are in between. The track surface is very different, and the impact that has on tires, the temperature and the climate are very different. Some are hilly, some have big curbs that affect the dynamics of the car. So with all that, in order to win you need to micromanage everything and optimize it for any given race track. >> COVID has of course been brutal for sports. What's the status of your season? >> So this season we knew that COVID was here, and we're doing 23 races knowing we have COVID to manage. And as a premium sporting team, forming bubbles, we've put health and safety and social distancing into our environment, and we're able to operate by doing things in a safe manner. We have some special exemptions in the UK, so for example, when people return from overseas they do not have to quarantine for two weeks, but they get tested multiple times a week and we know they're safe. So we're racing, we're dealing with all the hassle that COVID gives us.
And we are really hoping for a return to normality sooner rather than later, where we can get fans back at the track and really go racing and have the spectacle where everyone enjoys it. >> Yeah. That's awesome. So important for the fans, but also all the employees around that ecosystem. Talk about some of the key drivers in your business and some of the key apps that give you competitive advantage to help you win races. >> Yeah. So in our business, everything is all about speed. The car obviously needs to be fast, but also all of our business operations need to be fast. We need to be able to design a car, and it's all done in the virtual world, but the virtual simulations and designs need to correlate to what happens in the real world. So all of that requires a lot of expertise to develop the simulations and the algorithms, and all the underlying infrastructure that runs it quickly and reliably. In manufacturing, we have cost caps and financial controls by regulation. We need to be super efficient and control material and resources. So ERP and MES systems are running and helping us do that. And at the race track itself, in speed, we have hundreds of decisions to make on a Friday and Saturday as we're fine-tuning the final configuration of the car, and here again we rely on simulations and analytics to help do that. And then during the race, we have split seconds, literally seconds, to alter our race strategy if an event happens. So if there's an accident and the safety car comes out, or the weather changes, we revise our tactics, and we're running Monte Carlo simulations, for example, and using experienced engineers with simulations to make a data-driven decision, hopefully a better one and faster than our competitors. All of that needs IT to work at a very high level. >> Yeah, it's interesting.
I mean, as a lay person, historically when I think about technology in car racing, of course I think about the mechanical aspects of a self-propelled vehicle, the electronics and the like, but not necessarily the data. But the data's always been there, hasn't it? I mean, maybe in the form of tribal knowledge, if you're somebody who knows the track and where the hills are, and experience and gut feel. But today you're digitizing it and you're processing it in close to real time. It's amazing. >> Yeah, exactly right. The car's instrumented with sensors, we post-process, we're doing video image analysis, and we're looking at our car and our competitors' cars. So there's a huge amount of very complicated models that we're using to optimize our performance and to continuously improve our car. Yeah, the data and the applications that leverage it are really key, and that's a critical success factor for us. >> So let's talk about your data center at the track, if you will, if I can call it that. Paint a picture for us: what does that look like? >> So we have to send a lot of equipment to the track, at the edge. And even though we have a really great wide area network link back to the factory, and there's cloud resources, a lot of the tracks are very old. You don't have hardened infrastructure, you don't have ducts that protect cabling, for example, and you can lose connectivity to remote locations. So the applications we need to operate the car and to make really critical decisions, all that needs to be at the edge where the car operates. So historically we had three racks of equipment, like I said, legacy infrastructure, and it was really hard to manage, to make changes; it was too inflexible. There were multiple panes of glass, and it was too slow. It didn't run our applications quickly. It was also too heavy and took up too much space when you're cramped into a garage with lots of environmental constraints. 
So we'd introduced hyper-convergence into the factory and seen a lot of great benefits. And when it came time to refresh our infrastructure at the track, we stepped back and said there's a lot smarter way of operating: we can get rid of all the slow and inflexible, expensive legacy and introduce hyper-convergence. And we saw really excellent benefits for doing that. We saw up to a three X speed-up for a lot of our applications. So here, where we're post-processing data and we have to make decisions about race strategy, time is of the essence, and the three X reduction in processing time really matters. We also were able to go from three racks of equipment down to two racks of equipment, and the storage efficiency of the HPE SimpliVity platform, with 20 to one ratios, allowed us to eliminate a rack. And that actually saved $100,000 a year in freight costs by shipping less equipment. Things like backup: mistakes happen. Sometimes the user makes a mistake. So for example a race engineer could load the wrong data map into one of our simulations, and we could restore that DDI through SimpliVity backup in 90 seconds. And this enables engineers to focus on the car and to make better decisions without having downtime. And we send two IT guys to every race. They're managing 60 users, a really diverse environment, juggling a lot of balls, and a simple management platform like HPE SimpliVity allows them to be very effective and to work quickly. So all of those benefits were a huge step forward relative to the legacy infrastructure that we used to run at the edge. >> Yeah. So you had the nice Petri dish in the factory. So it sounds like your goals, obviously, are number one KPI speed, to help shave seconds off the time, but also cost, just the simplicity of setting up the infrastructure is-- >> That's exactly right. It's speed, speed, speed. So we want applications that absolutely fly, to get actionable results quicker, get answers from our simulations quicker. 
The other area where speed's really critical is that our applications are also evolving prototypes: the models are getting bigger, the simulations are getting bigger, and they need more and more resource. And being able to spin up resource and provision things without being a bottleneck is a big challenge, and SimpliVity gives us the means of doing that. >> So did you consider any other options, or, because you had the factory knowledge, was HCI very clearly the option? What did you look at? >> Yeah, so we have over five years of experience in the factory, and we eliminated all of our legacy infrastructure five years ago. And the benefits I've described at the track, we saw that in the factory. At the track we have a three-year operational life cycle for our equipment. 2017 was the last year we had legacy; as we were building for 2018, it was obvious that hyper-converged was the right technology to introduce, and we'd had years of experience in the factory already. And the benefits that we see with hyper-converged actually mattered even more at the edge, because our operations are so much more pressurized. Time is even more of the essence. And so speeding everything up at the really pointy end of our business was really critical. It was an obvious choice. >> Why SimpliVity? Why'd you choose HPE SimpliVity? >> Yeah. So when we first heard about hyper-converged, way back in the factory, we had a legacy infrastructure: overly complicated, too slow, too inflexible, too expensive. And we stepped back and said there has to be a smarter way of operating. We went out and challenged our technology partners, we learned about hyper-convergence; we didn't know whether the hype was real or not. So we underwent some POCs and benchmarking, and the POCs were really impressive. And all these speed and agility benefits we saw, and HPE, for our use cases, was the clear winner in the benchmarks. So based on that we made an initial investment in the factory. 
We moved about 150 VMs and 150 VDIs into it. And then as we've seen all the benefits, we've successfully invested, and we now have an estate in the factory of about 800 VMs and about 400 VDIs. So it's been a great platform, and it's allowed us to really push boundaries and give the business the service it expects. >> Awesome, fun stories. Just coming back to the metrics for a minute: so you're running Monte Carlo simulations in real time, or sort of near real time. And so essentially, if I understand it, that's what-ifs and the probability of the outcomes, and then somebody's got to make a call; the human's got to say, okay, do this, right? Was the time in which you were able to go from data to insight to recommendation, or edict, compressed? You kind of indicated that. >> Yeah, that was accelerated. And so in that use case, what we're trying to do is predict the future: before any event happens, you're doing what-ifs, and if it were to happen, what would you probabilistically do? So that simulation we've been running for a while, but it gets better and better as we get more knowledge, and we were able to accelerate it with SimpliVity. But there's other use cases too. So we also offload telemetry from the car and we post-process it, and that reprocessing time is very time consuming. And we went from eight or nine minutes for some of the simulations down to just two minutes. So we saw big, big reductions in time. And ultimately that meant an engineer could understand what the car was doing in a practice session, recommend a tweak to the configuration or setup of it, and just get more actionable insight quicker. And it ultimately helps get a better car quicker. >> Such a great example. How are you guys feeling about the season, Matt? What's the team's sentiment? >> I think we're optimistic. We think, from our simulations, that we have a great car. We have a new driver lineup. 
We have Max Verstappen, who carries on with the team, and Sergio Perez joins the team. So we're really excited about this year and we want to go and win races. And I think with COVID, people are just itching to get back to a little degree of normality, and going racing again, even though there's no fans, gets us into a degree of normality. >> That's great, Matt. Good luck this season and going forward, and thanks so much for coming back in theCUBE. Really appreciate it. >> It's my pleasure. Great talking to you again. >> Okay. Now we're going to bring back Omer for a quick summary. So keep it right there. >> Narrator: That's where the data comes face to face with the real world. >> Narrator: Working with Hewlett Packard Enterprise is a hugely beneficial partnership for us. We're able to be at the cutting edge of technology in a highly technical, highly stressed environment. There is no bigger challenge than Formula One. (upbeat music) >> Being in the car and driving it on the limit, that is the best thing out there. >> Narrator: It's that innovation and creativity that ultimately achieves the win. >> Okay. We're back with Omer. Hey, what did you think about that interview with Matt? >> Great. I have to tell you, I'm a big Formula One fan, and they are one of my favorite customers. So obviously one of the biggest use cases, as you saw, for Red Bull Racing is track side deployments. There are now 22 races in a season. These guys are jumping from one city to the next; they've got to pack up, move to the next city, set up the infrastructure very, very quickly. An average Formula One car is running a thousand-plus sensors that are generating a ton of data track side that needs to be collected very quickly. It needs to be processed very quickly, and then sometimes, believe it or not, snapshots of this data need to be sent back to the Red Bull factory, back at the data center. What does this all need? It needs reliability. 
It needs compute power in a very small form factor. And it needs agility: quick to set up, quick to go, quick to recover. And then in post-processing they need to have CPU density so they can pack more VMs out at the edge to be able to do that processing. And we accomplish that for the Red Bull Racing guys with basically two SimpliVity nodes that are running track side and moving with them from one race to the next race to the next race. And every time those SimpliVity nodes connect up to a satellite link, they're backing up to their data center, sending snapshots of data back, essentially making their job a whole lot easier, where they can focus on racing and not on troubleshooting virtual machines. >> Red Bull Racing and HPE SimpliVity: a great example. It's agile, it's cost efficient, and it shows a real impact. Thank you very much, Omer. I really appreciate those summary comments. >> Thank you, Dave. Really appreciate it. >> All right. And thank you for watching. This is Dave Vellante for theCUBE. (upbeat music)
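The race-strategy process Matt and Omer describe — simulate many noisy outcomes of the remaining laps under each candidate tactic, then let an engineer make the call — can be sketched in miniature. This is only an illustration of the Monte Carlo what-if idea; all the numbers (pit-lane loss, base lap time, tire-wear penalty) are hypothetical placeholders, not Red Bull's actual models:

```python
import random

def race_time(pit_now, laps_left, tyre_age, rng):
    """Simulate one noisy outcome of the remaining laps (illustrative numbers only)."""
    total = 0.0
    age = 0 if pit_now else tyre_age
    if pit_now:
        total += 22.0  # hypothetical pit-lane time loss, seconds
    for _ in range(laps_left):
        # base lap time + linear tyre-wear penalty + random variation
        total += 90.0 + 0.08 * age + rng.gauss(0, 0.4)
        age += 1
    return total

def expected_race_time(pit_now, trials=10_000, seed=1):
    # average over many simulated races to estimate each tactic's outcome
    rng = random.Random(seed)
    sims = [race_time(pit_now, laps_left=20, tyre_age=15, rng=rng) for _ in range(trials)]
    return sum(sims) / len(sims)

stay_out = expected_race_time(pit_now=False)
pit = expected_race_time(pit_now=True)
print(f"stay out: {stay_out:.1f}s, pit now: {pit:.1f}s -> {'PIT' if pit < stay_out else 'STAY OUT'}")
```

Scaled up to real tire models, traffic, and safety-car probabilities, this is the kind of workload where a three X cut in processing time means the recommendation arrives before the decision window closes.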

Published Date : Mar 5 2021


Jamie Thomas, IBM | IBM Think 2020


 

Narrator: From theCUBE studios in Palo Alto and Boston, it's theCUBE, covering IBM Think, brought to you by IBM. >> We're back. You're watching theCUBE and our coverage of IBM Think 2020, the digital IBM thinking. We're here with Jamie Thomas, who's the general manager of strategy and development for IBM Systems. Jamie, great to see you. >> It's great to see you as always. >> You have been knee deep in qubits, the last couple years. And we're going to talk quantum. We've talked quantum a lot in the past, but it's a really interesting field. We spoke to you last year at IBM Think about this topic. And a year in this industry is a long time, but so give us the update what's new in quantum land? >> Well, Dave first of all, I'd like to say that in this environment we find ourselves in, I think we can all appreciate why innovation of this nature is perhaps more important going forward, right? If we look at some of the opportunities to solve some of the unsolvable problems, or solve problems much more quickly, in the case of pharmaceutical research. But for us in IBM, it's been a really busy year. First of all, we worked to advance the technology, which is first and foremost in terms of this journey to quantum. We just brought online our 53 qubit computer, which also has a quantum volume of 32, which we can talk about. And we've continued to advance the software stack that's attached to the technology because you have to have both the software and the hardware thing, right rate and pace. We've advanced our new network, which you and I have spoken about, which are those individuals across the commercial enterprises, academic and startups, who are working with us to co-create around quantum to help us understand the use cases that really can be solved in the future with quantum. And we've also continued to advance our community, which is serving as well in this new digital world that we're finding ourselves in, in terms of reaching out to developers. 
Now, we have over 300,000 unique downloads of the programming model, which represents the developers that we're touching out there every day with quantum. These developers, in the last year, have run over 140 billion quantum circuits. So, our machines in the cloud are quite active, and the cloud model, of course, is serving us well. That's in addition to all the other things that I mentioned. >> So Jamie, what metrics are you trying to optimize on? You mentioned 53 qubits; I saw that actually came online, I think, last fall. So you're nearly six months in now, which is awesome. But what are you measuring? Are you measuring stability or coherence or error rates? Number of qubits? What are the things that you're trying to optimize on to measure progress? >> Well, that's a good question. So we have this metric that we've defined over the last year or two called quantum volume. And quantum volume 32, which is the capacity of our current machine, really is a representation of many of the things that you mentioned. It represents the power of the quantum machine, if you will. It includes a definition of our ability to provide error correction, to maintain states, to really accomplish workloads with the computer. So there's a number of factors that go into quantum volume, which we think are important. Now, the number of qubits is just one such metric. It really depends on the coherence and the effect of error correction to really get the value out of the machine, and that's a very important metric. >> Yeah, we love to boil things down to a single metric. It's more complicated than that >> Yeah, yeah. >> specifically with quantum. So, talk a little bit more about what clients are doing, and I'm particularly interested in the ecosystem that you're forming around quantum. 
>> Well, as I said, the ecosystem is both the network, which are those that are really intently working with us to co-create, because we found, through our long history in IBM, that co-creation is really important, and also these researchers and developers. Realize that some of our developers today are really researchers, but as you go forward you get many different types of developers that are part of this mix. But in terms of our ecosystem, we're really fundamentally focused on key problems around chemistry, material science, and financial services. And over the last year, there's over 200 papers that have been written out there from our network that really embody their work with us on this journey. So we're looking at things like quadratic speed-up of Monte Carlo simulation, which is used in the financial services arena today to quantify risk. There's papers out there around topics like trade settlement, which in the world today is a very complex domain with very interconnected, complex rules and trillions of dollars in the purview of trade settlement. So, it's just an example. Options pricing: you see examples around options pricing from corporations like JPMC in the area of financial services. And likewise in chemistry, there's a lot of research out there focused on batteries. As you can imagine, getting everything to electric-powered batteries is an important topic. But today, the way we manufacture batteries can in fact create air pollution, in terms of the process, and we want batteries to have more retention in life to be more effective in energy conservation. So, how do we create batteries and still protect our environment, as we all would like to do? And so we've had a lot of research around things like the next generation of electric batteries, which is a key topic. 
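To make the Monte Carlo risk workload above concrete, here is a classical sketch of estimating value-at-risk by sampling. The quadratic speed-up Jamie refers to comes from quantum amplitude estimation cutting the number of samples needed; the portfolio figures below are made up for illustration:

```python
import random

def monte_carlo_var(n_paths, mu, sigma, confidence=0.95, seed=7):
    """Estimate one-period value-at-risk by sampling returns.
    Classical error shrinks like 1/sqrt(n_paths); quantum amplitude
    estimation promises roughly 1/n_paths -- the quadratic speed-up."""
    rng = random.Random(seed)
    losses = sorted(-rng.gauss(mu, sigma) for _ in range(n_paths))
    return losses[int(confidence * n_paths)]

# Hypothetical portfolio: 5% expected return, 20% volatility.
var_95 = monte_carlo_var(100_000, mu=0.05, sigma=0.20)
```

With these made-up parameters the analytic 95% VaR is 1.645σ − μ ≈ 0.28, so the sampled estimate lands near that; halving the error classically costs four times the paths, which is exactly the overhead the quantum approach targets.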
But if you think, you know Dave, there's so many topics here around chemistry, also pharmaceuticals, that could be advanced with a quantum computer. Obviously, if you look at the COVID-19 news, our supercomputer that we installed at Oak Ridge National Laboratory, for instance, is being used to analyze 8,000 different compounds, specifically around COVID-19 and the possibilities of using those compounds to solve COVID-19, or influence it in a positive manner. You can think of the quantum computer, when it comes online, as an accelerator to a supercomputer like that, helping speed up this kind of research even faster than what we're able to do with something like the Summit supercomputer. Oak Ridge is one of our prominent clients with the quantum technology, and they certainly see it that way, right, as an accelerator to the capacity they already have. So a great example that I think is very germane in the time that we find ourselves in. >> How 'about startups in this ecosystem? Are you able to-- I mean there must be startups popping up all over the place for this opportunity. Are you working with any startups or incubating any startups? Can you talk about that? >> Oh yep, absolutely. About a third of our network are VC-backed startups, and there's a long list of them out there. They're focused on many different aspects of quantum computing. Many of 'em are focused on what I would loosely call the programming model: looking at improving algorithms across different industries, making it easier for those that are perhaps more skilled in domains, whether that is chemistry or financial services or mathematics, to use the power of the quantum computer. Many of those startups are leveraging our Qiskit, our quantum information science open programming model that we put out there, so it's open. Many of the startups are using that programming model and then adding their own secret sauce, if you will, to understand how they can help bring on users in different ways. 
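For a flavor of what that gate-based Qiskit programming model expresses, here is the canonical first circuit, a Bell state, simulated with plain-Python linear algebra so the sketch doesn't assume any quantum SDK is installed:

```python
from math import sqrt, isclose

def matvec(m, v):
    """Apply square matrix m to state vector v."""
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

def kron(a, b):
    """Tensor product of two square matrices."""
    n, m = len(a), len(b)
    return [[a[i // m][j // m] * b[i % m][j % m]
             for j in range(n * m)] for i in range(n * m)]

H = [[1 / sqrt(2), 1 / sqrt(2)], [1 / sqrt(2), -1 / sqrt(2)]]  # Hadamard
I2 = [[1, 0], [0, 1]]
CNOT = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]

state = [1.0, 0.0, 0.0, 0.0]            # |00>
state = matvec(kron(H, I2), state)      # Hadamard on the first qubit
state = matvec(CNOT, state)             # entangle: Bell state

probs = [a * a for a in state]
# Measurement lands 50/50 on |00> and |11>, never |01> or |10>.
assert isclose(probs[0], 0.5) and isclose(probs[3], 0.5)
```

In Qiskit itself this is a two-gate circuit; startups' "secret sauce" typically lives above this level, in how larger circuits for a chemistry or finance problem get composed and optimized.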
So it depends on their domain. You see some startups that are focused on the hardware as well, of course, looking at different hardware technologies that can be used to build quantum machines. I would say I feel like more of them are focused on the software programming model. >> Well Jamie, it was interesting to hear you talk about what some of the clients are doing. I mean, obviously pharmaceuticals and battery manufacturers do a lot of advanced R and D, but you mentioned financial services, you know, JPMC. It's almost like they're now doing advanced R and D, trying to figure out how they can apply quantum to their business down the road. >> Absolutely, and we have a number of financial institutions that we've announced as part of the network. JPMC is just one of our premiere references, who have written papers about it. But I would tell you that in the world of Monte Carlo simulation, options pricing, and risk management, a small change can make a big difference in dollars. So we're talking about operations that in many cases they could achieve, but not achieve in the right amount of time. The ability to use quantum as an accelerator for these kinds of operations is very important. And I can tell you, even in the last few weeks, we've had a number of briefings with financial companies for five hours on this topic, looking at what they could do and learning from the work that's already done out there. I think this kind of advanced research is going to be very important. We also had new members that we announced at the beginning of the year at the CES show. Delta Airlines joined, the first transportation company; Amgen joined, an example of a pharmaceutical; as well as a number of other research organizations: Georgia Tech, University of New Mexico, Anthem Insurance. Just an example of the industries that are looking to take advantage of this kind of technology as it matures. 
>> Well, and it strikes me too, that as you start to bring machine intelligence into the equation, it's a game changer. I mean, I've been saying that it's not Moore's Law driving the industry anymore, it's this combination of data, AI, and cloud for scale, but now-- Of course there are alternative processors going on, we're seeing that, but now as you bring in quantum, that actually adds to that innovation cocktail, doesn't it? >> Yes, and as you recall when you and I spoke last year about this, there are certain domains today where you really cannot get as much effective gain out of classical computing. And clearly, chemistry is one of those domains, because today, with classical computers, we're really unable to model even something as simple as a caffeine molecule, which we're all so very familiar with. I have my caffeine here with me today. (laughs) But you know, clearly, to the degree we can actually apply molecular modeling and the advantages that quantum brings to those fields, we'll be able to understand so much more about materials that affect all of us around the world, about energy, how to explore energy, and create energy without creating the carbon footprint and the bad outcomes associated with energy creation, and how to obviously deal with pharmaceutical creation much more effectively. There's real promise in a lot of these different areas. >> I wonder if you could talk a little bit about the landscape, and I'm really interested in what IBM brings to the table that's sort of different. You're seeing a lot of companies enter this space, some big and many small. What's the unique aspect that IBM brings to the table? You've mentioned co-creating before. Are you co-creating, coopetating with some of the other big guys? Maybe you could address that. >> Well, obviously this is a very hot topic, both within the technology industry and across government entities. 
I think that some of the key values we bring to the table is we are the only vendor right now that has a fleet of systems available in the cloud, and we've been out there for several years, enabling clients to take advantage of our capacity. We have both free access and premium access, which is what the network is paying for, because they get access to the highest-fidelity machines. Clearly, we understand intently classical computing and the ability to leverage classical with quantum for advantage across many of these different industries, which I think is unique. We understand the cloud experience that we're bringing to play here with quantum since day one, and most importantly, I think we have strong relationships. In many cases, we're still running the world; I see it every day from my clients' vantage point. We understand financial services. We understand healthcare. We understand many of these important domains, and we're used to solving tough problems. So, we'll bring that experience with our clients and those industries to the table here and help them on this journey. >> You mentioned your experience in sort of traditional computing. Basically, if I understand it correctly, you're still using traditional silicon microprocessors to read and write the data that's coming out of quantum. I don't know if they're sitting physically side by side, but you've got this big cryogenic unit, cables coming in. That's been the sort of standard for some time. It reminds me of going all the way back to ENIAC. And that really excites me, because you look at the potential to miniaturize this over the next several decades. But is that right, you're sort of side by side with traditional computing approaches? >> Right, effectively what we do with quantum today does not happen without classical computers. The front end, you're coming in on classical computers. 
You're storing your data on classical computers, so that is the model that we're in today, and that will continue to happen. In terms of the quantum processor itself, it is a silicon-based processor, but it's a superconducting technology, in our case, that runs inside that cryogenics unit at a very cold temperature. It is powered by next-generation electronics that we in IBM have innovated around; we created our own electronic stack that actually sends microwave pulses into the processor that resides in the cryogenics unit. So when you think about the components of the system, you have to be innovating around the processor, the cryogenics unit, the custom electronic stack, and the software all at the same time. And yes, we're doing that in terms of being surrounded by this classical backplane that allows our Q Network, as well as the developers around the world, to actually communicate with these systems. >> The other thing that I really like about this conversation is it's not just R and D for the sake of R and D. You're actually working with partners to, like you said, co-create: customers, financial services, airlines, manufacturing, et cetera. I wonder if you could maybe kind of address some of the things that you see happening in the sort of near to midterm, specifically as it relates to where people start. If I'm interested in this, what do I do? Do I need new skills? Do I need-- It's in the cloud, right? >> Yeah. >> So I can spin it up there, but where do people get started? >> Well they can certainly come to the Quantum Experience, which is our cloud experience, and start to try out the system. So, we have both easy ways to get started with visual composition of circuits, as well as using the programming model that I mentioned, the Qiskit programming model. We've provided extensive YouTube videos out there already. So, developers who are interested in starting to learn about quantum can go out there and subscribe to our YouTube channel. 
We've got over 40 assets already recorded out there, and we continue to do those. We did one last week on quantum circuits for those that are more interested in that particular domain, but I think that's a part of this journey is making sure that we have all the assets out there digitally available for those around the world that want to interact with us. We have tremendous amount of education. We're also providing education to our business partners. One of our key network members, who I'll be speaking with later, I think today, is from Accenture. Accenture's an example of an organization that's helping their clients understand this quantum journey, and of course they're providing their own assets, if you will, but once again, taking advantage of the education that we're providing to them as a business partner. >> People talk about quantum being a decade away, but I think that's the wrong way to think about it, and I'd love your thoughts on this. It feels like, almost like the return coming out of COVID-19, it's going to come in waves, and there's parts that are going to be commercialized thoroughly and it's not binary. It's not like all of a sudden one day we're going to wake, "Hey, quantum is here!" It's really going to come in layers. Your thoughts? >> Yeah, I definitely agree with that. It's very important, that thought process because if you want to be competitive in your industry, you should think about getting started now. And that's why you see so many financial services, industrial firms, and others joining to really start experimentation around some of these domain areas to understand jointly how we evolve these algorithms to solve these problems. I think that the production level characteristics will curate the rate and pace of the industry. The industry, as we know, can drive things together faster. So together, we can make this a reality faster, and certainly none of us want to say it's going to be a decade, right. 
I mean, we're getting advantage today, in terms of the experimentation and the understanding of these problems, and we have to expedite that, I think, in the next few years. And certainly, with this arms race that we see, that's going to continue. One of the things I didn't mention is that IBM is also working with certain countries, and we have significant agreements now with the countries of Germany and Japan to put quantum computers in an IBM facility in those countries. It's in collaboration with the Fraunhofer Institute, their premier scientific organization, in Germany, and with the University of Tokyo in Japan. So you can see that it's not only being pushed by industry, but it's also being pushed from the vantage of countries, bringing this research and technology to their countries. >> All right, Jamie, we're going to have to leave it there. Thanks so much for coming on theCUBE and giving us the update. It's always great to see you. Hopefully, next time I see you, it'll be face to face. >> That's right, I hope so too. It's great to see you guys, thank you. Bye. >> All right, you're welcome. Keep it right there everybody. This is Dave Vellante for theCUBE. Be back right after this short break. (gentle music)

Published Date : May 5 2020



Bill Vass, AWS | AWS re:Invent 2019


 

>> Announcer: Live from Las Vegas, it's theCUBE! Covering AWS re:Invent 2019. Brought to you by Amazon Web Services and Intel, along with its ecosystem partners. >> Okay, welcome back everyone. It's theCUBE's live coverage here in Las Vegas for Amazon Web Services' re:Invent 2019. It's theCUBE's seventh year covering re:Invent. Eight years they've been running this event. It gets bigger every year. It's been a great wave to ride on. I'm John Furrier, my cohost, Dave Vellante. We've been riding this wave, Dave, for years. It's so exciting, it gets bigger and more exciting. >> Lucky seven. >> This year more than ever. So much stuff is happening. It's been really exciting. I think there's a sea change happening, in terms of another wave coming. Quantum computing, big news here amongst other great tech. Our next guest is Bill Vass, VP of Technology, Storage, Automation, and Management, part of the quantum announcement that went out. Bill, good to see you. >> Yeah, well, good to see you. Great to see you again. Thanks for having me on board. >> So, we love quantum, we talk about it all the time. My son loves it, everyone loves it. It's futuristic. It's going to crack everything. It's going to be the fastest thing in the world. Quantum supremacy. Andy referenced it in my one-on-one with him around quantum being important for Amazon. >> Yes, it is, it is. >> You guys launched it. Take us through the timing. Why, why now? >> Okay, so the Braket service, which is named for the bra-ket notation created by Dirac, right? So we thought that was a good name for it. It provides for you the ability to do development in quantum algorithms using gate-based programming that's available, and then do simulation on classical computers, which is what we call our digital computers now. (men chuckling) >> Yeah, it's a classic. >> These are classic computers all of a sudden, right? 
And then, actually do execution of your algorithms on, today, three different quantum computers: one that's an annealer and two gate-based machines. And that gives you the ability to test them in parallel and separate from each other. In fact, last week, I was working with the team and we had two machines, an ion trap machine and an electromagnetic tunneling machine, solving the same problem and passing variables back and forth to each other. You could see the CloudWatch metrics coming out, and the data was going to an S3 bucket on the output. And we do it all in a Jupyter notebook. So it was pretty amazing to see all that running together. I think it's probably the first time two different machines with two different technologies had worked together on a cloud computer, fully integrated with everything else, so it was pretty exciting. >> So, quantum supremacy has been a word kicked around. A lot of hand waving, IBM, Google. Depending on who you talk to, there's different versions. But at the end of the day, quantum is a leap in computing. >> Bill: Yes, it can be. >> It can be. It's still early days, it would be day zero. >> Yeah, well I think if you think of, we're about where computers were with tubes, if you remember, if you go back that far, right, right? That's about where we are right now, where you've got to kind of jiggle the tubes sometimes to get them running. >> A bug gets in there. >> Yeah, yeah, a bug can get in there, and all of those kinds of things. >> Dave: You flip 'em off with a punch card. >> Yeah, yeah, so for example, a number of the machines, they run for four hours and then they come down for a half hour for calibration. And then they run for another four hours. So we're still sort of at that early stage, but you can do useful work on them. And more mature systems, like for example D-Wave, which is an annealer, a little different than gate-based machines, are really quite mature, right? 
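The annealer Bill mentions takes its problem in a different form than the gate-based machines: you hand it a cost function over binary variables (a QUBO) and it samples low-energy bit strings. A toy sketch of that problem shape, solved here by brute force since the instance is tiny; the QUBO matrix is made up for illustration:

```python
from itertools import product

def qubo_energy(Q, x):
    """Energy of bit vector x under QUBO matrix Q: sum of Q[i][j]*x_i*x_j."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def brute_force_minimum(Q):
    """Exhaustive search over all 2^n bit strings -- exactly the scaling
    an annealer is meant to sidestep by sampling low-energy states."""
    n = len(Q)
    return min(product((0, 1), repeat=n), key=lambda x: qubo_energy(Q, x))

# Toy QUBO: the diagonal rewards turning bits on,
# the off-diagonal penalizes adjacent pairs being on together.
Q = [[-1, 2, 0],
     [0, -1, 2],
     [0, 0, -1]]
best = brute_force_minimum(Q)   # lowest-energy assignment: (1, 0, 1)
```

A real annealing run returns a distribution of samples rather than a guaranteed optimum, which is why Bill later describes quantum results in terms of "shots."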
And so, I think as you go back and forth between these machines, the gate-based machines and annealers, you can really get a sense for what's capable today with Braket, and that's what we want to do: get people to actually be able to try them out. Now, quantum supremacy is a fancy word for we did something on a quantum computer for the first time that you can't do on a classical computer. And quantum computers have the potential to exceed the processing power, especially on things like factoring and other things like that, or on Hamiltonian simulations for molecules and those kinds of things, because a quantum computer operates the way a molecule operates, right, in a lot of ways, using quantum mechanics and things like that. And so, it's a fancy term for that. We don't really focus on that at Amazon. We focus on solving customers' problems. And the problem we're solving with Braket is to get them to learn it as it's evolving, and be ready for it, and continue to develop the environment. And then also offer a lot of choice. Amazon's always been big on choice. And if you look at our processing portfolio, we have AMD, Intel x86, great partners, great products from them. We have Nvidia, great partner, great products from them. But we also have our Graviton 1 and Graviton 2, and our new GPU-type chip. And those are great products, too. I've been doing a lot on those, as well. And the customer should have that choice, and with quantum computers, we're trying to do the same thing. We will have annealers, we will have ion trap machines, we will have electromagnetic machines, and others available on Braket. >> Can I ask a question on quantum if we can go back a bit? So you mentioned vacuum tubes, which was kind of funny. But the challenge there was cooling and reliability, system downtime. What are the technical challenges with regard to quantum in terms of making it stable? 
>> Yeah, so some of it is on classical computers, as we call them, they have error-correction code built in. So you have, whether you know it or not, there's alpha particles that are flipping bits on your memory at all times, right? And if you don't have ECC, you'd get crashes constantly on your machine. And so, we've built in ECC, so we're trying to build the quantum computers with the proper error correction, right, to handle these things, 'cause nothing runs perfectly, you just think it's perfect because we're doing all the error correction under the covers, right? And so that needs to evolve on quantum computing. The ability to reproduce them in volume from an engineering perspective. Again, standard lithography has a yield rate, right? I mean, sometimes the yield is 40%, sometimes it's 20%, sometimes it's a really good fab and it's 80%, right? And so, you have a yield rate, as well. So, being able to do that. These machines also generally operate in a cryogenic world, that's a little bit more complicated, right? And they're also heavily affected by electromagnetic radiation, other things like that, so you have to sort of faraday cage them in some cases, and other things like that. So there's a lot that goes on there. So it's managing a physical environment like cryogenics is challenging to do well, having the fabrication to reproduce it in a new way is hard. The physics is actually, I shudder to say well understood. I would say the way the physics works is well understood, how it works is not, right? No one really knows how entanglement works, they just knows what it does, and that's understood really well, right? And so, so a lot of it is now, why we're excited about it, it's an engineering problem to solve, and we're pretty good at engineering. >> Talk about the practicality. Andy Jassy was on the record with me, quoted, said, "Quantum is very important to Amazon." >> Yes it is. >> You agree with that. He also said, "It's years out." You said that. 
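Back on the error-correction analogy from a moment ago: the classical ECC Bill describes can be made concrete with the simplest scheme there is, a 3-bit repetition code with majority vote. Quantum error correction has to do the analogous job without ever reading a qubit's state directly, which is part of what makes it so much harder. A minimal sketch:

```python
from collections import Counter

def encode(bit: int) -> list:
    """Triplicate each logical bit -- the simplest error-correcting code."""
    return [bit, bit, bit]

def decode(triplet: list) -> int:
    """Majority vote corrects any single flipped bit
    (e.g. from the alpha-particle strikes mentioned above)."""
    return Counter(triplet).most_common(1)[0][0]

codeword = encode(1)
codeword[0] ^= 1               # one bit flipped in flight
assert decode(codeword) == 1   # the logical bit survives

# Two flips in one triplet defeat the code -- which is why error
# *rates*, not just error correction, decide what is usable.
assert decode([0, 0, 1]) == 0
```

The same trade-off shows up in quantum hardware: more redundancy buys reliability but consumes physical qubits, so raw error rates bound how much useful capacity survives.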
He said, "But we want to make it practical for customers." >> We do, we do. >> John: What is the practical thing? Is it just kicking the tires? Is it some of the things you mentioned? What's the core goal? >> So, in my opinion, we're at a point in the evolution of these quantum machines, and certainly with the work we're doing with Caltech and others, that the number of available qubits is starting to increase at an astronomical rate, a Moore's Law kind of rate, right? No matter which machine you're looking at out there, and there's about 200 different companies building quantum computers now, they're all good technology. They've all got challenges, as well: reproducibility, and those kinds of things. And so now's a good time to start learning how to do this gate-based programming, knowing that it's coming. Because quantum computers won't replace a classical computer, so don't think that. Because there is no quantum RAM, you can't run 200 petabytes of data through a quantum computer today, and those kinds of things. What it can do is factoring very well, or it can do probability equations very well. It'll have effects on Monte Carlo simulations. It'll have effects specifically in material sciences, where you can simulate molecules for the first time that you just can't do on classical computers. And when I say you can't do on classical computers, my quantum team always corrects me. They're like, "Well, no one has proven that there's an algorithm you can run on a classical computer that will do that yet," right? (men chuckle) So there may be times when you say, "Okay, I did this on a quantum computer," and you can only do it on a quantum computer. But then someone's very smart mathematician says, "Oh, I figured out how to do it on a regular computer. You don't need a quantum computer for that." And that's constantly evolving, as well, in parallel, right? 
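On the factoring point: the step a quantum computer accelerates is order finding, while the reduction from order finding to factoring is classical and small enough to sketch. Here the order is found by the brute-force loop that Shor's algorithm would replace:

```python
from math import gcd

def order(a: int, n: int) -> int:
    """Smallest r with a**r % n == 1 -- found by brute force here,
    which is the exponentially hard step Shor's algorithm speeds up."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical_part(n: int, a: int):
    """Given a base a with even order r, gcd(a**(r//2) +/- 1, n)
    yields nontrivial factors of n."""
    r = order(a, n)
    if r % 2:
        return None  # unlucky base; one would simply pick another
    return gcd(a ** (r // 2) - 1, n), gcd(a ** (r // 2) + 1, n)

# Textbook example: factor 15 with base 7, whose order mod 15 is 4.
assert order(7, 15) == 4
assert shor_classical_part(15, 7) == (3, 5)
```

This is also a clean illustration of Bill's caveat: the classical wrapper is easy; whether any classical method can match the quantum speed-up on the hard inner step is exactly the kind of open question his team keeps correcting him on.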
And so, that's what that argument between IBM and Google on quantum supremacy is about. And that's an unfortunate distraction, in my opinion. What Google did was quite impressive, and if you're in the quantum world, you should be very happy with what they did. They had a very low error rate with a large number of qubits, and that's a big deal. >> Well, I just want to ask you, this industry is an arms race. But with something like quantum, where you've got 200 companies actually investing in it so early days, is collaboration maybe a model here? I mean, what do you think? You mentioned Caltech. >> It certainly is for us because, like I said, we're going to have multiple quantum computers available, just like we collaborate with Intel, and AMD, and the other partners in that space, as well. That's sort of the nice thing about being a cloud service provider: we can give customers choice, and we can have our own innovation, plus their innovations, available to customers, right? Innovation doesn't just happen in one place, right? We've got a lot of smart people at Amazon; we don't invent everything, right? (Dave chuckles) >> So I've got to ask you, obviously, we can take cube quantum and call it cubits, not to be confused with theCUBE video highlights. Joking aside, classical computers, will there be a classical cloud? Because this is kind of a futuristic-- >> Or you mean a quantum cloud? >> Quantum cloud, well then you get the classic cloud, you got the quantum cloud. >> Well no, they'll be together. So I think a quantum computer will be used like we used to use a math coprocessor, if you like, or like FPGAs are used today, right? So, you'll go along and you'll have your problem. And I'll give you a real, practical example. So let's say you had a machine with 125 qubits, okay? You could just start doing some really nice optimization algorithms on that. So imagine there's this company that ships stuff around a lot, I wonder who that could be? 
And they need to optimize continuously their delivery for a truck, right? And that changes all the time. Well, that algorithm, if you're doing hundreds of deliveries in a truck, is very complicated. That traveling salesman algorithm is an NP-hard problem when you do it right. And so, what would be the fastest best path? But you've got to take into account weather and traffic, so that's changing. So you might have a classical computer do those algorithms overnight for all the delivery trucks and then send them out to the trucks. The next morning they're driving around. But it takes a lot of computing power to do that, right? Well, a quantum computer can do that kind of probabilistic, not deterministic, best-fit algorithm much faster. And so, you could have it every second providing that. So your classical computer is sending out the manifests, interacting with the person, it's got the website on it. And then, it gets to the part where here's the problem to calculate, we call it a shot when you're on a quantum computer, and it runs in a few seconds what would take an hour or more. >> It's a fast job, yeah. >> And it comes right back with the result. And then it continues with its thing, passes it to the driver. Another update occurs, (buzzing) and it's just going on all the time. So those kinds of things are very practical and coming. >> I've got to ask for the younger generations. My son's super interested, as I mentioned before you came on. Quantum attracts the younger, smart kids coming into the workforce, engineering talent. What's the best path for someone who has either an advanced degree, or no degree, to get involved in quantum? Is there certain advice you'd give someone? >> So the reality is, I mean, obviously having taken quantum mechanics in school and understanding the physics behind it to an extent, as much as you can understand the physics behind it, right? 
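The delivery-routing example above, sketched in code: a nearest-neighbor heuristic is the kind of cheap approximation a classical computer might run overnight, while the exact tour is the NP-hard part a quantum machine would be asked to accelerate. The stops below are hypothetical coordinates:

```python
from math import dist

def nearest_neighbor_route(depot, stops):
    """Greedy tour: always drive to the closest unvisited stop.
    Fast but approximate -- the exact traveling-salesman tour
    is the NP-hard computation discussed above."""
    route, here, todo = [depot], depot, list(stops)
    while todo:
        here = min(todo, key=lambda s: dist(here, s))
        todo.remove(here)
        route.append(here)
    return route

# Hypothetical delivery stops as (x, y) coordinates.
stops = [(5, 0), (1, 1), (0, 4), (6, 5)]
route = nearest_neighbor_route((0, 0), stops)
```

The "every second" scenario Bill describes would push the re-optimization step, traffic and weather folded in, out to a quantum coprocessor per shot, with the classical system handling manifests and dispatch around it.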
I think the other areas, there are programs at universities focused on quantum computing, there's a bunch of them. So they can go in that direction. But even just regular computer science, or regular mechanical and electrical engineering, are all needed. Mechanical around the cooling and all that other stuff. Electrical, because these are electrically-based machines, just like a classical computer is. And being able to code at a low level is another area that's tremendously valuable right now. >> Got it. >> You mentioned best fit is coming, that use case. I mean, can you give us a sense of a timeframe? And people will say, "Oh, 10, 15, 20 years." But you're talking much sooner. >> Oh, I think it's sooner than that, I do. And it's hard for me to predict exactly when we'll have it. You can already do some of the best fit today with some of the annealing machines, like D-Wave, right? So it's a matter of people wanting to use a quantum computer because they need to do something fast, they don't care how much it costs, they need to do something fast. Or it's too expensive to do it on a classical computer, or you just can't do it at all on a classical computer. Today, there isn't much of that last one, you can't do it at all, but that's coming. As you get to around 50 to 52 qubits, it's very hard to simulate that on a classical computer. You're starting to reach the edge of what you can practically do on a classical computer. At about 125 qubits, you probably are at a point where you just can't simulate it anymore. >> But you're talking years, not decades, for this use case? >> Yeah, I think you're definitely talking years. And you know, it's interesting, if you'd asked me two years ago how long it would take, I would've said decades. So that's how fast things are advancing right now, and I think that-- >> Yeah, and the computers are just getting faster and faster.
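Vass's 50-to-125-qubit threshold follows from the exponential cost of classical simulation: an n-qubit state vector holds 2^n complex amplitudes. As an editorial aside (this sketch is not from the interview), a few lines of Python show why brute-force simulation falls over right around the numbers he cites:

```python
# Memory required to hold an n-qubit quantum state vector on a classical
# machine: 2**n complex amplitudes, 16 bytes each stored as complex128.
def state_vector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

for n in (30, 40, 50):
    print(f"{n} qubits -> {state_vector_bytes(n) / 2**30:,.0f} GiB")
# 30 qubits -> 16 GiB          (a laptop, barely)
# 40 qubits -> 16,384 GiB      (a large cluster)
# 50 qubits -> 16,777,216 GiB  (about 16 pebibytes, past any practical machine)
```

Every added qubit doubles the requirement, which is why around 50 qubits is "the edge" and roughly 125 qubits is flatly out of reach for full state-vector simulation.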
>> Yeah, but the ability to fabricate, the understanding, there are a number of architectures that are very well proven; it's just a matter of getting the error rates down, the stability in place, the repeatable manufacturing in place. There are a lot of engineering problems. And engineering problems are good, we know how to do engineering problems, right? And we actually understand the physics, or at least we understand how the physics works. I won't claim that, what is it, "Spooky action at a distance," is what Einstein said for entanglement, right? And that's a core piece of this, right? And so, those are challenges, right? And that's part of the mystery of the quantum computer, I guess. >> So you're having fun? >> I am having fun, yeah. >> I mean, this is pretty intoxicating, technical problems, it's fun. >> It is. It is a lot of fun. Of course, the whole portfolio that I run over at AWS is just really a fun portfolio, between robotics, and autonomous systems, and IoT, and the advanced storage stuff that we do, and all the edge computing, and all the monitoring and management systems, and all the real-time streaming. So like Kinesis Video, that's the back end for the Amazon Go stores, and working with all that. It's a lot of fun, it really is, it's good. >> Well, Bill, we need an hour to get into that, so we may have to come up and see you, do a special story. >> Oh, definitely! >> We'd love to come up and dig in, and get a special feature program with you at some point. >> Yeah, happy to do that, happy to do that. >> Talk some robotics, some IoT, autonomous systems. >> Yeah, you can see all of it around here, we've got it up and running around here, Dave. >> What a portfolio. >> Congratulations. >> Alright, thank you so much. >> Great news on the quantum. Quantum is here, the quantum cloud is happening. Of course, theCUBE is going quantum. We've got a lot of qubits here. Lots of CUBE highlights, go to SiliconAngle.com. We've got all the data here, we're sharing it with you.
I'm John Furrier with Dave Vellante talking quantum. Want to give a shout out to Amazon Web Services and Intel for setting up this stage for us. Thanks to our sponsors, we wouldn't be able to make this happen if it wasn't for them. Thank you very much, and thanks for watching. We'll be back with more coverage after this short break. (upbeat music)
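The delivery-route example Vass walks through above, re-solving a best-fit route as traffic and weather change, maps to the classical traveling-salesman problem he mentions. As a purely illustrative sketch (not anything AWS ships), a greedy nearest-neighbor heuristic shows the shape of the computation: it returns a usable route in O(n^2) time, while the exact optimum is NP-hard, and that exact best-fit search is what a quantum optimizer would aim to accelerate:

```python
import math

# Greedy nearest-neighbor heuristic for a delivery route: from each stop,
# drive to the closest stop not yet visited. Fast but approximate; the
# exact traveling-salesman optimum is NP-hard in the number of stops.
def nearest_neighbor_route(depot, stops):
    route = [depot]
    remaining = list(stops)
    while remaining:
        here = route[-1]
        nxt = min(remaining, key=lambda p: math.dist(here, p))
        remaining.remove(nxt)
        route.append(nxt)
    return route

stops = [(2, 3), (5, 1), (1, 1), (6, 4)]
print(nearest_neighbor_route((0, 0), stops))
# -> [(0, 0), (1, 1), (2, 3), (5, 1), (6, 4)]
```

In Vass's description, each "shot" on a quantum computer would stand in for a search like this, handing an updated route back to the classical system in seconds as conditions change.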

Published Date : Dec 4 2019



Vikas Sindwani, Accenture, Loic Giraud and Fang Deng, Novartis | Accenture Executive Summit 2019


 

>> Live from Las Vegas, it's theCUBE, covering the AWS Executive Summit. Brought to you by Accenture. >> Welcome back, everyone, to theCUBE's live coverage of the Accenture Executive Summit here at AWS re:Invent. I'm your host, Rebecca Knight. We have three guests for this segment. We have Fang Deng. She is the big data and advanced analytics program lead in the Analytics CoE at Novartis. Thank you so much for coming on the show. >> Thank you. >> We have Loic Giraud. He is Novartis' head of the Analytics CoE. Thanks so much. And Vikas Sindwani, who is the applied intelligence delivery lead at Accenture. Thank you so much. >> Thank you. >> So I want to start with you, Loic. Novartis, of course, is a household name. It's one of the largest pharmaceutical companies in the world. I'd love for you to just walk our viewers a little bit through your business and sort of the pain points you were looking to solve with this journey to the cloud. >> Thank you, Rebecca. So I think if we look at the company, we realized that it is more and more difficult to bring new drugs to market; it takes about 12 years and $1.2 billion to find a new drug. At the same time, we see that there are more and more patients that need access to medicines. So in the last two years we have tried to create a new strategy where we're trying to reimagine medicine through the use of data and technology. In 2018 we recruited a new CDO, who came in and tried to build a digital ambition around three pillars: innovation, operations, and engagement. On the innovation side, what we're trying to do is to find new compounds, or new applications of existing compounds, in our business, to make sure that patients can get access to drugs much faster and earlier.
On the operations side, we are trying to optimize the backbone of our day-to-day processes, be it in manufacturing, in the supply chain, or in commercialization, to ensure that patients also get access to drugs much faster. And on the engagement side, we're trying to help the HCPs, the payers, and the patients to better understand the drugs we produce, as well as the medication they need to receive as treatment. So if you look at these three pillars, the cloud strategy is an essential portion of it, because in all of these processes we have a lot of data, and with the cloud I think we can make use of this data to help us innovate, operate, and engage. >> So as you said, it's really about reimagining medicine. I mean, from the drug discovery process to how it's helping patients live longer, healthier lives. Thanks. So talk about the vision for the Formula One platform. >> Yeah, as Loic mentioned before, we are trying to reimagine our products for the patient, and we're trying to use more and more data, our historical data and also public data, to support our products. And Formula One is our future enterprise data and analytics platform for Novartis. So our objective is to leverage all the new technology, to consolidate our data in one cloud, and to build up this platform for all Novartis users, to support our business and deliver better products for patients. >> So when it comes to these new platforms, new technologies that are being introduced, we know that oftentimes the technology is the easy part, or at least the more straightforward part, I should say. But it's sort of getting people on board, the change management. What are some of the challenges that Novartis faced in terms of the culture and the skills of your workforce? >> So if you look at it, the R&D space is very traditional in nature.
And when we embarked on the digital transformation, I think the first thing we had to change was the culture of the company. So when you listen to our CEO, he tries to promote this unbossed culture where all of us are servant leaders, and where we work as one organization, where we try to help each other and collaborate more and more. When it comes to the digital transformation, when we started on this, we realized that our workforce was not trained. So among the first things that we did was to hire a new workforce, but also to identify advocates and ambassadors who could go into the digital transformation early on, to be able to help and guide the others to go for it. So it's actually a multiyear journey. We are now in the second year, and we've already seen tremendous progress, right? >> Can you describe some of the changes that you've seen? I mean, I'm really interested in what you talked about, the ambassadors, the people who are going to spread the good word. What are some of the changes that you've seen in your workforce? >> Yeah, I can mention that. As Loic mentioned before, talking about the overall culture aspect, we tried to leverage new technology. From the delivery perspective, we are trying to do more automation: on the one side to get more efficiency, and on the other side to ensure end-to-end responsibility for one product being produced. And at the same time, through more automation, we can think about the security side and the compliance side, which helps us a lot in pulling that together as well, because-- >> Maybe I can complement that. So I think if you look at it, in the initial part of our journey, a lot of people were reluctant to go and try to work on the cloud and to work with digital technology. So we found a few projects where we felt there was a good return for the money.
And as we could deliver fast, we did things like make sure that when our field force goes and talks to the HCPs, they know what to talk about, how often, and in which format. We also looked at where we can reduce costs internally. And through the different projects and products that we've established, we built credibility within the organization, and that helped to disseminate the cultural transformation. >> So once others are seeing the benefits that that captured, they're more likely to feel good about the cloud work. >> Yeah, that's true. And also, nowadays, our teams, they are interested in that. You see more and more people talking about how to drive value, and also talking about how we can improve the delivery efficiency. And at the same time, it comes back to the teams thinking about how to make themselves product owners, and how to deliver the product the right way. So that's the learning for the whole team. >> Vikas, I want to bring you in here a little bit. So talk to me about how Accenture is helping Novartis, particularly in this AWS cloud initiative. >> Accenture is a leader in business and technical IT transformation programs. So what we're bringing to the table is the expertise with not only the technology and the AWS elements, but also the business and technical transformation expertise that we have built over the years in the firm. Additionally, I think, you know, it's not only about the technology change. As you mentioned, it's a lot of change in the operating model, and also kind of working with a very blended team. Across that, expertise and experience is what we bring to the table. >> A blended team: culturally, regionally, actually, all of it. >> That I do believe. I mean, just to give an example, we are working across teams in roughly about six geographies, from various cultures and various countries.
And it's, ah, various time zones, which makes it quite challenging to make it all work together. >> So you've started the journey. I hope you succeed in it. And, uh, you know, it's working well so far. So, cloud is really a megatrend right now. What are the differences that you're seeing across regions, countries, industries? >> So I think there are many parts to the answer to that question. If I talk about industries, you know, initially when cloud started, we had seen a major uptake of cloud technology in the companies that manufactured the cloud technology, in telecommunications, and, you know, where the older infrastructure and technology aspects were. Whereas companies like healthcare and media and metals and mining were kind of behind the curve in adoption rates, because of their respective, you know, concerns around compliance and security of data. But I think that trend is slowly shifting. Companies are becoming more open. I think they've seen how the public cloud has matured. The security models, you know, are speaking for themselves. People can understand the benefits of moving to the cloud in terms of, you know, cost rationalization, from reducing maintenance costs, to focusing their people on things that they were not able to divert their attention to before. >> In fact, what I will say for me, and where I've seen it at Novartis, is that it is access to innovation. So I think the cloud offering brings a lot of innovation at a fast pace. That's on one hand, and on the other, access to extended collaboration. When you're, you know, inside-focused, I think few people from outside want to come and collaborate with you. But when you work on the cloud, everybody goes on the cloud. So that's really a way to facilitate our collaboration with external partners. >> So how is that changing the culture of Novartis itself? In terms of, there are more opportunities to collaborate.
And it also is maybe changing the kinds of workers you attract, because it is people who want to be doing that in their day-to-day. >> Well, if you look at it, um, in the past I think we used to have our own workforce, and then we tried to do a lot of things with our own workers. But I think this is no longer the case; we have more and more partnerships being announced, and this partnering is actually used to help the company to reinvent itself. So that's on one hand. On the other side, as you said, I think that to attract these talents, you need to offer a different future. You also need to be able to give them the flexibility to work and do the things they like, within a context and a framework. >> One of the things that we hear about so much at these technology conferences is this buzzword of digital transformation, and Novartis is obviously embarking on its own digital transformation as well as its journey to the cloud. They're happening together; they're powering each other, they're accelerating each other. How would you describe what is happening to the industry, and to Novartis within it, within the pharmaceutical industry? >> Yeah, I think, based on our knowledge, Novartis may be the first company to be trying to build this kind of enterprise-level data and analytics platform. And based on that, we will be able to consolidate data, the historical and internal data and the public data, and the wider industry data. Then that will help us to produce better products for the patient. At the same time, it gives the team a chance, as you mentioned before, to look for more opportunities and the chance to leverage new technology.
We actually monitor clickers, Kyle real time having using common centers. So every single day, I think the use off, digital at work and atom in the physical man thinks. And I think we have seen that the adoptions has increased since we have I ever to launch successful products. And I think >>one of the things which, which I really like about working in the bodies, is also I think there's there's an ambition to drive business value quickly. So you know you take a very agile use case, best approach on things rather than having to wait for very long years of time. Plus, the company kind of encourages a culture which is based on mutual cooperation and sharing knowledge, which is great >>because Novartis is really on the vanguard of companies in terms of how much it's embraced, the cloud and how much it's using it. What do you think? Other companies, pharmaceutical companies, but maybe even in other industries as well could learn from the nerve artists example. >>I think one thing people really shy about is, you know, when they moved to the cloud is the security aspect. I think what people probably had failed to realize in the past that there's been so much developments on security in the public cloud, which has bean key focus areas, something nobody's has taken the challenge and has understood that very well. And I think companies can learn from all the different aspects of security that you know were built into our entire transformation work, starting from ingesting data, the user management to access and all of that thing, so that's kind of one thing. Similarly, compliance related aspects as well, you know, So we've g x p compliance is at the core off how we're building our solution. So I think on dhe, if you understand how we built the rules around compliance. But in architecture, I think couples can learn from that a swell and build that is integral part off your not only technology solution, but the process that goes along with it. 
>> We started our conversation talking about Novartis and its quest to reimagine medicine. How do you think your industry is going to look five to ten years from now? I mean, the drug discovery process is slow on purpose; I mean, we need to think of patient health and safety foremost. But how do you think it really could change the course of how we treat people? >> If you look at it, more and more treatments actually use and require data as a service, or are being processed through data. So when we look at the way that the industry is changing, I think the time to develop drugs, yes, takes long. But through the use of the data that you have, I think you can try to reduce that cycle. So one of the objectives is to reduce the cycle by one third, meaning that we could bring a new drug to market in eight years versus twelve years today. The other thing is that through the use of data, you can monitor the patient, and you can recommend the treatment. Eighty percent of patients don't go on to finish their treatment. So I think if we can drive the adherence to treatment, then there's a lower risk of readmissions due to the diseases and sicknesses that they have. >> So it's not even just Novartis seeing the value of the data, it's the patients themselves, the efficiency. >> And the payers as well, right? Because if the patient is not sick, then the insurance doesn't have to pay. So I think the whole value chain benefits from it. >> Well, Fang, Loic, Vikas, thank you so much for coming on theCUBE. It was a really fascinating segment. >> Thank you. >> I'm Rebecca Knight. Stay tuned for more of theCUBE's live coverage of the Accenture Executive Summit, coming up in just a little bit.

Published Date : Dec 3 2019



Tom Barton, Diamanti | CUBEConversations, August 2019


 

>> From our studios in the heart of Silicon Valley, Palo Alto, California, it is a CUBE Conversation. >> Welcome to this CUBE Conversation here in Palo Alto, California, at the CUBE Studios. I'm John Furrier, host of theCUBE. We're here for a company profile of a company called Diamanti, with Tom Barton, CEO. As VMworld approaches, a lot of stuff is going to be talked about: Kubernetes, applications, microservices will be the top conversation, and certainly the underlying infrastructure to power that. Tom Barton is the CEO of Diamanti, which is in that business. Tom, we've known each other for a few years. You've done a lot of great successful ventures. Diamanti is the new one. What have you got on your plate here right now? >> Yes, sir. And I'm happy to be here. So I've been with Diamanti for about a year or so. Um, I found out about the company through a headhunter. And I have to admit, I had not heard of the company before. Um, but I was a huge believer in containers and Kubernetes, so I was already sold on that. And so I had a friend of mine, his name is Brian Walden. He had done some massive Kubernetes cloud-based deployments for us at Planet Labs, a company that I was at for a little over three years. So I had him do technical due diligence. Brian was also the number three guy at CoreOS, um, and so deeply steeped in all of the core technologies around Kubernetes, including things like etcd and other elements of the technology. So he looked at it, came back, and gave me two thumbs up. Um, he liked it so much that I then hired him. So he is now our VP of product management. And the cool thing about Diamanti is essentially we're a purpose-built solution for running container-based workloads in Kubernetes on premises and then hooking that in with the cloud.
So we believe that it's very much going to be a hybrid cloud world, where for the major corporations that we serve, Fortune 500 companies like banks, energy and utilities and so forth, a lot of their workload will be maintained on premises. They still want to be cloud-compatible, so you need a purpose-built platform to manage both environments. >> Yeah, you guys certainly are compelling on the radar, but I was really curious to see when you came in and took over at the helm as the CEO, because your entrepreneurial career really has been unique. You're a unique executive, on both sides of the lens. As an operator you have an open source and software background, and you also have come from very successful companies, with exits there, as well as on the hardware side with Rackable, the company you took public. So you've got this unique mix: open source software on one side, and large hardware, large data center deployments at scale on the other, which is essentially the hybrid cloud market right now. So you've kind of got a unique view; you have seen it from all the different sides. And I think now more than ever, with public cloud certainly being validated (everyone knows that with Amazon, if you're greenfield, you start in the cloud), the reality is that hybrid cloud is the operating model that the next generation of companies will drive for the next 20 to 30 years, and this is the biggest conversation, the most important story in tech. You're in the middle of it with a hot startup with a name that probably no one's ever heard of. >> Right? We hope to change that. >> So why did you join this company? What got your attention? What was the key thing once you dug in there? What was the secret sauce? What got your attention? >> Yes. So to me, again, the market environment. I'm a huge believer that if you look at the history of the last 15 years, we went from an environment that was 0% virtualized to
95% virtualized, with, you know, VM-based technologies from VMware and others. I think that fundamentally, containers and Kubernetes are equally as important. They're going to be equally as transformative going forward in how people manage their workloads, both on premises and in the cloud. Right? And the fact that all three public cloud providers have anointed Kubernetes as the way of the future, and the Docker image format and runtime as the way of the future, means, you know, good things are going to happen there. What I thought was unique about the company was, for the first time, surprisingly, none of the existing vendors, companies like Nutanix that have hyperconverged solutions, really had anything that was purpose-built for native container support. And so the founders all came from Cisco UCS. They had a lot of familiarity with the underpinnings of hyperconverged architectures in the x86 server landscape, and the networking and storage subsystems. But they wanted to build it using the latest technologies, things like NVMe-based flash. Um, and they wanted to do it with a software stack that was native containers and Kubernetes. And today we support two flavors of that: one that's fully open source around upstream Kubernetes, and another that supports our partner Red Hat, with OpenShift.
They're all playing in this hyperconverged hardware-meets-software stack with data and agility, kind of to make the original DevOps model better than 1.0, which was storage and compute, which were virtualized. So you're seeing that pattern, and it's wide-ranging: security, data, everything else. That's kind of what we call the Cloud 2.0 game. If you look at VMworld, you look at the conversations going on around microservices; it's an application-centric conversation at an infrastructure show. So do you see that same vision? And if so, how do you see yourselves enabling the customer who's saying, hey, I have all this legacy, I've got full-scale data centers, I need to go full-scale cloud, and I need zero disruption to my developers? >> Yeah, so this is the beauty of containers and Kubernetes: they know it'll run on premises and they know it'll run in the cloud, right? And it is all about microservices, whether they're trying to adopt them on a database, something like MongoDB or MariaDB or Crunchy PostgreSQL, whether it's on the operational side to enable more frequent and incremental change, or whether it's on the developer side to take advantage of new ways of developing and delivering apps with CI/CD tools and so forth. It's pretty much what people want to do, because it's future-proofing your software development effort, right? So there are sort of two streams of demand. One is refactoring legacy applications that are insufficiently granularized and behave and fail in a monolithic way, as well as trying to adopt modern, cloud-native solutions for things like databases. And so the good news is that customers don't have to refactor everything.
There are logical break points in their application stack where they can say, okay, maybe I don't have the time and energy and resources to totally refactor a legacy consumer banking application, but at least I can refactor the database tier and serve up container- and Kubernetes-based services, as microservices, database-as-a-service, to be consumed by it. >> They don't need to throw out the old to bring in the new, right? Use containers and an orchestration layer like Kubernetes, and still be positioned for service meshes or other things for that piece of the stack, and everything else can run as-is. >> Right, and there are multiple deployment scenarios for containers. You can run containers bare metal; most of our customers choose to do that. You can also run containers on top of virtual machines, and you can actually run virtual machines on top of containers. One of our major media customers actually runs Splunk on top of KVM on top of containers. So there are a lot of different deployment scenarios. And really, a lot of the genius of our architecture was to make it easy for people coming from traditional virtualized environments to remap system resources from the VM to a container, at a native level or through a VM. >> You mentioned the history lesson there around virtualization, how 15 years ago there was no virtualization and now everything's virtualized, and we agree with you that containers and Kubernetes are going to change the game for the next 15 years. But what made VMware successful was that they could add virtualization without requiring code modification, right? They did it kind of under the covers. And that's a concern customers have: I have developers out there building stacks, building code, I've got preexisting legacy, and they don't really want to change their code, right? Do you guys fit into that narrative?
>> We do, right. So every customer makes their own choice about something like that. At the end of the day, I mentioned Splunk. At the time that we supported this media customer on Splunk, Splunk had not yet provided a container-based version of their application. Now they do have that, but at the time they supported KVM but not native containers. So: unmodified Splunk, unmodified application. We took them from a batch job that ran for 23 hours down to one hour, based on the acceleration of our hyperconverged appliance, running unmodified code on unmodified KVM on our gear. So some customers will choose to do that. But there are also other customers, particularly at scale, for transaction-intensive applications like databases and messaging and analytics, where they say, you know, we could preserve our legacy virtualized infrastructure, but let's try a bare-metal container approach. And they discover that there are actually savings from both a business standpoint and a technology-tax or overhead standpoint. And so, as I mentioned, most of our customers actually run bare metal. >> The batch job is a great example. Sticking to the product and the technology differentiation: what's the big secret sauce? Describe the product. Why are you winning in accounts? What's the lift in your business right now? You guys are getting some traction, from what I'm hearing. >> Yeah, sure. So at the highest level, the value proposition is simplicity. There is no other purpose-built, complete hardware-software stack that delivers a production Kubernetes environment up and running in 15 minutes. The x86 server guys don't really have it. Nutanix doesn't really have it. The software companies that are active in this space don't really have it. So, everything that you need:
The hardware platform, the storage infrastructure, the actual distribution of the operating system, CentOS for example. We actually distribute a Kubernetes distribution, upstream and unmodified. And then, very importantly, in the Kubernetes landscape you have to have a storage subsystem and a networking subsystem, using something called CSI, the Container Storage Interface, and CNI, the Container Network Interface. So we've got that full-stack solution; no one else has that. The second thing is the performance. We do a certain amount of hardware offload. And I would point to Amazon's purchase of Annapurna: Amazon bought a company called Annapurna, it's the basis of their Nitro technology, and it's little known, but the reality is more than 50% of all new instances at EC2 are hardware-assisted with the technology they bought for offload. So we actually offload storage and network processing via two PCIe cards that can go into any industry-standard server. Today we ship on Intel white boxes. >> So you're hyperconverged containers. >> We're hyperconverged containers. Yeah, exactly. >> So you're selling a box? >> We sell a box with software. But increasingly our customers are asking us to unbundle it, so not dissimilar from the sort of journey that Nutanix went through. If a customer wants to buy on Dell, we'll support Dell; if a customer wants to buy on Lenovo, we'll support Lenovo, and we'll just sell the software. >> Or have you unbundled yet? You're unbundling. >> We are actively taking orders for unbundling at the present time. In this quarter we have validated Dell and Lenovo as alternate platforms to the Intel. >> And subscription revenue on that? >> We do not yet, but that's the goal. >> That's what Nutanix struggled with, and then they had to take their medicine. >> They did. But, you know, they had to do that as a public company.
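Tom's CSI point above, that the platform plugs into Kubernetes through the standard Container Storage Interface rather than a proprietary hook, can be sketched from the application side. In a minimal sketch like the one below, the workload never names the storage vendor; it only asks for a StorageClass, and the cluster's CSI provisioner satisfies the claim. The StorageClass name is invented for the illustration; the manifest fields are standard Kubernetes v1.

```python
# Hypothetical illustration of consuming a vendor CSI driver via a
# StorageClass. "diamanti-mirrored" is an assumed class name for the
# sketch, not a documented product name.

def make_pvc(name: str, storage_class: str, size_gi: int) -> dict:
    """Build a Kubernetes PersistentVolumeClaim manifest as a plain dict.

    The PVC references a StorageClass by name; whichever CSI plug-in
    backs that class provisions the volume, so the app stays portable.
    """
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            "storageClassName": storage_class,
            "accessModes": ["ReadWriteOnce"],
            "resources": {"requests": {"storage": f"{size_gi}Gi"}},
        },
    }

pvc = make_pvc("orders-db-data", "diamanti-mirrored", 100)
print(pvc["spec"]["storageClassName"])  # diamanti-mirrored
```

The design point mirrors the interview: because the request goes through the CSI/StorageClass indirection, swapping the storage vendor changes cluster configuration, not application manifests.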
We're still a private company, so we can do that outside the limelight of the public markets. >> So I'm expecting that you guys are going to get, I won't say picked off, but certainly I think your doors are going to be knocked on by the big guys. Dell EMC, for instance: I think you said yes, you're doing business with Dell EMC. >> We are, as a channel partner and as an OEM partner with them at the present time. I wouldn't call them a customer. >> How do you look at VMware, and the VMware business impact? Gelsinger's on the record, it'll be on theCUBE, saying Kubernetes is the dial tone of the Internet. They're investing, they're doubling down on it. They bought Heptio for half a billion dollars. They're big in cloud native. We expect to see tons of cloud-native conversation at VMworld. Good or bad for you? What's the take? >> In a way, it legitimizes what we're doing, right? Obviously VMware is a large and successful company. That kind of legacy and presence in the data center isn't going to go anywhere overnight. There's a huge set of tooling and infrastructure that VMware has developed and offers to their customers. But that said, I think they've recognized, and their acquisition of Heptio is indicative of the fact, that they know the world's moving this way. I think at the end of the day it's going to be up to the customer, right? The customer is going to say, do I want to run containers inside of VMs? Do I want to run on bare metal? But importantly, because of the impact of the cloud providers in particular, if you think of the lingua franca of cloud native, it's going to be around the Docker image format, it's going to be around Kubernetes. It's not necessarily going to be around VMDK and VMX and ESX, right?
So these are all very good technologies, but I think increasingly it's the open standard and the open source community that win out. >> People run Kubernetes on switches directly; no need, right, to have anything else there. So I've got to ask you on the customer equation. You mentioned you're taking orders. How are you doing business today? Where are you winning? Give an example of why you're winning. And for anyone watching, how would they know if they should be a customer of yours? Are there any signs and signals inside the enterprise? You mentioned batch down to one hour; that's just music. A lot of financial services use it, for instance. They have timetables, and whether they're running backups or doing all kinds of things, timing's critical. What's the profile customer? Why would someone call you? What's the situation? >> The profile is heavy-duty production requirements to run container- and Kubernetes-based workloads on premises, in both a developer context and an operating context, that are compatible with the cloud. So increasingly our control plane makes it easy to manage workloads not just on premises but also back and forth to the public cloud. I would argue that essentially all Fortune 500 and Global 1000 companies are wrestling with the right way to implement industry-standard x86-based hardware on site that supports containers and Kubernetes and is cloud-compatible, right? So that is the number one question. >> So then I can buy a box, and/or software to put in my data center? >> Yes. >> And then have that operate with Amazon? >> Absolutely. >> Or Google? >> Which is the beauty of the Kubernetes standards, right? As long as you are Kubernetes-certified, which we are, you can develop and run any workload on our gear, on the cloud, on anyone else that's Kubernetes-certified, etcetera.
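The conformance argument above, that any Kubernetes-certified platform runs the same workload, can be illustrated with a toy sketch: one standard Deployment manifest, applied unchanged to several clusters that differ only by kubectl context. The context names and the image are invented for the example; the manifest fields are standard apps/v1.

```python
# One workload definition, many conformant clusters. Only the cluster
# context changes per provider; the manifest itself is untouched.
DEPLOYMENT = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "billing-api"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "billing-api"}},
        "template": {
            "metadata": {"labels": {"app": "billing-api"}},
            "spec": {
                "containers": [
                    {"name": "api", "image": "registry.example.com/billing:1.4"}
                ]
            },
        },
    },
}

# Hypothetical kubectl contexts: an on-prem appliance, EKS, and GKE.
CONTEXTS = ["onprem-az1", "eks-prod", "gke-prod"]

def rollout_commands(manifest: dict, contexts: list) -> list:
    # Portability comes from the shared Kubernetes API surface, not from
    # rewriting the workload per cloud.
    name = manifest["metadata"]["name"]
    return [f"kubectl --context {ctx} apply -f {name}.yaml" for ctx in contexts]

for cmd in rollout_commands(DEPLOYMENT, CONTEXTS):
    print(cmd)
```

The point of the sketch is that the per-cloud difference collapses to a `--context` flag; the workload definition is the interoperable artifact.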
So you know that there isn't any lock-in. >> Give an example of a workload that would be indicative. >> Well, I'll cite one customer. The reason I feel confident actually saying the name is that they sort of went public with us at the recent Gartner conference a week or so ago. The customer is Duke Energy. Very typical trajectory of a journey for a customer like this: a couple of years ago, they decided they wanted to refactor some legacy applications to make them more resilient to things like hurricanes and weather events, and the spikes in demand that are associated with that. And so they said, what's the right thing to do? And immediately they picked containers and Kubernetes. Then they went out and looked at five different vendors, and we were the only vendor that got their POC up and running in the required time frame and hit all five use-case scenarios they wanted, right? So they ended up refactoring core applications for how they manage power outages using containers and Kubernetes. >> A real production workload? Real production, or developing in a sandbox? >> Absolutely: in a sandbox, pushing into production, working. >> So it sounds like you guys are positioned to handle any workload. >> We can handle any workload, but I would say that where we shine is things that are transaction-intensive, because we have the hardware assist and the I/O offload for the storage and the networking. The most demanding applications, things like databases, things like analytics, things like messaging, Kafka and so forth, are where we're really going to stand out. >> Large-flow data. >> Absolutely, transactional data. >> We have customers that are doing simpler things like CI/CD, which at the end of the day involves compiling things and managing code bases. So we certainly have customers in less performance-intensive applications, but where nobody can really touch us is in performance.
What I mean is literally on the order of 10 to 30 times faster than something that Nutanix could do, for example. >> So you're saying you're 30 times faster than Nutanix? >> Absolutely, in transaction-intensive applications. >> Just to dig into this a little bit: when you sell a subscription, does the customer get the hardware assist on that as well? >> To date, we've always bundled everything together, so the customers have automatically gotten the hardware. >> So if I buy the software, I can load it on a machine. >> That's right. >> But does that machine give me the hardware assist? >> It will not, unless you have our two PCIe cards. And so, you know, we're just in the very early stages of negotiating with companies like Dell to make it easy for them to integrate our two PCIe cards into their server platforms. >> So the preferred flagship is the device. If they want the hardware assist, they still need it for the software to meet that intensity, right? And if they don't need to be 30 times faster than Nutanix, they can just get the software. >> Right, right. And that will involve our CSI plug-in, our CNI plug-in, our OS distribution, our Kubernetes distribution, and the control plane that manages Kubernetes clusters. >> It's been great to get the feature on a new company. Give a quick plug for the company. What are your objectives? What are you trying to do? You're probably hiring, getting some financing. Any news, anything you can share? >> We will be announcing some news about financing. I'm not prepared to announce that today, but we're in very good shape with respect to being funded for our growth. Consequently, we're now in growth mode. Today we're 55 people. I want to double that over the course of the next four quarters, and increasingly just build out our sales force. We didn't have a big enough sales force in North America.
We've got to establish a beachhead in India. We do have one large commercial banking customer in Europe right now, and we also have a large automotive manufacturer in APAC. But the total sales and marketing reach has been too low, so a huge focus of what I'm doing now is building out our go-to-market model and sort of 10x-ing the reach. >> Standing up a lot of field, going to market. How about on the biz-dev side? I might imagine, you mentioned Dell, that there's a large appetite for the hardware offload. >> Absolutely. So biz dev boils down to striking partnerships with the cloud providers, really on two fronts: both with respect to the hardware offload and assist, but also supporting their on-premises strategy. Google, for example, has announced Anthos; this is their approach to supporting on-premises Kubernetes workloads and how they interact with Google Cloud. As you can imagine, Microsoft and Amazon also have on-premises aspirations and strategies, and we want to support those as well. This goes well beyond something like Amazon Outposts, which is really a narrow-use-case point solution for certain markets. So cloud provider partnerships are very important. x86 server vendor partnerships are very important. And then major ISVs: we've announced some things with Red Hat. We were at Red Hat Summit in Boston a few months ago and announced our OpenShift project and product, which is now GA. We're also working with ISVs like MariaDB, MongoDB, Splunk and others. >> Sounds like a solid product and team. You feel good about the product? >> I feel very good about the product. >> What about the skeptics out there? Just to put the hard question to you: man, it's a crowded field. How are you going to compete? What are your chances? How do you like your chances in a very crowded field? You're going to rely on your fastball, as they say.
And on the speed. What's your thinking? >> Well, it's unique. And part of the proof point that I would cite there is the channel, right? When you go to the channel, and the channel is afraid that you're going to piss off Dell or EMC or NetApp or Nutanix or somebody, then they're not going to promote you. But our channel partners are promoting us, and I'm talking about companies like Lifeboat at the distribution level, companies like CDW, SHI, WWT. These major North American distributors and resellers have basically said, look, we have to put you in our line card because you're unique. There is no other purpose-built offering. >> And why is that? They get more services around it; they wrap services around it. >> They want to resell the hardware and wrap services around it, absolutely, and they want to do migrations from legacy environments toward microservices, etcetera. >> Great to have you on to share the company update. If you don't mind, a personal perspective: you've been on the hardware side, you've seen the large-scale data centers from Rackable, and that experience spans the software side, open source, too. What's your take on the industry right now? Because I talk to a lot of CISOs around the security space, and they all say, oh, multi-cloud is a bunch of BS, because I'm not going to split my development team between four clouds. I need my people building software stacks for my APIs, and then I go to the vendors: they support my APIs, or they can't be a supplier. Now that's on the CISO side. But the big megatrend is that there are software stacks being built inside the enterprise. That's not to say they didn't have developers before, building, you know, COBOL apps in the old days, mainframes to client-server apps.
But now you're seeing a renaissance of developers building stacks for the domain-specific applications that they need. I think that requires them to run an on-premise, hyperscale-like environment. What's your take on it? >> My take is that's absolutely right. There is more software-based innovation going on, so customers are deciding to write their own software in areas where they can differentiate, right? They're not going to do it in areas where they can get commodity solutions, from a SaaS standpoint or from other kinds of on-prem standpoints. But increasingly they are doing software development, and 99% of the time now they're choosing Docker and containers and Kubernetes as the way in which they're going to do it, because it will run either on-prem or in the cloud. I do think that multi-cloud management, or multi-cloud as a primary modality, is not a reality. What we see our customers choose is tons of on-premises resources, and that's going to continue for the foreseeable future, plus one preferred cloud provider, because it's simply too difficult to do more than one. But at the same time, they want an environment that will not allow them to be locked into that cloud vendor. So they want to potentially experiment with a second public cloud provider, or just make sure that they adhere to standards like Kubernetes that are universally shared, so that they can't be held hostage. But in practice, people don't. >> Or if they do have a multi-cloud side, it might be applications. Like if you're running Office 365, right, that's Microsoft. >> It could be, yes, exactly. >> One particular domain-specific cloud, but not core cloud. Or for backup, use Kubernetes as the bridge. Do you see that? I would agree with you on that, by the way. But the question we always ask is, we think Kubernetes is going to be that interoperability layer, the way TCP/IP was with IP networks, where you had this interoperability model. We think there will be a future state at some point where I could connect to Google, and use Microsoft, and use Amazon, together, but not... >> Right. And nobody's really doing that today. But I believe, and we believe, that there is a future world where a vendor-neutral player, neutral with respect to the public cloud providers, can offer a hybrid cloud control plane that manages and brokers workloads, both production as well as data protection and disaster recovery, across any arbitrary cloud vendor that you want to use. And so it's got to be an independent third party. You're never going to trust Amazon to broker a workload to Google. You're never going to trust Google to broker a workload to Microsoft. So it's not going to be one of the big three. And if you look at who it could be: it could be VMware Pivotal, now that's getting interesting. Cisco's got an interesting opportunity. Red Hat's got an interesting opportunity. But the number of companies that have the technical capability to develop a hybrid cloud abstraction spanning both on-premises and all three public clouds can be counted on one hand. >> And it's super early. We have to peg the inning on this one: first inning, obviously. Really early. >> Yeah, we like our odds, though, because the fundamental disruption here is containers and Kubernetes, the interest they're generating, and the desire on the part of customers to go to microservices. A ton of application refactoring and a ton of cloud-native application development is going on. And with that kind of disruption, you could say... >> You're targeting application refactoring that needs to run on a cloud operating model, on premises and in public. >> That's correct.
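Tom's "independent broker" idea can be sketched as a toy control-plane placement function: it picks a primary site by policy and a DR site elsewhere, and because every site speaks the same Kubernetes API, the broker decides only where a workload runs, never how it is packaged. All site names and attributes below are invented for the sketch.

```python
# Toy sketch of a vendor-neutral workload broker, under assumed site data.
SITES = {
    "onprem-dc1": {"kind": "on-prem", "kubernetes": True},
    "aws-us-east": {"kind": "public", "kubernetes": True},
    "gcp-us-central": {"kind": "public", "kubernetes": True},
}

def place(workload: str, prefer: str = "on-prem") -> dict:
    """Pick a primary site matching the preference and a DR site elsewhere.

    The broker stays neutral: any Kubernetes-conformant site is a valid
    target, so policy (preference, DR separation) is the only logic here.
    """
    k8s_sites = [name for name, s in SITES.items() if s["kubernetes"]]
    primary = next(n for n in k8s_sites if SITES[n]["kind"] == prefer)
    dr = next(n for n in k8s_sites if n != primary)
    return {"workload": workload, "primary": primary, "dr": dr}

plan = place("consumer-banking-db")
print(plan["primary"])  # onprem-dc1
```

The design choice mirrors the interview's claim: the hard part of such a control plane is policy and data protection across providers, not per-cloud packaging, since the Kubernetes API is the common denominator.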
In essence, Diamanti brings the cloud to the on-premises environment, right? So, for example, we're the only company that has the concept of on-premises availability zones. We have synchronous replication, where you can have multiple clusters that are synchronously replicated, so if one fails, the other takes over with no service disruption or loss of data, even for a stateful application. So it's cloud-like services that we're bringing on-prem, and then providing the links for both DR and data protection, and production workloads, to the public cloud. >> We'd love to unpack that with you guys. You might want to keep track of Kubernetes and stateful data; it's a whole other topic, since stateless data is easy to manage with APIs and services, but when you get state, that's when it gets interesting. Tom Barton, the new chief executive officer of Diamanti. How long had the company been around before you took over? >> About five years. Four years before me; I've been on board about a year. >> I'm looking forward to tracking your progress. We'll see you next week at VMworld. Tom Barton, CEO of Diamanti, here inside theCUBE. Hot startup. I'm John Furrier. >> Thanks for watching.

Published Date : Aug 22 2019



Varun Chhabra, Dell EMC & Muneyb Minhazuddin, VMware | Dell Technologies World 2019


 

>> Live from Las Vegas, it's theCUBE, covering Dell Technologies World 2019. Brought to you by Dell Technologies and its ecosystem partners. >> Welcome back to theCUBE's live coverage of Dell Technologies World here in Las Vegas. I'm your host, Rebecca Knight, along with my co-host Stu Miniman. We have two guests this segment, both Cube veterans. We have Varun Chhabra, VP of Product Marketing, Cloud, at Dell EMC, and Muneyb Minhazuddin, VP of Solutions Product Marketing at VMware. Thank you so much for coming on the show. >> Thanks for having us. >> So we just had the keynote address. We heard from Michael Dell, Satya Nadella, Pat Gelsinger: a real who's who of this ecosystem. Break it down for us. What did we hear? What is the most exciting thing from your perspective? >> So, Rebecca, what we hear from customers again and again is that it's a multi-cloud world, right? Everybody has multiple cloud deployments; we saw it mentioned that customer environments have five cloud architectures on average. And what we keep hearing from them is that operational silos develop, because of the different toolsets, the different machine formats. All of these things lead to a lot of operational silos and complexity, and customers are overwhelmingly asking Dell EMC as well as VMware: how do we reduce this complexity? How will we be able to move workloads together? How do we manage all of this in a common framework and reduce some of the complexity, so that we can really take advantage of the promise of multi-cloud? >> Yeah. So, theCUBE goes to all the big industry shows, and I feel like everywhere I go, it used to be, you know, Intel and NVIDIA up on stage for the next generation.
Well, for the last year it felt like, you know, Pat and Sanjay, or somebody like that, up on stage. At Google Cloud a couple of years ago, there was Sanjay up on stage; come here, there's Satya Nadella up on stage. So let's talk about that public cloud piece. We know the relationship with AWS: VMware Cloud on AWS sent ripples through the industry. And now there's the Google Cloud piece. So tell us what's new and different when it comes to the public cloud. How does that fit in relation to all the other clouds? >> Sure, and I'll amplify what Varun said, right? We think about customer choice first, and really, customer choice means, as you know, you've got multiple cloud providers. We've seen customers make this choice of, I need a multi-cloud world. Why are they going toward the multi-cloud world? Because applications are going there. And really, VMware's strategy has been to say, how do we empower customers with that choice? Our AWS partnership is as strong as ever, but we continue to build there. The first thing was choice of platform, and Pat alluded to this on stage: we have four thousand cloud provider partners, right? And the four thousand cloud provider partners we've built over the years include, you know, not small names. They include IBM, they include Rackspace: some of the biggest cloud providers. So our strategy has always been, how do we take our stack and land it in as many public clouds as possible? We took the first step with IBM, then, you know, about four thousand other cloud providers, being Rackspace, Fujitsu, Hitachi. Then came Amazon, Amazon being the destination of choice for a lot of public cloud workloads. Today we further extend that with Microsoft, and, a few weeks ago, with Google. So it's really about customer choice, and customers want the hybrid multi-cloud piece; it's obvious, right? You've got two worlds: you've got an existing application where you're just looking to get some scale out of that existing application, and you're building a lot of cloud-native applications. They want this in multiple places.
So this is really about customer choice, and customers want this hybrid multi-cloud piece to be app-driven, right? You've got two worlds: you've got an existing application where you're looking to just get some scale out of that existing application, and you're building a lot of, you know, cloud-native applications. They want these in multiple places. >> All right, so if I could just drill down one level deep. If I'm an Azure customer today, my understanding is it's the VMware SDDC stack. What does that mean? What do I use? How does that compare? Do I use Microsoft System Center? Am I using vCenter? >> Sure, and this is really, again, an app-driven conversation, right? There were multiple announcements here, so just to unpack them: first, we have the Dell Technologies Cloud Platform. The Dell Technologies Cloud Platform is powered by Dell EMC infrastructure with VMware Cloud Foundation on top, which slices your full compute, network and storage with vSphere, vSAN and NSX, plus management, right? And the second part was really, we've got VMware Cloud on Dell EMC. This brings back a lot of the workloads which sit in public clouds; we're seeing this repatriation of workloads back onto the data center or the edge. This is really driven by a lot of customers who have built native IP in the public cloud, be it Amazon, be it Azure, who now want to bring some of those workloads closer to the data center or the edge. Now this comes to, how do I take my Azure workloads and bring them closer to the edge or my data center? Why is that? You know, we have large customers, multinational. They have, you know, five hundred thousand employees, ninety locations worldwide.
They built IP, and when I say IP, applications, natively in the cloud. Suddenly, for five hundred thousand employees and ninety locations, the ingress-egress traffic to the public cloud is huge. How do I bring it closer to my data centers, right? And this is where taking those Azure workloads and bringing them on-prem, closer, solves that big problem for them. Now, how do I take those workloads and bring them closer? That's where we landed with VMware Cloud on Dell EMC infrastructure, because bringing this piece closer to the data center gives me lower latency, data governance and control, as well as the flexibility to bring these workloads back on, right? So there are two tangents driving this: your cloud growth, and the move back to the edge. The second tangent of growth, or explosion, is cloud-native workloads. Being able to bring them closer to your data center is really the value proposition, right? >> Well, we heard so much about that on the main stage this morning, about just how differently the modern workforce works in terms of the number of devices they use and the different locations they're in when they're doing the tasks of their job. Can you talk a little bit about the specifics in terms of customers you're working with? You don't need to name names, but just how you are enabling them? >> We get feedback from customers in all industries, right? So I can share a few as well. We have large banks that have standardized their workloads on VMware today, as have many organizations, and they're looking for the flexibility to be able to move stuff to the cloud or move it back on premises, and not have to reformat, not have to change the machine formats, and just make it a little easier. They want the flexibility to be able to run applications in their bank branches or in the cloud, right? But they don't necessarily want to adopt a new machine format or a new standardized platform.
That's really what the Azure announcement helps them do. Just like with AWS, they can now move workloads seamlessly to Azure, use vCenter, use the other tools that you're already familiar with today, to be able to provision their workloads. >> All right, Varun, I wonder if we could drill into the stack a little bit here. You know, I went to the Microsoft show last year, and it was like, oh, WSSD is very different than Azure Stack, even if you look at the box and it's very much the same underneath the covers. There was a lot of discussion of VxRail; we know how fast that's been growing. I believe there's two pieces to this: there's VCF on VxRail, and then, you know, just help explain. >> So the Dell Technologies Cloud Platform announcement, as you said, is VxRail HCI infrastructure with VMware Cloud Foundation tightly integrated, right, so that the storage, compute and networking capabilities of VMware Cloud Foundation are all incorporated and taken advantage of in the infrastructure. This is all about making things easier to consume, right, reducing the complexity for customers. When they get VxRail, they overwhelmingly tell us they want to use VMware Cloud Foundation to be able to manage and automate those workloads. So we're packaging this up out of the box, so when customers get it, they have that cloud experience on premises without the complexity of having to deploy it, because it's already integrated. The engineering teams have actually worked together. And then you can, as we mentioned, extend those workloads to the public cloud using the same tools, the same VMware Cloud Foundation tools. >> And, you know, we've built on Cloud Foundation for a while, and I'm sure you've followed us on Cloud Foundation. When we talk about consistent infrastructure and consistent operations in this hybrid cloud world, what we really mean is the VMware
Cloud Foundation stack, right? So when we talk about VMC on AWS, that's the Cloud Foundation stack running inside of Amazon. When we talk about our partnership with Azure, it's the VMware Cloud Foundation stack running on Azure. When we talk about those four thousand cloud-certified partners, including IBM, it is the Cloud Foundation stack, the key components being the full stack: vSphere, vSAN and NSX. And there's a critical part of Cloud Foundation called lifecycle management. It's missed quite easily, right? The benefit of running in a public cloud, the key attributes you get, is that you get everything as a service, you get all your infrastructure as software, and the third part is you don't spend any time maintaining the interoperability between your compute, network and storage. And that is a huge deal for customers. They spend a lot of time just maintaining this interop, and VMware Cloud Foundation has this lifecycle manager which solves that problem. >> Thank you for bringing that up, because, right, that's one of the big differences. You talk about public cloud: go talk to your customer and say, hey, what version of Microsoft Azure are you running, and they'll laugh at you and say, well, Microsoft takes care of that. Well, when I differentiate and I say, okay, I want to run the same stack in my environment, how do I keep that up to date? We know with VMware, you know, customers have lots of incentives to get there, but oftentimes they're at n-minus-one or -two or something like that. So how do we manage and make sure that it's more cloud-like today? >> Yeah, absolutely. So there are two ways to do that. One of them is, because the VMware and Dell EMC teams are working closely together on engineering, we're going to have the latest version supported right out of the gate. So when you have an update, you know that it's going to work on your hardware, or vice versa.
So that's one level, and then with VMware Cloud on Dell EMC we're also providing the ability to basically have hands-off management, and have that infrastructure running in your data center or your edge locations, but at the same time not have to manage it. You leave that management to Dell Technologies and VMware, and having us manage that solution for you is really, as Muneyb said, bringing that public cloud experience to your on-premises locations. >> And I think that's one of the big differentiators that's going to come, right? People want to get to that consumption model, and they're trying to say, hey, how do I build my own data center and maintain it, but at the same time rely on Dell and VMware to come and help us build it together, right? And the second part of the announcement was really VMware Cloud on Dell EMC, that managed offer. The demo you saw from June Yang was being able to have a consumption interface where you could, at the click of a button, roll it back into a data center as well as an edge, because you have really very little skill set available in the edge environment. And as edge compute needs become more prolific with 5G and IoT devices, you need that same kind of data governance model and data center model there as well. And that's really the beauty of coming to VMware and Dell: Dell EMC and Dell Technologies' power is to maintain that everywhere, right? >> I want to ask you about innovation, one of the things that's really striking here. Even though you obviously have your own customers... >> I think it really comes down to listening to customers, right? As Dell Technologies and VMware, we have the advantage of working with so many customers, hundreds of thousands of customers around the world. We get to hear and listen and understand: what are the cutting-edge things that customers are looking for?
And then we can take that back to customers like Bank of America, who may have thought about certain scenarios, right, that we will learn from, but who may not have thought about things from other industries that could be applicable to their strategy. So that drives a lot of our innovation. We are very proud of the fact that we're customer-focused. Our innovation is really driven by listening to customers, and, you know, having smart people work on these problems. >> And, you know, customer voice is a big deal, customer choice. That's why we're doing what we're doing with multiple cloud providers, right? And I think this is really key too. If you just look at VMware's innovation, we were already talking about this multi-cloud world, where it was like, hey, you've got workloads running natively, so how do you manage those? We were already ahead in thinking about Kubernetes with the acquisition of Heptio. And if you think about it, we've done this innovation in the cloud space, established this hybrid credibility, and we've launched with Dell Technologies. Now we're already ahead in this multi-cloud operational model, we're already ahead in this Kubernetes evolution, and we'll bring it back with the family and listen to the customers for choice, because at the end of the day we're here to solve customer problems. >> I think that's another dimension of choice that we offer, which is both traditional applications as well as applications of the future that will increasingly become container-based. >> Yeah, I just wonder if you could expand on that a little bit. You know, one of the things, I said VMware is great, it really simplified my environment. I go back fifteen years; one of the things it did is let me take my old application that was probably long in the tooth.
To begin with, my hardware was out of date, my operating system had aged; stick it in a VM and leave it for another five years, and the users are like, oh my gosh, I need an update. How do we get beyond that and allow this joint solution to be an accelerant for applications? >> Yeah, and I think, you know, the application is probably the crux of the business, right? Applications have evolved, and this is actually the evolution journey of IT itself: applications used to be support systems; now they actually translate to business dollars, because, you know, the first thing that your customer touches is an application, and you can drive business value from it. And customers are thinking about these old applications and new applications, and they have to start thinking about, where do I take my applications? Where do they need to land? And then make a choice of what infrastructure is the best platform for them. So really, flip the thing around: don't think infrastructure-first and then retrofit apps to it. Think apps-first, and then make a choice on infrastructure based on the application need. And really, like you said, VMware took the abstraction layer away from infrastructure and made sure that your VMs could run everywhere. We're taking the same approach for applications, to say it doesn't matter if it's VM-based or cloud-native, we'll give you the same consistent infrastructure and operations. >> Okay, one last thing: could you just tell us, of the announcements that were made, what's available today and what's coming later this year? >> Absolutely. So the Dell Technologies Cloud Platform, that's based on VxRail and VMware Cloud Foundation, is available now as an integrated solution. VMware Cloud on Dell EMC, the fully managed offer, is available in the second half of this year. It's in beta right now, and as you saw, we have really good feedback from our customers.
And then I think the Azure VMware Solutions offer will be available soon as well. >> All right, well, Varun and Muneyb, congratulations on the progress. We look forward to talking to the customers as they roll this out. Rebecca and I will be back with lots more coverage here at Dell Technologies World 2019. Two sets, three days, tenth year of theCUBE at Dell EMC and Dell World. I'm Stu Miniman, and thanks so much for watching.

Published Date : Apr 29 2019

Caryn Woodruff, IBM & Ritesh Arora, HCL Technologies | IBM CDO Summit Spring 2018


 

>> Announcer: Live from downtown San Francisco, it's theCUBE, covering the IBM Chief Data Officer Strategy Summit 2018. Brought to you by IBM. >> Welcome back to San Francisco, everybody. We're at the Parc 55 in Union Square, and this is theCUBE, the leader in live tech coverage, and we're covering exclusive coverage of the IBM CDO Strategy Summit. IBM books these events on both coasts, one in San Francisco, one in Boston, spring and fall. Great event, intimate event: 130, 150 chief data officers learning, transferring knowledge, sharing ideas. Caryn Woodruff is here as a principal data scientist at IBM, and she's joined by Ritesh Arora, who is the director of digital analytics at HCL Technologies. Folks, welcome to theCUBE, thanks for coming on. >> Thank you. >> Thanks for having us. >> You're welcome. So we're going to talk about data management and data engineering, and we're going to talk about digital, as I said, Ritesh, because digital is in your title. It's a hot topic today. But Caryn, let's start off with you. Principal Data Scientist, so you're the one that is in short supply. A lot of demand, so you're getting pulled in a lot of different directions. But talk about your role and how you manage all those demands on your time. >> Well, you know, a lot of our work is driven by business needs, so it's really understanding what is critical to the business, what's going to support our business strategy, and, you know, picking the projects that we work on based on those items.
So you really do have to cultivate the things that you spend your time on and make sure you're spending your time on the things that matter. And as Ritesh and I were talking about earlier, you know, a lot of that means building good relationships with the people who manage the systems and the people who manage the data, so that you can get access to what you need to get the critical insights that the business needs. >> So Ritesh, data management, I mean, this means a lot of things to a lot of people. It's evolved over the years. Help us frame what data management is in this day and age. >> Sure. So there are two aspects of data, in my opinion. One is data management, the other is data engineering, right? And over time the data has grown significantly, whether it's unstructured data, structured data, or transactional data. We need to have some kind of governance and policies to secure data and to make data an asset for the company, so the business can rely on your data, on what you are delivering to them. Now, the other part is data engineering. Data engineering is more of an IT function, which is data acquisition, data preparation, and delivering the data to the end user, right? It can be the business, it can be a third party, but it all comes under the governance, under the policies, which are designed to secure the data and to govern how the data should be accessed by different parts of the company or by external parties. >> And how do those two worlds come together? The business piece and the IT piece, is that where you come in? >> That is where data science definitely comes into the picture. So if you go online, you can find Venn diagrams that describe data science as a combination of computer science, math and statistics, and business acumen. And where it comes together in the middle is data science. So it's really being able to put those things together.
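Ritesh's split of data engineering into acquisition, preparation, and delivery can be sketched as a tiny pipeline. This is a hypothetical illustration in plain Python: the source data, field names, and cleaning rules are invented, and a real pipeline would use dedicated integration tooling.

```python
import csv
import io

# Hypothetical raw extract from one source system (fields and values are invented).
RAW = """id,name,revenue
1, Acme Corp ,1200
2,Beta LLC,
3,Acme Corp,900
"""

def acquire(text):
    # Acquisition: read raw records from the source.
    return list(csv.DictReader(io.StringIO(text)))

def prepare(rows):
    # Preparation: trim whitespace, fill missing values, cast types.
    cleaned = []
    for r in rows:
        cleaned.append({
            "id": int(r["id"]),
            "name": r["name"].strip(),
            "revenue": int(r["revenue"]) if r["revenue"] else 0,
        })
    return cleaned

def deliver(rows):
    # Delivery: aggregate into the shape the business consumes.
    totals = {}
    for r in rows:
        totals[r["name"]] = totals.get(r["name"], 0) + r["revenue"]
    return totals

print(deliver(prepare(acquire(RAW))))  # {'Acme Corp': 2100, 'Beta LLC': 0}
```

The governance policies Ritesh mentions would sit around each of these stages, controlling who may read the raw extract and who may see the delivered aggregate.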
But, you know, what's so critical is, you know, Inderpal actually shared, at the beginning here and I think a few years ago here, the five pillars to building a data strategy. And, you know, one of those things is use cases: getting out, picking a need, solving it, and then going from there. And along the way you realize what systems are critical, what data you need, who the business users are, you know, what it would take to scale that. So these proof-point projects, you know, eventually turn into these bigger things, and for them to turn into bigger things you've got to have that partnership. You've got to know where your trusted data is, you've got to know how it got there, who can touch it, how frequently it is updated. Just being able to really understand that, and work with the partners that manage the infrastructure, so that you can leverage it and make it available to other people, and transparent. >> I remember when I first interviewed Hilary Mason way back when, and I was asking her about that Venn diagram, and she threw in another one, which was data hacking. >> Caryn: Uh-huh, yeah. >> Well, talk about that. You've got to be curious about data. You need to, you know, take a bath in data. >> (laughs) Yes, yes. I mean, yeah, you really... Sometimes you have to be a detective, and you have to really want to know more. And, I mean, understanding the data is the majority of the battle. >> So Ritesh, we were talking off-camera about how titles change, things evolve: data, digital. They're kind of interchangeable these days. I mean, we always say the difference between a business and a digital business is how they use data. And so, digital being part of your role, everybody's trying to get digital transformation right. As an SI, you guys are at the heart of it, certainly IBM as well. What kinds of questions are clients asking you about digital?
>> So ultimately, whatever we derive from data is used by the business side. We are always trying to solve a business problem, which is to fix the issues the company is facing, or to try to generate more revenue, right? Now, digital and data have been married together, right? Earlier, you could say we were trying to analyze the data to get more insights into what is happening in the company. Then we came up with predictive modeling: based on the data that we collect, how can we predict different scenarios, right? Now, with digital, over the last 10 or 20 years, as the data has grown, different sources of data have come into the picture; we are talking about social media and so on, right? And nobody is looking for just reports out of Excel, right? It is more about how you are presenting the data to senior management, to the entire world, and how easily they can understand it. That's where data digitization, as well as application digitization, comes into the picture. So the tools developed over this period offer better visualization and better understanding. How can we integrate annotation within the data? These are all different aspects of digitization of data, and we try to integrate digital concepts within our data and analytics, right? So I grew up as a data engineer, an analytics engineer, but now I'm looking beyond just the data or the data preparation. It's more about presenting the data to the end user and the business, so that it is easy for them to understand. >> Okay, I've got to ask you, so you guys are data wonks. I am too, kind of, but I'm not as skilled as you are, and I say that with all due respect. I mean, you love data. >> Caryn: Yes. >> As data science becomes a more critical skill within organizations, we always talk about the amount of data, data growth; the stats are mind-boggling.
But as a data scientist, do you feel like you have access to the right data, and how much of a challenge is that with clients? >> So we do have access to the data, but the challenge is, the company has so many systems, right? It's not just one or two applications. There are companies where we have 50 or 60 or even hundreds of applications built over the last 20 years. And there are some applications which are basically duplicates, which replicate the data. Now, the challenge is to integrate the data from different systems, because they maintain different metadata, and the quality of the data is a concern. And sometimes, with international companies, the rules, for example in the US or India or China, the data regulations are different, right? And as you become more global, you try to integrate the data across boundaries, which sometimes becomes a compliance issue as well, beyond the technical issues of data integration. >> Any thoughts on that? >> Yeah, I think, you know, one of the other issues too: you've heard of shadow IT, where people have, like, servers squirreled away under their desks. There's also shadow data, where people have spreadsheets and databases that, you know, they're storing on a small server, or that they share within their department. And so, you know, we were talking earlier about the different systems. You might have a name in one system that's one way, and a name in another system that's slightly different, and then a third system where it's different again and there's extra granularity to it or some extra twist. And so you really have to work with all of the people that own these processes and figure out, what's the trusted source? What can we all agree on? So there's a lot of... It's funny, a lot of the data problems are people problems.
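The "same name, spelled differently in every system" problem Caryn describes is classic record linkage. Below is a toy sketch of one common approach using only the standard library: normalize each name, then compare with a similarity ratio. The example names and the 0.9 threshold are made up for illustration; production matching is far more involved.

```python
from difflib import SequenceMatcher

# Hypothetical spellings of one customer name across three systems.
system_a = "International Business Machines Corp."
system_b = "INTERNATIONAL BUSINESS MACHINES CORPORATION"
system_c = "IBM Corp"

def normalize(name):
    # Lowercase, drop punctuation, collapse whitespace.
    cleaned = "".join(c if c.isalnum() or c.isspace() else " " for c in name.lower())
    return " ".join(cleaned.split())

def similarity(a, b):
    # Similarity ratio in [0, 1] between the normalized names.
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

# Pairs above a (made-up) threshold get flagged as candidate matches for review.
print(similarity(system_a, system_b) > 0.9)  # True: near-identical once normalized
print(similarity(system_a, system_c) > 0.9)  # False: abbreviations need more than string similarity
```

The second comparison is the interesting one: no string metric will equate "IBM" with the full legal name, which is exactly why the humans who own those systems have to agree on a trusted source, as Caryn says.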
So it's getting people to talk and getting people to agree on, well, this is why I need it this way, and this is why I need it that way, and figuring out how you come to a common solution, so you can create those single trusted sources that everybody can go to, where everybody knows they're working with the right thing, the same thing that they all agree on. >> The politics of it, and, I mean, politics is kind of a pejorative word, but let's say dissonance, where you have maybe a back-end system, a financial system, and the CFO, he or she is looking at the data saying, oh, this is what the data says. And then... I was talking recently to a chef in a restaurant who said, the CFO saw this, but I know that's not the case; I don't have the data to prove it, so I'm going to go get the data. And then, as they collect that data, they bring it together. So I guess in some ways you guys are mediators. >> [Caryn and Ritesh] Yes, yes. Absolutely. >> 'Cause the data doesn't lie, you just have to understand it. >> You have to ask the right questions. Yes. >> And sometimes when you see the data, you don't even know what questions you want to ask until you see it. Is that a challenge for your clients? >> Caryn: Yes, all the time. Yeah. >> So okay, what else do we want to talk about? The state of collaboration, let's say, between the data scientists, the data engineers, the quality engineers, maybe even the application developers. As John Furrier, my co-host and business partner, often says, data is the new development kit. Give me the data and I'll, you know, write some code and create an application. So how about collaboration amongst those roles? Is that something... I know IBM's going on about some products there, but to your point, Caryn, a lot of times it's the people. >> It is. >> And the culture. What are you seeing in terms of evolution and maturity of that challenge?
You know, I have a very good friend who likes to say that data science is a team sport. And so, you know, these should not be solo projects where just one person is wading up to their elbows in data. This should be something where you've got engineers and scientists and business people coming together to really work through it as a team, because everybody brings really different strengths to the table, and it takes a lot of smart brains to figure out some of these really complicated things. >> I completely agree. Because we see the challenges; we are always trying to solve a business problem. It's important to marry IT with the business side. We have the technical experts, but in IT we don't have the domain experts, the subject matter experts who know the business, right? So it's very, very important to collaborate closely with the business, right? And the data scientist is an intermediate layer between IT and the business, I would say, right? Because as data scientists analyze the information over the years, they understand the business better, right? And they need to collaborate with IT to improve the quality; those are the kinds of challenges they are facing. And the data engineer has to work very hard to make sure the data delivered to the data scientist or the business is as accurate as possible, because wrong data will lead to wrong predictions, right? And ultimately we need to make sure that we integrate the data in the right way. >> What's the different cultural dynamic versus, say, ten years ago, where you'd go to a statistician, she'd fire up SPSS... >> Caryn: We still use that. >> I'm sure you still do, but run some chi-squares, give me some, you know, probabilities, and maybe run some Monte Carlo simulations. But one person kind of doing all that, to your point, Caryn. >> Well, you know, it's interesting.
There are some students I mentor at a local university, and, you know, we've been talking about the projects that they get, and, you know, more often than not they get a nice clean dataset to go practice learning their modeling on, you know? They don't have to get in there and clean it all up and normalize the fields and look for some crazy skew or null values, or, you know, where you've just got so much noise that needs to be reduced into something more manageable. And so, you know, you made the point earlier about understanding the data. It really is important to be very curious and ask those tough questions and understand what you're dealing with before you really start jumping in and building a bunch of models. >> Let me add another point about the way we have changed over the last ten years, especially from the technical point of view. Ten years back, nobody talked about real-time data analysis. There were no streaming applications as such. Now nobody talks about batch analysis, right? Everybody wants data on a real-time basis, or if not real-time, near-real-time. That has become a challenge. And it's not just the predictions, which happen in the ERP environment or in the cloud; they want real-time integration with social media for marketing and sales, so they can immediately run a campaign, right? So, for example, if I go to Google and I search for any product, for example a pressure cooker, right? Then I go to Facebook, and immediately, within two minutes, I see the ad. >> Yeah, they're retargeting. >> So that real-time analytics is happening across different applications, including the third-party data which is coming from social media. So that has become a good source of data, but it has become a challenge for the data analyst and the data scientist: how quickly we can turn around that data analysis.
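Ritesh's batch-versus-real-time point boils down to computing aggregates incrementally as events arrive instead of over a nightly extract. Here is a minimal sliding-window counter sketch; the event times, keys, and 60-second window are invented for illustration, and a real deployment would use a streaming engine rather than hand-rolled code.

```python
from collections import deque

class SlidingWindowCounter:
    """Count events per key over the last `window` seconds, updated as each event arrives."""

    def __init__(self, window):
        self.window = window
        self.events = deque()   # (timestamp, key), in arrival order
        self.counts = {}

    def add(self, ts, key):
        self.events.append((ts, key))
        self.counts[key] = self.counts.get(key, 0) + 1
        # Evict events strictly older than the window.
        while self.events and self.events[0][0] < ts - self.window:
            _, old_key = self.events.popleft()
            self.counts[old_key] -= 1
            if self.counts[old_key] == 0:
                del self.counts[old_key]
        return dict(self.counts)

w = SlidingWindowCounter(window=60)
w.add(0, "pressure-cooker")
w.add(30, "pressure-cooker")
print(w.add(90, "toaster"))  # {'pressure-cooker': 1, 'toaster': 1}: the ts=0 event has aged out
```

The contrast with batch is that the count is correct after every single event, which is what makes the two-minute retargeting Ritesh describes possible.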
>> Because it used to be you would get ads for a pressure cooker for months, even after you bought the pressure cooker, and now it's only a few days, right? >> Ritesh: It's a minute. You close this application, you log into Facebook... >> Oh, no doubt. >> Ritesh: An ad is there. >> Caryn: There it is. >> Ritesh: Because everything is linked, either your phone number or email ID, you're done. >> It's interesting. We talked about disruption a lot. I wonder if that whole model is going to get disrupted in a new way because everybody started using the same ad. >> So that's a big change of our last 10 years. >> Do you think... oh, go ahead. >> Oh no, I was just going to say, you know, another thing is just there's so much that is available to everybody now, you know. There's not this small little set of tools that's restricted to people that are in these very specific jobs. But with open source and with so many software-as-a-service products that are out there, anybody can go out and get an account and just start, you know, practicing or playing or joining a Kaggle competition or, you know, start getting their hands on... There are data sets out there that you can just download to practice and learn on and use. So, you know, it's much more open, I think, than it used to be. >> Yeah, community editions of software, open data. The number of open data sources just keeps growing. Do you think that machine intelligence can, or how can machine intelligence help with this data quality challenge? >> I think that it's always going to require people, you know? There's always going to be a need for people to train the machines on how to interpret the data. How to classify it, how to tag it. There's actually a really good article in Popular Science this month about a woman who was training a machine on fake news and, you know, it did a really nice job of finding some of the same claims that she did. But she found a few more.
So, you know, I think, on one hand we have machines that we can augment with data, and they can help us make better decisions or sift through large volumes of data, but then we're teaching the machines to classify the data, or to help us with metadata classification, for example, or, you know, to help us clean it. I think that it's going to be a while before we get to the point where that's the inverse. >> Right, so in that example you gave, the human actually did a better job than the machine. Now, it's amazing to me how... what machines couldn't do that humans could, you know, last year, and all of a sudden, you know, they can. It wasn't long ago that robots couldn't climb stairs. >> And now they can. >> And now they can. >> It's really creepy. >> I think the difference now is, earlier, you know, you knew that there was an issue in the data, but you didn't know how much data was corrupt or wrong, right? Now, there are tools available, and they're very sophisticated tools. They can pinpoint and provide you the percentage of accuracy, right? On different categories of data that you come across, right? Even forget about the structured data. Even when you talk about unstructured data, the data which comes from social media or the comments and the remarks that you log or are logged by the customer service representative, there are very sophisticated text analytics tools available, which can talk very accurately about the data as well as the personality of the person who's giving that information. >> Tough problems, but it seems like we're making progress. All you've got to do is look at fraud detection as an example. Folks, thanks very much... >> Thank you. >> Thank you very much. >> ...for sharing your insight. You're very welcome. Alright, keep it right there everybody. We're live from the IBM CTO conference in San Francisco. Be right back, you're watching theCUBE. (electronic music)
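The Monte Carlo simulation mentioned in the conversation above is easy to sketch. This is an illustrative example, not something from the interview — the function name and numbers are my own. It estimates the probability that two dice sum to ten or more by repeated random sampling and compares the estimate to the exact value:

```python
import random

def monte_carlo_prob(trials, seed=42):
    """Estimate P(sum of two dice >= 10) by repeated random sampling."""
    rng = random.Random(seed)  # seeded so the run is reproducible
    hits = 0
    for _ in range(trials):
        roll = rng.randint(1, 6) + rng.randint(1, 6)
        if roll >= 10:
            hits += 1
    return hits / trials

# Exact answer: sums of 10, 11, 12 have 3 + 2 + 1 favorable outcomes out of 36.
exact = 6 / 36
estimate = monte_carlo_prob(100_000)
print(f"estimate={estimate:.4f}, exact={exact:.4f}")
```

With enough trials the estimate converges on the exact value; the point of the technique is that the same resampling idea still works for problems where no closed-form answer exists.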

Published Date : May 2 2018



Chad Anderson, Chris Wegmann & Steven Jones | AWS Summit SF 2018


 

>> Announcer: Live from the Moscone Center, it's theCUBE, covering AWS Summit San Francisco 2018. Brought to you by Amazon Web Services. >> Welcome back, this is theCUBE's coverage of AWS Summit San Francisco, here at the Moscone Center West. I'm Stu Miniman, happy to have a distinguished panel of guests on the program. Starting down at the far side, Steven Jones, who's the Director of Solution Architecture with AWS; helping us talk about how AWS gets to market is Chris Wegmann, Managing Director at Accenture; and then I'm super excited to have a customer on the program, Chad Anderson, the IT Director of Operations at Del Monte Foods. Gentlemen, thank you so much for joining us. >> Thanks for having us. >> Alright Chad, we're going to start with you, talk to us a little bit about your role inside Del Monte and really the journey to the cloud, something we've been talking about for years, but Del Monte has an interesting story. I want to kind of understand your role in that. Start us off. >> Ya, so I oversaw the project for us to migrate everything to AWS. We started off with just needing to really understand if we were missing something here. Like, shouldn't we be moving to the cloud? And that ended up in a study where we just kind of went through the numbers, we looked at what the benefits were going to be, and it kind of just turned into an obvious choice for us to do it. >> Back us up for a second, give us, you know, your organization, Del Monte Foods, and your technology group. Is this global in scope, kind of how many end users do you have? How many sites? Can you give us a little bit of the speeds and feeds of what was being considered, was it everything or some pieces, what was the impetus for the journey to the cloud? >> Ya, so we have about a thousand users. Globally we are mostly in Manila, for our global shared services; our business back office work is done there, and then most of it is a U.S.
footprint of plants and distribution centers and headquarters, et cetera, operations. >> Alright, so Chris, the SI partner for this cloud journey. So bring us a little bit of insight, bring us back to, you know, kind of what was the business challenge and what was your team's role in helping along that journey? >> The business challenge was getting Del Monte, getting the heart of their organization, SAP, to AWS quickly. Alright, there was a short time frame. I learned a lot about fruit packing during the project, but it was about how quickly could we get there? So, when we actually started, we started looking at taking seven months to do the migration of their environment. We really got into it and really got focused on what needed to be done. We looked at a lot of automation, put a lot of automation on the process, a very diligent approach, and we were able to do it. We thought we could do it in four months, and we did it in three and a half months, so very rapid, and I think as Chad will tell you, we really kind of focused on building the right architecture, putting in a lot of automation, and then also getting it in there with the right performance, and then being able to tune things down, because you can move so quickly between engine sizes and memory, and it was a really, really exciting process to go through. >> Ya, so you said originally you thought it was seven months, and it was done in half that time. That's not my experience with enterprise software rollouts. So, what was the delta there? How was the team able to move so fast? >> A lot of it was obviously AWS, being able to spin up the infrastructure, being able to automate a lot of the tasks that had to be done. Alright, we did it through three different environment sets.
So we started in dev, moved to test, then went to production, and in each step we automated more and more of the process, so we were able to condense the technical work that had to take place into a really short amount of time. >> We had to treat it also like a mission-critical thing; it wasn't just an infrastructure move, really the application guys were focused on this, we stopped all development of other activities going on. We really just kind of turned everybody and said, "Let's get this done as soon as possible and not be competing with each other." >> When you say stop everything, of course the business didn't stop, but was the transition pretty seamless? >> I mean other projects. >> Ya, ya, ya, I understand, but I mean from the cutover and from your users' standpoint, did it go pretty smoothly? >> Oh definitely, these guys did an amazing job of putting together a plan that was really ready to be executed against. It took some, it took a lot of, I mean on my part it was really just to negotiate the extended maintenance window, but the best compliment I ever got was people were like, what did you do? Like, I didn't even know that you guys did anything. From day one they took it and ran with it and we were stable. I mean, it was pretty awesome. >> A black box, magic happens here, and all of a sudden everything is running faster, scaling easier, cost is better, some of those types of things? >> Ya, cocktails and beach time. >> Steve, cocktails? I didn't realize that when I moved my enterprise application to the cloud, cocktails were involved. >> A few cocktails are involved. >> I mean look, I remember a few years ago when it was like, well, it's your development you'll do in the cloud, but I mean SAP has really embraced cloud full bore and, you know, is a very strong partner, but bring us up to how AWS helps customers make sure that these critical things running the business run so smoothly. What have you learned along the way?
What is different in 2018 than, say, even a year or two ago? >> A lot of great questions in there, Stu. I would say this has become the new normal. Right? It used to be, full disclosure, dev, test, training-type workloads in the early days, but over the course of years we have taken a lot of learnings with partners like Accenture and customers like Del Monte, and we've taken those learnings and put them back into the platform, so what you see today is a platform that a partner like Accenture could come in and build a lot of automation tooling around, to reduce the time frame from seven months down to three and a half. I think it was around two hundred servers, 50 of those were SAP related, and 25 terabytes of data that were moved in a short amount of time. So it's a combination of years' worth of effort to build a platform that is scalable, resilient, and flexible, as well as the work that we have done directly with SAP that has gone right back into the platform. >> Chad, bring us inside kind of the operations on your team. What is the before and after? What's it look like? Was there a change in personnel or roles or skills? >> We transitioned services with our migration. So the Accenture team has taken over the long-term operational activities as well as helping us through the migration efforts. We had a lot of preparation that was going on besides the server migration that was happening, and I think what is really unique about them is because they can deliver these capabilities of the migration, they have got a lot of the tooling, and the automation is built into the operational managed services model as well. So it's been a much easier kind of handover from those teams because we are working with the same vendor. >> Most of the time it's not just that I've migrated from my environment to the cloud, but how does that enable new services, either from Accenture, from AWS, from the marketplace? What has changed as to how you look at your SAP environment, kind of capability-wise?
>> It's just incredibly flexible now. It's just one of those situations where we can start small and we can scale so rapidly. It's like walking into a fast food restaurant and just, oh, I'll take one of these, one of these, and one of these. You wait there and the food comes out; it just happens automatically. So, it's a great thing. >> Chris, I remember I interviewed a CEO a few years ago, and he said, it used to be, give me a million dollars and 18 months and I'll build you the Taj Mahal for my applications. Today I need to move faster, and it's not a one-time migration, there's ongoing work, I've heard it time and again, so where does Accenture... it's not just the planning, where is Accenture involved? >> We go end to end. Right? So, we start out with strategy, we start out with a migration. The migration takes planning and execution, but really we focus on the run area as well, using the Accenture platform and tooling that we have built. We really focus on how do you continue to optimize? How do you continue to improve performance? How do you govern? How do you do things like quota and security management and that type of stuff? I do think that a lot of our customers start with cloud thinking, I can spin this stuff up, I can run it just like I ran my on-premise data center, and it's not the same. You go from a capacity planning person to a cost management person. You need to have a cloud architect understanding how you build your applications to be cloud ready and AWS ready. There are a lot of great services, but if you're not taking advantage of those services, you can't auto-scale, you can't do that stuff. So, we really help our clients go through that entire process and make sure they're getting the most value out of AWS all the way through the run, for many years after they have done the migration.
>> Chad, how are you reporting back to the business as to what were the hero numbers or success factors that said, hey, this was actually the right thing to do? >> Ya, I mean we're a canned food company, so people are very interested in making sure that we are keeping our costs low. Most people from a business perspective want to talk to me about the efficiencies that they're seeing and how that's going to show up as a reduction in SG&A. We have seen it; I mean, when you move to a group of people that can manage a larger set of infrastructure with a smaller group of people, and the underlying services can be turned on and off so you only utilize what you really, absolutely need, those numbers show up on our bottom line. >> Steve, anything similar, what do you hear from customers when it comes to SAP, what is the main driver, and what are the big hero things? >> So in the early days, it was all about cost, right, driving cost out of the system. Now it's the flexibility, the ability to move quicker. Chad was relating earlier how you would spend a lot of time sizing environments, and now they're actually able to right-size their environments using purpose-built equipment that AWS has built for SAP. It's enabled them to actually reduce cost and move quicker. That's the common theme we are hearing these days. It's okay to move faster, to maybe not worry about sizing as much as we used to. >> Ya, for future initiatives, I mean, there's all these windows of time that are just gone for us to stand up new services, whether it's a traditional application that needs servers and compute, whether it's SAP services. We are kind of all on that platform now, where we can just click and plug in items much more easily. >> Chad, what do things like digital transformation and innovation mean to a canned food company? >> We are desperately trying to get in touch with our consumer.
So, whether we're figuring out how to improve kind of how we are managing our digital assets, how we're managing our pages on Amazon, or our pages on Walmart.com, we need to be much more in touch and much more consumer focused, and a lot of these newer technologies, et cetera, are built to run on AWS, and we're ready to kind of integrate that into our existing enterprise environment. >> Innovation has been a big part of our customers' reason for moving to cloud. I'd say 18 months ago, we saw a big transition in our enterprise customers; a lot of them were starting off with cost savings, operational savings, just overall improvement of their operations, and then about 18 months ago we saw a big shift of people very much focused on innovation and using the AWS platform as that catalyst for innovation. So, the business is asking for Alexa apps, they're asking for the integration. Well, the SAP data has to be there to support that stuff, right, and your enterprise tech has to be there, so by doing that, it's enabled a lot of innovation in our processes. >> Chad, last question: when you talk about innovation, are there certain areas that your team's investing in? Is it AI, is it IoT, you know, what are some of the areas that you think will be the most promising, and how do Accenture and AWS fit into those from your planning? >> Ya, I mean, IoT is definitely an interesting area for us, and getting to a point where we can measure our effectiveness in our manufacturing processes. Those are all initiatives that we're starting to focus on now that we've kind of gotten some of the infrastructure-related stuff done and we're ready to kind of build out those platforms. When we're talking about scaling out our OE software and our infrastructure, it's just such an easier conversation to kind of plan for those activities.
We turned a three-month sizing exercise, as to how much IoT data we think we're going to have to process through these engines, into a, hey, let's go with this, and if it doesn't work then we'll take it out and increase the size. It really helps us deliver new capabilities and new ways of measuring, helping our business run in a much more effective and efficient way. >> Anything that you've learned along the way where you've turned to peers and said, "Here's something I did, maybe do it faster or do it a little bit of a different way?" >> I think Accenture has been an amazing partner. I think a lot of people are skeptical about running their entire enterprise across the network, and once you kind of bring them in and you really let them look under the covers of what you have... One of the reasons we went with them was just the trust and confidence that they had that we could do this. Once I kind of saw that, it was like, well, I mean, let's trust the process here. I mean, these guys are the experts, and so that's been a big thing, is just reach out, learn about what people are doing. There's no reason why you can't do this. >> Well Chad, Chris, and Steve, thank you all so much for highlighting the story of a customer's journey to the cloud. We will be back with lots more coverage here at AWS Summit in San Francisco. I'm Stu Miniman. You're watching theCUBE. (upbeat music)
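The "start small and resize" approach Chad describes — pick an instance size, watch utilization, and step up or down rather than running a long up-front sizing exercise — can be sketched as a simple feedback rule. This is an illustrative sketch only, not Accenture's or AWS's actual tooling; the size ladder and thresholds are made-up values:

```python
# Toy right-sizing rule: step instance size up or down based on
# observed CPU utilization, instead of exhaustive up-front planning.
SIZES = ["small", "medium", "large", "xlarge"]  # hypothetical size ladder

def recommend(current, cpu_samples, low=0.25, high=0.75):
    """Return the next size to try, given recent CPU utilization samples (0..1)."""
    avg = sum(cpu_samples) / len(cpu_samples)
    idx = SIZES.index(current)
    if avg > high and idx < len(SIZES) - 1:
        return SIZES[idx + 1]   # sustained pressure: scale up one step
    if avg < low and idx > 0:
        return SIZES[idx - 1]   # mostly idle: scale down and save cost
    return current              # utilization is in the comfort band

print(recommend("large", [0.10, 0.15, 0.12]))   # mostly idle
print(recommend("medium", [0.90, 0.85, 0.95]))  # under pressure
```

The design point is the one made in the interview: because resizing is cheap in the cloud, a wrong first guess costs one adjustment cycle instead of a three-month planning exercise.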

Published Date : Apr 4 2018



Becky Wanta, RSW1C Consulting - CloudNOW Awards 2017


 

(click) >> Hey, Lisa Martin on the ground with theCUBE at Google for the Sixth Annual CloudNOW Top Women in Cloud Awards Event, our second year covering this, very excited to be joined by tonight's emcee, Becky Wanta, the founder of RSW1C. Welcome to theCUBE. >> Thank you. >> It's great to have you here. So tell us a little bit about what you do and your background as a technology leader. >> So, I've been in technology for close to 40 years. I started out as a software... >> Sorry, I don't even, what? (laughing) >> Ha, ha, ha, it's a long time ago, yeah. So I started out as a developer back in the Department of Defense. So it wasn't rocket science in the early days when I began, because it was back when computers took up whole rooms, and I realized I had an affinity for that. So, I leveraged that, but then I got into, at that time, and I'm from northern California, if you remember right, the Department of Defense was drawing down. And so I decided I was going to leverage my experience in IT to get into either financial services or healthcare, right. So I took over running all of tech for the Money Store at the time, which, you would have no idea who that is. And then that got acquired by Wells Fargo First Union, so I took over as the Global CTO for Wells Fargo. And what you'll see is, so let me just tell you about RSW1C, because what it is, is a technology consulting firm that's me. And the reason I have it is because tech changes so much that it's easy to stay current. And when I get brought into companies, and you'll look at me, so I've been the executive officer for tiny little companies like PepsiCo, Wells Fargo, Southwest Airlines. >> The small ones. >> Yeah, tiny, not really, MGM Resorts International, the largest worker's comp company in California, and a midsize SMB in southern California that just wrapped up last year. And when I get brought into these companies, I get brought in to transform them.
It's at a time in the maturation of these companies, these tiny little brands we've mentioned, where they're ready to jettison IT. So I take that very seriously, because I know technology is that gateway to keep that competitive advantage. And the beauty of it is that the companies I've mentioned are all number one in their markets. And when you're number one, there's only one direction to go, so they take that very seriously. >> How do you come in there and help an MGM Resorts transform? >> So what happened in MGM's case, and probably in the last five CIO positions that I've taken, is they've met me as a consultant, again, from RSW1C. And then when I look into what needs to happen and I have the conversation, because everybody thinks they want to do digital transformation, and it's not an easy journey, and if you don't have the executive sponsorship, don't even try it at home, right? And so, in MGM's case, they had been talking. MGM's the largest taxpayer in Nevada. People think about it as MGM Grand. It's 19 brands on The Strip. >> Is that right? >> It's Bellagio, MGM, so it's the largest taxpayer in Nevada. So it owns 44,860 rooms on The Strip. So if I just counted now, you have Circus Circus, Slots A Fun, Mirage, Bellagio, Monte Carlo, New York-New York, um, MGM Grand Las Vegas, MGM Grand Detroit. They're in other countries and so forth. So it's huge. And that includes Mandalay, ARIA, and all those, so it's huge, right? And so in MGM's case, they knew they wanted to do M life, and M life game-changed their industry. And I put that in. This will be our nine-year anniversary coming up on Valentine's Day. Thirty years they talked about it, and I put it in with a great team, and that was part of the transformation into a new way of running their business. >> Wow, we have a couple of minutes left. I'd love to get your perspective on being a female leader in tech. Who were your mentors back in the day? And who are your mentors now? >> So, I don't have any mentors.
I never did. Because when I started in the industry, there weren't a lot of women. And obviously, technology was fairly new, which is why one of my passions is around helping the next generation be hugely successful. And one of the things that's important in the space of tech, I like this mantra that says, "How about brains and beauty that gets you in the door? How about having the confidence in yourself?" So I want to help a lot of the next generation be hugely successful. And that's what Jocelyn has built with CloudNow, her and Susan. And I'm a big proponent of this, because I think it's a chance for us to give back and help the next generation of leaders in a non-traditional way be hugely successful in brands, in companies that are going to unleash their passion, and show them how to do that. Because, the good news is that I'm a total bum, Lisa. I've never had a job. I love what I do, and I do it around the clock, so. >> Oh, if only more people could say that. That's so cool. But what we've seen with CloudNow, this is our second year covering it, I love talking to the winners and even the folks that are keynoting or helping to sponsor scholarships. There's so much opportunity. >> There really is. >> And it's so exciting when you can see someone whose life is changing as a result of finding a mentor or having enough conviction to say, "You know what? I am interested in a STEM field. I'm going to pursue that." >> Right. >> So, we thank you so much, Becky, for stopping by theCUBE. And your career is amazing. >> Thanks. >> And I'm sure you are a mentor to countless, countless men and women out there. >> Absolutely. >> Well, thanks again for stopping by. >> Thank you, Lisa. >> Thank you for watching theCUBE. I'm Lisa Martin on the ground at Google with the CloudNow Sixth Annual Top Women in Cloud Awards Event. Stick around, we'll be right back.

Published Date : Dec 8 2017



Sharad Singhal, The Machine & Matthias Becker, University of Bonn | HPE Discover Madrid 2017


 

>> Announcer: Live from Madrid, Spain, it's theCUBE, covering HPE Discover Madrid 2017, brought to you by Hewlett Packard Enterprise. >> Welcome back to Madrid, everybody, this is theCUBE, the leader in live tech coverage, and my name is Dave Vellante, and I'm here with Peter Burris. This is day two of HPE, Hewlett Packard Enterprise, Discover in Madrid, their European version of a show that we also cover in Las Vegas, kind of a six-month cadence of innovation and organizational evolution of HPE that we've been tracking now for several years. Sharad Singhal is here; he covers software architecture for The Machine at Hewlett Packard Enterprise, and Matthias Becker, who's a postdoctoral researcher at the University of Bonn. Gentlemen, thanks so much for coming on theCUBE. >> Thank you. >> No problem. >> You know, we talk a lot on theCUBE about how technology helps people make money or save money, but now we're talking about, you know, something just more important, right? We're talking about lives and the human condition and >> Peter: Hard problems to solve. >> Specifically, yeah, hard problems like Alzheimer's. So Sharad, why don't we start with you, maybe talk a little bit about what this initiative is all about, what the partnership is all about, what you guys are doing. >> So we started on a project called the Machine Project about three, three and a half years ago, and frankly at that time, the response we got from a lot of my colleagues in the IT industry was "You guys are crazy", (Dave laughs) right.
We said we are looking at an enormous amount of data coming at us, we are looking at real time requirements on larger and larger processing coming up in front of us, and there is no way that the current architectures of the computing environments we create today are going to keep up with this huge flood of data, and we have to rethink how we do computing, and the real question for those of us who are in research in Hewlett Packard Labs, was if we were to design a computer today, knowing what we do today, as opposed to what we knew 50 years ago, how would we design the computer? And this computer should not be something which solves problems for the past, this should be a computer which deals with problems in the future. So we are looking for something which would take us for the next 50 years, in terms of computing architectures and what we will do there. In the last three years we have gone from ideas and paper study, paper designs, and things which were made out of plastic, to a real working system. We have around Las Vegas time, we'd basically announced that we had the entire system working with actual applications running on it, 160 terabytes of memory all addressable from any processing core in 40 computing nodes around it. And the reason is, although we call it memory-driven computing, it's really thinking in terms of data-driven computing. The reason is that the data is now at the center of this computing architecture, as opposed to the processor, and any processor can return to any part of the data directly as if it was doing, addressing in local memory. This provides us with a degree of flexibility and freedom in compute that we never had before, and as a software person, I work in software, as a software person, when we started looking at this architecture, our answer was, well, we didn't know we could do this. 
Now if, given now that I can do this and I assume that I can do this, all of us in the programmers started thinking differently, writing code differently, and we suddenly had essentially a toy to play with, if you will, as programmers, where we said, you know, this algorithm I had written off decades ago because it didn't work, but now I have enough memory that if I were to think about this algorithm today, I would do it differently. And all of a sudden, a new set of algorithms, a new set of programming possibilities opened up. We worked with a number of applications, ranging from just Spark on this kind of an environment, to how do you do large scale simulations, Monte Carlo simulations. And people talk about improvements in performance from something in the order of, oh I can get you a 30% improvement. We are saying in the example applications we saw anywhere from five, 10, 15 times better to something which where we are looking at financial analysis, risk management problems, which we can do 10,000 times faster. >> So many orders of magnitude. >> Many, many orders >> When you don't have to wait for the horrible storage stack. (laughs) >> That's right, right. And these kinds of results gave us the hope that as we look forward, all of us in these new computing architectures that we are thinking through right now, will take us through this data mountain, data tsunami that we are all facing, in terms of bringing all of the data back and essentially doing real-time work on those. >> Matthias, maybe you could describe the work that you're doing at the University of Bonn, specifically as it relates to Alzheimer's and how this technology gives you possible hope to solve some problems. >> So at the University of Bonn, we work very closely with the German Center for Neurodegenerative Diseases, and in their mission they are facing all diseases like Alzheimer's, Parkinson's, Multiple Sclerosis, and so on. 
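(A quick aside on the Monte Carlo workloads Sharad mentions above: a risk-management run draws a large number of random market scenarios and reduces them to a tail-loss statistic, so it is both memory-hungry and trivially parallel when every core can address all of the data. The sketch below is purely illustrative, with made-up portfolio numbers; it is not Hewlett Packard Labs code.)

```python
import random

def simulate_loss(n_scenarios, seed=42):
    """Toy risk run: simulate one-day portfolio returns and report the
    loss that is exceeded in only 1% of scenarios (a 99% value-at-risk)."""
    rng = random.Random(seed)  # fixed seed keeps the sketch reproducible
    # Hypothetical portfolio: 1% mean daily return, 2% volatility.
    losses = sorted(-rng.gauss(0.01, 0.02) for _ in range(n_scenarios))
    return losses[int(0.99 * n_scenarios)]  # 99th-percentile loss

var_99 = simulate_loss(100_000)
print(f"99% one-day VaR: {var_99:.2%} of portfolio value")
```

Each scenario is independent, which is why the speedups quoted in the interview come largely from keeping the whole data set addressable in memory rather than from any cleverness in the estimator itself.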
And in particular Alzheimer's is a really serious disease, and for many diseases like cancer, for example, the mortality rates improve, but for Alzheimer's, there's no improvement in sight. So there's a large population that is affected by it. There is really not much we currently can do, so the DZNE is focusing its research efforts together with the German government in this direction, and one thing about Alzheimer's is that if you show the first symptoms, the disease has already been present for at least a decade. So if you really want to identify sources or biomarkers that will point you in this direction, once you see the first symptoms, it's already too late. So at the DZNE they have started on a cohort study. In the area around Bonn, they are now collecting the data from 30,000 volunteers. They are planning to follow them for 30 years, and in this process we generate a lot of data, so of course we do the usual surveys to learn a bit about them, we learn about their environments. But we also do much more detailed analysis, so we take blood samples and we analyze the complete genome, and also we acquire imaging data from the brain, so we do an MRI at an extremely high resolution with some very advanced machines we have. And all this data is accumulated because we do not only have to do this once, but we try to do that repeatedly for every one of the participants in the study, so that we can later analyze the time series: when in 10 years someone develops Alzheimer's, we can go back through the data and see, maybe there's something interesting in there, maybe there was one biomarker that we are looking for, so that we can predict the disease better in advance. And with this pile of data that we are collecting, basically we need something new to analyze this data, and to deal with this, when we heard about the machine, we thought immediately this is a system that we would need. >> Let me see if I can put this in a little bit of context.
So Dave lives in Massachusetts, I used to live there, in Framingham, Massachusetts, >> Dave: I was actually born in Framingham. >> You were born in Framingham. And one of the more famous studies is the Framingham Heart Study, which tracked people over many years and discovered things about heart disease and the relationship between smoking and cancer, and other really interesting problems. But they used a paper-based study with an interview base, so for each of those kinds of people, they might have collected, you know, maybe a megabyte, maybe a megabyte and a half of data. You just described a couple of gigabytes of data per person, 30,000 people, multiple years. So we're talking about being able to find patterns in data about individuals that would number in the petabytes over a period of time. Very rich detail that's possible, but if you don't have something that can help you do it, you've just collected a bunch of data that's just sitting there. So is that basically what you're trying to do with the machine, the ability to capture all this data, to then do something with it, so you can generate those important inferences? >> Exactly, so with all these large amounts of data we do not only compare the data sets for a single person, but once we find something interesting, we also have to compare the whole population that we have captured with each other. So there's really a lot of things we have to parse and compare. >> This brings together the idea that it's not just the volume of data. I also have to do analytics and cross all of that data together, right, so every time a scientist, one of the people who is doing biology studies or informatics studies, asks a question, and they say, I have a hypothesis that this might be a reason for this particular evolution of the disease or occurrence of the disease, they then want to go through all of that data, and analyze it as they are asking the question.
Now if the amount of compute it takes to actually answer their questions takes me three days, I have lost my train of thought. But if I can get that answer in real time, then I get into this flow where I'm asking a question, seeing the answer, making a different hypothesis, seeing a different answer, and this is what my colleagues here were looking for. >> But if I think about, again, going back to the Framingham Heart Study, you know, I might do a query on a couple of related questions, and use a small amount of data. The technology to do that's been around, but when we start looking for patterns across brain scans with time series, we're not talking about a small problem, we're talking about an enormous sum of data that can be looked at in a lot of different ways. I got one other question for you related to this, because I gotta presume that there's the quid pro quo for getting people into the study, is that, you know, 30,000 people, is that you'll be able to help them and provide prescriptive advice about how to improve their health as you discover more about what's going on, have I got that right? >> So, we're trying to do that, but also there are limits to this, of course. >> Of course. >> For us it's basically collecting the data and people are really willing to donate everything they can from their health data to allow these large studies. >> To help future generations. >> So that's not necessarily quid pro quo. >> Okay, there isn't, okay. But still, the knowledge is enough for them. >> Yeah, their incentive is they're gonna help people who have this disease down the road. >> I mean if it is not me, if it helps society in general, people are willing to do a lot. >> Yeah of course. >> Oh sure. >> Now the machine is not a product yet that's shipping, right, so how do you get access to it, or is this sort of futures, or... >> When we started talking to one another about this, we actually did not have the prototype with us. 
But remember that when we started down this journey for the machine three years ago, we knew back then that we would have hardware somewhere in the future, but as part of my responsibility, I had to deal with the fact that software has to be ready for this hardware. It does me no good to build hardware when there is no software to run on it. So we have actually been working on the software stack, how to think about applications on that software stack, using emulation and simulation environments, where we have some simulators, essentially an instruction-level simulator for what the machine does, or what that prototype would have done, and we were running code on top of those simulators. We also had performance simulators, where we'd say, if we write the application this way, this is how much we think we would gain in terms of performance, and all of those applications, all of that code we were writing, was actually on our large memory machines, Superdome X to be precise. So by the time we started talking to them, we had these emulation environments available, we had experience using these emulation environments on our Superdome X platform. So when they came to us and started working with us, we took their software that they brought to us, and started working within those emulation environments to see how fast we could make those problems, even within those emulation environments. So that's how we started down this track, and most of the results we have shown in the study are all measured results that we are quoting inside this forum on the Superdome X platform. So even in that emulated environment, which is emulating the machine, of course, on the Superdome X, for example, I can only hold 24 terabytes of data in memory. I say only 24 terabytes >> Only! because I'm looking at much larger systems, but an enormously large number of workloads fit very comfortably inside the 24 terabytes.
And for those particular workloads, the programming techniques we are developing work at that scale, right, they won't scale beyond the 24 terabytes, but they'll certainly work at that scale. So between us we then started looking for problems, and I'll let Matthias comment on the problems that they brought to us, and then we can talk about how we actually solved those problems. >> So we work a lot with genomics data, and usually what we do is we have a pipeline where we connect multiple tools, and we thought, okay, this architecture sounds really interesting to us, but if we want to get started with this, we should pose them a challenge. So that they could convince us, we went through the literature and we took a tool that was advertised as the new optimal solution. Prior work was taking up to six days for processing; this tool was able to cut it to 22 minutes, and we thought, okay, this is a perfect challenge for our collaboration. So we went ahead and we took this tool, we put it on the Superdome X as it was, and it took five minutes instead of just 22, and then we started modifying the code, and in the end we were able to shrink the time down to just 30 seconds, so that's two orders of magnitude faster. >> We took something which was... They were able to run it in 22 minutes, and that had already been optimized by people in the field to say "I want this answer fast", and then when we moved it to our Superdome X platform, the platform is extremely capable. Hardware-wise it compares really well to other platforms which are out there. That time came down to five minutes, but that was just the beginning. And then as we modified the software based on the emulation results we were seeing underneath, we brought that time down to 13 seconds, which is a hundred times faster. We started this work with them in December of last year. It takes time to set up all of this environment, so the serious coding started in around March.
By June we had 9X improvement, which is already a factor of 10, and since June up to now, we have gotten another factor of 10 on that application. So I'm now at a 100X faster than what the application was able to do before. >> Dave: Two orders of magnitude in a year? >> Sharad: In a year. >> Okay, we're out of time, but where do you see this going? What is the ultimate outcome that you're hoping for? >> For us, we're really aiming to analyze our data in real time. Oftentimes when we have biological questions that we address, we analyze our data set, and then in a discussion a new question comes up, and we have to say, "Sorry, we have to process the data, "come back in a week", and our idea is to be able to generate these answers instantaneously from our data. >> And those answers will lead to what? Just better care for individuals with Alzheimer's, or potentially, as you said, making Alzheimer's a memory. >> So the idea is to identify Alzheimer long before the first symptoms are shown, because then you can start an effective treatment and you can have the biggest impact. Once the first symptoms are present, it's not getting any better. >> Well thank you for your great work, gentlemen, and best of luck on behalf of society, >> Thank you very much >> really appreciate you coming on theCUBE and sharing your story. You're welcome. All right, keep it right there, buddy. Peter and I will be back with our next guest right after this short break. This is theCUBE, you're watching live from Madrid, HPE Discover 2017. We'll be right back.
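The timing claims in this interview are easy to sanity-check. Taking the numbers as quoted (up to six days for the prior tool, 22 minutes for the optimized published tool, five minutes for the unmodified port to Superdome X, 13 to 30 seconds after code changes), the speedup factors work out roughly as the speakers state; a quick illustrative calculation, not part of the interview:

```python
def speedup(before_s, after_s):
    """How many times faster the 'after' time is than the 'before' time."""
    return before_s / after_s

MIN = 60
prior_tool = 6 * 24 * 60 * MIN  # "up to six days", in seconds
published  = 22 * MIN           # optimized tool from the literature
ported     = 5 * MIN            # unmodified run on Superdome X
rewritten  = 13                 # after modifying the code

print(f"6 days -> 22 min: {speedup(prior_tool, published):.0f}x")
print(f"22 min -> 5 min:  {speedup(published, ported):.1f}x")
print(f"22 min -> 13 s:   {speedup(published, rewritten):.0f}x")  # the "hundred times faster"
```

The 22-minutes-to-13-seconds step is the factor Sharad rounds to "a hundred times faster"; the 30-second figure Matthias cites for the less aggressive rewrite sits between one and two orders of magnitude.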

Published Date : Nov 29 2017



Terry Ramos, Palo Alto Networks | Splunk .conf 2017


 

>> Announcer: Live from Washington, DC, it's the Cube, covering .conf2017, brought to you by Splunk. (busy electronic music) >> Welcome back to the Washington Convention Center, the Walter Washington Convention Center, in our nation's capital as our coverage continues here of .conf2017. We're here at Splunk along with Dave Vellante. I'm John Walls, and we're kind of coming down the home stretch, Dave. There's just something about it: the crowd's still lingering, the show still has that good vibe to it, late on the second day, and it hasn't let off yet. >> Oh, no, remember, the show goes on through tomorrow. There's some event tonight, I think. I don't know, the band's here. >> Yeah, but-- >> Be hanging out, partying tonight. >> But you can tell the Splunkers are alive and well. We have Terry Ramos with us, who's going to join us for the next 15 minutes or so, the VP of Business Development of Palo Alto Networks. Terry, good to see you, sir. >> Good, really appreciate you having me here. >> You bet, you bet, thanks for joining us. You've got a partnership now, you've synced up with Splunk. >> Terry: Yes. >> Tell us a little bit about that. Then we'll get into the customer value after that. But first off, what's the partnership all about? >> Sure. We've actually been partners for about five years, really helping us solve some customer needs. We've got several thousand customers who are actually using both products together to solve the needs I'll talk about in a minute. The partnership is really key to us. We've invested a ton of time, money, effort into it; we have executive level sponsorship all the way down to sales. In the field, we have reps working together to really position the solution to customers, both us and Splunk and then how we tie together. We're the number one downloaded app for Splunk by far that's a third party, so they have a couple that are more downloaded than us, but for third party, we've done that. We develop it all in house ourselves.
For customers out there who think the app's great, I'll talk about the new version coming, I'd love any feedback on what should we do next, what are the next things we should do in the app, because we're really developing this and making this investment for customers to get the value out of it. >> What about the business update for Palo Alto Networks? I mean, can you give us the sort of quick rundown on what's going on in your world? >> Sure. I think most people know Palo Alto Networks has done pretty well. We just finished our FY '17, finished with about 42,500 customers. Revenue was, I think, 1.8 billion, approximately. We're still a very high growth company, and been growing the product set pretty well, from products next-gen firewall, all the attached subscriptions. Then we've got things like the Endpoint Traps now that's really doing well in the market, where customers need help on preventing exploits on the endpoint. That's been a growing market for us. >> It's the hottest space in the data center right now, and everybody wants to partner with you guys. Obviously, Splunk, you go to all the big shows, and they're touting their partnerships with Palo Alto. What do you attribute that sort of success to? >> Customers, truly. I run the partnerships for the company. If we do not have a customer who will be invested in the integration and the partnership, we don't do it. The number one thing we ask when somebody says, I want to partner with you, is, who's the customer, what's the use case, and why, right. Then if we can get good answers to that, then we go down the path of a partnership. Even then, though, we're still pretty selective. We've got 150 partners today that are technology partnerships. But we've got a limited number, Splunk's a big one, that we really invest heavily in, far more than the others, far more than just an API integration, the stuff of getting out to customers in the field the development of apps and integration, those things. 
>> Talk about, we laugh about Barney deals sometimes, I love you, you love me, let's do a press release. What differentiates that sort of Splunk level of partnership? Is it engineering resources? Is it deeper go to market? Maybe talk about that a little. >> Yeah, I hate Barney partnerships completely. If I do those, fire me, truthfully. I think the value that we've done with Splunk that we've really drawn out is, we've built this app, right, so BD has a team of developers on our team that writes the app for Splunk. We have spent four years developing this app. We were the first company to do adaptive response before it was called adaptive response. You see something in Splunk, you can actually take action back to a firewall to actually block something, quarantine something, anything like that. The app today is really focused on our products, right, through Endpoint, WildFire, things like that, right, so it's very product focused. We're actually putting in a lot of time and effort into a brand new app that we're developing that we're showing off now that we'll ship in about a month a half that's really focused on adversaries and incidents. We have something called the adversary score card where it'll show you, this is what's actually happening on my network, how far is this threat penetrating my network and my endpoints, is it being stopped, when is it being stopped. Then we've got an incident flow, too, that shows that level down to Traps prevented this, and here's how it prevented it. Then if we go back to the adversary score card, it ties into what part of the kill chain did we actually stop it at. For a CISO, when you come in and you say, there's a new outbreak, there's a new worm, there's a new threat that's happening, how do I know that I'm protected? Well, Splunk gives you great access to that data. What we've done is an app on top of it that's a single click. A SOC guy can say, here's where we're at, here's where we've blocked it. 
>> I guess I've been talking to a lot of folks here the last two days, and we've got a vendor right over here, we're talking, they have a little scorecard up, and they tell you about how certain intrusions are detected at certain intervals, 190 days to 300 and some odd days. Then I hear talk about a scorecard that tells you, hey, you've got this risk threat, and this is what's happened. I mean, I guess I'm having a hard time squaring that all up with, it sounds like a real time examination. But it's really not, because we're talking about maybe half a year or longer, in some cases, before a threat is detected. >> Yeah, so as a company, we've really focused on prevention. Prevent as much as you can. We have a product called WildFire, where we have tens of thousands of customers who actually share data with us, files and other things, files, URLs, other things. What we do is we run those through sandboxing, dynamic analysis, static analysis, all sorts of stuff, to identify if it's malicious. If it's malicious, we don't just start blocking that file, we also send down to the firewall all the things that it does. Does it connect to another website to download a different payload, does it connect to a C&C site, command and control site? What's that malware actually doing? We send that down to the customer, but we also send it to all of our customers. It may hit a target, right, the zero day hit one customer, but then we start really, how do we prevent this along the way, both in the network and at the endpoint? Yeah, there are a lot of people that talk about breaches long term, all that, what we're trying to make sure is we're preventing as much as we can and letting the SOC guys really focus on the things that they need to. A simple piece of malware, they shouldn't be having to look at that. That should be automatically stopped, prevented. But that advanced attack, they need to focus on that and what are they doing about it. 
>> The payloads have really evolved in the last decade. You mentioned zero day. Think about them, we didn't even know what it was in the early 2000s. I wonder if you could talk about how your business has evolved as the sophistication of the attackers has evolved from hacktivist to organized crime to nation state. >> Yeah, yeah. It has evolved a lot, and when you think about the company, 42,500 customers says a lot. We've been able to grow that out. When you talk about a product, something like WildFire that does this payload analysis, when we launched the product it was free. You'd get an update about every 24 hours, right. We moved it down to, I think it was four hours, then it was an hour, 20 minutes, and now it's about five minutes. In about five minutes, we do all that analysis and how do we stop it. Back to the question is, when you're talking about guys that are just using malware and running it over and over, that's one thing. But when you're talking about sophisticated nation states, that's where you've got to get this, prevent it as quickly as you possibly can. >> If we're talking about customer value, you've kind of touched on it a little bit, but ultimately, you said you've got some to deal with Splunk, some to deal with you, some are now dealing with both. End of the day, what does that mean to me, that you're bringing this extra arsenal in? How am I going to leverage that in my operations? What can I do with it better, I guess, down the road? >> Yeah, I think it really comes down to that, how quickly can you react, how do you know what to react to. I mean, it's as simple as that, I know it sounds super simple, but it is that. If I'm a SOC guy sitting in a SOC, looking at the threats that are happening on my network, what's happening on my endpoints, and being able to say, this one actually got through the firewall. It was a total zero day, we had never seen it before. But it landed at the endpoint, and it tried to run and we prevented it there. 
Now you can go and take action down to that endpoint and say, let's get it off the endpoint, the firewall's going to be updated in a few minutes anyway. But let's go really focus on that. It's the focus of, what do you need to worry about. >> Dave: Do you know what a zero day is? >> You've kind of, yeah, I mean, it's the movie, right? >> He's going, no, no, there was a movie because of the concept-- >> Because of the idea. >> David's note, there's been zero days of protection. But you can explain it better than I can. >> Yeah, zero day means it's a brand new attack, never seen before, whether it be-- >> Unique characteristics and traits in a new way that infiltrate, and something that's totally off from left field. >> When you think about it, those are hard to create. They take a lot of time and effort to go find the bugs in programs, right. If it's something in a Microsoft or an Oracle, that's a lot of effort, right, to go find that new way to do a buffer overflow or a heap spray or whatever it is. That's a lot of work, that's a lot of money. One of the things we focused on is, if we can prevent it faster, that money, that investment those people are making is out the window. We really, again, are going to focus on the high end, high fidelity stuff. >> The documentary called "Zero Days," but there was, I don't know how many zero day viruses inside of Stuxnet, like, I don't know, four or five. You maybe used to see, the antivirus guys would tell you, we maybe see one or two a year, and there were four or five inside of this code. >> Loaded into one invasion, yeah, yeah, yeah. >> It's the threat from within. I mean, one of the threats, if I recall correctly, was actually, they had to go in and steal some chip at some Taiwanese semiconductor manufacturer, so they had to have a guy infiltrate, who knows, with a mop or something, stick a, had to break in, basically. 
These are, when you see a payload like that, you know it's a nation state, not just some hacktivist, right, or even organized crime doesn't necessarily have the resources for the most part, right?

>> It's a big investment, it is. Zero days are a big investment, because you've got to figure it out, you may have to get hardware, you have to get the software. It's a lot of work to fund that.

>> They're worth a lot of money on the black market. I mean, you can sell those things.

>> That's why, if we make them unusable fairly quickly, it stops that investment.

>> We were talking with Monte Mercer earlier, just talking about his comments in this morning's keynote, about how you could be successful defending, right. It's not all bets are off, we're hopeless here. But it still sounds as if, in your world, there are these inherent frustrations, because bad guys are really smart. All of a sudden, you've got a whole new way, a whole new world that you have to combat, just when you thought you had enough prophylactic activity going on in one place, boom, here you are now. Can you successfully defend? Do you feel like you have the tools to be that watch at the gate?

>> I'd be a liar if I say you can prevent everything, right. It's just not possible. But what you've got to be able to prevent is everything that's known, and then take the unknown, make it known as quickly as possible, and start preventing that. That's the goal. If anybody out here is saying they prevent everything, it's just not true, it can't be true. But the faster you take that unknown and make it known and start preventing it, that's what you do.

>> Well, and it's never just one thing in this world, right? Now there's much more emphasis being placed on response and predicting the probability of the severity and things of that nature. It really is an ecosystem, right.

>> Terry: It is, that's what I do.

>> Which is kind of back to what you do. How do you see this ecosystem evolving? What are your objectives?
>> I think that from my standpoint, we'll continue to build out new partnerships for customers. We really focus on those ones that are important to customers. We recently did a lot with authentication partners, right, because that's another level of, if people are getting those credentials and using them, then what are they doing with them, right? We did some new stuff in the product with a number of partners where we look at the credentials, and if they're leaving the network, going to an unknown site, that should never happen, right? Your corporate credentials should never go to some unknown site. That's a good example of how we build out new things for customers that weren't seen before with a partner. We don't do authentication, so we rely on partners to do that with us. As we continue to talk about partnership and BD, we're going to continue to focus on those things that really solve that need for our customer.

>> Well, I don't know how you guys sleep at night, but I'm glad you do.

>> Dave: No, we don't. What do you mean? I'm glad you don't.

>> It's 24/7, that's for sure.

>> Terry: Yes.

>> Terry, thanks for being with us.

>> Thank you very much.

>> We appreciate the time, glad to have you on the Cube. The Cube will continue live from Washington, DC, we're at .conf2017. (busy electronic music)
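The credential-egress rule Terry describes (corporate credentials should never leave for an unknown external site) can be sketched as a simple filter over outbound login events. This is an illustrative sketch only, not Palo Alto Networks' implementation: the event fields, helper names, and allow-list are all assumptions for the example.

```python
# Hypothetical sketch of the credential-egress check described in the
# interview. Event shape, field names, and allow-list are assumptions,
# not a real product API.

CORPORATE_USERS = {"alice@corp.example", "bob@corp.example"}
KNOWN_GOOD_DOMAINS = {"corp.example", "sso.corp.example", "mail.corp.example"}

def domain_of(url: str) -> str:
    """Extract the lowercase host portion of a URL (scheme://host/path)."""
    host = url.split("//", 1)[-1].split("/", 1)[0]
    return host.lower()

def is_credential_egress(event: dict) -> bool:
    """Flag an outbound POST that sends a corporate username to an unknown site."""
    if event.get("method") != "POST":
        return False
    user = event.get("form_username", "").lower()
    if user not in CORPORATE_USERS:
        return False
    return domain_of(event["url"]) not in KNOWN_GOOD_DOMAINS

# Example outbound events as a network sensor might report them.
events = [
    {"method": "POST", "url": "https://sso.corp.example/login",
     "form_username": "alice@corp.example"},           # expected corporate login
    {"method": "POST", "url": "http://unknown-site.example/collect",
     "form_username": "alice@corp.example"},           # credential leaving the network
]

alerts = [e for e in events if is_credential_egress(e)]
print(len(alerts))  # -> 1
```

In practice this decision would sit in the firewall's outbound inspection path, with the allow-list maintained from known corporate and SaaS destinations, which is the partner integration Terry alludes to.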

Published Date : Sep 27 2017

